Human Error in Computer Security
The Role of Human Error in Successful Security Attacks
September 2, 2014 | By Fran Howarth

All humans make mistakes. One of the most intriguing findings from IBM's "2014 Cyber Security Intelligence Index" is that 95 percent of all security incidents involve human error. Many of these are successful attacks in which external attackers prey on human weakness to lure insiders within organizations into unwittingly providing them with access to
sensitive information. These mistakes are costly, since they involve insiders who often have access to the most sensitive information. According to research by Federal Computer Week cited in a recent Vormetric report, the greatest impacts of successful security attacks involving insiders are the exposure of sensitive data, theft of intellectual property and the introduction of malware. The research also reported that 59 percent of respondents agree that most information technology security threats that directly result from insiders are innocent mistakes rather than malicious abuse of privileges.

The Threats Posed by Inadvertent Insider Mistakes

One of the leading errors made by insiders is sending sensitive documents to unintended recipients. This is relatively easy to address by deploying security controls that monitor for sensitive information leaking out of the organization. Once considered complex to deploy, these controls have been made considerably easier to implement by vendors in recent years, which has dramatically reduced the level of user involvement required and increased their adoption. These tools can also prevent users from engaging in inappropriate behavior, such as sending documents home via email or placing them on file-sharing sites or removable media such as USB sticks. Lost or stolen mobile devices are also a major concern that is exacerbated by
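The monitoring controls described above are, at their core, content inspection applied to outbound traffic. As a minimal sketch (the pattern names and rules here are illustrative assumptions, not any vendor's actual product), such a control might look like:

```python
import re

# Hypothetical patterns for illustration; real DLP products use far richer
# classifiers (document fingerprinting, exact-data matching, ML models).
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "confidential_marking": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
}

def scan_outbound_message(body: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound message."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(body)]

findings = scan_outbound_message("Please wire funds. SSN: 123-45-6789.")
if findings:
    print(f"Blocked: message matched {findings}")
```

A filter like this would sit at the email gateway or endpoint; the same matching logic can be pointed at file-sharing uploads and removable-media writes, which is how one engine covers the several leak channels mentioned above.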
Human error biggest threat to computer security
By Rene Millman | News | 19 Jun, 2007

A new DTI report finds that most people never change their password, and a third write them down on paper.

The biggest risk to an organisation's network security is human error, according to a new report. The research by the Department of Trade and Industry found that over a third of respondents either wrote down their password on a piece of paper or recorded it somewhere on their computer. The study also found that nearly two-thirds of the 1,800 UK adults questioned said they never changed their passwords.

Minister for Science and Innovation Malcolm Wicks said the survey found that a large number of people were "careless with passwords, unwittingly exposing themselves and their company to fraud and theft." He added that the UK lost £440 million to credit card fraud last year and that 62 per cent of companies experienced a network security incident. Wicks said this was a problem that needed to be fixed. "Network security is also a major growth area where the UK has a good opportunity to become a global leader if we develop new technology to give us a competitive edge," said Wicks.

The department has embarked on four projects aimed at increasing network security by cutting down the risk of human error. Each project will use behavioural science to tackle human error, and the DTI has given the projects £4 million in total. Among the successful projects is one run by BAE Systems and Loughborough University aimed at developing new ways of assessing an organisation's security risk and the human factors involved.
Another project, run by HP, Merrill Lynch, the University of Bath, the University of Newcastle and University College London, will develop a predictive framework to assess the effectiveness of security policies.
are attributed to a poorly designed human-computer interface (HCI). However, human beings are often needed to be the fail-safe in an otherwise automated system. Even the most highly trained and alert operators are prone to boredom when they are not needed for normal operation, and to panic when an unusual situation occurs, stress levels rise, and lives are at stake. The HCI must give appropriate feedback to the operator to allow him or her to make well-informed decisions based on the most up-to-date information on the state of the system. High false alarm rates will cause the operator to ignore a real alarm condition. Methods for determining the effectiveness of an HCI, such as heuristic evaluation, cognitive walkthroughs, and empirical evaluations like protocol analysis, exist, but are often cumbersome and do not provide conclusive data on the safety and usability of an HCI. System designers must ensure that the HCI is easy and intuitive for human operators to use, but not so simple that it lulls the operator into a state of complacency and lowers his or her responsiveness to emergency situations.

Contents: Introduction | Key Concepts | Sources of Human Error | HCI Problems | Available Tools, Techniques, and Metrics | HCI Design | Heuristic Evaluation | Cognitive Walkthrough | Protocol Analysis | MetriStation | Relationship to Other Topics | Conclusions | Annotated Reference List & Further Reading

Introduction

In any complex system, most errors and failures can be traced to a human source. Incomplete specifications, design defects, and implementation errors such as software bugs and manufacturing defects are all caused by human beings making mistakes. However, when looking at human errors in the context of embedded systems, we tend to focus on operator errors and errors caused by a poor human-computer interface (HCI).
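The false-alarm problem noted above is partly a base-rate effect. A quick calculation with assumed, purely illustrative numbers shows why even a fairly accurate alarm is mostly wrong when real faults are rare:

```python
# Assumed numbers for illustration: a real fault occurs in 1 in 10,000
# operating intervals; the alarm fires for 99% of real faults and
# (falsely) for 1% of normal intervals.
p_fault = 1e-4
p_alarm_given_fault = 0.99
p_alarm_given_ok = 0.01

# Bayes' rule: probability that an alarm actually indicates a fault.
p_alarm = p_alarm_given_fault * p_fault + p_alarm_given_ok * (1 - p_fault)
p_fault_given_alarm = p_alarm_given_fault * p_fault / p_alarm

print(f"P(real fault | alarm) = {p_fault_given_alarm:.3f}")  # prints 0.010
```

With these numbers, fewer than one alarm in a hundred reflects a real fault, so the operator's learned distrust is statistically rational; the design pressure is on reducing the false-positive rate, not merely improving detection.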
Human beings have common failure modes, and certain conditions make it more likely for a human operator to make a mistake. A good HCI design can encourage the operator to perform correctly and protect the system from common operator errors. However, there is no well-defined procedure for constructing an HCI for safety-critical systems. In an embedded system, cost, size, power, and complexity are especially limited, so the interface must be relatively simple and easy to use without sacrificing system safety. Also, a
In 1988, the Soviet Union's Phobos 1 satellite was lost on its way to Mars. Why? According to Science magazine, "not long after the launch, a ground controller omitted a single letter in a series of digital commands sent to the spacecraft. And by malignant bad luck, that omission caused the code to be mistranslated in such a way as to trigger the test sequence" (the test sequence was stored in ROM, but was intended to be used only during checkout of the spacecraft while on the ground) [7]. Phobos went into a tumble from which it never recovered.

What a strange report. "Malignant bad luck"? Why bad luck; why not bad design? Wasn't the problem the design of the command language, which allowed such a simple deviant event to have such serious consequences? The effects of electrical noise on signal detectability, identification, and reliability are well known. Designers are expected to use error-detecting and correcting codes. Suppose interference from known sources of electromagnetic noise had corrupted the signal to Phobos. We would not blame the ground controllers; we would say that the system designers did not follow standard engineering practice, and we would reconsider the design of the system so as to protect against this problem in the future.

People err. That is a fact of life. People are not precision machinery designed for accuracy. In fact, we humans are a different kind of device entirely. Creativity, adaptability, and flexibility are our strengths. Continual alertness and precision in action or memory are our weaknesses. We are amazingly error tolerant, even when physically damaged. We are extremely flexible, robust, and creative, superb at finding explanations and meanings from partial and noisy evidence. The same properties that lead to such robustness and creativity also produce errors.
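The "standard engineering practice" invoked above can be illustrated with a toy example. The framing scheme below is a deliberately simplified assumption (real command uplinks use CRCs or stronger codes, and the command text is invented), but even a length-plus-checksum check would have rejected a command with one omitted letter:

```python
# A minimal sketch of command-level error detection: append the command's
# length and a byte-sum checksum, and verify both on receipt. Commands are
# assumed not to contain the "|" delimiter.
def frame(command: str) -> str:
    checksum = sum(command.encode()) % 256
    return f"{command}|{len(command)}|{checksum:02x}"

def accept(framed: str) -> bool:
    command, length, checksum = framed.rsplit("|", 2)
    return (len(command) == int(length)
            and sum(command.encode()) % 256 == int(checksum, 16))

sent = frame("RUN TEST SEQUENCE 7")                  # hypothetical command
corrupted = sent.replace("SEQUENCE", "SEQUENC", 1)   # one letter lost in transit

print(accept(sent))       # True
print(accept(corrupted))  # False
```

A bare byte sum will not catch transposed characters, which is one reason real systems prefer CRCs; the point is only that a receiver designed to check its input turns a "malignant bad luck" accident into a rejected command.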
The natural tendency to interpret partial information -- although often our prime virtue -- can cause operators to misinterpret system behavior in such a plausible way that the misinterpretation can be difficult to discover. Quite a lot is known about human performance and the way it applies to system interaction [1]. Several classes of human error have been identified and studied, and conditions that increase the likelihood of error can be specified in advance [3, 4, 5]. Communication systems can be designed to be error-tolerant.