Human Error Analysis Wikipedia
Human error has been cited as a contributing factor in disasters and accidents in industries such as nuclear power (e.g., the Three Mile Island accident), aviation (see pilot error), space exploration (e.g., the Space Shuttle Challenger and Space Shuttle Columbia disasters), and medicine (see medical error). Prevention of human error is generally seen as a major contributor to the reliability and safety of (complex) systems.

Definition[edit]
Human error means that something has been done that was "not intended by the actor; not desired by a set of rules or an external observer; or that led the task or system outside its acceptable limits".[1] In short, it is a deviation from intention, expectation or desirability.[1] Logically, human actions can fail to achieve their goal in two different ways: the actions can go as planned, but the plan can be inadequate (leading to mistakes); or, the plan can be satisfactory, but the performance can be deficient (leading to slips and lapses).[2][3] However, a mere failure is not an error if there had been no plan to
accomplish something in particular.[1]

Performance[edit]
Human error and performance are two sides of the same coin: "human error" mechanisms are the same as "human performance" mechanisms; performance later categorized as 'error' is done so in hindsight:[4][5] therefore actions later termed "human error" are actually part of the ordinary spectrum of human behaviour. The study of absent-mindedness in everyday life provides ample documentation and categorization of such aspects of behavior. While human error is firmly entrenched in the classical approaches to accident investigation and risk assessment, it has no role in newer approaches such as resilience engineering.[6]

Categories[edit]
There are many ways to categorize human error:[7][8]
- exogenous versus endogenous (i.e., originating outside versus inside the individual)[9]
- situation assessment versus response planning,[10] and related distinctions in:
  - errors in problem detection (also see signal detection theory)
  - errors in problem diagnosis (also see problem solving)
  - errors in action planning and execution[11] (for example: slips or errors of execution versus mistakes or errors of intention[12][13])
- by level of analysis; for example, perceptual (e.g., optical illusions) versus cognitive versus communication versus organizational.

Sources[edit]
The cognitive study of human error is a very active research field, including work related to limits of memory and attention and also to decision making strategies.
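The "errors in problem detection" category above is often analysed with signal detection theory. As a minimal illustrative sketch (the trial counts below are invented, not data from any study), an operator's detection sensitivity d' can be computed from hit and false-alarm rates:

```python
# Signal detection theory sketch: sensitivity d' for problem detection.
# d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the
# standard normal CDF. All counts here are hypothetical.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Detection sensitivity from raw trial counts."""
    z = NormalDist().inv_cdf
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return z(hit_rate) - z(fa_rate)

# An operator who notices 45 of 50 real alarms but raises 5 false alarms
# in 50 quiet periods:
print(round(d_prime(45, 5, 5, 45), 2))  # → 2.56
```

Higher d' means the operator discriminates real problems from noise more reliably; a d' near zero means detection performance is close to guessing.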
Human reliability assessment (HRA) techniques are used for the purposes of evaluating the probability of a human error occurring throughout the completion of a specific task. From such analyses measures can then be taken to reduce the likelihood of errors occurring within a system and therefore
lead to an improvement in the overall levels of safety. There exist three primary reasons for conducting an HRA: error identification, error quantification and error reduction. As there exist a number of techniques used for such purposes, they can be split into one of two classifications: first generation techniques and second generation techniques. First generation techniques work on the basis of the simple dichotomy of 'fits/doesn't fit' in matching the error situation in context with related error identification and quantification, while second generation techniques are more theory based in their assessment and quantification of errors. HRA techniques have been utilised in a range of industries including healthcare, engineering, nuclear, transportation and the business sector; each technique has varying uses within different disciplines.

THERP models human error probabilities (HEPs) using a fault-tree approach, in a similar way to an engineering risk assessment, but also accounts for performance shaping factors (PSFs) that may influence these probabilities. The probabilities for the human reliability analysis event tree (HRAET), which is the primary tool for assessment, are nominally calculated from the database developed by the authors Swain and Guttman; local data, e.g. from simulators or accident reports, may however be used instead. The resultant tree portrays a step-by-step account of the stages involved in a task, in a logical order. The technique is known as a total methodology[1] as it simultaneously manages a number of different activities including task analysis, error identification, representation in the form of an HRAET and HEP quantification.
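The shape of a THERP-style quantification can be sketched as follows. Note that the nominal HEP values and PSF multipliers below are invented for illustration; they are not values from the Swain and Guttman database:

```python
# Sketch of THERP-style quantification: a nominal human error probability
# (HEP) is looked up for a task element and adjusted by performance shaping
# factor (PSF) multipliers such as stress or poor interface design.
# All numbers are hypothetical, not database values.
NOMINAL_HEP = {
    "read_analog_gauge": 0.003,
    "select_wrong_switch": 0.001,
}

def adjusted_hep(task, psf_multipliers):
    """Apply PSF multipliers to the nominal HEP, capped at 1.0."""
    hep = NOMINAL_HEP[task]
    for m in psf_multipliers:
        hep *= m
    return min(hep, 1.0)

# Hypothetical case: high stress (x5) and a poorly designed display (x2):
print(round(adjusted_hep("read_analog_gauge", [5, 2]), 6))  # → 0.03
```

The cap at 1.0 reflects that an adjusted value is still a probability; in practice the analyst would also document which PSFs were applied and why.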
Background[edit]
The technique for human error rate prediction (THERP) is a first generation methodology, which means that its procedures follow the way conventional reliability analysis models a machine.[7] The technique was developed at the Sandia Laboratories for the US Nuclear Regulatory Commission.[2] Its primary author is Swain, who developed the THERP methodology gradually over a lengthy period of time.[1] THERP relies on a large human reliability database that contains HEPs, and is based on plant data as well as expert judgements.
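Because the HRAET lays out the task steps in logical order, the tree can be quantified like an engineering fault tree. A minimal sketch, assuming the simplest case of independent steps where any single step failure fails the whole task (real THERP trees also model recovery paths and dependencies, which this omits; the HEPs are invented):

```python
# Sketch of quantifying a simple HRA event tree: with independent steps
# and no recovery paths, the task fails unless every step succeeds, so
# P(task failure) = 1 - product of per-step success probabilities.
# The per-step HEPs below are hypothetical.
from math import prod

def task_failure_probability(step_heps):
    """Overall failure probability for a chain of independent task steps."""
    return 1 - prod(1 - p for p in step_heps)

# e.g. read gauge, diagnose condition, operate switch:
steps = [0.003, 0.01, 0.001]
print(round(task_failure_probability(steps), 5))  # → 0.01396
```

Even this toy version shows why error reduction focuses on the largest HEP first: the 0.01 diagnosis step dominates the total.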
CREAM (Cognitive Reliability and Error Analysis Method) is a human reliability analysis technique developed by Erik Hollnagel. It is a bi-directional analysis method, meant to be used for both performance prediction and accident analysis. Unlike first generation error analysis methods such as THERP, CREAM represents a second generation tool allowing for better analysis by abandoning the hierarchical structure of previous methods and providing better separation between objective and subjective error.
CREAM is a second generation HRA method. However, compared to many other such methods, it takes a very different approach to modelling human reliability. There are two versions of the technique, the basic and the extended version, both of which have in common two primary features: the ability to identify the importance of human performance in a given context, and a helpful cognitive model and associated framework, usable for both prospective and retrospective analysis. Prospective analysis allows likely human errors to be identified, while retrospective analysis quantifies errors that have already occurred. The underlying model of cognition is Hollnagel's Contextual Control Model (COCOM).
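In the basic version, nine common performance conditions (CPCs, e.g. adequacy of training, available time) are each judged as improving, not affecting, or reducing performance reliability, and their balance selects one of four control modes, each with an associated failure-probability interval. The sketch below uses the commonly cited intervals, but the threshold logic is a rough stand-in for Hollnagel's published mapping diagram, not his exact rule:

```python
# Simplified sketch of CREAM's basic method: counts of improving vs.
# reducing CPCs (out of nine) pick a control mode. The thresholds are a
# hypothetical approximation of the published mapping; the intervals are
# the failure-probability ranges usually quoted for each mode.
CONTROL_MODES = {            # mode: (lower, upper) action failure probability
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3,   1e-1),
    "opportunistic": (1e-2,   0.5),
    "scrambled":     (1e-1,   1.0),
}

def control_mode(improved, reduced):
    """Pick a control mode from counts of improving/reducing CPCs (0..9)."""
    if reduced > 5:
        return "scrambled"
    if reduced > 2:
        return "opportunistic"
    if improved > 3 and reduced == 0:
        return "strategic"
    return "tactical"

mode = control_mode(improved=2, reduced=3)
print(mode, CONTROL_MODES[mode])
```

The extended version then refines this screening result by linking specific cognitive functions (observation, interpretation, planning, execution) to failure probabilities.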
Failure mode and effects analysis (FMEA) was one of the first systematic techniques for failure analysis. It was developed by reliability engineers in the late 1950s to study problems that might arise from malfunctions of military systems. An FMEA is often the first step of a system reliability study. It involves reviewing as many components, assemblies, and subsystems as possible to identify failure modes, and their causes and effects. For each component, the failure modes and their resulting effects on the rest of the system are recorded in a specific FMEA worksheet. There are numerous variations of such worksheets. An FMEA can be a qualitative analysis,[1] but may be put on a quantitative basis when mathematical failure rate models are combined with a statistical failure mode ratio database. A few different types of FMEA analysis exist, such as functional, design and process FMEA. Sometimes FMEA is extended to FMECA (failure mode, effects, and criticality analysis) to indicate that criticality analysis is performed too.

FMEA is an inductive reasoning (forward logic) single point of failure analysis and is a core task in reliability engineering, safety engineering and quality engineering. Quality engineering is especially concerned with the "process" (manufacturing and assembly) type of FMEA. A successful FMEA activity helps to identify potential failure modes based on experience with similar products and processes, or based on common physics-of-failure logic. It is widely used in development and manufacturing industries in various phases of the product life cycle. Effects analysis refers to studying the consequences of those failures on different system levels. Functional analyses are needed as an input to determine correct failure modes, at all system levels, both for functional FMEA and piece-part (hardware) FMEA. An FMEA is used to structure mitigation for risk reduction based on either reducing the severity of the failure (mode) effect, lowering the probability of failure, or both.
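One common way to put the FMEA worksheet on a quantitative footing is the risk priority number (RPN), the product of severity, occurrence, and detection ratings. A minimal sketch with an invented worksheet (the components, modes, and 1-10 ratings below are hypothetical):

```python
# FMEA worksheet sketch: each failure mode is rated 1-10 for severity,
# occurrence, and detection difficulty, then ranked by risk priority
# number (RPN = S x O x D). All entries are hypothetical examples.
worksheet = [
    # (component, failure mode, severity, occurrence, detection)
    ("pump seal", "leak",         7, 4, 3),
    ("relay",     "contact weld", 9, 2, 5),
    ("sensor",    "drift",        5, 6, 7),
]

def rpn(severity, occurrence, detection):
    """Risk priority number for one worksheet row."""
    return severity * occurrence * detection

# Highest RPN first, to prioritise mitigation effort:
ranked = sorted(worksheet, key=lambda row: rpn(*row[2:]), reverse=True)
for component, mode, s, o, d in ranked:
    print(f"{component:10} {mode:14} RPN={rpn(s, o, d)}")
```

RPN ranking is a screening heuristic rather than a risk measure: a severity-10 mode with a modest RPN may still demand action, which is why mitigation can target severity, probability, or detection individually, as described above.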
The FMEA is in principle a full inductive (forward logic) analysis; however, the failure probability can only be estimated or reduced by understanding the failure mechanism. Ideally this probability should be lowered to "impossible to occur" by eliminating the (root) causes. It is therefore important to include in the FMEA an appropriate depth of information on the causes of failure (deductive analysis).