HUMAN FACTORS
CONTINUATION TRAINING
Module 1 of 4
Rev2 Dec 22 Page 1 of 10
CONTENTS:
1 Introduction
1.1 Need to address human factors
1.2 Statistics
1.3 Incidents: Biocide overdose in A321 dual-engine incident
2 Safety Culture/Organisational factors
3 Human Error
3.1 Error models and theories
3.2 Types of errors in maintenance tasks
3.3 Violations
3.4 Implications of errors
3.5 Avoiding and managing errors
3.6 Human reliability
1. Introduction to Human Factors
1.1 Need to address Human Factors
Human Factors is the application of what we know about human capabilities and limitations in order to
maximize overall system performance. By giving careful consideration to the interactions between
humans and technological and organisational elements of a system it is possible to significantly increase
the system’s productivity and reliability.
Human Factors addresses the interaction of people with other people, with facilities and with
management systems in the workplace. These factors have been shown to have an impact on human
performance and safe operations. Human Factors provide practical solutions to reduce incidents while
improving productivity.
In the aviation industry Human Factors is an essential component in the effort to operate in a safe and
efficient manner. Areas where Human Factors has a key role include:
• Design of tools, equipment and user interfaces in a way that augments the user’s work
performance
• Human and organizational factors in risk assessments and emergency preparedness planning
• Human behaviour and cognition in accident causes
• Efficient decision making and teamwork in stressful or critical situations
• Safety culture and safety behaviour improvement programmes
• Organisational reliability
The aviation industry has a major accident potential. All aspects rely on advanced human-machine
interfaces, and are activities with a complex organisational structure. Increasingly, the work is
performed by distributed teams all around our network. Human Factors has become an important and
integral part of the industry’s approach to safe and efficient operations.
As per Part-145.A.30(e), the Human Factors syllabus is changing to include an understanding of the
application of safety management principles. These changes will be reflected in future
recurrent/continuation training.
1.2 Statistics
In the early days of flight, approximately 80 percent of accidents were caused by the machine and 20
percent were caused by human error. Today that statistic has reversed: approximately 80 percent of
airplane accidents are due to human error (pilots, air traffic controllers, mechanics, etc.).
1.3 Incidents
Biocide overdose in A321 dual-engine incident
Synopsis
UK investigators probing a serious dual-engine problem on a departing Airbus A321 have discovered its
fuel system had previously been overdosed with biocide, after a maintenance engineer misunderstood a
measurement term.

The engineer was confused by the term ‘ppm’ (meaning ‘parts per million’) while conducting a biocidal
shock treatment of an A321’s fuel tanks, as the jet neared the end of a month-long maintenance check.
While the anti-contamination treatment required a biocide concentration of 100ppm, no explanation or
instruction for calculating the overall quantity featured in the aircraft maintenance manual task, and
the engineer resorted to an internet calculator.
But he miscalculated the dose and, instead of the correct quantity of just under 0.8kg, he poured 30kg
of biocide into each of the right- and left-hand wing tanks – equivalent to more than 37 times the
maximum permitted dosage, given that the tanks both held 6,200kg of fuel.
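The ppm arithmetic that caught the engineer out can be sketched as follows. This is an illustrative simplification of dosing by parts per million by mass; the actual dosage basis for any real biocide treatment (product vs. active ingredient, mass vs. volume) is defined by the approved maintenance data, not by this formula. The overdose factor uses the per-tank quantities quoted above.

```python
# Illustrative ppm-by-mass dosing arithmetic. The dosage basis for a real
# biocide treatment comes from the approved maintenance data; this is only
# a sketch of the "parts per million" concept.

def dose_kg(fuel_kg: float, concentration_ppm: float) -> float:
    """Additive mass needed for a target concentration in ppm by mass."""
    return fuel_kg * concentration_ppm / 1_000_000

# Purely illustrative figures: 100 ppm in 10,000 kg of fuel is just 1 kg.
print(dose_kg(10_000, 100))  # 1.0

# Overdose factor from the per-tank quantities quoted in the report:
correct_dose_kg = 0.8   # "just under 0.8 kg"
actual_dose_kg = 30.0   # quantity actually poured into each tank
print(actual_dose_kg / correct_dose_kg)  # 37.5 -> "more than 37 times"
```

A one-line sanity check of this kind in the task instructions might have flagged that 30kg was orders of magnitude too much.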
The task had not been designated as a critical task, and therefore no additional measures were used to
check that it was performed in accordance with the [manual].
No control measures were in place at the [maintenance organisation’s] stores or planning departments
to prevent unusually large quantities of chemicals being issued to [engineering] staff.

The pouring of the biocide directly into the tanks was not an approved process; it should have been
pre-mixed with fuel.
The excessive level of Kathon FP 1.5 biocide caused the subsequent engine problems. The A321
experienced a number of engine start-up and performance difficulties while conducting flights two days
after the aircraft returned to operations post-maintenance.
The most serious of these occurred as it took off from London Gatwick on a crew-only ferry to London
Stansted. The aircraft’s left-hand CFM International CFM56 powerplant began “banging and surging” at
500ft, the engine’s N1 dropped to less than 40% for 25s, and the jet started to fishtail.
As the crew declared an emergency and requested a return to Gatwick, instruments indicated an
intermittent stall of the right-hand engine. The crew reduced the thrust and flew high on the approach,
preparing for a possible glide if the engine situation deteriorated. The A321 touched down safely about
11min after departure, with no injuries to the seven occupants.
2. Safety Culture/Organisational factors
There are several definitions of safety culture, but that of the UK Advisory Committee on the Safety of
Nuclear Installations covers the key elements:
“The safety culture of an organisation is the product of individual and group values, attitudes, perceptions,
competencies, and patterns of behaviour that determine the commitment to, and the style and proficiency of,
an organisation’s health and safety management. Organisations with a positive safety culture are
characterised by communications founded on mutual trust, by shared perceptions of the importance of safety
and by confidence in the efficacy of preventive measures.”
A simple definition of safety culture is “the way things get done around here”. It is also described as “how
people behave when they think that no-one is looking”.
If an organisation has the right culture in place, it will find the right people and the right technology to
deliver safe and effective performance. The need for a positive safety culture in an organisation is
fundamental.
The advantages often associated with a strong safety culture include few at-risk behaviours, low
incident rates, low turnover of personnel, low absenteeism rates and high productivity. Such
organisations usually excel in all aspects of their business.
The largest influences on safety culture are:
• management commitment and style.
• employee involvement.
• training and competence.
• communication.
• compliance with procedures.
• organisational learning.
Organisations that have a blame culture focus on individual culpability for human errors, rather than
correcting defective systems, processes and equipment. It is becoming increasingly recognised that
there should be a shift from a blame culture to a just or fair culture.
A just culture has been defined as an environment in which staff members are not punished for
actions, omissions or decisions taken by them that are commensurate with their experience and
training. However, gross negligence, wilful violations and destructive acts are not tolerated. Part 5 of
TP025 – The SMS Manual details the Altitude approach to safety culture further. How Altitude deals with
the investigation of incidents/accidents and decides on culpability is detailed in TP04 – MEMS and
MEDA.
3. Human Error
3.1 Error models and theories
Human error can be viewed in two ways: the person approach and the system approach. Each has its
own model of error causation, and each model gives rise to quite different philosophies of error
management. Understanding these differences has important practical implications for coping with the
risk of an incident.
• The person approach focuses on the errors of individuals, blaming them for forgetfulness,
inattention, or moral weakness.
• The system approach concentrates on the conditions under which individuals work and tries to
build defences to avert errors or mitigate their effect.
Below is the famous Swiss cheese or Reason model.
Organisational problems
Example: The decision not to keep a stock of rarely used spare parts, due to costs.
Local problems
Example: No arrangement has been made for the provision of covered hangar space during
adverse weather.
Unsafe acts
Example: Re-using parts due to shortages in stock, where the official procedure prescribes
replacement.
Safeguards
Example: ETOPS checks, duplicate or second inspections, check sheets and simultaneous
inspection prevention.
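The layered-defence idea can be made concrete with a toy calculation: if each layer independently lets a hazard through with some small probability (its “holes”), an accident requires the holes in every layer to line up. All probabilities below are invented for illustration, and the independence assumption rarely holds in the real world.

```python
# Toy sketch of the Swiss cheese (Reason) model: an accident trajectory must
# pass through the holes in every defensive layer. Probabilities are invented
# for illustration only.

layers = {
    "organisational problems": 0.10,  # chance the layer fails to stop the hazard
    "local problems": 0.20,
    "unsafe acts": 0.05,
    "safeguards": 0.02,
}

p_accident = 1.0
for layer, p_hole in layers.items():
    p_accident *= p_hole  # the holes must line up across all layers

print(f"{p_accident:.6f}")  # 0.000020 with these illustrative numbers
```

The point of the sketch is that independent defences in depth multiply protection: no single layer needs to be perfect.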
The PEAR model takes a slightly different approach. The mnemonic PEAR is used to recall the four
considerations for assessing and mitigating human factors in aviation maintenance:
• People who do the job.
• Environment in which they work.
• Actions they perform.
• Resources necessary to complete the job.
Finally, we have the Dirty Dozen:
The Dirty Dozen refers to twelve of the most common human error preconditions, or conditions that
can act as precursors to accidents or incidents. These twelve elements influence people to make
mistakes.
1. Lack of communication
2. Distraction
3. Lack of resources
4. Stress
5. Complacency
6. Lack of teamwork
7. Pressure
8. Lack of awareness
9. Lack of knowledge
10. Fatigue
11. Lack of assertiveness
12. Norms
Whilst the Dirty Dozen list of human factors has increased awareness of how humans can contribute
to accidents and incidents, the aim of the concept was to focus attention and resources on reducing
and capturing human error. Therefore, for each element on the Dirty Dozen list there are examples of
typical countermeasures designed to reduce the possibility of human error causing a problem.
3.2 Types of errors in maintenance tasks
A maintenance error is the unintended failure to carry out a maintenance task in accordance with the
requirements of that task, and/or failure to work in accordance with the principles of good
maintenance practice.
Aviation industry studies have found that the origin of as many as 20% of all in-flight engine shutdowns
can be traced to maintenance error.
Typical maintenance errors include:
• Electrical wiring discrepancies.
• Loose objects left in airplane.
• Incorrect installation of components.
• Fitting of wrong parts.
• Inadequate lubrication.
• Access panels, fairings, or cowlings not secured.
• Fuel or oil caps and fuel panels not secured.
Analysis of maintenance error data collected by a group of UK Maintenance Organisations found that
when the type of error was classified, four categories accounted for 78% of the errors: installation
error (39%), inattention/damage (16%), poor inspection standards (12%) and approved data not
followed (11%).
Some Error Management Techniques
Prevention: e.g., investing in training, good ergonomics, and easy-to-use work cards and manuals.
Capture: e.g., checking work, written handovers, and comprehensive functional checks and safeguards.
Tolerance: e.g., not carrying out simultaneous maintenance.
3.3 Violations
Violations can be broken into two categories:
Routine violations: violations which are a habitual action on the part of the operator and are tolerated
by the governing authority.
Exceptional violations: violations which are an isolated departure from authority, neither typical of
the individual nor condoned by management.
Violations (non-compliances, circumventions, shortcuts and workarounds) are intentional but usually
well-meaning failures where the person deliberately does not carry out the procedure correctly. They
are rarely malicious (sabotage) and usually result from an intention to get the job done as efficiently
as possible. They often occur where the equipment or task has been poorly designed and/or
maintained. Mistakes resulting from poor training (i.e. people have not been properly trained in the
safe working procedure) are often mistaken for violations. Understanding that violations are occurring
and the reason for them is necessary if effective means for avoiding them are to be introduced. Peer
pressure, unworkable rules and incomplete understanding can give rise to violations.
3.4 Implications of errors
Everyone can make errors, no matter how well trained and motivated they are. However, in the
workplace the consequences of such human failure can be severe. Many major accidents, e.g. Piper
Alpha and Chernobyl, were initiated by human failure.
The impact of errors varies enormously, ranging from simple incidents to full-blown disasters, and can
include damage to equipment and injury to personnel.
3.5 Avoiding and managing errors
Understanding these types of human failure can help identify control measures. In some cases it can be
difficult to place an error in a single category: it may result from a slip or a mistake, for example. There
may be a combination of underlying causes requiring a combination of preventative measures. It is
useful to think about whether the failure is an error of omission (forgetting or missing out a key step)
or an error of commission (e.g., doing something out of sequence or using the wrong control), and to
take action to prevent that type of error.
The challenge is to develop error-tolerant systems and to prevent errors from initiating. To manage
human error proactively it should be addressed as part of the risk assessment process, where:
• Human failure is normal and predictable, but it can be managed.
• Significant potential human errors need to be identified.
• Those factors that make errors more or less likely are identified (such as poor design,
distraction, time pressure, workload, competence, morale, noise levels and communication
systems).
• Control measures are devised and implemented, preferably by redesign of the task or
equipment.
• Managing human reliability should be integral to any safety management system.
• A poorly designed activity might be prone to a combination of errors, and more than one
solution may be necessary.
• Workers should be involved in the design of tasks and procedures.
• Risk assessment should identify where human failure can occur in safety-critical tasks, the
performance-influencing factors which might make it more likely, and the control measures
necessary to prevent it.
• Incident investigations should seek to identify why individuals have failed, rather than stopping
at ‘operator error’.
Effective risk management depends crucially on establishing a reporting culture. Without a detailed
analysis of mishaps, incidents, near misses, and “free lessons,” we have no way of uncovering recurrent
error traps or of knowing where the “edge” is until we fall over it.
3.6 Human reliability
Human reliability is the probability of humans conducting specific tasks with satisfactory performance.
Tasks may be related to equipment repair, equipment or system operation, safety actions, analysis, and
other kinds of human actions that influence system performance.
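This probabilistic definition can be illustrated with a toy example: if a job consists of a sequence of independent tasks, each with its own human error probability (HEP), the reliability of the whole job is the product of the per-task reliabilities. The HEP values below are invented purely for illustration.

```python
# Toy human-reliability calculation: overall reliability of a job made of
# sequential, independent tasks. The HEP values are purely illustrative.

heps = [0.001, 0.01, 0.003]  # human error probability for each task in turn

job_reliability = 1.0
for hep in heps:
    job_reliability *= (1 - hep)  # every task must be performed correctly

print(round(job_reliability, 4))  # 0.986 -> roughly a 1.4% chance of job failure
```

Even small per-task error probabilities compound across a long job, which is one reason capture steps such as duplicate inspections matter.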
Human reliability is determined by the condition of a finite number of performance-limiting factors,
such as design of interfaces, distraction, time pressure, workload, competence, morale, noise levels and
communication systems.
Human performance can be affected by many factors such as age, state of mind, physical health,
attitude, emotions, propensity for certain common mistakes, errors and cognitive biases, etc.
Human reliability is very important due to the contributions of humans to the resilience of systems and
to the possible adverse consequences of human errors or oversights, especially when the human is a
crucial part of a large technical system. User-centred design and error-tolerant design are just two of
many terms used to describe efforts to make technology better suited to operation by humans.
END