Threat and Error Management

Threat and Error Management (TEM) is based on work done at the University of Texas. Unlike previous approaches, it is based on observations by researchers riding along with crews in aircraft cockpits. As a result, it takes into account threats and errors that were handled successfully, not just those that led to incidents, as with traditional research methods.

Errors in flight were found to be common:

One common false assumption is that errors and violations are limited to incidents and accidents. Recent data from Flight Operations Monitoring (e.g. LOSA) indicate that errors and violations are quite common in flight operations. According to the University of Texas LOSA database, in around 60% of the flights at least one error or violation was observed, the average per flight being 1.5.

TEM uses a system approach rather than seeking a person or persons to blame. The problem with the blame approach is that it stops short of uncovering deeper causes.

Here’s an explanation of the system approach and the person approach:

Summary points

• Two approaches to the problem of human fallibility exist: the person and the system approaches
• The person approach focuses on the errors of individuals, blaming them for forgetfulness, inattention, or moral weakness
• The system approach concentrates on the conditions under which individuals work and tries to build defences to avert errors or mitigate their effects
• High reliability organisations—which have less than their fair share of accidents—recognise that human variability is a force to harness in averting errors, but they work hard to focus that variability and are constantly preoccupied with the possibility of failure

In this framework, Threats are external factors, such as adverse weather, while Errors result from human actions or inactions.

Undesired State

Examples of undesired aircraft states include: off altitude, off airspeed, off course, and being in the wrong place at the wrong time. Managed effectively, an undesired state can be recovered and margins of safety restored; mismanaged, it will likely lead to an additional error, an incident, or an accident.
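As a rough illustration (my own sketch, not anything from the briefing notes), the TEM chain of threat → error → undesired state, and the difference between effective and ineffective management, can be put in a few lines of Python; all class and function names here are made up for the example:

```python
from enum import Enum

class Outcome(Enum):
    RECOVERED = "margins of safety restored"
    ADDITIONAL_ERROR = "additional error"
    INCIDENT = "incident or accident"

def manage_undesired_state(detected: bool, corrected: bool) -> Outcome:
    """Toy model of the TEM chain: a threat or error that puts the
    aircraft (or ship) into an undesired state is either detected and
    corrected, restoring margins of safety, or it compounds."""
    if detected and corrected:
        return Outcome.RECOVERED
    if detected:
        return Outcome.ADDITIONAL_ERROR  # noticed, but mismanaged
    return Outcome.INCIDENT              # unnoticed states tend to escalate

# Example: an off-course state the crew catches and corrects in time
print(manage_undesired_state(detected=True, corrected=True).value)
```

The point of the sketch is only that the outcome depends on management of the state, not on whether a threat or error occurred in the first place.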

More terms are defined here: Flight Operations Briefing Notes – Human Performance – Error Management (PDF)


Looking at some incidents using the above source:

In the case of the Bounty, the ship was put into an Undesired State as soon as it left port, or at best when the plan changed from passing east of Hurricane Sandy to passing west, in the Gulf Stream. The error was a mistake:

Mistakes are failures in the plan of action. Even if execution of the plan was correct, it would not be possible to achieve the intended outcome.

Plans that lead to mistakes can be deficient (not good for anything), inappropriate good plans (good for another situation), clumsy (with side-effects) or dangerous (with increased risks).

Mistakes are results of conscious decision making, so they occur at rule and knowledge based performance levels. In both cases, the two typical problem areas are:
• Identifying the situation correctly
• Knowing the correct solution (“rule”) to apply

In this case: Situation not identified correctly.
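For what it's worth, the slip/lapse/mistake/violation distinction the briefing notes draw can also be sketched as a small decision function. This is my own rough rendering of the Reason-style taxonomy, not anything from the notes, and the parameter names are invented:

```python
def classify_unsafe_act(intentional_deviation: bool,
                        plan_adequate: bool,
                        executed_as_planned: bool) -> str:
    """Rough decision tree over the Reason-style taxonomy:
    violations are intentional deviations from rules or SOP;
    mistakes are failures in the plan itself (situation misread,
    or wrong rule applied); slips and lapses are execution failures."""
    if intentional_deviation:
        return "violation"
    if not plan_adequate:
        return "mistake"          # wrong plan, however well executed
    if not executed_as_planned:
        return "slip or lapse"    # right plan, wrong execution
    return "no unsafe act"

# The Bounty: the plan itself was wrong (sailing toward the storm)
print(classify_unsafe_act(intentional_deviation=False,
                          plan_adequate=False,
                          executed_as_planned=True))
```

Under this toy scheme the later debate in the thread about the Hoegh Osaka turns on exactly one input: whether the bad stability condition was a failure of execution (slip or lapse) or a failure of the plan and the system around it (mistake).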

In the case of the Costa Concordia, the ship was put in an Undesired State when it left the regular track-line for the sail-past without a formal plan. This was again a Mistake, and also a Violation of SOP.

In the case of the Sewol, the ship was in an Undesired State before it even left the berth.

No report on the Fennica yet, but it looks like the ship was put into an Undesired State as soon as the plan was made to sail over the shoal. A mistake.

The car carrier Hoegh Osaka is another interesting case in point; it was apparently unstable before it left the dock in Southampton, because of faulty load calculations.

[QUOTE=Mat;181226]The car carrier Hoegh Osaka is another interesting case in point; it was apparently unstable before it left the dock in Southampton, because of faulty load calculations.[/QUOTE]

Could be a slip or a lapse. Things can get hectic during cargo ops / sailing prep.

[QUOTE=Kennebec Captain;181227]Could be a slip or a lapse. Things can get hectic during cargo ops / sailing prep.[/QUOTE]

I respectfully disagree; it is a system failure if cargo is not loaded correctly. There are rules known by mates and masters, and the knowledge exists industry-wide. ‘Hectic’ is an excuse for not following known rules, not a slip or lapse. A slip or lapse would involve one item of cargo being loaded incorrectly.

The aircraft industry has already addressed cargo loading and flight preparation. Airplanes are just sky ships, and they have a much better safety record than their ocean-going counterparts. But it took them a long time and a lot of regulations. Industry is loath to regulate itself, but once it has no choice it does so efficiently.
Mistakes are results of conscious decision making, so they occur at rule and knowledge based performance levels. In both cases, the two typical problem areas are:
• Identifying the situation correctly
• Knowing the correct solution (“rule”) to apply

Thanks for passing this study along. From the boardroom to the galley, we all need to get past the “just git ’er done” mindset.
I have often wondered if we would even have SOLAS if rich folks like John Jacob Astor, Benjamin Guggenheim and Isidor Straus had not perished on the Titanic.

[QUOTE=tengineer1;181242]I respectfully disagree, it is a system failure if cargo is not loaded correctly. …[/QUOTE]

I agree; the linked article stalled loading, so I didn’t read it till now. This is the first time I’ve seen the info on the cargo load error. I’ve loaded at Southampton many times, and the load plans have always been good (so far). It’s one of the better ports. I was thinking it was just a ballast problem.

The threat and error management framework is, from my point of view, the most useful one I’ve come across. I’m glad that you took the time to have more than a quick look.

One thing: pointing out that operations can get hectic, with a high workload, was meant as an observation, not an excuse.

As with errors, it is important to look for the root causes of violations in the organization. Therefore, the solutions must also be implemented at that level. This also explains why violations are not necessarily always punishable.
It is in no way the intention to undermine the importance of individual responsibility for one’s own actions. Dangerous and reckless behavior should never be tolerated. However, some routine or situational violations may have been imposed on the individual by deficient organization or planning, and any individual put in the same situation might find it difficult not to violate.

[QUOTE=Kennebec Captain;181248]I agree, the linked article stalled loading so I didn’t read it till now. …[/QUOTE]

Full MAIB report here
https://www.gov.uk/maib-reports/listing-flooding-and-grounding-of-vehicle-carrier-hoegh-osaka