The relaxation of physical constraints also impacts human supervision and control of automated
systems and the design of interfaces between operators and controlled processes (Cook, 1996). Cook
argues that when controls were primarily mechanical and were operated by people located close to
the operating process, proximity allowed sensory perception of the status of the process via direct
physical feedback such as vibration, sound, and temperature. Displays were directly linked to the
process and thus were essentially a physical extension of it. For example, the flicker of a gauge
needle in the cab of a train indicated (1) the engine valves were opening and closing in response to
slight pressure fluctuations, (2) the gauge was connected to the engine, (3) the pointing indicator
was free, etc. In this way, the displays provided a rich source of information about the controlled
process and the state of the displays themselves.
[QUOTE=Kennebec Captain;187756]Speaking of gauges, this is from the essay.[/QUOTE]
I’ll read this essay. That quote says what I was trying to say, much better than I did. There are certain places in the engine room of SS Master (1922) that have clearly been touched many times. When the engine is running, these are exactly where you need to put your hand to get to know what normal is, and to monitor and respond to its needs. And it’s where her crews, the men you’ll never get to meet, the reason why the ship is still going, put their hands. So you feel a bit connected to them.
Most accident models view accidents as resulting from a chain or sequence of events. Such models work well for losses caused by failures of physical components and for relatively simple systems.
Technology is changing faster than the engineering techniques to cope with the new technology are being created. Lessons learned over centuries about designing to prevent accidents may be lost or become ineffective when older technologies are replaced with new ones.
The essay talks about safety as being an emergent property: it’s a characteristic of the system as a whole. Making changes to the system may decrease safety without it being apparent to anyone. Safety is maintained by trial and error.
[QUOTE=Kennebec Captain;187783]The essay talks about safety as being an emergent property: it’s a characteristic of the system as a whole. Making changes to the system may decrease safety without it being apparent to anyone. Safety is maintained by trial and error.[/QUOTE]
That’s the most depressing thing I’ve heard today. Maybe marine architects need to hire full-time historians.
I think she’s on the right track when she states “[I]It seems unavoidable that accident analysis involves both frequency-based probabilities (e.g. the frequency of pump failure) and expert-based estimates of the likelihood of a particular kind of failure (e.g. the likelihood that a train operator will slacken attention to track warnings in response to company pressure on timetable).[/I]”
But there is no mention of what I’ve come to see as the most insidious factor in large scale incidents: general apathy. Basically I don’t see a lot of ‘experts’ making poor estimates about specific failures. I do see a lot of experts (who are often overworked and stressed and multitasking) not realizing (or simply not caring to prioritize a safety issue over the routine tasks and paperwork piling up on their desks) that a situation requires them to take a step back and make a qualified estimate.
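To make that mixing of numbers concrete, here’s a rough back-of-the-envelope sketch of the kind of calculation she’s describing. Every number in it is invented purely for illustration; the point is only that one input comes from records (frequency-based) and the other two come from judgment (expert-based).

[CODE]
# Rough sketch only -- every number here is invented for illustration.
# One input is frequency-based (from maintenance records); the other two
# are expert-based estimates of the kind quoted above.

pump_failures_per_year = 0.2        # frequency-based: pump failure rate from records
p_operator_misses_warning = 0.05    # expert-based: chance the watch misses the warning under time pressure
p_no_backup_catches_it = 0.10       # expert-based: chance no second barrier catches it

# Expected "pump fails AND nobody catches it" events per ship-year,
# treating the three factors as independent (a big simplification).
expected_serious_events_per_year = (
    pump_failures_per_year * p_operator_misses_warning * p_no_backup_catches_it
)

print(f"~{expected_serious_events_per_year:.4f} serious events per ship-year")
# prints ~0.0010, i.e. roughly one such event per thousand ship-years
[/CODE]

The frequency part is easy to defend; the two judgment calls are exactly where the overwork and apathy I’m talking about creep in.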
[QUOTE=Emrobu;187828]And sometimes the investigation itself is at fault. Here’s a view I never heard before about the Concorde accident that happened 16 years ago today.
All kinds of juicy accident investigation details.[/QUOTE]
Here’s a quote from the Concorde article you shared which illustrates my point about apathy: “[I]Meanwhile, in the interval between Concorde’s leaving the terminal and reaching the start of the runway, something very important had changed: the wind. It had been still. Now, as the control tower told Marty, he had an eight-knot tailwind. The first thing pilots learn is that one takes off against the wind. Yet as the voice record makes clear, Marty and his crew seemed not to react to this information at all.[/I]”
[QUOTE=john;187860]I do see a lot of experts (who are often overworked and stressed and multitasking) not realizing (or simply not caring to prioritize a safety issue over the routine tasks and paperwork piling up on their desks) that a situation requires them to take a step back and make a qualified estimate.[/QUOTE]
Because safety requires attention and experience. A lot of bureaucratic overhead obviously lessens the attention which the guys with the most experience can give. Used to be that ships had clerks. Now there’s more paperwork than ever, but no clerks. We could educate clerks in all the conventions, treaties, customs, and ship’s business stuff. Maybe a one or two year certificate program with an at-sea apprenticeship. Give them their STCW and SOLAS stuff, plus a medical. No real career progression at sea, but it would be a good place for a shore rep or an office-type to start.
[QUOTE=john;187860]I think she’s on the right track when she states “It seems unavoidable that accident analysis involves both frequency-based probabilities (e.g. the frequency of pump failure) and expert-based estimates of the likelihood of a particular kind of failure (e.g. the likelihood that a train operator will slacken attention to track warnings in response to company pressure on timetable).”
But there is no mention of what I’ve come to see as the most insidious factor in large scale incidents: general apathy. Basically I don’t see a lot of ‘experts’ making poor estimates about specific failures. I do see a lot of experts (who are often overworked and stressed and multitasking) not realizing (or simply not caring to prioritize a safety issue over the routine tasks and paperwork piling up on their desks) that a situation requires them to take a step back and make a qualified estimate.[/QUOTE]
The text is dense and not an easy read, but sections 2.3 and 2.4 are worth a careful read.
People bend the rules to get the job done.
From 2.3:
Human error is usually defined as any deviation from the performance of a specified or prescribed
sequence of actions. However, instructions and written procedures are almost never followed exactly
as operators strive to become more efficient and productive and to deal with time pressures.
Instead of decomposing systems and accident explanations into structural components and a flow
of events as do most event-based models, STAMP describes systems and accidents in terms of a
hierarchy of control based on adaptive feedback mechanisms.
It’s not as complicated as it sounds: the controls are things like levels of crew training, inspections, crew credentials, etc.
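To get my head around what a “hierarchy of control based on adaptive feedback” might look like written down, here’s a toy sketch in Python. All the level names, constraints, and feedback channels are my own made-up examples, not taken from the essay.

[CODE]
# Toy sketch of a STAMP-style control hierarchy.
# Levels, constraints, and feedback channels are made-up examples, not from the essay.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ControlLevel:
    name: str
    constraints: List[str]               # what this level imposes on the level below
    feedback_channels: List[str]         # what it relies on coming back up
    controls: Optional["ControlLevel"] = None   # the level below it

# Bottom of the hierarchy: shipboard operations controlling the physical process
vessel_ops = ControlLevel(
    name="Shipboard operations",
    constraints=["standing orders", "checklists", "permit-to-work"],
    feedback_channels=["alarms", "logbooks", "near-miss reports"],
)

company = ControlLevel(
    name="Company management",
    constraints=["SMS procedures", "crewing levels", "maintenance budget"],
    feedback_channels=["internal audits", "incident reports", "inspection results"],
    controls=vessel_ops,
)

regulator = ControlLevel(
    name="Flag state / class / port state",
    constraints=["convention requirements", "class rules", "detentions"],
    feedback_channels=["casualty statistics", "survey findings"],
    controls=company,
)

# In this view an accident is a control problem: a constraint at some level is
# missing or not enforced, or the feedback channel that should reveal the drift
# has gone quiet -- not just a single broken part at the bottom.
level: Optional[ControlLevel] = regulator
while level is not None:
    print(f"{level.name}: enforces {level.constraints}, watches {level.feedback_channels}")
    level = level.controls
[/CODE]

That lines up with the quoted paragraph: instead of tracing a chain of component failures, you ask which constraint or feedback loop stopped doing its job.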
I thought this was interesting.
Maritime transportation has been referred as an ‘error-inducing system’ (Perrow, 1999,
Rijpma, 2003). It has been considered as a profit-oriented, authoritarian, poorly organized, and
weakly unionized industry (Linstone, Mitroff, & Hoos, 1994; Burke, Cooper, & Clarke, 2011), in
which multiple errors might bring out unexpected interaction that can defeat a safety system
(Perrow, 1999). In such a system, operator error is prominently given as an explanation for an
accident as failures and consequences of actions appear immediately at the level of proximate
personnel. This argument has put pressure on the identification and elimination of human errors,
which has long been considered activities of critical importance for maritime accident
investigation. This traditional view of safety has been criticized by many researchers (e.g., Woods
et al., 1994; Amalberti, 2001; Leveson, 2004; Dekker, 2006; Hollnagel, 2008), as it confuses safety
with reliability (Besnard & Hollnagel, 2014).
Maritime transportation has been referred as an ‘error-inducing system’ (Perrow, 1999,
Rijpma, 2003). It has been considered as a profit-oriented, authoritarian, poorly organized, and
weakly unionized industry (Linstone, Mitroff, & Hoos, 1994; Burke, Cooper, & Clarke, 2011), in
which multiple errors might bring out unexpected interaction that can defeat a safety system
(Perrow, 1999).
I’ve argued with old-timers who have this view. I think that a company that seems to put profit ahead of safety is actually going to be less profitable than if safety were a priority. My argument is that accidents are usually costly. Even if it’s something simple that wouldn’t concern anyone except the statistics minders (say a bad fall down the stairs), it’s still quite expensive (potential medevac, finding a short-notice replacement, etc.). Old-timers say to me, “Kid, I told you not to drink the koolaid at the safety meeting.”
The essay is saying that complex systems tend to “drift” over time to a less safe condition and that system designers need to take that “drift” into account.
One factor in this drift is regulatory capture, in which the regulators and the regulated find that their interests and viewpoints become more and more aligned over time.
A second way that systems tend to drift towards a less safe condition is when operators explore the limits of the system to increase efficiency. Some rules can be bent or broken without significantly increasing risk.
However, in some cases safety is compromised. For example, in the Concorde crash it’s likely that the pilots believed they could get away with the change in wind direction because they had gotten away with it in the past. Only this time the wind direction combined with other elements to cause the crash.
One point of the essay is that it does little good to isolate one factor and claim that removing it would have prevented the crash, even though that is logically true.
Another point is that the people involved often get blamed for the incident when, to some degree at least, they may have just had the bad luck to be in the wrong place at the wrong time. Consider the practice of taking shortcuts, taking off downwind being one example. When the system is robust, before it has drifted too far, it can (presumably) absorb the error without incident. Later, when the system has drifted further and the shortcut combines with other seemingly unrelated factors (the CG too far aft, the shutting down of an engine at low altitude), it cannot, and disaster strikes.
As is said in the essay, operators are almost never playing by the rules 100%, so in the event of an incident the violation is seen as a cause, when in fact the rule may have been broken regularly in normal operations.
A third reason for drift is changes to the system that have unintended and unforeseen consequences. An example is adding lifeboats to comply with regulations intended to make the vessel safer, which makes the ship unstable in some situations.
So is the answer to have a holistic view of the system at all times? In my conception the person best placed to see the system as a whole (operators, technology, and conditions) is a captain or chief. I don’t think a smarter alarm panel is the answer, because under certain conditions an operator may come up with a benign reason for the alarm and ignore it. Indeed, an overly “smart” alarm system (one which is too sophisticated to be understood as anything but a magic box) could make things worse, because the cause of an alarm may be difficult to track down.
Regulatory capture is an interesting idea. I’ve worked with safety guys who become part of the team because they’re frequently the medic; they don’t have much technical overlap with the rest of us, they feel isolated and out of their depth, and so they dive right in with us, assuming we know best in our areas and that the way we do things is fine. They depend on us to write safety cards. Then I worked with one guy who was a safety bird dog from the client. What a miserable cuss! Nobody liked him. I don’t know how he went on year after year with that level of animosity directed at him. I’m wondering now if he did it on purpose, an attempt to prevent alignment between himself and the crew. He was very good at it. Years later, the thought of him still raises my hackles. But I don’t know if he was any more effective than the medics.
Self-Deception and Regulatory Compliance
This might be one of the ways that the drift over time towards less safe operations occurs: how the culture at a company or aboard ship slowly allows a higher level of risk.
Self-deception comes in a variety of forms. Overconfidence, the best known, provides a good explanation for the wide variety of positive economic outcomes that accompany outsized risks, such as regulatory violations. When regulatory enforcement becomes lax, overconfidence pays off greatly. In-house risk takers gain status and power, becoming evangelists for taking even more risk.
[QUOTE=Emrobu;187863]Because safety requires attention and experience. A lot of bureaucratic overhead obviously lessens the attention which the guys with the most experience can give. Used to be that ships had clerks. Now there’s more paperwork than ever, but no clerks. We could educate clerks in all the conventions, treaties, customs, and ship’s business stuff. Maybe a one or two year certificate program with an at-sea apprenticeship. Give them their STCW and SOLAS stuff, plus a medical. No real career progression at sea, but it would be a good place for a shore rep or an office-type to start.[/QUOTE]
I expect that’s the desire of every ship master, but it will never happen until regulators require it.
This is not worth a new thread, so I’ll stick it here.
This is about how people look at incidents in general, but specifically I was browsing through the thread about the second hearing and I realized that in places it’s two counter-threads. One sub-thread is an attempt to create a plausible narrative. The second sub-thread is (an attempt at least) to collect and sort information to see how well it fits with other known facts (or assumptions) to come up with a possible scenario.
The discussion about the anemometer may illustrate the two different approaches. The statement that it’s [B]possible[/B] that an anemometer [B]may[/B] have led to increased situational awareness that [B]possibly may[/B] have changed the outcome is a statement made in probabilistic terms. The probabilistic counter-argument is that while the statement is (or might be) true, the probability is too low to be worth considering.
The narrative counter-argument is not probabilistic but sequential, story-like. The objection was based upon how poorly this fits into a story: if the mate wonders about the wind, the next step in the story is that the mate would have a look outside.
Another example in the thread about the second hearing is the contrast between the first post and the subsequent ones. The first post establishes a narrative, that the pertinent facts are all known or can be easily learned, while subsequent posts evaluate information as the hearing proceeds.
One good clue as to which approach is being used is the degree of certainty expressed in the language. In the narrative point of view, terms like “there’s no way” or “I’m 100% convinced” etc. are often used.
That article is genius. In addition to helping us tease out what kind of narrative we are telling ourselves and reminding us why it’s better to keep an open mind, it also explains our fascination with figuring things out and convincing people by story-telling/story-hearing. The article also points out that there’s not much difference between story-telling and story-hearing, which is profound.
Adios, ladies and gentlemen. Now that I’ve been given the keys to false narrative, I’m going to become a great mystery writer.