That interpretation doesn’t work for me. They were already aware that a part of the oil terminal was anomalous, and even noticed that the aspect was changing. If the object was indeed stationary, this would have indicated that their initial position estimate was way off, which should have set all kinds of alarm bells ringing, prompting another couple of bearing fixes or just a quick glance at the radar screen. Instead, the OOW didn’t think too hard about it:
The OOW on HNoMS Helge Ingstad eventually noticed that the ‘object’ on the starboard side seemed to be closer to the frigate’s course line than first assumed, leaving less distance to the closest point of approach. The OOW has stated that the ‘object’ was primarily observed visually and that the OOW did not check the radar for details.
I don’t know about that. I’d expect our yachtie to navigate primarily by visual means, and be fatigued enough to fail to analyze the situation diligently. I have a whole different set of expectations for the crew of one of our warships, or at least I used to. The OOW’s defined duty was to monitor the situation by all available means and intervene if it got out of hand. It doesn’t look like he even tried.
Now you’re touching on logical omniscience as it pertains to navigation, an interesting subject that I wanted to make a thread about. In the end I couldn’t put together a post that stood on its own, and deleted the draft, but it may fit here. One of my favorite AI pundits did this little presentation on how having limited information and time to analyze a subject changes our perception. The first half is relevant here; the second half, where he gets into mitigation schemes based on financial models, doesn’t really translate to the subject at hand:
In this context, the navigator is an agent that seeks to deduce his position with the highest possible precision, up to a level where further precision is not useful, which doesn’t really happen in practice. A navigator performing ded reckoning during passage is logically omniscient; he has all the time in the world to make his calculations, examine the data, and cross-check and refine the result. Still, he gets to a point where he decides “that’ll do”, because he’s pushing diminishing returns. Herein lies an important difference between computers and humans, but I digress.
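To make the "that'll do" point concrete, here's a toy sketch of the arithmetic behind a ded reckoning leg. The function name and the flat-earth approximation are my own choices for illustration, not how a navigator would actually work it on a chart; the point is that the computation itself is trivial and unhurried, which is exactly why the navigator can cross-check it at leisure:

```python
import math

def dead_reckon(lat, lon, course_deg, speed_kn, hours):
    """Advance a position by course and speed over time.

    Flat-earth approximation: fine for short legs, degrades near
    the poles and over long distances. lat/lon in decimal degrees,
    course in degrees true, speed in knots.
    """
    distance_nm = speed_kn * hours
    course = math.radians(course_deg)
    # One minute of latitude is one nautical mile by definition.
    dlat = distance_nm * math.cos(course) / 60.0
    # A minute of longitude shrinks with the cosine of latitude.
    dlon = distance_nm * math.sin(course) / (60.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Steaming due north at 17 knots for one hour advances you 17' of latitude.
lat, lon = dead_reckon(60.0, 5.0, 0.0, 17.0, 1.0)
```

The "diminishing returns" kick in not in this arithmetic but in refining its inputs: current set and drift, leeway, compass error, all of which swamp the rounding precision long before it matters.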
Inshore night navigation by visual means is a good example of a task in which you’re far from logically omniscient, but rather bound by observation and processing capacity. In fact, the cognitive load is such that I strongly prefer to have a second person in the wheelhouse for this task, even with solid margins (as opposed to barreling down the Hjeltefjord at 17 knots). Doing it safely requires a constant regimen of cross-checking every fix in sequence, both for positioning the vessel and for estimating the movement of others, and there is considerable learned skill involved, beyond understanding the theory and executing it on paper.
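For anyone who hasn't done it: the geometry of a single visual fix is simple enough, which is what makes the workload deceptive. A minimal sketch of a two-bearing fix in a local flat chart frame (east = x, north = y), assuming a hypothetical helper and ignoring compass error, which is precisely the stuff the cross-checking regimen exists to catch:

```python
import math

def two_bearing_fix(mark1, brg1_deg, mark2, brg2_deg):
    """Position from compass bearings to two charted marks.

    Each bearing defines a line of position through its mark; the
    fix is their intersection. Coordinates are (east, north) in a
    local flat frame; bearings are degrees true from the observer.
    """
    u1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    u2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))
    dx, dy = mark2[0] - mark1[0], mark2[1] - mark1[1]
    # Solve mark1 - t1*u1 == mark2 - t2*u2 (observer lies on the
    # reciprocal ray from each mark) via Cramer's rule.
    det = -u1[0] * u2[1] + u2[0] * u1[1]
    if abs(det) < 1e-9:
        raise ValueError("lines of position are parallel; no fix")
    t1 = (dx * u2[1] - u2[0] * dy) / det
    return (mark1[0] - t1 * u1[0], mark1[1] - t1 * u1[1])

# A mark due north bears 000, a mark due east bears 090:
# the fix falls at the origin.
fix = two_bearing_fix((0.0, 10.0), 0.0, (10.0, 0.0), 90.0)
```

Doing this once on paper is trivial. Doing it every few minutes, at night, for your own ship and for every contact, while conning the vessel, is where the processing capacity runs out.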
The relative difficulty of visual inshore navigation could explain some aspects of this accident, such as how they kept thinking of the Sola as a stationary object in the face of obvious evidence to the contrary. This only works if we assume that the OOW forgot what he was supposed to be doing and got deeply involved in the training task. Even then, this is such an epic, multi-layered fuckup that I struggle to wrap my head around it.