"your instrumentation and outside visual cues"

This is why CTI went to all the trouble of inventing its pilot training program, shown in this series of videos. Unless you strip trainees of all recourse to electronics, their development of these skills will be very slow. Conversely, throw physical discomfort and hazard into the equation and learning will occur all the faster.


’I know exactly where we are’ have been the last words before many a grounding.

Eyes lie to everyone and instruments lie to those unskilled in using them.

I don’t think it a good reason for junior officers to neglect improving their skills, but over-reliance on visual navigation (failure to cross-check) does happen.

The pilot checked the settings on the PPU and found that there was an 18-metre offset to starboard.

The pilot was unable to remove the offset so decided to discontinue using the PPU for monitoring the ship’s progress. Instead the pilot conned the ship visually and used the ship’s radar as an aid. The pilot did not tell the rest of the bridge team that he had stopped using the PPU.

From the report:

The bridge team were all primarily navigating by eye and not verifying that what they were seeing correlated with the information in the ship’s electronic navigation systems. All of the electronic navigation aids showed that the Leda Maersk was off-track and nearing the limit of safe water to port.

Report here
Grounding of container ship Leda Maersk Otago Lower Harbour, 10 June 2018

gCaptain forum thread here: Visual Navigation Implicated in Container Ship Grounding


The title of this Norwegian poem, written back in the 19th century, could serve well as a reminder to young navigators today:

Source: Norwegian Folk - Løft ditt hode, du raske gutt! (English translation)

The usual method is to observe instruments and confirm visually. In high-workload situations, this approach can overload a single watch officer, requiring a second officer on the bridge. In most situations a more cognitively efficient method is to observe visually and confirm or refine with instruments; training watch officers in this approach can reduce the time two officers are needed on the bridge.


The importance of instruments such as ARPA, AIS, ECDIS, radar, etc., in marine navigation is well established. No reasonable mariner would doubt their value.

That said, my experience at sea with visual observation is supported by science. It has been studied formally by the psychologist J.J. Gibson (and is consistent with Endsley’s situation awareness model and the work of Dreyfus and Kahneman).

Gibson’s two key concepts are “optic flow” and “motion parallax.”

Here’s a ChatGPT summary of those two concepts in the context of watchkeeping.

ChatGPT Summary

1. Visual-First = Direct Perception

When a navigator watches the scene outside, they’re not consciously measuring ranges or doing vector math. Instead, they’re reading changes in the visual field in real time.

  • Optic Flow:
    As own ship and other vessels move, the entire outside view is in structured motion. Threat vessels have a distinct flow pattern—remaining on a constant relative bearing while increasing in size. Non-threat vessels “slide” across the field of view.
  • Motion Parallax:
    Closer vessels, landmarks or buoys shift position more rapidly against the background than distant ones. Even small head or body movements on the bridge give subtle but immediate cues about which objects are nearer and how fast they’re closing.

2. Why This Beats Instrument-First in High Traffic

  • Instruments present processed, symbolic data—a delayed and simplified version of the scene.
  • Visual scanning taps into deeply trained perceptual systems that can track multiple moving objects in parallel without conscious calculation.
  • By maintaining an outside scan, the watch officer is physically engaged with the dynamic optic flow, constantly updating the mental model without having to switch between “raw world” and “instrument world.”

3. Cognitive Efficiency

  • Once the mental model is visually established, verifying with an instrument is a quick, low-load check (e.g., “that ship’s bearing is steady”).
  • This is System 1 territory—automatic, fast, and low-effort once learned.
  • In contrast, starting with instruments requires mentally reconstructing the scene from symbols, then cross-checking outside, which is slower and more prone to overload in high-traffic situations.

Bottom line: Gibson’s optic flow + motion parallax explain why the “visual first, instruments to confirm” approach works—because it uses the brain’s built-in capacity for direct perception of motion and distance, turning the outside view into a live, self-updating collision-risk display.

The use of optic flow and motion parallax to navigate is not some esoteric skill but is built into how humans navigate every day, whether on the water, walking or driving.

It’s not a binary choice of instruments vs. visual observations but, in the appropriate circumstances, a matter of reducing workload by using visual observations to filter and prioritize which elements require the use of instruments.
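The “constant relative bearing while increasing in size” pattern described above is the classic CBDR (constant bearing, decreasing range) collision cue. As a minimal sketch of that logic only—the function name, observation format, and 2° bearing tolerance are illustrative assumptions, not any standard:

```python
# Illustrative sketch of the CBDR ("constant bearing, decreasing range")
# cue. Observations are (relative_bearing_deg, range_nm) pairs taken at
# successive looks, oldest first. The 2-degree tolerance is an assumed
# figure; bearing wraparound near 000 is ignored for brevity.

def is_cbdr_threat(observations, bearing_tol_deg=2.0):
    if len(observations) < 2:
        return False  # need at least two looks to judge relative motion
    bearings = [b for b, _ in observations]
    ranges = [r for _, r in observations]
    bearing_steady = max(bearings) - min(bearings) <= bearing_tol_deg
    range_closing = all(r2 < r1 for r1, r2 in zip(ranges, ranges[1:]))
    return bearing_steady and range_closing

# A vessel holding near 045 relative while closing from 6 nm to 3 nm:
print(is_cbdr_threat([(45.2, 6.0), (45.0, 4.5), (44.8, 3.0)]))   # True
# A vessel "sliding" across the field of view (bearing drawing aft):
print(is_cbdr_threat([(45.0, 6.0), (52.0, 5.5), (60.0, 5.2)]))   # False
```

The point isn’t that anyone computes this consciously; it’s that the trained eye performs the equivalent check continuously and in parallel across every target in view.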

An example of how folks can believe their screens over their eyes -

I was sent by an oil co to observe a ship that was aground at anchor in Stapleton Anchorage in NY with their oil on it.

When I got up to the bridge and spoke to the Master he was insistent that his vessel was not aground - pointing at his nav plotter. The conversation went something like this:

“I understand, Captain - thanks. Just a quick question - why do you think you are pointing in a different direction than these other 5 ships anchored around us?”


That’s funny. What was the outcome for that captain?

Not sure - my only job there was to watch - and do nothing. Specific orders from the lawyer. Interesting instructions - so if they do something really stupid, I am supposed to just watch it happen? Yep, that’s it. Ship swung off on the next tide - no issues - went in and discharged.


I had a similar order when attending as MWS on rig moves; “observe and report but no “hands on” action”.

On one rig move in the Java Sea in the mid-1970s the Superintendent, who was giving instructions to the towing vessels, lost total control of the situation.

We were heading toward a fixed platform at speed. I grabbed his walkie-talkie and got the rig stopped before hitting the platform.
I brought the rig to a stop and pinned 100 ft. away, in position to run anchors for final approach.

The Superintendent didn’t protest but also didn’t thank me.
When I got back to Singapore I was called to the office and told that the drilling contractor had cancelled the contract because the Superintendent had reported that I had “overstepped my authority”.

Should I just have stood there and watched as the rig ran into a live gas platform, noting the time he lost control and the time we hit?
Not in my nature.


So, what’s the big deal about retinal optic flow? Imagine you’re on the bridge of a ship, looking out at the horizon. As you turn the wheel, the world outside your window seems to flow in a certain way. That’s retinal optic flow – the pattern of motion on your retina as you move through the environment. Nguyen’s study shows that this flow isn’t just a pretty sight; it’s a crucial part of how we navigate.

Link to the paper: Intermittent control and retinal optic flow when maintaining a curvilinear path | Scientific Reports

From the paper.

Vision is arguably one of the most complex perception senses enabling animals to navigate and exploit the ever-dynamic surrounding world as we understand it [1,2,3]. Although there is a scientific consensus that vision perception plays a critical role, it is not yet fully understood how humans use it to guide, propel, or control their egomotion on foot [4,5] or in vehicles.

How do these people all miss each other?? Well, mostly miss each other anyway.


“Miss” in what context?

Not bump into each other - in support of your point above.


How we process the information from each eye, and to a lesser extent each ear and any stimulus to our skin, is what makes humans capable of complex tasks.


At 18 kts, including the approach to the turns, this scenario takes about 30 minutes. The OOW needs to be aware of where the ship is in relation to the track-line.

In that 30 minutes I’d say a minimum of four trips to the ECDIS would be prudent for an inexperienced OOW: once sometime before and once after each of the two turns. Making a trip every 5 minutes to check the ECDIS for position would be acceptable.
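A back-of-envelope check of those figures (speed and intervals taken from the scenario above, nothing more):

```python
# Ground covered between position checks at a given speed.
# The figures are from the scenario in the post:
# 18 kts, a ~30-minute leg, checks roughly every 5 minutes.

def distance_run_nm(speed_kts, minutes):
    return speed_kts * minutes / 60.0

print(distance_run_nm(18, 30))  # 9.0 nm over the whole 30-minute leg
print(distance_run_nm(18, 5))   # 1.5 nm of ground covered between checks
```

So a 5-minute cadence means the ship runs about a mile and a half between looks at the screen - a useful yardstick for judging whether that interval fits the available sea room.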

The typical academy-trained OOW believes watching ahead is the lookout’s job and regards the entire concept of conning visually for any amount of time as an unnecessary absurdity.

From the article:

But what does this mean for maritime professionals? Well, understanding how we process visual information for navigation could lead to better training programs for mariners

We were inbound on the St Johns River once and the ship entered a bank of heavy fog, visibility maybe 50-100 meters. The pilot’s easygoing manner abruptly changed. He became very focused, giving the radar his full attention.

At one point, without looking up from the radar, the pilot, somewhat brusquely, told me where to expect to see the next buoy. When I saw and reported the buoy the pilot gave it a quick glance and then returned his full attention to the radar.

The only mention of the use of visual information in this manner that I’ve found is in the narrative about Nathaniel Bowditch in the first section of The American Practical Navigator (Bowditch).

Bowditch made a total of five trips to sea, over a period of about nine years, his last as master and part owner of the three-masted Putnam. Homeward bound from a 13-month voyage to Sumatra and the Ile de France (now called Mauritius), the Putnam approached Salem harbor on December 25, 1803, during a thick fog without having had a celestial observation since noon on the 24th. Relying upon his dead reckoning, Bowditch conned his wooden-hulled ship to the entrance of the rocky harbor, where he had the good fortune to get a momentary glimpse of Eastern Point, Cape Ann, enough to confirm his position. The Putnam proceeded in, past such hazards as “Bowditch’s Ledge” (named after a great-grandfather who had wrecked his ship on the rock more than a century before) and anchored safely at 1900 that evening. Word of the daring feat, performed when other masters were hove-to outside the harbor, spread along the coast and added greatly to Bowditch’s reputation. He was, indeed, the “practical navigator.”

In both cases, with the river pilot and Bowditch, even a single visual cue reduces uncertainty and increases confidence.


The Bowditch story is quite interesting, particularly if you look at the chart of the area. Eastern Point is actually at Gloucester Harbor, and it lies about 9nm from Salem Harbor, with a LOT of rocks, ledges and islands in the way. Tidal currents in the area are not trivial, so a very challenging dead-reckoning exercise, even with that one sighting.


Here’s a recent article about how motion from visual information is processed in the brain.

Visual thalamus reshapes information beyond simple relay function, study finds

What this means, says Liang, is that inputs from the superior colliculus contribute to the computation of motion in the thalamus.

“There’s substantial computation going on to enrich and selectively enhance visual information before it even gets to the cortex,” she says.

The information shown on screens (ARPA, ECDIS) in the wheelhouse has already been processed and simplified so it can be understood quickly (“the ship is right of track”), whereas visual information (out the window) requires watching the motion over a longer period of time while it’s processed by the brain.
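That “right of track” readout is itself the product of a computation. A minimal sketch of one way such a figure can be derived - a signed cross-track distance on a local flat-earth approximation, valid over short legs; the function names and coordinates are illustrative, not how any particular ECDIS is implemented:

```python
import math

# Signed cross-track distance of a ship from a straight track leg,
# using a local flat-earth approximation (fine over a few miles).
# Positive = right of track, negative = left. A sketch of the kind
# of processing done before a display can say "right of track".

def cross_track_nm(start, end, ship):
    """start, end, ship: (lat_deg, lon_deg). Returns signed XTD in nm."""
    lat0 = math.radians(start[0])
    # Convert to local nm east/north of the leg's start point
    # (1 minute of latitude ~ 1 nm; longitude scaled by cos(lat)).
    def to_xy(p):
        return ((p[1] - start[1]) * 60.0 * math.cos(lat0),  # east
                (p[0] - start[0]) * 60.0)                   # north
    ex, ey = to_xy(end)
    sx, sy = to_xy(ship)
    leg_len = math.hypot(ex, ey)
    # 2-D cross product: positive when the ship lies right of the leg.
    return (sx * ey - sy * ex) / leg_len

# Ship 6 nm east of a due-north leg near the equator:
print(round(cross_track_nm((0.0, 0.0), (1.0, 0.0), (0.5, 0.1)), 1))   # 6.0
print(round(cross_track_nm((0.0, 0.0), (1.0, 0.0), (0.5, -0.1)), 1))  # -6.0
```

The screen compresses all of that into one number and an arrow; the eye, by contrast, has to extract the equivalent from raw motion over time.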

Here’s a recent video that discusses the article:

Transcript

Rethinking How the Brain Processes Vision

Everything we thought we knew about how the brain processes vision may be fundamentally wrong.

For decades, neuroscientists believed that a crucial brain region called the visual thalamus functioned like a simple postal service—collecting visual data from the eyes and passing it along to other brain areas without much processing.

But groundbreaking new research published in Neuron has overturned this assumption, revealing that this so-called relay station actually performs sophisticated computations that could reshape our understanding of vision.


A New Look at the Visual Pathway

Imagine watching your child play in the backyard. They pick up a toy and bring it to you. In that instant, visual information begins an incredible journey from the retina through the brain’s complex networks.

Rather than moving along a single track, the process is more like a busy subway system, where information makes strategic stops at different neural “stations” and gets processed and refined before reaching its final destination.

Dr. Liang Liang, assistant professor of neuroscience at Yale School of Medicine and lead researcher of the study, suspected that something important was being overlooked. Previous studies had shown that input from the retina accounts for only about 10% of the information entering the visual thalamus. That raised a major question: what is the remaining 90% doing?


Clues From the Superior Colliculus

Liang’s curiosity focused on inputs from the superior colliculus, a midbrain structure responsible for lightning-fast reflexive responses to visual threats, such as instinctively dodging when something flies toward your face.

This suggested that the visual system isn’t just one straight pathway, but a complex highway interchange where multiple streams of information converge, interact, and influence one another.


Watching the Brain in Action

To investigate, Liang’s team used a remarkable technique. They genetically modified mouse brain cells to act like tiny biological lighthouses: cells receiving information from the retina glowed green, while those connected to the superior colliculus glowed red.

This allowed the researchers to watch both information streams in real time as the mice viewed moving images—a live neural light show of brain activity.

What they saw was striking. Instead of random or chaotic wiring, the connections were highly organized and purposeful. The brain clearly invests enormous energy during development to build these circuits precisely. “This level of organization doesn’t happen by accident,” Liang explains. “It’s serving a critical function.”


Silencing the Pathways

The team then tested that function. By selectively silencing the inputs from the superior colliculus while leaving the retinal inputs intact, they discovered that thalamic cells became far less responsive to visual information—particularly losing their ability to detect and process motion in specific directions.

This finding challenges the long-held assumption that vision is simply the eye capturing images like a camera. Instead, vision emerges from multiple brain systems working together to construct, enhance, and interpret visual reality before we are even consciously aware of it.

“The thalamus isn’t just a passive relay station,” Liang emphasizes. “It’s an active participant in creating our visual experience of the world.”


Bigger Questions About Perception

The study’s lead author, a graduate student in Liang’s lab, adds: “We want to understand how thalamic cells actually use this integrated information to shape what we ultimately perceive.” The team is now exploring other inputs to the visual thalamus, searching for further layers of complexity.

This discovery raises profound questions. If the brain is actively constructing visual information before we’re even aware of seeing something, how much of what we perceive as reality is actually interpretation rather than objective truth? And if this complexity exists in vision, what other supposedly “simple” brain functions might also involve hidden computation?


Implications

The implications go far beyond neuroscience theory. A deeper understanding of how the brain processes vision could revolutionize treatments for vision disorders, inspire advances in artificial intelligence, and shed light on conditions such as hallucinations or perception disorders that affect millions worldwide.


From the transcript:

The team then tested that function. By selectively silencing the inputs from the superior colliculus while leaving the retinal inputs intact, they discovered that thalamic cells became far less responsive to visual information—particularly losing their ability to detect and process motion in specific directions.

This finding challenges the long-held assumption that vision is simply the eye capturing images like a camera. Instead, vision emerges from multiple brain systems working together to construct, enhance, and interpret visual reality before we are even consciously aware of it.

“The thalamus isn’t just a passive relay station,” Liang emphasizes. “It’s an active participant in creating our visual experience of the world.”

This is not all that new. From Andy Clark’s book The Experience Machine

This and similar research supports the practice, when appropriate, of watching out the window over a longer period (minutes, not seconds) and using the instruments to verify the visual information (seconds, not minutes) - giving the brain time to build a model.


The basic skills of traditional piloting (navigate vessel close inshore sans electronics) are precisely the same as used by a hunter: pattern recognition, gauging relative motion, and keeping your mouth shut.
