I tried to point you guys to this before but shipowners have the ability to specify the kind of bridge you speak of.
Or are you saying the time has come to make it mandatory rather than an extra class notation? Now that would be interesting.
Agreed
The captain also, in this situation, has to trust (but verify) that the pilot is doing their job. @244 posted about this dynamic.
Captains and pilots are accustomed to splitting duties so presumably that would reduce the chaos a bit.
Texas, KP and KC
I agree entirely with your recent posts above.
I just think our collective jobs could be made much easier if the use of deafening alarms was controlled.
I got distracted last night into a discussion on aviation, which is totally off topic, for which I apologise. But while on the subject of accident reports, I think it would be beneficial to look at the Emma Maersk accident approaching the Suez Canal.
I’m afraid I can’t link it; I am sure someone can.
What I took from it was:
The stern thruster shed one or more blades which punched through the tunnel. This should have been contained by watertight doors but the bulkhead penetrations failed and the generators were in imminent danger of being flooded.
The vessel berthed at an adjacent container terminal and grounded.
From my interpretation of the accident the crew performed magnificently during the entire event.
One of the notes from the crew was that communication was hampered by the constant sounding of loud alarms.
This incident occurred sometime in the 2000s.
It seems to me a shame that lessons have not been learnt with regard to alarms.
There is something very odd in the PR.
Referring to PR page 9, it is clearly mentioned that “Most bridge equipment also lost power,…”. This would only be possible if both the DC UPS (Uninterruptible Power Supply) and the AC UPS had failed when the 1st blackout occurred.
The central DC UPS and central AC UPS are battery-backed; they supply mission-critical devices, including various bridge equipment, without any interruption.
Referring to page 11, there must be inaccuracies: emergency bridge equipment is supplied by UPS power, and unless there was a very serious issue with the UPSs, a failure of both DC and AC uninterruptible power is quite unlikely and would have caused a whole series of problems; there would also be no records from the RADAR, ECDIS or other displays without a local battery.
The VDR captures either vectorial screen image data or frame-grabber-acquired screen image data.
Typically the VDR will record one image every 15 seconds for the X-band RADAR, the S-band RADAR and the ECDIS; if there are multiple displays for the same function, the acquisition is interleaved, though X-band and S-band count as separate functions (i.e. one image is sampled every 15 s for each frequency band).
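A toy sketch of that interleaved capture schedule (a sketch only; the display names and the simple round-robin per function are assumptions about a typical setup, not this specific VDR):

```python
from itertools import cycle

# One screen image per function every 15 s; where a function has several
# displays, captures alternate between them. Names are illustrative only.
SOURCES = {
    "X-band radar": cycle(["X-band display 1", "X-band display 2"]),
    "S-band radar": cycle(["S-band display 1"]),
    "ECDIS":        cycle(["ECDIS display 1", "ECDIS display 2"]),
}

def capture_tick(t_s: int) -> list[str]:
    """Return which screen image is grabbed for each function at a 15 s tick."""
    return [f"{t_s:>3} s: {func} -> {next(displays)}"
            for func, displays in SOURCES.items()]

for t in range(0, 60, 15):
    print("\n".join(capture_tick(t)))
```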
The VDR records a lot of data, but many devices stopped working when the 1st blackout occurred and stayed down until either emergency or regular power was restored.
Some critical controls remain powered by local batteries. The partially redundant ME controls must be able to manage the emergency lubrication, as the ME will windmill for some time before stopping, though here the speed was low (the main ME controllers are redundant but the cylinder control units are not, though the engine can be run without all cylinders operating).
Without UPS power, only data from devices with internal batteries could have been recorded by the VDR, but overall I still doubt that the UPSs failed. What is sure is that the wording of the PR is not clear.
I am not referring to the wording about the EG: as the NTSB has not formally established if and when the EG came online, the conditional form must be used in the text. It simply means that for now no details about the EG are confirmed; the crew mentioned that the EG came online, but the NTSB is not confirming that for now as the investigation is obviously not complete.
The VDR itself never stopped working as it remained powered by the internal 12 V 24 Ah sealed lead-acid battery of the Recording Control Unit (which lasts about 2 hours depending on the load).
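As a rough sanity check of that figure (a sketch only; the load and the usable battery fraction are assumptions, not values from the report or the VDR documentation):

```python
# Rough backup-time estimate for a 12 V 24 Ah sealed lead-acid battery.
V_NOM = 12.0        # nominal battery voltage [V]
CAPACITY_AH = 24.0  # rated capacity [Ah]
USABLE = 0.7        # usable fraction at a moderate discharge rate (assumption)
LOAD_W = 100.0      # assumed recording-unit load during backup [W]

energy_wh = V_NOM * CAPACITY_AH * USABLE   # ~200 Wh usable
runtime_h = energy_wh / LOAD_W             # ~2 h, consistent with the quoted figure
print(f"Estimated backup time: {runtime_h:.1f} h")
```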
There is still the question about the black smoke, but unfortunately no one who knows that type of engine well has replied.
Several previous replies in this thread do not make much sense as they refer to much smaller ships. Unless I specified otherwise, I was specifically referring to either M/V Dali or other large container ships. For example, MV and LV breakers are often not operated the same way, the C/O (Closing/Opening) cycles available from a single spring charge may vary, and the delays within a cycle depend on the breaker used.
The retrofitted scrubber can require enough power that a 2nd DG has to be run for the base load (reefer containers excluded).
Another point is the sequencing of DOL-started motors. Many motors are started DOL (and yes, I know soft starters; I have designed enough controls using them, especially for pumps, including designs with an emergency DOL start of larger pumps).
On a large container ship there are many larger pumps as well as some larger fans; those motors are often DOL-started, and if they are not sequenced they can overload the power distribution.
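A minimal sketch of what such sequencing could look like; the consumers, ratings and delays below are invented for illustration, not taken from any actual PMS configuration:

```python
import time

# Staggered restart of DOL-started motors after a blackout: starting them all
# at once stacks their inrush currents (typically 5-7x rated) and can overload
# the supply, so only one inrush is allowed on the bus at a time.
RESTART_ORDER = [
    # (consumer, rated_kw, delay_before_start_s)
    ("main LO pump",          90, 0),
    ("main SW cooling pump", 132, 5),
    ("ER supply fan 1",       55, 5),
    ("ER supply fan 2",       55, 5),
]

def staggered_restart(start_motor):
    """Start the listed motors one by one with a delay between starts."""
    for name, rated_kw, delay_s in RESTART_ORDER:
        time.sleep(delay_s)
        start_motor(name)
        print(f"started {name} ({rated_kw} kW)")
```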
Oh jeez, silly me, I thought the AI thread abuse was finished.
Maybe you have something constructive to add, for example whether the windmilling of the ME would have allowed reversing it, or something about the black smoke. And please make sure to refer to the type of ME and DGs in use; some made generic comments which do not apply to the installed ME, DGs or steering gear.
The PR is not clear about the consequences of the 1st blackout: either the AC UPS and DC UPS did not fail, in which case most bridge instruments would not have stopped working, or both UPSs failed, which would mean that nearly all data from external sources would have been lost and, once emergency power became available, there would have been some reboot delay (and some devices require a warm-up time to stabilize or must be realigned).
The bridge and wing audio is acquired directly by several microphones, though depending on their number and locations the recordings will be more or less valuable. It also depends on the background noise.
ME data has been recorded by the redundant engine control system powered by local batteries.
Hi 244
I just downloaded the incident report prepared by the Danish Maritime Board. Will give it a good read later. The section on the alarm distraction is spot on. In the O&G industry the alarm and monitoring system has about 25 to 35k I/O points, as opposed to about 2-5k on vessels.
An alarm rationalization and prioritization exercise (not a minor exercise) is usually conducted with the vessel crew and engineers. Perhaps shipowners should consider this as part of the delivery package.
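As a toy illustration of what prioritization buys during an upset (the priority levels, tags and cutoff are invented for the example):

```python
from dataclasses import dataclass

# During an upset, only the highest-priority alarms sound audibly; the rest
# are displayed and logged silently so they do not drown out communication.
@dataclass
class Alarm:
    tag: str
    priority: int   # 1 = emergency ... 4 = journal/log only
    text: str

def route(alarm: Alarm, upset_active: bool) -> str:
    audible_cutoff = 2 if upset_active else 3
    return "audible + display" if alarm.priority <= audible_cutoff else "display/log only"

print(route(Alarm("ME-LO-PRESS-LL", 1, "ME LO pressure low-low"), upset_active=True))
print(route(Alarm("FW-EXP-TK-LVL-L", 3, "FW expansion tank level low"), upset_active=True))
```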
PS: Had an unfortunate experience with an incorrect setting on the main engine slow down. Slow down was set to Full Ahead rather than DS. Piston cooling ‘no flow’ alarm, ME slow down from sea speed to Full, all engineers turn to in the CR, 2 engineers go down to see if oil was flowing into the glass cover, and a crankcase explosion with both engineers getting hurt. One much more than the other: he was wearing a polyester coverall and it just melted onto his body with the hot oil.
Thar she BLOWS … again … in its words ‘after a re-boot delay’.
If all external power is lost, a device without an internal battery (like e.g. the one in the VDR Recording Control Unit) or a supercapacitor (only for short backup times) shuts down. When external power is restored, a microprocessor-based device must reboot; this can take from a fraction of a second to several minutes, though in most cases it is just a few seconds for small devices and somewhat more for more complex microprocessor-based systems (like a PC, and many devices are based on embedded or standard PCs).
In some cases you can choose to skip some internal self-tests to reboot faster and perform some of them sequentially while already running.
For now it is not fully clear whether DC UPS power and AC UPS power both failed, as the statements are contradictory if you read carefully. It would be surprising, and mostly the result of a poor design, but OTOH I doubt the UPSs are modular and redundant without any single point of failure.
I know the GMDSS and gyro have battery backup; what other bridge instruments are on battery backup?
On a modern integrated bridge, pretty much everything except steering.
Even standalone: radars, ECDIS etc., everything except steering.
This particular ship is relatively new, so my guess is probably all the bridge equipment except the steering gear. I don’t actually know.
Even the nav lights usually have at least two circuits, one of which is designated emergency and has a 24 volt battery backup; the other is just the regular supply.
(I have a long-running difference of opinion with my oppo: I always run on E, JIK we have a blackout, so we don’t have to remember to switch over. He always runs on regular, JIK he wears out the backup. The lights don’t care; they are on both circuits.)
And yep the Nav light panel has an alarm.
As do all the UPS.
A UPS is nice to have: you don’t have to go reset everything like in the old days, it just keeps going.
There’s not a 6 kt limit in that area. Specifically in what document is that found?
Indeed, even GMDSS, VDR, radios and gyro compasses do not necessarily integrate a backup battery; often the system integrator must himself provide secured power as required by the rules. Some devices integrate a battery charger for an external battery, a few (like the M/V Dali VDR) come with an integrated battery, while other VDR central units do not feature any internal UPS (see also below).
The whole critical power supply system varies very widely. Some ships have very advanced and robust redundant UPS (Uninterruptible Power Supply) architectures, especially DPS-3 vessels and higher-risk ships like some modern tankers, while others have rather poorly designed power distribution systems and UPSs.
It even happens that batteries die silently without any alarm.
A low-end UPS can even be worse than no UPS, as in the worst case a failing UPS can damage the powered devices. Where I live (a small expensive country in the news this weekend) the grid is very reliable, so using a bad UPS is indeed riskier than no UPS. Good UPSs are not cheap simply because they must be well designed and reliable; personally I only use true online (double-conversion) UPSs in projects.
Many bridge devices do not feature internal batteries. The VDR Recording Control Unit of the M/V Dali relies on an internal, non-proprietary, user-replaceable 12 V 24 Ah sealed lead-acid battery, but not all VDR central units feature an internal battery.
Indeed most devices, including ECDIS, gyro compass, GPS, etc., do not include any backup battery, though some can be supplied by both 24 V DC (usually with a wider voltage range than usual non-marine, non-mobile automation devices) and roughly 110/220 V AC 50/60 Hz, in which case the DC supply is only used when the AC supply fails (switching over and reverting without interruption). If no device-side alarm is available, the redundant supply voltages of critical devices should be monitored: it is not great if a device fails when the 2nd power supply fails while the 1st one had already failed months before without anyone noticing.
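A minimal sketch of that supervision, with hypothetical signal names: the point is to raise an alarm as soon as either feed is lost, even while the device keeps running on the remaining one.

```python
# Hypothetical supervision of a dual-fed device (AC primary, 24 V DC fallback).
def check_dual_supply(ac_ok: bool, dc_ok: bool) -> list[str]:
    alarms = []
    if not ac_ok:
        alarms.append("AC feed lost - device running on DC only")
    if not dc_ok:
        alarms.append("DC feed lost - device running on AC only")
    if not ac_ok and not dc_ok:
        alarms.append("BOTH feeds lost - device down")
    return alarms

print(check_dual_supply(ac_ok=True, dc_ok=False))
```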
Some devices can be powered by optional external batteries (often one or, in series, two identical 12 V lead-acid batteries) without a separate charger.
In some cases the device manufacturer can provide optional power supply modules with or without UPS functionality.
Personally I would rather choose stabilized redundant PSUs (Power Supply Units) I know well than non-mandatory manufacturer options. That will usually be more reliable and less expensive.
Surprisingly, it happens that critical bridge equipment is locally supplied by a low-end consumer-grade AC UPS. Some navigation equipment manufacturers’ UPS options are indeed overpriced consumer-grade UPSs.
I would have to look up the formal details of some battery requirements (SOLAS and classification society rules). There are specific requirements for critical devices like e.g. the VDR and GMDSS, but IIRC the overall required backup durations are surprisingly low (also for emergency lighting, etc.); some formal requirements are somewhat outdated and some even reduce reliability and increase risk if applied to the letter.
In the case of M/V Dali I expect a basic UPS architecture, mostly a centralized AC UPS and a centralized DC UPS to power most navigation equipment, typically located in a small electrical room in the rear part of the bridge. Larger batteries can be in a separate room accessible from outside but that is more common for larger higher end UPS systems.
The fully electronically controlled camshaft-less ME has its own redundant DC UPS.
Typically the ME Control Room has its own UPS supplies, but here too the details vary.
It is very surprising that an ME Control Room can go dark during a blackout. That is just outrageously odd; I have never seen any serious industrial control room without battery-powered emergency lighting that guarantees sufficient light. A momentarily reduced lighting level is fine, as it catches everyone’s attention when a major blackout occurs.
Also there should be small emergency light fixtures with internal battery in various locations to always keep a minimal lighting level.
A good practice would be to supply all but very secondary electronic devices with UPS power; the power quality of ship grids is poor compared to land-based power. Reliable, high-quality true online double-conversion UPSs protect not only against power outages but also against all sorts of power-quality-related issues which can damage electronics, not necessarily immediately: poor power quality can also reduce component life expectancies.
Ideally, local proprietary electronic controls should feature separate connections for the control part so it can be fed from a UPS (ideally 24 V DC), with the main power, including for non-critical solenoids, fed normally. They should also provide a standardized serial communication interface (now mostly Ethernet, even if standard RJ connectors are reliability-wise a major PIT*), but that is another story (some do provide such interfaces).
The wording of the PR is not clear about AC UPS and/or DC UPS failures. Overall I would be surprised if the bridge instrumentation had lost UPS power, but it is still not formally excluded.
It can be expected that the 2nd blackout did not have any influence on the equipment supplied by emergency power, as the EBUS was still powered by the EG when the 2nd blackout occurred; everything supplied by the EBUS therefore remained powered without any interruption from the moment the EG came online (i.e. “probably” not later than 45 seconds after the beginning of the 1st blackout; the conditional form reflects the fact that the NTSB does not formally confirm that the EG came online while the investigation is still in progress). It is in fact likely that the EG came online sooner than 45 seconds, which is a very long delay to come online. As a comparison, NFPA requires 100 % load in a single step within 10 seconds or less for its most stringent requirement, which I consider too demanding for large engines: it only stresses the engine and generator, and the delay should be defined in steps depending on the power. Starting a small 300 kVA 1800 RPM ship EG is not the same thing as starting a 3’200 kVA 1800 RPM genset; even only 5 s or 10 s more reduce the stress on the engine.
Overall I still consider it a major functional failure of the PMS not to have automatically brought TR2 online when TR1 was automatically disconnected for (for now) unknown reasons.
As both DG3 and DG4 remained connected to the HV BUS (i.e. DGR3 and DGR4 both remained closed during the whole duration of the 1st blackout), it can be assumed that voltage and frequency (both their limits and their rates of change) were not out of tolerance (it is unknown whether the breakers of the reefer container transformers were tripped or not).
Therefore, as TR2 was not locked out (it was brought online manually, which ended the 2nd blackout), the PMS should have immediately and automatically brought TR1 online, reducing the duration of the 1st blackout to a few seconds at most.
Also, DG2, which was on standby (ready to start), i.e. preheated and normally with lube oil permanently circulated, started automatically when the HV BUS lost power and supplied the HV BUS once DGR2 was closed automatically, but for unknown reasons the PMS failed to bring either TR1 or TR2 online automatically.
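In pseudocode, the recovery behaviour argued for above could look roughly like the sketch below. The breaker handling and the function interface are hypothetical placeholders, not the actual ACONIS logic; the point is only that, with generation healthy on the HV BUS and at least one main transformer not locked out, the LV BUS can be re-energized automatically within seconds.

```python
# Sketch only: hypothetical PMS blackout-recovery step, not the actual ACONIS program.
def restore_lv_bus(hv_bus_healthy: bool,
                   tr1_locked_out: bool,
                   tr2_locked_out: bool,
                   close_breakers) -> str:
    """Re-energize the LV BUS via whichever main transformer is available."""
    if not hv_bus_healthy:
        return "wait: HV BUS voltage/frequency not within limits"
    if not tr1_locked_out:
        close_breakers("TR1")   # close HV- and LV-side breakers of TR1 (placeholder call)
        return "LV BUS restored via TR1"
    if not tr2_locked_out:
        close_breakers("TR2")
        return "LV BUS restored via TR2"
    return "manual intervention required: both transformers locked out"
```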
I also suspect the PMS to be fairly basic; for example, I am not sure whether the loads of the DGs and the transformers are modelled in real time, which means that if a DG or transformer is warm due to a high load during the previous hours, the admissible short-term overload before the corresponding breaker trips will be lower than if it had previously been operated at a much lower load.
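A first-order thermal replica, as used in many protection relays, illustrates the point; the time constant and trip threshold below are purely illustrative assumptions, not data for any machine on board.

```python
import math

# Minimal first-order thermal image of a winding: the admissible short-term
# overload depends on how warm the machine already is. Values are assumptions.
TAU_S = 1800.0        # thermal time constant [s] (assumed)
THETA_TRIP = 1.05     # trip when the thermal image exceeds 105 % of rated

def step_thermal_image(theta_prev: float, load_pu: float, dt_s: float) -> float:
    """Advance the per-unit thermal image by one time step (load in per unit)."""
    theta_ss = load_pu ** 2                            # steady-state image ~ I^2
    return theta_ss + (theta_prev - theta_ss) * math.exp(-dt_s / TAU_S)

# A machine that was running cold (theta ~ 0.3) tolerates a 1.3 pu overload
# much longer before reaching THETA_TRIP than one that was already near 1.0.
```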
Overall there was a problem with the power supply and distribution automation: either there were operator errors or wrong manual settings, or the automation failed (various possibilities, including never-discovered design and/or commissioning errors).
There could be a couple of other causes, but an in-depth evaluation would require the detailed “as built” wiring diagrams, including the exact configuration of the LV and HV breaker solenoids.
Unfortunately the Hyundai ACONIS is a very closed system with limited configuration depth (e.g. for running PLC-like programs to handle specific conditions). Further, Hyundai was both the hardware supplier and the system integrator. Also, a lot is labeled Hyundai but not really from Hyundai (IIRC I’ve even seen a Hyundai EPIRB; I bet they have even relabeled the hydrostatic release!).
Of course there are major differences compared to industry: when running complex systems 24/7, all the usual maintenance, including automation and process control programming, is often done in-house (even if originally provided by a supplier).
On ships the mechanical maintenance reaches a high level but the electrical part does not; a lot is done by suppliers and other external specialists, while the same work would be done internally in other industries.
Of course it cannot be compared directly: on ships the crew does not stay long enough for the engineers to know the equipment in depth; imagine a nuclear power plant or refinery with the staff changing every couple of months.
Also there are regulatory restrictions. In industry we could get rid of a whole bunch of oddities without having to ask anyone, e.g. we could just modify the steering gear and thruster controls (and also change the ME crankshaft encoder mounting system, opt for a more EMC-resistant pulse signal transmission, and possibly add a controller-side interface); many non-OEM parts would be purchased directly. Overall, most would just use operator panels and reliable redundant PLCs instead of a lot of expensive proprietary electronics, for example for the whole bridge engine, steering and thruster control; a lot can be done more reliably, with much lower hardware costs and better long-term spare parts availability, using general automation COTS together with some bespoke control devices like the telegraph control lever system or the helm.
The 3rd pump of the steering gear, the only one supplied from the EBUS, is not started DOL (Direct On-Line) when the EBUS is supplied by the EG; the sizing of the EG is anyway surprisingly tight. 0.05 % of the cost of the ship would allow a significant increase of the EG power. Ideally there should be a small EG as well as a 2nd larger one, but that is another discussion.
All 3 steering gear main pumps are started DOL when the EBUS is supplied by the LV BUS (i.e. EG not running).
The detail mentioned in the PR about the reduced RPM of Steering Gear Main Hydraulic Pump #3 when the EBUS is supplied by the EG is possibly correct, though from an electrical engineering POV such a design makes absolutely no sense: it decreases reliability while uselessly increasing cost and complexity.
As the displacement of the main hydraulic pumps of the steering gear is symmetrically variable (in theory zero displacement in the middle position), the required power can be limited based on the motor current and the effective hydraulic oil pressure.
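A minimal sketch of such a power limiter, assuming a swash-plate displacement command in per unit and purely illustrative limits (none of the values come from the actual steering gear):

```python
# Sketch: limit pump power by reducing the commanded displacement rather than
# the motor speed. All limits and signal names are illustrative assumptions.
def limit_displacement(cmd_disp_pu: float, motor_current_pu: float,
                       press_bar: float, flow_lpm: float) -> float:
    hyd_power_kw = press_bar * flow_lpm / 600.0   # hydraulic power [kW]
    power_limit_kw = 50.0                         # assumed available power [kW]
    if motor_current_pu > 1.0 or hyd_power_kw > power_limit_kw:
        # back the displacement off proportionally, never below zero
        scale = min(1.0 / max(motor_current_pu, 1e-6),
                    power_limit_kw / max(hyd_power_kw, 1e-6))
        cmd_disp_pu *= scale
    return max(0.0, min(1.0, cmd_disp_pu))
```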
Some other design details, for example the supply tank level controls for the equally questionable (non-mandatory) electromechanical safematic system, the rudder stock angular position acquisition, and the torque motor principle, are just totally outdated from a modern engineering POV and present various unnecessary weaknesses which have led to many incidents.
Potentiometer feedback systems (and potentiometer control inputs, even for tillers and helms!) are still used in marine applications, while they are considered very unreliable and have not been used in serious industrial applications for a decade or two.
Some rules about steering gear motor protection are fundamentally flawed. The correct modern way to handle motor protection of larger and/or mission-critical motors is to use more or less advanced electronic protection relays and let the automation handle the sequencing. If a very large steering gear has 4 pumps of 250 kW each and there is an overcurrent problem with one of the running motors, the automation can shut it down and automatically start another main pump; as there is no hardwired motor protection, the fault can be overruled in case of extreme emergency.
Also, the rotation of each pump motor (including the auxiliary pumps) should be monitored by an incremental encoder, and each phase of each motor should be measured individually by the local steering gear control.
In case of a leak and/or pump failure (and some other problems), the reconfiguration of the steering gear hydraulics should be performed automatically.
Obviously the electrical manual local control with automation, manual electrical emergency control without automation and manual mechanical local control should be maintained.
The general idea is that well-designed redundant automation should handle most common faults without manual intervention; this massively reduces the risk of human error.
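As a concrete illustration of that idea, a minimal failover sketch (the pump names, the override input and the callbacks are hypothetical):

```python
# Sketch of the pump-failover idea: the electronic protection relay flags an
# overcurrent, the automation stops that pump and starts a standby one, and an
# emergency override can keep a faulty pump running if that is the lesser evil.
def handle_pump_fault(faulty: str, standby_pumps: list[str],
                      emergency_override: bool,
                      stop_pump, start_pump) -> str:
    if emergency_override:
        return f"{faulty}: fault overridden, pump kept running (emergency)"
    stop_pump(faulty)
    if standby_pumps:
        start_pump(standby_pumps[0])
        return f"{faulty} stopped, {standby_pumps[0]} started automatically"
    return f"{faulty} stopped, no standby pump available"
```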
The current overall ship system approach is based on partial redundancy with little emphasis on common-cause failure mitigation. Also, on container ships the level of integration is poor except for the bridge navigation systems, and the whole HMI (Human Machine Interface) design is outdated and an ergonomic nightmare.
Even worse, lots of parts are not even considered reliable; in particular, some switch and pilot light types are no longer used elsewhere in modern countries.
Many small design details of typical ram-type steering gears decrease the overall reliability and are hard to understand both from a mechanical and electrical engineering POV.
It is both funny and ridiculous that various manufacturers copy/paste obvious design flaws.
The same applies to the bow thruster: a 3’000 kW 6.6 kV bow thruster motor is expensive, but typically it does not even feature bearing monitoring (condition monitoring or at least Pt100 RTD temperature sensors) and, to add insult to injury, the propeller blade angle feedback is often based on a single, not even redundant, potentiometer. In any other serious industry there would be redundant contactless absolute position acquisition.
The bow thruster is started electromechanically with an autotransformer; the asynchronous motor runs at 6.6 kV (3’000 kW), but its thermal monitoring is minimalistic.
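For context on the autotransformer start, a back-of-the-envelope comparison; the 65 % tap, the 6x DOL inrush multiple and the efficiency/power-factor figures are typical assumed values, not data for this particular thruster:

```python
# Rough line-current comparison for a 3,000 kW, 6.6 kV asynchronous motor.
# Rated current estimate and all multipliers are assumptions for illustration.
I_RATED_A = 3_000_000 / (3 ** 0.5 * 6600 * 0.92 * 0.96)  # ~300 A (assumed pf and efficiency)
K_TAP = 0.65          # autotransformer tap (assumed)
DOL_INRUSH = 6.0      # DOL starting current in multiples of rated (typical)

i_start_dol = DOL_INRUSH * I_RATED_A      # direct-on-line start
i_start_auto = K_TAP ** 2 * i_start_dol   # line current scales with the tap ratio squared
print(f"DOL start:             {i_start_dol:,.0f} A")
print(f"Autotransformer start: {i_start_auto:,.0f} A")
```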
Chief Makoi published a video about M/V Dali’s bow thruster some time ago.
Above a certain speed the bow thruster cannot be used: besides not being effective, it would also uselessly stress its own mechanical parts as well as structural parts of the hull.
I would understand it for a small ship but here we are talking about hundreds of millions US$ at stake and high daily operating costs.
Also, 4 MVA diesel generators are quite large; land-based emergency power generators beyond around 2.5-3.2 MVA are not that common and are mostly used in nuclear power plants and some prime-power diesel engine power plants (i.e. running 24/7 as base supply, like on some islands).
The overall power generation of a large container ship corresponds to a nice data center and the one of a large cruise vessel corresponds to a very large data center.
Also electrical protections and power management are very basic compared to land-based power plants of similar power.
For a 5 MVA medium voltage generator there would likely be an advanced digital protection relay, an advanced synchronization relay including a synchrocheck, like e.g. an ABB Synchrotact, and an ABB Unitrol for the excitation control (just as examples I know; of course one can use Basler, Woodward, etc.).
As ships become more complex, if the automation does not follow general industrial practice there will be increasing problems, as it will become more and more challenging for crews to handle unusual situations.
Automation will never solve all problems, but it can help reduce risks massively.
Also there is a lot of greenwashing. To save, let’s say, 1 to 1.5 % of the DG fuel, one only has to invest in some copper bars to increase the busbar cross-sections; taking a closer look at the efficiency of the main service transformers, or at some thermal insulation, could also be interesting. The problem is that those who generate large amounts of electrical energy often do not really care about saving small amounts.
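As a back-of-the-envelope illustration of the copper-bar remark (all figures invented for the example, not measurements from any ship): doubling a busbar cross-section halves its resistance and therefore its I²R losses.

```python
# Illustrative three-phase busbar loss estimate; every number is an assumption.
RHO_CU = 1.78e-8        # resistivity of copper [ohm*m] (warm)
LENGTH_M = 40.0         # run length per phase [m]
I_A = 2000.0            # continuous current per phase [A]

def losses_kw(section_mm2: float) -> float:
    r = RHO_CU * LENGTH_M / (section_mm2 * 1e-6)   # resistance per phase [ohm]
    return 3 * I_A ** 2 * r / 1000.0               # three-phase losses [kW]

print(f"{losses_kw(2000):.1f} kW vs {losses_kw(4000):.1f} kW")   # roughly 4.3 vs 2.1 kW
```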
I have included PDFs in one of my comments in the Dali thread. I do not remember which. I have no access to my laptop now but can repost. 17052024
OK, have found it. My comments 226 and 227.
TLDR
Again
Too many words.
And they are certainly AI generated and/or plagiarism at best. Come on mods, block this shit.
There is no 6 kt speed limit in that area. It’s a major USEC port, been there many times.
Here’s a ChatGPT summary:
The key points addressed are: