WOW. !!!
Glad to see the crowd of realists growing. You made my day with this comment.
Under 1.4 in the very first message it should read DG2 instead of DG3.
As a hint about the crew: the sister ship M/V Cézanne was manned by a crew of 25 Indian nationals incl. Master about 5 years ago.
Here are some additional thoughts, not structured at all.
Could the blackouts have been kept shorter? Good question…
How long can the ME (Main Engine) ride through a power outage at the speeds relevant here?
How strong is the windmilling effect on the propeller when propulsion is lost, depending on speed?
This especially concerns the lubrication pumps and cooling water pumps that are no longer powered, assuming that all controls remain powered.
Note that the publicly available engine designation does not reflect further option details.
Can the ME tolerate, let’s say, 3 seconds of power loss, about the time required to reconfigure a modern power distribution, e.g. one based on a segmented double busbar ring? Advanced digital protection systems allow high selectivity, i.e. they reduce the risk that a single fault causes other subsequent issues.
For example, if a generator is lost, non-critical loads are immediately shed and the most critical loads that lost power are sequentially brought back online according to process requirements.
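To make the idea concrete, here is a minimal sketch of such priority-based load shedding and sequential restart logic. All load names, priorities, capacities and delays are invented for illustration; they are not taken from the PR or from any real power management system.

```python
# Illustrative sketch only: priority-based load shedding / sequential restore
# after loss of a generator. All load names, priorities and delays are
# hypothetical assumptions, not taken from the preliminary report.

import time

# (name, priority, kW) -- lower priority number = more critical
LOADS = [
    ("steering_gear_pump", 1, 50),
    ("ME_lube_oil_pump",   1, 120),
    ("ME_cooling_pump",    2, 90),
    ("reefer_group_A",     8, 900),   # non-critical, shed first
    ("accommodation_AC",   9, 300),
]

def shed_non_critical(available_kw, loads):
    """Keep the most critical loads that fit the remaining capacity, shed the rest."""
    kept, used = [], 0.0
    for name, prio, kw in sorted(loads, key=lambda l: l[1]):   # most critical first
        if used + kw <= available_kw:
            kept.append((name, prio, kw))
            used += kw
        else:
            print(f"SHED  {name} ({kw} kW, prio {prio})")
    return kept

def sequential_restart(kept, step_delay_s=2.0):
    """Bring critical loads back online one by one, most critical first."""
    for name, prio, kw in sorted(kept, key=lambda l: l[1]):
        print(f"START {name} ({kw} kW, prio {prio})")
        time.sleep(step_delay_s)   # avoid restarting everything at once

if __name__ == "__main__":
    remaining_capacity_kw = 600.0   # assumed capacity after losing a generator
    kept = shed_non_critical(remaining_capacity_kw, LOADS)
    sequential_restart(kept, step_delay_s=0.1)
```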
Typically, Dynamic Positioning may require very high availability, which can only be achieved with fault-tolerant architectures, and therefore no single point of failure is acceptable.
Could the 1st blackout have been avoided?
We cannot know, but its duration could have been reduced to at most a few seconds if handled by automation, essentially the time required to bring TR2 back online.
Anyway, HR2 and LR2 were closed manually by crew intervention.
When the 1st blackout happened, simply closing HR2 and LR2 manually would have fully restored power. An automated system would have done exactly that, the rationale being:
There is a problem with TR1 as it was disconnected automatically by tripping HR1 and LR1, so let’s use TR2 which is exactly there to replace TR1 when it fails (only TR1 or TR2 is in use at any time, they are not operated in parallel according to the PR).
Duration of the blackout? Just a few seconds with automation, though it depends on how TR2 is magnetized, as it is a fairly large transformer sized to supply the whole LV BUS alone at any time.
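Purely to illustrate the decision logic described above (TR1 lost, so switch to TR2), here is a sketch. The breaker and transformer names are those of the PR; the interlocks, the ordering, and the assumption that the LV side is closed right after the HV side are mine.

```python
# Sketch of the transfer rationale described above: if TR1 is lost (HR1/LR1
# tripped), put TR2 into service by closing HR2 then LR2. Interlock details
# and timings are illustrative assumptions only.

def auto_transfer(breakers):
    """breakers: dict of breaker name -> True (closed) / False (open)."""
    tr1_in_service = breakers["HR1"] and breakers["LR1"]
    tr2_in_service = breakers["HR2"] and breakers["LR2"]

    # Only one transformer may feed the LV BUS at a time (not run in parallel).
    if tr1_in_service or tr2_in_service:
        return "LV BUS already supplied, nothing to do"

    # Make sure TR1 is fully isolated before energizing TR2.
    breakers["HR1"] = False
    breakers["LR1"] = False

    # Energize TR2 from the HV side first, then connect the LV side.
    breakers["HR2"] = True   # magnetizes TR2 (inrush handled by protection settings)
    breakers["LR2"] = True   # LV BUS restored
    return "LV BUS restored via TR2"

print(auto_transfer({"HR1": False, "LR1": False, "HR2": False, "LR2": False}))
```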
Without a detailed power distribution diagram it is impossible to know exactly how large (in kVA) TR1 and TR2 are, logically they should be identical.
The 3000 kW bow thruster BT is fed by the HV BUS (in the PR diagram it corresponds only to the two horizontal segments left and right of the bus tie breaker HVR; the gray shaded rectangle has been chosen somewhat arbitrarily and includes much more than the bus itself).
As the BT propeller pitch is variable, the 3000 kW only represents the maximum power; also, when manoeuvring, the reefer container load can be shed for a few minutes if required.
The quite high maximum power required for the reefer containers is supplied by an unknown number of 6600 V/440 V transformers not shown in the diagram. The actual power required can vary widely depending on the refrigerated goods transported and the ambient temperature, as reefer containers are basically insulated ISO containers with a refrigeration compressor.
If HR1 tripped due to an overload of TR1 (with LR1 tripping to avoid TR1 being energized in reverse, i.e. from the LV BUS via LR1), then once brought online TR2 would also trip if the ratings are identical and the overload remains the same, but the delay until the digital overload protection function or the direct thermal protection (based on transformer temperature measurement) trips HR2 would be longer, as TR2 was cold when put online. In such a case it is quite possible that the TR2 trip would have occurred only after the allision.
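To illustrate why an identical overload takes longer to trip a cold TR2 than an already warm TR1, here is a toy first-order thermal model. The time constant, trip threshold and overload level are made-up numbers; real thermal-image or inverse-time protections use calibrated settings.

```python
# Toy first-order thermal model showing why the same overload trips a warm
# transformer (TR1) sooner than a cold one (TR2). All numbers are made up;
# real thermal-image protections use calibrated constants.

import math

def time_to_trip(load_pu, theta_start, theta_trip=1.0, tau_s=1800.0):
    """
    load_pu:     load as a fraction of rating (e.g. 1.3 = 30 % overload)
    theta_start: initial relative temperature rise (0 = cold, close to 1 if warm)
    theta_trip:  relative temperature rise at which the protection trips
    tau_s:       thermal time constant in seconds
    Returns seconds until trip, or None if this load never reaches the limit.
    """
    theta_final = load_pu ** 2          # steady-state rise roughly ~ I^2
    if theta_final <= theta_trip:
        return None                     # no trip for this load
    # theta(t) = theta_final + (theta_start - theta_final) * exp(-t / tau)
    return tau_s * math.log((theta_final - theta_start) / (theta_final - theta_trip))

overload = 1.3   # assumed 30 % overload
print("warm TR1 trips after %.0f s" % time_to_trip(overload, theta_start=0.95))
print("cold TR2 trips after %.0f s" % time_to_trip(overload, theta_start=0.0))
```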
Of course we don’t know if there was an overload, it is just given as example.
As DG3 and DG4 both remained connected to the HV BUS, the reason why HR1 tripped is not related to an HV problem on that bus at that specific time.
If not too severe, too long or too frequent, overloads are most often not a huge issue; what matters most is how the transformer can be cooled, as increased temperatures reduce its lifetime.
Some main service ship transformers are tightly sized, which explains why occasional overloads during normal use can occur, as cost constraints are high.
Possibly harmonics may become a problem as there are more non-linear loads than in the past.
The other cause would be a short circuit, which is, simply speaking, a very massive sudden overload. This is considered a fault and will damage the transformer very quickly if power is not removed. There are zillions of standards which specify the requirements of nearly all electrical equipment, and they also depend on the region.
Currently we don’t know why TR1 tripped; an overload would not be very surprising as it can happen, while a short circuit (internal or load-side) and other causes like an insulation fault are less common.
Very plausible are human errors as well as malfunctions of protection relays and/or power management systems.
I forgot to mention that, to avoid reverse powering the transformers, if the supply-side breaker (HR1 or HR2) trips, the corresponding load-side breaker (LR1 or LR2) also trips; at least it is typically handled that way, although the PR does not mention it.
As the 2nd blackout happened there were 2 possible scenarios:
- DG3 and DG4 were still available
It is unknown if DG3 and DG4 were shut down after DGR3 and DGR4 tripped; tripping a generator does not necessarily require its driving diesel engine to be shut down. Tripping of several parallel-running generators at the same time can happen due to overload.
Overload of a single generator operated in parallel with one or more other generators (of similar or different sizes (in kVA)) can occur e.g. if there is a load sharing problem. The power each generator produces when running in parallel is managed by the load sharing control function of the digital generator control (here rather crude Hyundai devices, I didn’t see any Deif or Woodward). If for any reason one generator no longer provides enough power, the other generators can end up overloaded and ultimately all generators may trip.
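A minimal sketch of that failure mode follows: one paralleled unit delivering less than its share pushes the remaining units toward overload. The ratings, the total load and the "available" factor are illustrative assumptions, not values from the PR.

```python
# Illustrative load-sharing sketch: if one paralleled generator produces too
# little (e.g. fuel rack or governor problem), the others pick up the
# difference and can end up overloaded. Ratings and load are assumed values.

def share_load(total_kw, gens):
    """
    gens: list of dicts with 'name', 'rating_kw' and 'available' (0..1 factor
    of how much of its fair share the unit can actually deliver).
    Very simplified: healthy units pick up what the weak unit cannot deliver.
    """
    total_rating = sum(g["rating_kw"] for g in gens)
    fair = {g["name"]: total_kw * g["rating_kw"] / total_rating for g in gens}
    produced = {g["name"]: fair[g["name"]] * g["available"] for g in gens}
    shortfall = total_kw - sum(produced.values())

    healthy = [g for g in gens if g["available"] >= 1.0]
    for g in healthy:                       # healthy units pick up the rest
        produced[g["name"]] += shortfall / len(healthy)

    for g in gens:
        p = produced[g["name"]]
        flag = "OVERLOAD -> may trip" if p > g["rating_kw"] else "ok"
        print(f"{g['name']}: {p:7.0f} kW / {g['rating_kw']} kW  {flag}")

share_load(total_kw=6500, gens=[
    {"name": "DG3", "rating_kw": 3840, "available": 1.0},
    {"name": "DG4", "rating_kw": 3840, "available": 0.5},  # assumed governor problem
])
```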
Technically speaking, an overload could first have tripped TR1, causing the first blackout, and later caused the trip of DG3 and DG4, though TR1 could also have tripped again; it would depend on how exactly the protections were set. TR1 and TR2 should each be sized to step down the power generated by somewhat more than two generators, maybe 2.5 or 3, but here I’m guessing (part of the generated power is not stepped down by TR1 or TR2 when the bow thruster BT is operating and/or when reefer containers require power).
If the fault can be cleared, reconnecting DG3 and/or DG4 can be attempted immediately, as the generator controller will not close the generator breaker unless certain conditions are met (HV BUS dead plus voltage and frequency within limits and stable; or synchronization ready to complete and, if applicable, the enable control signal from an additional synchrocheck present).
In such a case the delay would have been a few seconds once the reconnection is initiated manually for one DG (no sync required).
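Sketched below is the kind of close-permissive logic meant here: either a dead-bus close or a synchronized close onto a live bus. The voltage, frequency and slip tolerances are illustrative assumptions, not actual controller settings.

```python
# Sketch of generator breaker close permissives as described above:
# either a dead-bus close, or a synchronized close. All thresholds are
# illustrative assumptions, not actual controller settings.

def may_close_breaker(bus, gen, sync_check_ok=True):
    """bus/gen: dicts with 'voltage' (V) and 'freq' (Hz); bus also has a 'dead' flag."""
    NOMINAL_V, NOMINAL_HZ = 6600.0, 60.0

    gen_ready = (abs(gen["voltage"] - NOMINAL_V) / NOMINAL_V < 0.05 and
                 abs(gen["freq"] - NOMINAL_HZ) < 0.3)

    if bus["dead"]:
        # Dead-bus close: only the incoming generator must be stable.
        return gen_ready

    # Live bus: synchronization required (voltage and frequency/slip within
    # limits), plus enable from an additional synchrocheck relay if fitted.
    volt_match = abs(gen["voltage"] - bus["voltage"]) / NOMINAL_V < 0.05
    slip_ok    = abs(gen["freq"] - bus["freq"]) < 0.2
    return gen_ready and volt_match and slip_ok and sync_check_ok

# DG2 closing onto a dead HV BUS after the 2nd blackout (no sync required):
print(may_close_breaker({"dead": True, "voltage": 0.0, "freq": 0.0},
                        {"voltage": 6600.0, "freq": 60.0}))
```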
- Neither DG3 nor DG4 is available
DG2 started automatically anyway, as it was in stand-by, and was automatically connected to the HV BUS when DGR2 closed automatically, which powered the HV BUS. It is totally illogical that TR2 was not automatically put into service by automatically closing HR2 and LR2 no more than a few seconds after DGR2 was closed automatically.
In any case a good power automation system would have been able to either reconnect DG3 and DG4 if they were still running correctly (which would have shortened the 2nd blackout to a few seconds) or to close DGR2 automatically shortly after DG2 reached a stable 720 RPM (60 Hz) and 6600 V. A possible DG warm-up delay should be overridable in case of emergency, similar to the ME, where some protections can be disabled by pushing a special emergency button.
As DG2 was in stand-by it was kept warm anyway, ready to be started any time.
As an example, some critical power emergency generators with large high-speed diesel engines are fully loaded in a single step within 10 seconds after being started.
Although the PR is not clear about some details, the provided information clearly shows that the first blackout could have been ended in a few seconds by closing HR2 and LR2 manually (by remote order, not by operating the breakers locally), maybe taking into account a reaction delay of some 30 seconds.
As said, automation could have reduced the 1st blackout to a few seconds.
The second blackout was handled very well by the crew. The transformer TR2, which should have been connected automatically, was very quickly powered manually by the crew, who closed both HR2 and LR2 manually only 31 seconds after DG2 had begun its start sequence, which was initiated automatically as the HV BUS went dead.
It’s not even sure that automation would have been faster, as the crew was possibly monitoring the RPM and voltage of the starting DG2 and just waited until conditions were met to energize TR2 manually. Inadvertently closing HR2 prematurely would normally have been prevented by a logical safety interlock.
Would the blackouts have been avoidable? Maybe, maybe not, but at least the first one could have been kept much shorter with automation. Unfortunately the power management system is rather primitive and unable to reconfigure the power distribution within seconds.
Merchant ships are antique when it comes to integration, automation and SCADA compared to top industries. Only individual devices have modern, mostly local controls and small Operator Panels, but the layout of the engine control room is not really modern; it makes it difficult to react quickly and does not allow a good overview, with lots of very different displays, indicators and controls for many separate systems. Not to mention possibly confusing audible alarms (this also applies to the bridge (wheelhouse)); a buzzer is not very useful if no one knows what it refers to. Maybe a voice alarm message like in cockpits would be better.
The engine control room ergonomics are poor and make running such complex systems less easy than it could be.
More integrated, up-to-date, very reliable engine control room designs are possible, and some less basic automation would reduce operator errors and enhance safety. While many local operator panels are unavoidable, as many local control systems will remain, interfacing them in order to be able to operate them, or at least access read-only data centrally, will become more and more important. Ideally, existing interface standards should be relied upon. Some dedicated displays like those for the ME (MOPs) will still remain, but a myriad of small displays can be replaced by screens; of course all important controls must remain discrete (pushbuttons and other switches, rotary encoders, important pilot lights, …).
Many SCADA and process control systems are not well designed because those who engineered them have no idea about how they will be used by real people in a real environment.
For the 2nd blackout there is not enough information to know if DG3 and/or DG4 could have been reconnected immediately or not. If not, the crew did about the best possible in reconnecting TR2 within only 31 seconds.
That is extremely short when stressed in a control room of a ship which has lost propulsion.
I do not know if the crew made some mistakes which caused the first blackout but, considering the timeline, kudos to the crew who handled the power generation and distribution issues, especially in such a critical situation.
It is easy to comment on the incident while watching a YouTube video; in reality the elapsed time between the first power outage and the allision was very short.
I know I shouldn’t address it here, but consider that the first ones to blame are those who are reluctant to invest maybe just one additional % of the new ship price (maybe roughly around 150 to 190 million US dollars for a large container ship), as it would make a HUGE difference.
Overall I consider that this was an unfortunate accident.
Hello Secos,
Might I also add that, as opposed to the aviation industry, merchant ships are absolutely “antique” with regards to equipment standardisation. This inherently creates potential for incidents from an operational perspective.
Clearly, the majority of commercial airliners are sourced from primarily two nations……America and France ……which makes the achievement of standardisation a great deal easier. Whereas, commercial vessels are produced globally making the achievement of any form of standardisation a pipe dream.
Sercos,
An excellent suggestion.
Generally the engine room is better served than the bridge in that the nature of the alarms are displayed on a screen in the control room. But I guess this went black along with everything else.
Especially for system failures, I don’t think audio voice warnings are the best choice. In aviation we generally only use voice warnings for situations requiring immediate and memorized/instinctive action. For example the famous “PULL UP” means you execute the terrain escape maneuver without pausing to decide if it’s real or not. It’s a memorized procedure, on the level of monkey see monkey do.
That being said creating distinctive levels of alarm so that the crew knows if it’s an emergency or just a minor problem is very important in my opinion.
We also have the option to cancel specific nuisance alarms depending on the warning level. For example, let’s say a gps unit fails and comes back intermittently due to jamming, well you can cancel that so it doesn’t keep distracting you.
Anyways how we distinguish different alarms is by color and buzzer/chime.
Warnings are red on the computer screen, and activate a red light, and have a continuous buzzer/chime until acknowledged.
Level 2 cautions have a single beep, yellow light.
Level 3 cautions, no beep, yellow light.
Advisories (system or component on the edge of tolerances): no beep, no light, just a flag in the computer.
Maintenance messages not affecting operations: again, a minor flag in the computer or even no flag, requiring you to go looking for them basically.
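For what it’s worth, here is a rough encoding of that alert hierarchy, just to show how distinct, unambiguous levels might be represented in an alarm system. The names and annunciations paraphrase the levels above; they are not taken from any certified avionics or marine standard.

```python
# Rough encoding of the alert hierarchy described above, purely as an
# illustration of distinct, unambiguous alarm levels. Names, colors and
# annunciations paraphrase the post, not any certified standard.

ALERT_LEVELS = [
    # level           screen color  light     audio               presentation
    ("WARNING",       "red",        "red",    "continuous chime", "until acknowledged"),
    ("CAUTION_LVL2",  "yellow",     "yellow", "single beep",      "message"),
    ("CAUTION_LVL3",  "yellow",     "yellow", None,               "message"),
    ("ADVISORY",      None,         None,     None,               "flag in computer"),
    ("MAINTENANCE",   None,         None,     None,               "must go looking"),
]

def annunciate(level_name):
    """Return how a given alert level is presented to the crew."""
    for name, color, light, audio, presentation in ALERT_LEVELS:
        if name == level_name:
            return {"color": color, "light": light,
                    "audio": audio, "presentation": presentation}
    raise ValueError(f"unknown alert level: {level_name}")

print(annunciate("WARNING"))
```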
Although I don’t know if a more robust alarm system would have made a difference here, it seems like the crew was aware of what was happening. The system design seems inadequate in itself.
rectified5846,
Now there is an understatement. Additionally, crew competencies and cultural compositions are many and varied.
As a professional aviator, you would be absolutely gobsmacked at the lack of standardisation allied with a level of poor design which is endemic within the commercial maritime arena.
As opposed to a fully endorsed Pilot entering the cockpit of a standardised A380 where you know the systems because they never change………every time a marine professional joins a different ship, be they engineers or deck officers, the systems and electronics are invariably different to their previous vessel.
We could learn many design lessons from the aviation sector.
EDIT: I would have to disagree with you regarding the application of voice alarms on the bridge and engine room. I believe that it would be a step in the right direction. Even under UMS conditions for the engineroom, the duty engineer could receive the voice warning in his/her cabin prior to making their way down. At present, all we are confronted with is a myriad of flashing lights and audible alarms……initially a recipe for confusion.
We can do better.
The standard of safety onboard merchant ships comes with the ISM Code, implemented about 30 years ago. Each ship and each company gets a certificate of compliance.
Voice message alerts would be very easy to develop if there were not all those expensive approvals required by classification societies. You can even use an already approved PLC to start, not a big deal if you only have to interface existing signals for example from buzzers and major pilot lights. Could also be multilingual, there are many options.
Of course I was only referring to very critical attention-seeking alerts. If not carefully implemented, it will end up comparable to those user manuals whose first 30 pages list 99 % useless warnings, like not to dry the cat in the microwave oven; as everyone skips those pages, the 1 % of really important warnings about dangers that are neither well known nor intuitive go unnoticed.
For example, VSD (Variable Speed Drive)-fed motors that are still under dangerous voltage even when not rotating, or Permanent Magnet Synchronous Motors where dangerous voltages can be present at the stator winding terminals when the shaft is rotated fast enough, even if all phase conductors are disconnected.
Also non-electric risks, for example explosion risks as well as suffocation risks due to low oxygen levels (e.g. wood pellets as dangerous cargo?) and toxic gases, among others. If someone is to be rescued from a confined space, not strictly following the rules can lead to additional casualties.
It is extremely important to mute secondary alarms whose trigger conditions are tied to a single common cause. For example, if a small 24 V DC distribution breaker trips, causing 25 sensors to lose power, you don’t want to be flooded by 25 useless messages; all you need to know is that that specific breaker tripped. The other "sub-"events can be muted but should still be accessible if you want to check that all 25 expected sensor failures were logged correctly, and maybe you are curious to find out in which order the signals were lost (which is why event timestamps should always include fractions of a second: no one cares if the milliseconds are not exact, but they allow the chronological order to be established, since, depending on the system, alarms are not necessarily listed in true chronological order if the seconds of the timestamp are identical for several messages, especially with networked automation).
Of course, when designing seriously, you will power each of the 25 sensors individually and use a digital input to monitor each breaker, so if one sensor trips a breaker it will not affect any other sensor, you will immediately know which breaker tripped, and the additional useless alarms related to that single invalid measurement value can be muted.
If you get one single error message when any one of 50 small circuit breakers whose auxiliary contacts are all wired in series trips, you may have to check locally which breaker tripped; if each one is monitored individually, you know exactly what happened.
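Here is a minimal sketch of that kind of common-cause suppression, with sub-second timestamps preserved for the event log. The tag names and the simple root-cause mapping are invented for illustration.

```python
# Sketch of common-cause alarm suppression: when a monitored breaker trips,
# the consequential sensor-failure alarms are logged (with sub-second
# timestamps to preserve their order) but not annunciated. Tags are invented.

from datetime import datetime, timedelta

def process_events(events, consequence_of):
    """
    events:         list of (timestamp, tag) in arrival order
    consequence_of: dict mapping a consequential tag -> its root-cause tag
    Returns (annunciated, suppressed_but_logged), both in true time order.
    """
    active_roots = set()
    annunciated, suppressed = [], []
    for ts, tag in sorted(events, key=lambda e: e[0]):   # sort on full timestamp
        root = consequence_of.get(tag)
        if root is not None and root in active_roots:
            suppressed.append((ts, tag))                 # logged, not alarmed
        else:
            annunciated.append((ts, tag))
            active_roots.add(tag)
    return annunciated, suppressed

t0 = datetime(2024, 3, 26, 1, 25, 0)
events = [(t0, "BRK_24VDC_F12_TRIP")] + [
    (t0 + timedelta(milliseconds=20 * i), f"SENSOR_{i:02d}_SIGNAL_LOST")
    for i in range(1, 26)
]
consequence_of = {f"SENSOR_{i:02d}_SIGNAL_LOST": "BRK_24VDC_F12_TRIP"
                  for i in range(1, 26)}

alarms, logged_only = process_events(events, consequence_of)
print("annunciated:", [tag for _, tag in alarms])
print("suppressed :", len(logged_only), "consequential events logged with ms timestamps")
```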
In critical situations knowing precisely what happened can save seconds or minutes which will decide if you end with a harmless incident or a billion US $ losses.
Surprisingly, for many even very complex control systems, alarm handling has never been optimized. Ending flooded with useless alarm messages makes it impossible to react quickly to the few really important messages.
One can also configure displays so that some show only important alarms while others show differently filtered event lists. In advanced ME (Main Engine) control rooms there should be quite a number of SCADA or process control displays, in order not to have to constantly… navigate (!) between images.
Major systems should be run using several displays (several for the main process control views, 2 or 3 displays for alarms and other events, maybe one or two for trends), the idea being that each display can display everything.
Nothing special indeed; it has been done that way for decades and it is extremely reliable if designed correctly: multiple redundant client/server architectures where every component, including servers, can be replaced without any interruption (it will go unnoticed by the control room staff, and if a display is lost there are plenty of remaining ones, so you can replace a display or a client computer without causing problems). It was even more reliable under older MS-Windows versions; NT 3.51 and 4.0 were quite solid, up to 8.1 one can live with, but 10 and above are merely Babel Tower architectures with a growing potential for disasters, and most HMIs (Human-Machine Interface, also MMI, Man-Machine Interface) worldwide run under some more or less bitter MS-Windows flavor.
Very critical systems should rely on diversity, especially as it becomes more and more complex to verify and validate software. Just my personal opinion, it’s a complex subject.
More generally speaking, software will still often remain more challenging than the hardware design, also because most hardware engineers are… engineers.
Displays with redundant automation systems are more reliable than zillions of hardwired lamps and switches. Let’s keep the required discrete controls but not the ones which can easily be replaced by displays, especially lamp matrices. For critical entries, a hardware validation while clicking with a mouse, for example with a hardwired pushbutton (which does not need to be pressed for less critical entries), should be kept, as touchscreens, keyboards and mouse/trackball entries cannot be safety oriented. Also, frequently used manual controls, those related to safety/emergency, and more generally all those which need to be actuated manually very quickly and/or often in case of emergency, must not be handled by computer input.
No one would ever replace control sticks (industrial joysticks) with touchscreens to manually control a crane.
A lot could be written about process control images, but overall the ones used in the merchant marine (e.g. ECDIS, RADAR, conning displays, …) are quite well designed compared to the average SCADA, where fancy-looking, totally overloaded 3D-style colourful images have become common; there you first need a couple of seconds to identify what you’re looking for, and the next bad surprise is all those icons you can’t remember because you have to use 50 machines made by different manufacturers, each using their own icons.
With text labels, as long as one can read and understand their text, the risk of accidental confusion is much lower.
Fortunately, day/night readability requirements impose more stringent and rational requirements when it comes to HMI screen images.
Sorry, I’m old-school here; I like large high-resolution screens with a lot of clearly presented information, both as graphics and as text. The new tablet-style graphics are not great when it comes to process control.
Considering the overall cost of large container ships, investing in appropriate automation and SCADA or process control (see the integrated controls discussion, currently mostly focused on bridge (wheelhouse) integration) would be wise, especially as subsystems become more and more complex.
Also, crews cannot spend enough time on the same ship, which makes it close to impossible for the electrical engineer to thoroughly know the details.
As a comparison, let us imagine the whole control room crew of a nuclear power plant or petrochemical plant being replaced by a new one every couple of months.
Technical solutions exist and are not that complex; the major annoyance is all those rules and standards which increase costs and delays. Many regulations and inspections would not even be required if one designed seriously, as the rules would be met just by following sound design practices, even without having read dozens of prescription binders.
In my domain I don’t read many standards; with most of them I just waste time finding out that what I do already meets their requirements anyway.
And don’t ever believe that a CE (or UKCA) marking is automatically an undisputable proof that all requirements of the standards listed in the Certificate of Conformity are really met.
Bureaucracy has never made quality, it just tries to document it and maybe also cover some a****.
For every project a lot of money gets wasted because someone without any hands-on experience had too much time to create new useless rules… and also rules about how to enforce those new rules. New inspectors are required who will also propose new rules to keep everyone busy. And I have not even mentioned the lawyers…
Sorry for my rant.
Sercos,
This is not a rant……it is practical, realistic bordering on brilliant.
I have read it three times and just kept nodding my head in agreement….even down to the commentary on bureaucratic ineptitude and Classification Societies.
If KC reads it then hopefully he archives the post. No doubt there are many contributors on these boards who also agree with you.
A chasm has formed between the exponential increase in vessel size and the aged technology which is part and parcel of this profession.
I thank you again for your input. Once again, I have learnt.
The comparisons between commercial aviation and shipping are interesting but what has not been mentioned is that aside from a level of standardization that will never exist in the commercial shipping world (and probably hasn’t since WW2 Libertys, Victorys, and T-2s) there is the matter of record keeping and regulatory reporting requirements.
The aircraft manuals dictate in great detail exactly what is allowable in which conditions, the corrective action required, and the records which must document any action or deferral. That documentation becomes part of the aircraft history, its DNA so to speak and follows it and its parts to the boneyard. Failures or aberrations with safety implications quickly show up and are not (usually) kept secret. Of course there are exceptions, it is human nature, but there exist the means to share critical information between operators and manufacturers.
In shipping, if the ship goes out and comes back nobody knows or cares what happened along the way unless it makes the evening news. And that suits the industry quite well.
In this message I posted here:
under “2 FIRST BLACKOUT” please ignore the timestamp “StreamTime LIVE Video: 2024-03-26/01:25:01 EDT” about the Emergency Generator (EG) supplying power after its start sequence was initiated automatically as the 1st blackout occurred.
I thought faint lights could be noticed but I was wrong, those were probably reflections on some windows.
Some faint lights are possibly battery-powered emergency lights and maybe also even some portable battery lamps but I can’t tell for sure.
During night time the bridge (wheelhouse) is kept dark with a part behind a curtain sometimes being lit to be able to read printed documents so it will remain dark anyway.
As those are reflections on windows far away, it’s hard to interpret small pixel blocks on the screen.
My mistake, sorry.
The expected delay for the EG (Emergency Generator) to come online should be less than 45 seconds IIRC, but depending on the emergency generator controller setup, which also depends on the emergency diesel engine, critical loads can be supplied within as little as about 10 seconds, up to the allowed 45 seconds.
As emergency diesel engines are preheated, they can be loaded quickly once the required RPM is reached (here 1800 RPM as it’s a 60 Hz 2-pole [EDITED: Read “4-pole”] synchronous generator). Some large fast 4-stroke emergency diesel gensets can be loaded to 100 % about 10 seconds after the engine starts rotating. See the YouTube video showing a large MTU genset starting (just search NFPA MTU).
The emergency generator controller is local, either on a panel that is part of the genset or, more commonly, part of the control switchgear located next to the generator. Typically it will be a simpler controller than those used for the roughly 10 times more powerful semi-rapid (here 720 RPM) 4-stroke auxiliary diesel engine generators DG1 to DG4, though the Hyundai genset controllers used are rather basic ones.
Well designed electronic controllers are very reliable; the real-life MTBF is not necessarily related to the number of electronic components of such controllers. It is way more important to get a good one to begin with and then avoid excessive vibrations (e.g. correctly sized wire rope isolators), high temperatures and ingress of humidity, water and dirt, and to ensure clean, stable auxiliary power (for example 24 V DC) for the electronics. Best practice would probably be to not have any ventilated electrical enclosure in the engine room, only at least IP 65 sealed ones, with water heat exchangers where required.
EDITED 2024-05-25:
As retdmarineengineer noticed, I made a silly mistake about the pole number, of course at 1800 RPM 60 Hz the generator needs to be of 4-pole type. Large 2-pole synchronous generators (3000 RPM at 50 Hz and 3600 RPM at 60 Hz) are used in high-speed applications, mostly driven by steam or gas turbines.
Up to around 2500 kVA (usual), or even somewhat above, diesel-engine-driven generator sets (gensets) are generally rapid (1500 RPM at 50 Hz and 1800 RPM at 60 Hz); above that, nearly all but ancient engines are semi-rapid (various RPM).
Generators driven by large propulsion diesel engines are rare; such generators require a large number of poles, which is not convenient to manufacture with a horizontal axis. Diesel engine power plants with many semi-rapid gensets are more common.
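The pole-count relationship behind all of this is simply synchronous RPM = 120 × frequency / number of poles; a quick check of the figures mentioned in this thread:

```python
# Synchronous speed: RPM = 120 * f / p
def sync_rpm(freq_hz, poles):
    return 120.0 * freq_hz / poles

print(sync_rpm(60, 4))    # 1800 RPM -> the EG at 1800 RPM / 60 Hz is a 4-pole machine
print(sync_rpm(60, 2))    # 3600 RPM -> 2-pole machines are turbine-speed units
print(sync_rpm(60, 10))   # 720 RPM  -> DG1..DG4 running at 720 RPM imply 10 poles
```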
Very nice ‘rant’ 244! And now, with the push for green fuels like methanol and ammonia, there is a disaster in the making.
Hi Sercos
The transformers, as you mentioned earlier in the message, will be sized for the full load of the LV board. The rules require 100% redundancy, so 2 x-formers. All this load is what is termed ‘essential loads’. (The rules couldn’t care less about the BT and reefers, except that the equipment installed to service these meets the rules and standards.)
The rules also require all the essential loads to be able to be serviced by 1 generator, with 100% redundancy.
I cannot see the ‘essential loads’ being any more than 2000 - 2500 kW, including the large, probably around 400 kW, hydraulic oil pump that is connected directly to the HV board. So even the smallest generator at 3840 kW is generously sized.
The others are there for the BT and the significant reefer (1400) outlets.
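A quick back-of-the-envelope check of that sizing argument, using only the figures quoted in this thread (2000–2500 kW essential loads, 3840 kW smallest generator, 3000 kW bow thruster); the arithmetic is illustrative, not from the PR:

```python
# Back-of-the-envelope check of the sizing argument above. The kW figures are
# the estimates quoted in the thread; the margins are simple arithmetic.

essential_kw_high = 2500   # upper estimate of essential loads incl. ~400 kW hydraulic pump
smallest_dg_kw    = 3840   # smallest auxiliary generator
bow_thruster_kw   = 3000   # maximum BT power (variable pitch, so usually less)

print(f"margin on one DG for essentials only : {smallest_dg_kw - essential_kw_high} kW")
print(f"one DG with essentials + full BT      : "
      f"{smallest_dg_kw - (essential_kw_high + bow_thruster_kw)} kW (deficit)")
```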
I am still getting my head wrapped around why only 2 generators on departure? The BT takes up almost a generator (you have to be prepared for full load draw). Even if there were fewer than 100 reefers on board, running 3 would make more sense. Wonder if this is the ‘saving fuel’ and avoid-unnecessary-running-hours mentality. I am also curious if the HV board design allows any combination of generators to run (including all 4) simultaneously, or is there a restriction due to the short circuit rating of the board.
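That last question can be illustrated very roughly: each paralleled generator adds its subtransient fault-current contribution, and the sum must stay within the board’s short-circuit rating. The 4800 kVA rating (3840 kW at an assumed 0.8 power factor), 6.6 kV and 16 % subtransient reactance below are assumptions for illustration only.

```python
# Very rough illustration of the board short-circuit concern: each paralleled
# generator adds its subtransient fault-current contribution. The rating,
# voltage and 16 % subtransient reactance are illustrative assumptions only.

import math

def gen_fault_contribution_ka(s_kva, u_kv, xd_2_pu=0.16):
    """Approximate initial symmetrical short-circuit contribution of one generator."""
    i_rated_a = s_kva * 1000.0 / (math.sqrt(3) * u_kv * 1000.0)
    return i_rated_a / xd_2_pu / 1000.0   # kA

per_gen_ka = gen_fault_contribution_ka(s_kva=4800, u_kv=6.6)   # ~3840 kW at pf 0.8
for n in range(1, 5):
    print(f"{n} generator(s) online: ~{n * per_gen_ka:.1f} kA fault contribution")
```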
Minor typo. 1800rpm/60hz will be a 4 pole genie.
MTU? Busted 3 large 1600 kW V16s (2 brand new EDGs on a rig during sea trials and 1 ‘essential’ on an FPSO). Problem is, zero to hero in a few seconds for a high-speed engine needs a lot more consideration.
In my experience it’d be more or less standard practice to run 3 generators for unmooring and reduce down to two generators at about the same time the tugs are let go.
Depending upon specific circumstances of course. The thinking being there’s not much point in racking up hours on a generator at speeds too fast for the BT to be effective.
I don’t profess to understand all the functionality of the PMS although I would have assumed that one of the aims was control of distribution in the event of unplanned interruption. Yet, all the breaker resets were undertaken manually which no doubt absorbed more time than that required by the PMS. Had the PMS provided instantaneous redirection then perhaps steering and propulsion integrity may have not been compromised.
My other query relates to diagnostic monitoring of the PMS. If a fault arises in the system will it flag codes?
Hi Aus
Difficult to say if the above is true. I would have thought that on blackout recovery (first on the HV board) the ‘essential’ service x-formers would be switched on without any time delay. Loading up the LV board would also be a sequential program, so it makes sense for the transformer breakers to be switched on immediately.
Possibly in this case, after they had the first blackout due to the transformer breakers opening and restored power by closing the same, the second blackout happened with DG3/4 being knocked out.
So with DG2 starting, they possibly decided to take the other transformer in service manually before the automation kicked in (maybe had to disable TR1 coming on line as DG2 was starting up). Stands to reason that TR1 breakers would have been selected as the ‘master’ and they had to disable this before it closed automatically. Can only imagine the chaos in the engine room. Of course not to forget no engineer would have had the time to run to the SG flat.
There should be alarms (and flags) for example if one DG is locked out for maintenance, etc. Will the system do a self diagnostic and ‘say’ all systems healthy? Don’t know.
The one thing that can be said with certainty is that, the way the power distribution was configured at the time of the blackout, there was only one single path of current from the HV buss to the LV buss, and thus a single point of failure, which is what occurred. Had the second HV/LV transformer been connected that morning, with the HV buss tie open and two gensets feeding each side of it, then losing any single generator or transformer would not have caused this. However, it has not been discussed why the engineers did not configure the power distribution in that manner. Was there some operational reason not to operate the switchboard in a fully redundant manner?
If you have the ability to, then why on earth not use it?
Hi KC
Indeed. Which is why the original conjecture was the timing of the first blackout was related to the crew securing the BT creating a ‘disturbance’ on the HV board.
Also when you say run 3 DGs … you actually mean all 3 … correct? Here with 4 … could have … should have …