The problem, in my experience, has often been the failure to produce a Concept of Operations (ConOps) prior to deployment. The two things a ConOps gives you are the performance envelope (the system does this, it doesn’t do that) and, more importantly, the doctrine. Doctrine describes what is expected of the operator in various situations, and writing it down explicitly uncovers the situations where the expectations are unreasonable or unwarranted confidence is likely.
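To make those two deliverables concrete, here is a minimal sketch of how a ConOps skeleton might be captured as structured data. This is purely illustrative; every field name and example entry is a hypothetical of mine, not drawn from any real program or standard.

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceEnvelope:
    """What the system does and, just as importantly, does not do."""
    does: list[str] = field(default_factory=list)
    does_not: list[str] = field(default_factory=list)

@dataclass
class DoctrineEntry:
    """What is expected of the operator in one specific situation."""
    situation: str
    operator_action: str
    max_response_time_s: float  # forces the takeover-time question up front

@dataclass
class ConOps:
    system: str
    envelope: PerformanceEnvelope
    doctrine: list[DoctrineEntry]

# Hypothetical Level 3 vehicle example: merely writing the doctrine entry
# forces the question "is this a reasonable takeover budget for a drowsy human?"
l3_conops = ConOps(
    system="Level 3 highway pilot",
    envelope=PerformanceEnvelope(
        does=["lane keeping on divided highways", "speed management in traffic"],
        does_not=["handle construction zones", "operate in heavy rain"],
    ),
    doctrine=[
        DoctrineEntry(
            situation="system requests handover",
            operator_action="resume full control of the vehicle",
            max_response_time_s=4.0,
        ),
    ],
)
```

The point of the `max_response_time_s` field is exactly the point of doctrine: you cannot fill it in without confronting what the operator can actually do.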
A good example of the consequences of failing to produce a ConOps is the so-called “level 3 fallacy” in autonomous vehicles. Level 3 is the “conditional automation” level on the SAE scale: robot primary, human backup. Then manufacturers built some and discovered that the first thing many human backups did once the robot was driving was fall asleep. Even when they were awake, tests showed it took as long as 26 seconds for an “average” human to regain enough situational awareness to drive the car safely. I am firmly convinced that writing a ConOps before they started would have saved a lot of time and money, and at least one fatality.
Another example is the blowout preventer on the Deepwater Horizon. A great deal of time was wasted after the event by experts arguing over what the performance envelope of that thing should have been, and over the doctrine that should have governed its operation.
I introduced the notion of a ConOps to a senior drilling engineer, and his response was, “This reads like the first draft of the user manual.” My response was, “Exactly. The point is that you write it first, and the rest of the documentation tree follows.”