When you write a Concept of Operations for a system that gives humans full authority (see https://www.mitre.org/publications/systems-engineering-guide/se-lifecycle-building-blocks/concept-development/concept-of-operations if you are unfamiliar with ConOps), you inevitably end up with a section called “Doctrine,” directed at the human operator, which boils down to a list of statements of the form “stay alert and don’t let the system get into such-and-such a state.” E.g. “don’t run aground.” And then there’s always a written or unwritten footnote: “unless the nature of the emergency is such that running aground is the best option.”
The problem with fully autonomous systems is that you have to program in all of that doctrine, much of which rests on human intuition and tacit knowledge. Robots aren’t that smart yet. The problem with semi-autonomous systems, like the Tesla “autopilot,” is that they breed complacency (people treat them as if they were fully autonomous), and the vendor’s “doctrine” (that the operator should be able to tell when the robot is confused and take over control) is just a marketing cop-out.
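To make the difficulty concrete, here is a toy sketch (all names and thresholds invented for illustration). The rule itself is trivial to encode; the open-ended “unless” clause is where the tacit knowledge lives, because enumerating every emergency in which breaking the rule is the best option is exactly what a human operator does by judgment and a program must make explicit:

```python
from dataclasses import dataclass

@dataclass
class Situation:
    """A hypothetical, drastically simplified world state."""
    depth_m: float       # water depth under the keel
    fire_onboard: bool   # one of indefinitely many possible emergencies
    # ...in reality the relevant state space is open-ended

def violates_doctrine(s: Situation) -> bool:
    """The easy part: the written rule, 'don't run aground.'"""
    return s.depth_m < 1.0

def grounding_is_best_option(s: Situation) -> bool:
    """The hard part: the unwritten footnote. This stand-in covers
    exactly one emergency; the real predicate is open-ended and
    rests on human intuition about the whole situation."""
    return s.fire_onboard  # hopelessly incomplete

def doctrine_satisfied(s: Situation) -> bool:
    # The rule holds, or the exception applies.
    return not violates_doctrine(s) or grounding_is_best_option(s)
```

The point of the sketch is that every branch of `grounding_is_best_option` that a programmer forgets to write is a case the human operator would have handled by judgment alone.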
The bottom line is that control of autonomous vehicles, large or small, fast or slow, in real-world conditions is a problem that is not solved by wishful thinking and marketing brochures. And that’s without taking into account the existence of competent organizations that wish to do you harm.