
There is a certain irony in crashing a Tesla running its Full Self-Driving system after you have spent years leading Uber's division for developing exactly such vehicles. That is what happened to Raffi Krikorian in his Model X, but his story isn't really about a single car accident.
Instead, it perfectly captures the awkward stage we’re in with modern automation, where technology can do most of the work but still expects a human to step in instantly when something goes wrong. While this is the reality of today’s driver assistance systems, cases like this show how fragile that balance really is.
What exactly happened?
According to his essay published in The Atlantic, Krikorian was driving his children through residential streets in an urban area with Tesla's Full Self-Driving (Supervised) system engaged. After months of driving without a single incident, he had grown comfortable using it off highways. Then the car started to turn, the steering wheel jerked unexpectedly, and within a few seconds the Model X crashed into a concrete wall.
Everyone came out unscathed, but the experience shook him deeply, not just because of the crash, but because of how familiar the pattern seemed to him. It matches what researcher Madeleine Clare Elish calls the "moral crumple zone": the idea that when complex automated systems fail, the human operator absorbs the blame in much the same way a car's crumple zone absorbs the force of an impact. Although the system does most of the work, the driver remains legally responsible.
The driver’s responsibility is unquestionable
On this basis, Tesla has won numerous court cases, and it is easy to see why. The company, like other car manufacturers, constantly warns drivers that autonomous features are not perfect and that they must be ready to take control at all times. But the most interesting part of Krikorian's essay is not legal; it is psychological and physiological.
On the psychological side, semi-autonomous systems create a dangerous gray zone: they work well enough that drivers stop actively driving, but not well enough to eliminate the need for human intervention. Tesla's own data illustrates how well they work: vehicles with FSD engaged record one accident roughly every 5.3 million miles driven, compared to one every 2.2 million miles for manual driving, about 2.4 times fewer crashes per mile. Paradoxically, it is precisely this statistical superiority that encourages drivers to relax.
Researchers call this phenomenon "vigilance decrement": when people monitor a system that almost never makes mistakes, their attention naturally wanders. It is a familiar problem that is often overlooked in the shadow of sensational headlines about accidents involving any form of autonomy.
That wandering attention leads to the second part of the problem: physiology. Even the most prepared people often need a few seconds to refocus, decide how to act, and then physically react. The same pattern appears everywhere humans oversee automation, from cockpits to AI chatbots. Technology builds trust by working flawlessly most of the time, then relies on a human to save the day when something unexpected happens. And when that rescue fails, it is the human who is held responsible, as numerous reports and court rulings confirm.
The most difficult part of the whole situation is that this middle stage of technology development may be inevitable. Technology must be used in the real world to improve, and that means living with systems that can do most of the work, but still need a human ready to take control at a moment’s notice.
The problem is that the better these systems get, the easier it is to forget that you're still in charge. Until the moment an accident report reminds you of it.