It is crazy to think how far technology has come in such a short amount of time. In just thirty years there have been countless innovations and developments that have integrated technology into nearly every aspect of our daily lives. Cell phones that can call people anywhere in the world have become the norm. The internet allows you to find the answer to practically any question in a matter of seconds. There are even refrigerators that have a camera inside so that you can see what groceries you need to pick up.
Among the many automated products available today are cars themselves. Several companies, including Tesla and Google, have been testing self-driving motor vehicles. These vehicles could eventually eliminate the need for a driver, preventing accidents and saving lives. However, as with the rise of any new technology, self-driving cars raise new legal questions that will eventually need to be answered.
One such question that the courts may soon have to grapple with is: what happens when an intoxicated driver is using a self-driving vehicle with the self-driving feature turned on? Is that person, in fact, driving under the influence?
According to Fortune, the incident that gave rise to this legal question occurred in January 2018. On the Bay Bridge in San Francisco, the California Highway Patrol found a man passed out behind the wheel of his Tesla and pursued a DUI investigation. The driver claimed that his vehicle was on autopilot. The autopilot feature available in a Tesla is "designed to get a driver's attention if it detects a challenging situation and brings the car to a stop if a driver does not respond." The feature is "not fully autonomous driving, though it can look like it for short stretches and under specific conditions," and Tesla specifies that drivers should remain alert when using it. Clearly, this autopilot feature is not the same as autonomous driving; the California man was still in full control of his vehicle. The police arrested him for DUI.
Flowing from the California man's argument is the real legal issue that arises from the use of a fully autonomous vehicle: who is in control of the vehicle, and therefore, who is to blame when something goes wrong?
Who is Responsible for Self-Driving Cars When Accidents Happen?
In May 2017, Georgia Governor Nathan Deal signed a bill that allows self-driving cars on the state's public roads. According to the AJC, Senate Majority Whip Steve Gooch said, "These cars are going to save lives; they're going to reduce DUIs and reduce fatalities on our state and local roads."
But that was not the case for a woman who was struck and killed in Tempe, Arizona this month. Uber has been testing fully self-driving cars, unlike the semi-autonomous Tesla the California man was driving, and, unfortunately, one test vehicle may have caused the death of 49-year-old Elaine Herzberg. AZ Central reported that she was walking her bike across the street when she was struck by one of Uber's self-driving SUVs. Ms. Herzberg now has the morbid distinction of being the first person killed by a fully autonomous test vehicle.
So who is to blame for this unfortunate accident? There was a driver behind the wheel who was not paying attention, but the technology was supposed to detect obstacles in the road and brake in time to avoid a collision. The Tempe Police Chief stated that the vehicle was likely not at fault for the collision. But should it shoulder some of the responsibility? Should malfunctioning technology be a mitigating factor in an accident of this type?
Can Self-Driving Vehicles be a Defense When Something Goes Wrong?
Likely, all of these questions will be answered in the years to come as self-driving technology becomes more widespread. Perhaps one day, malfunctioning technology will be a defense to a DUI charge or to liability in an accident like the one that happened in Arizona. Today, however, it is not. So keep your wits about you, and remain sober, when behind the wheel, even if your vehicle has an autopilot feature.