This week’s question comes from Adrian H. in San Francisco who asks: “I read something in the paper about self-driving vehicles the other day. Did the government recently state that these cars can be operated without someone being able to take control in an emergency? What does that mean: will we be seeing cars on the road without someone in the driver’s seat?”
Adrian, self-driving (autonomous) vehicle technology is developing rapidly. Some, including me, think that things are moving too fast, without proper consideration for public safety.
The federal government and the states, including California, are approaching the future of transportation from different directions. While the State of California is approaching the future with an eye towards safety and security, the federal government appears to be bending to the whim of the major automakers who want to push ahead in a radical new direction.
From the beginning of the history of transportation, the driver of a vehicle has been understood to be the one sitting in the saddle, holding the reins, or sitting behind the wheel of a car.
Last month, the National Highway Traffic Safety Administration (NHTSA) took a radical departure when it published a letter to the Director of Google’s self-driving project stating that the “driver” of an autonomous vehicle is the Artificial Intelligence (AI) system directing the vehicle’s movements.
NHTSA Definition of Driver In An Autonomous Vehicle: “Dangerous and Unacceptable”
In my opinion, NHTSA’s radical definition is a dangerous and unacceptable move. Ironically, on the very same day NHTSA published its letter to Google on its website, the company revealed that its driverless vehicle had caused an accident with a municipal transit bus on February 14, 2016, in California.
The collision happened when the vehicle, operating in autonomous mode with a human on board, drove out of its lane and then merged back into its path of travel, striking the side of the bus. Although the impact was minimal (the bus was traveling at 15 mph and the car at approximately 3 mph), it caused damage to both vehicles.
In a DMV report, Google stated that the vehicle’s movements were made “more complex” by the presence of sandbags in the roadway. Google stated that the AI believed the bus would slow or stop to yield, allowing the vehicle to merge back into the lane of travel.
Google stated: “From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future.”
If the AI is so unsophisticated that it cannot navigate a “complex” decision involving sandbags in the roadway while a vehicle as large as a bus is alongside it, what will it do when it encounters a truly complex situation, such as a child running into the roadway?
Google’s “hopes” that the technology will act more “gracefully in the future” are unacceptable. When it comes to safety, the public has a right to certainty.
NHTSA should rescind its determination and instead adopt the regulations already established in a number of states, including California (where Google is based and where the most AI-driven miles have been clocked), which declare that a human operator is responsible for ensuring that the vehicle is driven safely.
Under the current California Vehicle Code, Section 38750(4), an “operator” of an autonomous vehicle is the person who is seated in the driver’s seat or, if there is no person in the driver’s seat, who causes the autonomous technology to engage.
Subsection (5)(b)(2) states that “the driver shall be seated in the driver’s seat, monitoring the safe operation of the autonomous vehicle, and capable of taking over immediate manual control of the autonomous vehicle in the event of an AI failure or other emergency.”
Unlike the proposed federal regulation, California law places responsibility on the individual who controls the vehicle and/or initiates the AI. This allows for an analysis of that decision maker’s conduct in using and controlling the technology, making it possible to determine who is responsible when an accident occurs.
As the Feds delegate responsibility to AI, the DMV is using human intelligence to craft sound policy that balances safety, security and privacy rights. The proposed regulations, to be employed during a three-year testing program, require manufacturers to meet independently verified safety standards and to have a licensed operator inside the vehicle capable of taking control in the event of a technology failure or other emergency.
The DMV regulations are also designed to protect privacy by requiring manufacturers to disclose to the operator whether any information is collected beyond what is needed to safely operate the vehicle.
As a trial lawyer who handles personal injury actions, I am volunteering to help shape these regulations before someone is injured or killed, reviewing and commenting on the proposals to make sure that consumer rights and safety are protected.
This article was written by Chris Dolan and published by The San Francisco Examiner.