Automotive

Driverless Cars Crash Into the Trolley Problem

Greg Walden of McGuireWoods looks to the 19th century to explain the 21st-century issues confronting the standards and risks surrounding autonomous vehicles.

CCBJ: As driverless cars become a reality, what top legal concerns are keeping corporate counsel up at night?

Greg Walden: They may actually be more concerned right now than they will be in 10 years. I say that because we're not yet at the point of truly driverless cars, where a person just sits there and has no ability to control the vehicle. Drivers are relying on technology, but until we reach complete autonomy, no matter how technologically advanced the driving experience is, there's still a role for the driver to take over if something goes wrong.

That's where the uneasiness is, for both manufacturers and employees using a company car with automated features. Employees using cars that are not fully driverless need to understand the limits of the technology and that they are still responsible for taking control of the vehicle when necessary – and they need to recognize when that is.

Corporate counsel must ensure that insurance policies adequately cover these vehicles. If the company is sued, can it go after the manufacturer where there may be a flaw in the design or manufacture of the driverless or almost-driverless vehicle?

A major concern is that motor vehicles are not yet fully automated, but we're getting there. Cars with various levels of automation are on the road. Even if your car is driverless, the car going the other way probably isn't. Legal counsel for a business operating driverless cars should make sure it is fully protected by insurance policies and has as much protection through indemnification provisions as can be obtained contractually.

Who is legally responsible when a two-car accident involves at least one autonomous vehicle?

Traditionally in a car accident, you could sue the driver and the driver's employer, and you've always been able to sue the manufacturer if you think the fault – what caused or contributed to the accident – was a design or manufacturing defect. The more autonomous the operation, the less likely the driver will be held responsible and the more likely it will be the manufacturer of the vehicle.

There is one scenario in which the operator of an autonomous vehicle might be held responsible: a company that rents a fleet of autonomous vehicles or deploys them in a ride-hailing service like Uber or Lyft. If there's an accident, the driver of the nonautonomous car may sue the company that has the fleet as well as the manufacturer. And the operator could file a third-party complaint against the manufacturer, or vice versa: The manufacturer is sued and replies, "It wasn't our fault. It was a lack of training, a maintenance problem or operation outside the technological parameters by this fleet operator."

What ethical considerations go into programming autonomous vehicles?

This begins with the so-called trolley problem. A trolley goes out of control, and the operator has to choose: steer to the left, and one bystander may be injured or killed, or steer to the right, and five people on the trolley are killed. Either way, there will be loss of life or serious personal injury. What do you do? You want to protect the trolley passengers, so you steer to the left, killing the bystander.

This relates to how coders program autonomous vehicles to respond to a common accident scenario. Say the cardinal programming objective is to protect the occupants of the autonomous vehicle to the maximum extent. One or two occupants of that car might be saved, but 20 people in its path might die as it takes evasive action.

Similarly, if a vehicle is programmed to stop to avoid hitting deer in the road, it stops – but the cars behind it, traveling at high speed, will crash into the stopped vehicle. A human driver would look in the rearview mirror and say, "If I brake suddenly to avoid the deer, I'm going to get smashed and a number of people may be hurt, so I might as well run into the deer." That's a human decision, and there are countless considerations that go into calculating the right choice.

How do you program a vehicle in that way? The law may or may not sort that out. If damage and injury are inevitable, at a minimum, the driver and perhaps the manufacturer need to know how an autonomous vehicle is programmed. If a nonautonomous vehicle is about to crash, must the autonomous one make the evasive maneuver because it cannot assume cooperation from the nonautonomous vehicle? These questions are fascinating but frustrating; there's not a clear answer. Manufacturers should not be blind to it, but this may be a matter of public policy. Whatever programming choices are made should be transparent.
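To make the dilemma concrete, here is a minimal sketch in Python of how such a programming objective might be encoded. It is purely hypothetical and not drawn from any manufacturer's actual software; the maneuvers, harm estimates and occupant_weight parameter are all invented for illustration.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_harm: float   # expected harm to the vehicle's occupants (0 to 1)
    bystander_harm: float  # expected harm to people outside the vehicle (0 to 1)

def choose_maneuver(options, occupant_weight):
    # The "cardinal programming objective" is reduced to one number:
    # occupant_weight = 1.0 treats everyone equally; larger values
    # privilege the car's occupants over bystanders.
    def cost(m):
        return occupant_weight * m.occupant_harm + m.bystander_harm
    return min(options, key=cost)

# The trolley-style dilemma from the discussion above, with invented numbers:
options = [
    Maneuver("swerve toward bystanders", occupant_harm=0.1, bystander_harm=0.9),
    Maneuver("brake hard in lane", occupant_harm=0.7, bystander_harm=0.1),
]

print(choose_maneuver(options, occupant_weight=1.0).name)   # brake hard in lane
print(choose_maneuver(options, occupant_weight=10.0).name)  # swerve toward bystanders

The point of the sketch is that the ethical trade-off can collapse into a single explicit, auditable parameter. Transparency, in practice, would mean disclosing how that kind of weighting is set.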

How have state and federal governments approached automated and autonomous vehicles?

The federal government has authority over the design and manufacture of trucks and cars. That's the purview of the National Highway Traffic Safety Administration (NHTSA), which is part of the Department of Transportation (DOT). States adopt the rules of the road, including speed limits, car registration and driver licensing. With regard to automation and autonomous vehicles, NHTSA has yet to put out a single Federal Motor Vehicle Safety Standard (FMVSS). To date, NHTSA has published only guidance. Congress is pushing NHTSA and the DOT to come out with more policies and guidance, as well as these safety standards.

Lacking a fully developed federal regulatory framework, some states have taken action. Nevada, for example, actually has safety standards for driverless cars. Those could well be preempted when the federal government adopts one or more FMVSS. Many states have amended their laws so that autonomous or semiautonomous cars comply with the rules of the road just as traditional cars with drivers do.

As a matter of policy, NHTSA is likely to stay with its approach of not certifying and not approving a design or the manufacture of a vehicle that conforms to the design. NHTSA's historical position is that it's up to the manufacturer to certify its own compliance with the Federal Motor Vehicle Safety Standards, and once it does, those cars are good to sell to the general public. This is the opposite of how the Federal Aviation Administration (FAA) regulates the safety of aviation: The FAA says you can't fly an aircraft unless it is certified as airworthy. NHTSA is not leaning in that direction. Bills moving through Congress would require manufacturers to provide safety certifications to NHTSA – somewhat of a middle position. The feds will still, I think, stay in the driver's seat with respect to the design and manufacture of driverless cars, with states relegated to rules of the road.

Insurance is also a state-level matter. As cars have become more technologically complicated, litigation has been less than satisfactory both to plaintiffs – because of the high cost of litigation and the challenge of meeting the burden of proof – and to defendants – because of the high cost of punitive damages. Some states have adopted no-fault insurance. In that way, insurance payments could resolve disputes, obviating protracted, very costly litigation. States that have not embraced no-fault insurance might look at it now in the context of autonomous vehicles, where the problem may be an error in the programming – something that is difficult to identify specifically. But that's going to take several years to sort out. Many states are waiting for the federal government to act, although liability is largely a state matter.

In addition to the rules of the road, state and local governments control what happens when there’s a tort lawsuit, and courts determine what standard applies. Courts are going to have to grapple with the notion that there’s no longer a driver in one or both of the cars involved in an accident. Will the standard of proof be revised when the defendant is a driverless-car manufacturer?

Right now, for the most part, the burden of proof is on the plaintiff. With autonomous vehicles, it's going to be more difficult and costly for plaintiffs to pinpoint the cause of an accident. Tort law may be revised in some states, either to establish a no-fault insurance system or to use common law doctrines like res ipsa loquitur to relieve the plaintiff of a very challenging burden. Perhaps the nature of an accident will be sufficient to establish that the manufacturer breached its duty of due care.

What policy questions or issues should companies consider as they seek to deploy driverless cars or contract with transportation companies that plan to use them?

Companies that are going to deploy driverless cars that they own or lease, or that contract with a transportation company, will need to vet vehicles to make sure they are safe and reliable and that they comply with applicable Federal Motor Vehicle Safety Standards. But until those standards exist – and they could take years to develop – companies need to be assured that the vehicle has been designed to address the various risks. Those standards and risk factors are addressed in DOT policy statements.

A company would need to make sure that its insurance policies adequately cover autonomous vehicle operations and should seek indemnification if possible to address personal injury or property damage.


Greg Walden is a partner at McGuireWoods and a senior advisor in the firm’s full-service public affairs arm, McGuireWoods Consulting, where he is a member of the Emerging Technologies practice group. With experience in government, private practice and teaching, he is a national authority in transportation law and policy. Reach him at [email protected].
