If autonomous vehicles crash, who is ultimately at fault? Was it driver error or system error? Was there a conflict in the messaging between vehicles? Aircraft crash investigators, led by the National Transportation Safety Board, have been very effective in pinpointing and remedying the causes of plane crashes over the years, thanks to the presence of a “black box” that preserves hardened, sealed data and audit trails of pilot and machine actions.
Should self-driving cars and robots also come with black box recorders? That’s the proposal put forth in a paper published in Nature Machine Intelligence. In a related post at Stanford’s Human-Centered AI site, Edmund L. Andrews notes that the new proposals “focus on three principles: preparing prospective risk assessments before putting a system to work; creating an audit trail — including the black box — to analyze accidents when they occur; and promoting adherence to local and national regulations.”
Black boxes are mandatory in aircraft cockpits, and now in train cabs, recording the pilot’s or engineer’s actions as well as what was happening mechanically with the aircraft or vehicle. It’s time to consider installing the same devices in autonomous vehicles and robots, urges the paper, written by a team of researchers led by Gregory Falco of Johns Hopkins University. The issue has become especially pertinent after recent crashes involving Tesla cars equipped with Autopilot. At stake is trust in the capabilities of autonomous vehicles and robots — including industrial robot welders, care-bots and even robot lawnmowers, Andrews adds.
“There are assurance challenges that have become increasingly visible in high-profile crashes and incidents,” Falco and his co-authors state. “Governance of such systems is critical to garner widespread public trust.”
Previous efforts to instill governance principles in autonomous systems have run up against cost and process constraints, they add. They propose “an independent audit of AI systems that would embody three ‘AAA’ governance principles of prospective risk assessments, operation audit trails and system adherence to jurisdictional requirements.” Such an independent auditing mechanism “could foster risk awareness, responsible development and thoughtful utilization of highly automated systems.”
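As a loose illustration of what an operational audit trail might look like in software (this sketch is not from the paper, and the class and event names are hypothetical), a black-box log can be built as an append-only record where each entry is time-stamped and hash-chained to the previous one, so that any after-the-fact edit to the record is detectable:

```python
import hashlib
import json
import time

class AuditTrail:
    """Append-only, hash-chained event log (tamper-evident)."""

    GENESIS = "0" * 64  # placeholder hash for the first record

    def __init__(self):
        self._records = []
        self._prev_hash = self.GENESIS

    def append(self, event: dict) -> None:
        # Each record captures the event, a timestamp, and the
        # previous record's hash, then is hashed itself.
        record = {
            "timestamp": time.time(),
            "event": event,
            "prev_hash": self._prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self._records.append(record)
        self._prev_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; any edited record breaks it."""
        prev = self.GENESIS
        for record in self._records:
            body = {k: v for k, v in record.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["prev_hash"] != prev or recomputed != record["hash"]:
                return False
            prev = record["hash"]
        return True

# Hypothetical events from an autonomous mower:
log = AuditTrail()
log.append({"type": "sensor", "obstacle_detected": True})
log.append({"type": "action", "command": "emergency_stop"})
print(log.verify())  # True: chain is intact

log._records[0]["event"]["obstacle_detected"] = False
print(log.verify())  # False: tampering detected
```

An auditor replaying such a log can reconstruct the sequence of sensor readings and machine actions, much as crash investigators replay a flight recorder; a production system would additionally need crash-hardened storage and protected write access, which this sketch does not model.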
The National Transportation Safety Board is already pushing manufacturers to install standardized black boxes in autonomous vehicles, Andrews states. “Falco and a colleague have mapped out one kind of black box for that industry. But the safety issues now extend well beyond cars. If a recreational drone slices through a power line and kills someone, it wouldn’t currently have a black box to unravel what happened. The same would be true for a robo-mower that runs amok. Medical devices that use artificial intelligence need to record time-stamped information on everything that happens while they’re in use.”