Transportation

Feds Put AI in the Driver’s Seat

The artificial intelligence component of Google’s Level 4 autonomous cars can be considered the driver, whether or not the cars are occupied by humans, the U.S. National Highway Traffic Safety Administration said in a letter released Tuesday.

Level 4 full self-driving automation vehicles perform all safety-critical driving functions and monitor roadway conditions for an entire trip.

Google’s L4 vehicle design will do away with the steering wheel and the brake and gas pedals.

Current U.S. Federal Motor Vehicle Safety Standards, or FMVSS, don’t apply because they were drafted when driver controls and interfaces were the norm and it was assumed the driver would be a human, the NHTSA wrote to Chris Urmson, who heads Google’s Self-Driving Car Project.

Those assumptions won’t hold as autonomous car technology advances, and the NHTSA may not be able to use its current test procedures to determine compliance with the safety standards.

Google is “the only company so far committed to L4 because their objective is to completely eliminate the human, and thus human error, from driving,” said Praveen Chandrasekar, a research manager at Frost & Sullivan.

“Ford and GM are thinking about similar levels” of automation, he told TechNewsWorld.

Safety Standards

Google provided two suggested interpretations of what a driver is and one of where the driver’s seating position is, then applied those approaches to various provisions so its self-driving vehicle design could be certified as compliant with the FMVSS.

“The next question is whether and how Google could certify that the SDS (self-driving system) meets a standard developed and designed to apply to a vehicle with a human driver,” the NHTSA wrote. The agency must have a test procedure or other means of verifying such compliance.

The NHTSA’s interpretation “is significant, but the burden remains on self-driving car manufacturers to prove that their vehicles meet rigorous federal safety standards,” U.S. Transportation Secretary Anthony Foxx said Wednesday.

The NHTSA’s interpretation is “outrageous,” said John Simpson, a consumer advocate at Consumer Watchdog.

“Google’s own numbers reveal its autonomous technology failed 341 times over 15 months, demonstrating that we need a human driver behind the wheel who can take control. You’ll recall that the robot technology failed 272 times, and the human driver was scared enough to take control 69 times,” he told TechNewsWorld.

Unresolved Issues

Many of Google’s requests “present policy issues beyond the scope and limitations of interpretations and thus will need to be addressed using other regulatory tools or approaches,” the NHTSA stated.

They include FMVSS No. 135, which governs light vehicle brake systems; FMVSS No. 101, which covers controls and displays; and FMVSS No. 108, governing lamps, reflective devices and associated equipment.

In some cases, Google might be able to show that certain standards are unnecessary for a particular vehicle design, but it hasn’t yet made such a showing.

Because the NHTSA’s interpretations don’t fully resolve all the issues raised, Google may have to seek exemptions from some FMVSS requirements as an interim step.

“All kinds of people are working on L4 cars, and there’s an indication the NHTSA’s going to be relatively accommodating with the granting of exemptions,” said Roger Lanctot, an associate research director at Strategy Analytics.

“The orientation of NHTSA is strongly toward taking the driver out of the driver’s seat,” he told TechNewsWorld.

Insurance and Liability

Current FMVSS rules will have to change to accommodate Google’s request, and that shift will bring changes in auto insurance, Frost & Sullivan’s Chandrasekar predicted, because “currently, insurance is decided based largely on the driver and minimally on the vehicle.”

Further, liability “is a huge factor, and that will need to be carefully analyzed as OEMs will end up being largely responsible,” he said. “This is why OEMs like Volvo, Audi and Mercedes-Benz have stated that in their L3 vehicles they’ll assume all liability when the vehicle is driving itself.”

Richard Adhikari

Richard Adhikari has written about high-tech for leading industry publications since the 1990s and wonders where it's all leading to. Will implanted RFID chips in humans be the Mark of the Beast? Will nanotech solve our coming food crisis? Does Sturgeon's Law still hold true? You can connect with Richard on Google+.

2 Comments

  • This ruling hits on a key reason full automation will remain a dream: liability. Currently, things like adaptive cruise control and accident avoidance skirt the line, because the driver is still required to be behind the wheel. Hence the driver is liable for an accident. Remove driver involvement, though, and the car manufacturer (Google in this case) is 100% liable because it becomes the driver. That makes perfect sense.

    While accidents might be lower this way, Google would still be liable for every computer glitch, bad-weather accident, person walking into moving traffic, and fraudster jumping on the hood of a parked car at a light and screaming that they were hit.

  • The idea that Google can eliminate human error is totally wrong. Maybe they think humans do not create and program this technology. When it goes wrong, and it will, it would not affect just one of those cars, like one bad driver; it could very well be many thousands. Another question that remains is how well a non-human-controlled car can react to many human drivers. The other elephant in the room is liability, and how insurance companies will embrace totally self-driving cars. Right now, I am sure Google, with all its cash, is mostly self-insured for this testing. I have yet to read much about how insurers feel about insuring a robot car.
