The U.S. Senate Committee on Commerce, Science, and Transportation held a hearing on March 15 to discuss the future of self-driving cars. During that hearing, one of the experts called to testify, Missy Cummings, said, “While I enthusiastically support the research and development of self-driving cars, I’m less optimistic about what I see as a rush to field systems that are really not ready for widespread deployment.”
Cummings is an engineering professor at Duke University and director of Duke’s Humans and Autonomy Lab. She managed a multimillion-dollar Navy program to build a robotic helicopter equipped with sensors similar to those on self-driving cars, and is now leading a National Science Foundation-funded study of how pedestrians interact with autonomous vehicles.
In the hearing, she outlined several scenarios that she believes remain problematic for self-driving technology: operation in bad weather (including standing water on roads, sudden downpours, and snow); the technology’s inability to follow the hand gestures of a traffic officer; and “vulnerability to malevolent or even prankster intent” (including spoofing GPS signals to send self-driving vehicles off course).
Cummings also raised privacy concerns about the collection of personal data (such as places traveled), about who has access to that data, and about whether it could be used for other commercial or government purposes.
In an interview with Autonews, she was asked, “Can humans be trusted to snack or nap in the car and take the wheel when needed?” Her answer was a simple “no.” Yet that expectation is one of the major drivers of public interest in autonomous technology: the idea that commuters will be able to catch up on sleep, eat, work, or watch a movie while the vehicle drives itself.
Cummings also believes the idea of a “failsafe,” the assumption that a human driver could take control of the vehicle if need be, is problematic: “The car can never assume that when it needs to hand off control, the human will be ready at that instant.”