SAN FRANCISCO — In the minds of many in Silicon Valley and in the auto industry, it is inevitable that cars will eventually drive themselves. It is simply a matter of how long it will take for the technology to be reliably safe.
But as indicated by Google’s challenges with the so-called handoff between machines and humans — not to mention Uber’s problems during recent tests on the streets of San Francisco — there is a lot more work to be done before self-driving cars are ready for the mainstream. Here are some of the challenges facing technologists.
The ability to respond to spoken commands or hand signals from law enforcement or highway safety employees.
There are subtle signals that humans take for granted: the body language of a traffic control officer, for example, or a bicyclist trying to make eye contact. How do you teach a computer human intuition? Perhaps the only way is endless hours of road testing, so that machines can learn the interactions that humans have been socialized to understand.
Driving safely despite unclear lane markings.
This, too, is a question of intuition. The most challenging driving environments require self-driving cars to make guidance decisions without white lines, Botts Dots (those little plastic bumps that mark lanes) or clear demarcations at the edge of the road.
Notably, California may phase out Botts Dots on its roads because, among other issues, they are not believed to be an effective lane-marking tool for automated vehicles. In short, the highway infrastructure is going to have to change over time to interact with computer-driven vehicles.
Reliably recognizing traffic lights that are not working.
Self-driving cars’ vision systems can now pick out traffic lights reliably. Making correct decisions when a light loses power is more challenging. Yet again, it is a question of teaching machines human intuition, and of getting multiple vehicles to cooperate when the signals fail.
Making left turns into intersections with fast-moving traffic.
Merging into rapidly flowing lanes of traffic is a delicate task that often requires eye contact with oncoming drivers. How can machines subtly let other machines and humans know what they are trying to do? Researchers are considering solutions like electronic signs and car-to-car communications systems.
Detecting which small objects in the roadway must be avoided.
Recognizing objects is something that machine-vision systems can now do reliably. But so-called scene understanding, which would let a car judge, say, whether a bag in the road is empty or hides a brick, remains more challenging for computer vision systems.
The ability to operate safely in all weather conditions. Software improvements to lidar (short for light detection and ranging) technology may help someday, but not yet.
Lidar systems can’t be fooled by darkness or sun glare. But if you’re wondering whether the lidar systems in self-driving cars have problems in rain or snow, you’re on to something. Heavy rain or snow can confuse current car radar and lidar systems, making it necessary for humans to intervene.
Cybersecurity. There is no evidence yet that autonomous cars will be any more secure than any other networked computers.
A self-driving car is a collection of networked computers and sensors wirelessly connected to the outside world. Keeping the systems safe from intruders who may wish to crash cars — or turn them into weapons — may be the most daunting challenge facing autonomous driving.