The New York Times
June 7, 2017
SAN FRANCISCO — In the minds of many in Silicon Valley and in the auto industry, it is inevitable that cars will eventually drive themselves. It is simply a matter of how long it will take for the technology to be reliably safe.
But as indicated by Google’s challenges with the so-called handoff between machines and humans — not to mention Uber’s problems during recent tests on the streets of San Francisco — there is a lot more work to be done before self-driving cars are ready for the mainstream. Here are some of the challenges facing technologists.
The ability to respond to spoken commands or hand signals from law enforcement or highway safety workers.
There are subtle signals that humans take for granted: the body language of a traffic control officer, for example, or a bicyclist trying to make eye contact. How do you teach a computer human intuition? Perhaps the only way is endless hours of road testing, so that machines can learn the interactions that humans have been socialized to understand.
Driving safely despite unclear lane markings.
This, too, is a question of intuition. The most challenging driving environments require self-driving cars to make navigation decisions without white lines, Botts' dots (those little plastic bumps that mark lanes) or clear demarcations at the edge of the road.
Notably, California may phase out Botts' dots on its roads because, among other issues, they are not believed to be an effective lane-marking tool for automated vehicles. In short, the highway infrastructure is going to have to change over time to accommodate computer-driven vehicles.
Reliably recognizing traffic lights that are not working.
Picking out traffic lights is something self-driving car vision systems now do reliably. Making the correct decision in the event of a power failure is more challenging. Once again, it is a question of teaching a machine human intuition and of getting multiple vehicles to cooperate.
Making left turns at intersections into fast-moving traffic.
Merging into rapidly flowing lanes of traffic is a tricky task that often requires eye contact with oncoming drivers. How can machines subtly let other machines and humans know what they are trying to do? Researchers are considering solutions like electronic signs and car-to-car communication systems.
Detecting which small objects in the roadway must be avoided.
Recognizing objects is something that machine-vision systems can now do reliably. But so-called scene understanding, which would inform a decision like whether a bag on the road is empty or hides a brick inside, is more challenging for computer vision systems.
The ability to operate safely in all weather conditions. Software improvements to lidar (short for light detection and ranging) technology may help someday, but not yet.
Lidar systems aren’t fooled by darkness or sun glare. But if you’re wondering whether the lidar systems in self-driving cars have problems in rain or snow, you’re on to something. Heavy rain or snow can confuse current car radar and lidar systems, making it necessary for humans to intervene.
Cybersecurity. There is no evidence yet that autonomous cars will be any more secure than any other networked computers.
A self-driving car is a collection of networked computers and sensors wirelessly connected to the outside world. Keeping the systems safe from intruders who may wish to crash cars — or turn them into weapons — may be the most daunting challenge facing autonomous driving.