Should we promote risk taking in self-driving vehicles?

Those of us building autonomous vehicle technologies worry most about whether these vehicles will be safe, but should we instead be asking: how much risk is acceptable for a vehicle to take?

The next decade will see vehicles rolled out with increasing levels of autonomy. But as consumer acceptance increases and as sales grow, our roads will contain vehicles with varying amounts of intelligence, spanning the Society of Automotive Engineers (SAE) levels of autonomy. This standard defines six levels of driving automation, from Level 0 (no automation) to Level 5 (full vehicle autonomy).

These early years, with such a mix of autonomy on the same roads, will see self-driving vehicles face their most complex and challenging scenarios, where even the most advanced sensors will be unable to predict a human’s next move. Somewhat counterintuitively, for autonomous vehicles to navigate safely in this environment they will need to take hundreds of decisions that involve risk. These decisions will be made in unknown environments, involving complicated interactions where little experience (or data) exists.

This calls into question whether the broad autonomy levels defined by the SAE give us an accurate enough picture of vehicle capability. A certain make and model might be defined as fully autonomous, but does it really perform as safely as an alternative Level 3 vehicle when faced with a similar situation? With human confidence and trust commonly associated with these levels, it’s an important point to consider.

A sophisticated understanding of risk may give us a better idea of a vehicle’s ability to operate safely on our road networks. From an engineering perspective, it may even be valuable to build in ‘acceptable risks’.
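To make that idea concrete, here is a minimal sketch of what an ‘acceptable risk’ check might look like, assuming a planner that scores each candidate manoeuvre by its estimated probability of an incident and the severity of that incident. The structure, names and budget value are illustrative assumptions, not real engineering or regulatory figures.

```python
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    """A candidate action the planner is considering (hypothetical structure)."""
    name: str
    p_incident: float   # estimated probability the manoeuvre leads to an incident
    severity: float     # estimated severity of that incident, 0.0 (trivial) to 1.0 (catastrophic)

# Calibrated risk budget: purely illustrative, not a real regulatory figure.
ACCEPTABLE_RISK = 1e-4

def is_acceptable(m: Manoeuvre, budget: float = ACCEPTABLE_RISK) -> bool:
    """Accept a manoeuvre only if its expected harm stays within the risk budget."""
    expected_harm = m.p_incident * m.severity
    return expected_harm <= budget

# Example: creeping forward at walking pace past an ambiguous obstacle.
creep = Manoeuvre("creep_forward", p_incident=5e-4, severity=0.1)
print(is_acceptable(creep))  # True under this illustrative budget
```

The point of such a check is not the specific numbers, but that risk becomes an explicit, inspectable quantity rather than an implicit side effect of hard-coded rules.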

If a pedestrian walks out in front of our vehicle, we know it will automatically slow and yield to the human, irrespective of the environment. But what happens if it’s a domestic pet, a wild animal or even an escapee from a local zoo?

Scenarios are already emerging where humans actively exploit the programmed behaviour of an autonomous vehicle to ‘do no harm’. Think of pedestrians deliberately walking out in front of vehicles, bringing road networks to a standstill. This has serious implications. Would a vehicle that is prepared to proceed at a slow speed or re-route, in the same way a human would, prevent this disruption?
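Read in engineering terms, that question amounts to replacing a single ‘stop for everything’ rule with a graduated set of responses that depend on what the vehicle believes the obstacle is and how confident it is. The sketch below is purely illustrative; the obstacle classes, confidence threshold and response labels are assumptions, not any manufacturer’s actual policy.

```python
def select_response(obstacle_class: str, confidence: float) -> str:
    """Choose a graduated response instead of an unconditional full stop.

    obstacle_class: the detector's best guess, e.g. "pedestrian", "pet", "wild_animal", "debris".
    confidence: the detector's confidence in that guess, 0.0 to 1.0.
    All labels and thresholds here are hypothetical.
    """
    if obstacle_class == "pedestrian" or confidence < 0.6:
        # Humans, or anything we are unsure about, always get a full yield.
        return "stop_and_yield"
    if obstacle_class in ("pet", "wild_animal"):
        # Slow right down and give the animal room and time to move.
        return "creep_at_walking_pace"
    # Low-risk static obstacles: consider planning around them instead.
    return "re_route_if_lane_available"

print(select_response("pedestrian", 0.95))   # stop_and_yield
print(select_response("wild_animal", 0.85))  # creep_at_walking_pace
```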

And what happens at roundabouts? To accelerate into a space at a busy roundabout, the vehicle must take an action that is potentially hazardous. But without that decision, busy roads will grind to a halt.
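Framed as a trade-off, gap acceptance at a roundabout is a comparison between the small collision risk of entering now and the congestion cost of waiting indefinitely. A minimal sketch of that comparison, in which the gap model, cost weights and thresholds are all illustrative assumptions, might look like this:

```python
def should_enter(gap_seconds: float, time_waiting: float) -> bool:
    """Decide whether to accelerate into a gap at a roundabout.

    gap_seconds: estimated time gap to the next approaching vehicle.
    time_waiting: how long we have already been held at the give-way line.
    The risk model and weights below are illustrative assumptions only.
    """
    # Crude gap model: shorter gaps mean a higher chance of a conflict.
    p_collision = max(0.0, 1.0 - gap_seconds / 4.0)
    collision_cost = 100.0                 # relative cost of a low-speed collision
    waiting_cost = 2.0 * time_waiting      # cost of blocking the queue grows as we wait

    expected_cost_of_entering = p_collision * collision_cost
    return expected_cost_of_entering < waiting_cost

print(should_enter(gap_seconds=3.5, time_waiting=20.0))  # True: wide gap, long queue behind us
print(should_enter(gap_seconds=1.0, time_waiting=5.0))   # False: gap too tight to justify
```

A human driver makes this judgement intuitively; encoding it explicitly forces us to say how much risk we are prepared to accept, and to justify that figure.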

This concept of risk taking will underpin many aspects of self-driving vehicle development. At Stanford University engineers are developing an autonomous race car. By learning from the extreme hazards of racing scenarios, they hope to understand how to better engineer risk into commercial autonomous vehicles. And this makes perfect sense: a vehicle that has been trained in a race environment, such as this, will perform far better in complex real-world scenarios compared to a vehicle developed in the confines of a 20 mile per hour retirement community.

The SAE levels of autonomy are too broad to reflect the nuances of the vehicles we are likely to see at each level, so we must evolve our thinking around risk to passengers, pedestrians, cyclists and the local environment. Strategies such as those being progressed at Stanford University are vital to start engineering vehicles for the real-world scenarios they must operate in, however infrequently they occur.

Training in low risk scenarios is not enough to prepare for the highly complex and calculated risk taking that comes as second nature to humans. Only by moving to these strategies and by simulating an unprecedented number of scenarios will we understand what it truly means for a vehicle to be autonomous.

Dr Sally Epstein, Senior Machine Learning Engineer, Cambridge Consultants
