If you're driving down the highway with a human-driven car on your left and an autonomous car on the right, what happens when you need to change into one of those lanes?
How you do it could be very different depending on which lane you pick...
The process should be the same: you turn on your indicator to show your intent to switch, and then change lanes once there’s a safe opening. But the way we interact with these vehicles can differ drastically because humans and artificial intelligence don’t perceive their surroundings or communicate in the same ways, presenting a challenge to the engineers behind both driverless technologies and road infrastructure.
The roads of the future
Today’s road planners can’t focus solely on how to make complex traffic situations simple for humans to navigate because what’s easy for a human to understand isn’t necessarily easy for an autonomous vehicle.
Take roadworks, for example, when one side of a street is blocked. A human could easily follow instructions to drive onto the ‘wrong’ side of the road. AI might have a tougher time recognizing temporary road markings or driving in a way that would usually be illegal – after all, computers are designed to follow rules. Every message communicated to a human driver will need a digital equivalent that computers can interpret, which could be delivered by placing information beacons on road signage or by updating navigation data in real time.
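To make that idea concrete, here’s a toy sketch of what a machine-readable roadworks notice might look like. Everything here – the field names, the message shape, the rule for crossing over – is an illustrative assumption, not any real standard a beacon or navigation feed actually uses.

```python
from dataclasses import dataclass, field

# Hypothetical "digital road sign": a roadworks notice a beacon or
# navigation feed might broadcast. Field names are illustrative only.
@dataclass
class RoadworksNotice:
    road_id: str                        # identifier of the affected road segment
    blocked_lanes: list = field(default_factory=list)  # lane indexes that are closed
    contraflow: bool = False            # True if traffic is routed onto the opposing side
    valid_until: str = ""               # ISO-8601 timestamp when the closure ends

def may_use_opposing_lane(notice: RoadworksNotice, lane: int) -> bool:
    """A vehicle may cross to the 'wrong' side only while a contraflow
    notice explicitly covers its current (blocked) lane."""
    return notice.contraflow and lane in notice.blocked_lanes

notice = RoadworksNotice("A1-seg42", blocked_lanes=[0], contraflow=True,
                         valid_until="2025-01-01T00:00:00Z")
print(may_use_opposing_lane(notice, 0))  # True – a car in lane 0 may cross over
print(may_use_opposing_lane(notice, 1))  # False – lane 1 isn't covered by the notice
```

The point of the sketch is that an explicit message like this gives a rule-following computer unambiguous permission to do something it would otherwise treat as illegal.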
It’s also possible that planners will initially implement designated lanes or areas for autonomous vehicles. This is something we’re already seeing in cities where driverless technologies are being tested. These zones could be specially built for optimal performance – such as offering traffic lights that are more easily read by computers in harsh lighting – and will give humans and AI time to get used to driving alongside each other.
The language of driving
There’s a large vocabulary that drivers use on the road: a wave of the hand can direct pedestrians, headlight flashes can alert oncoming drivers to altered traffic conditions, and tapping the horn is practically its own language in some parts of the world.
Comprehending this vocabulary isn’t impossible for computers – self-driving technologies have proven capable of recognizing cyclists’ hand signals, for instance. However, human drivers face a challenge of their own: they won’t be able to rely on gestures and facial expressions when anticipating a driverless vehicle’s movements.
To solve this, they’ll need more ways to understand what AI drivers are doing. This could mean having autonomous vehicles provide visual messages that are more nuanced and contextual than turn signals, or having vehicles communicate with each other wirelessly to show their intent to merge, turn, and more.
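As a rough illustration of the wireless option, here’s a toy “intent” message one vehicle might broadcast to its neighbours. This is a sketch under stated assumptions only – real vehicle-to-vehicle systems use standardized message sets (such as SAE J2735), which this does not attempt to reproduce.

```python
import json

# Hypothetical V2V intent broadcast: encode a driving intention as JSON
# so nearby vehicles can parse it. Action names are illustrative only.
ALLOWED_ACTIONS = ("merge", "turn_left", "turn_right", "stop")

def intent_message(vehicle_id: str, action: str, target_lane: int) -> str:
    """Serialize a vehicle's stated intention for nearby receivers."""
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"unknown action: {action}")
    return json.dumps({"vehicle": vehicle_id,
                       "action": action,
                       "target_lane": target_lane})

# A driverless car announces it intends to merge into lane 2.
msg = intent_message("AV-042", "merge", target_lane=2)
decoded = json.loads(msg)
print(decoded["action"], decoded["target_lane"])  # merge 2
```

The design point is the same one the paragraph makes: an explicit broadcast replaces the eye contact and hand-waving that humans use to negotiate a merge.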
Driverless technologies are slowly but surely becoming more common on our thoroughfares, so human and AI drivers will need to be able to share these spaces harmoniously. By making the experience conducive to all types of drivers, we can ensure safety for everyone on the road.