Autonomous Cars Should Take More Risks To Improve Traffic Safety
Driverless cars are very preoccupied with safety. Maybe too preoccupied, according to tech company Intel and its subsidiary, Mobileye. In an unprecedented statement, Intel says that autonomous vehicles should take more risks on the road, driving more like assertive human drivers.
To achieve this, Intel and Mobileye have come up with a new model called Responsibility-Sensitive Safety (RSS) to make AVs act more like human drivers. But here’s the question: why? We all know that humans are inherently terrible drivers, so why would Intel want to make driverless cars drive more like them?
Well, Intel says that a more assertive autonomous vehicle on the road will make for safer, freer-flowing traffic. Most driverless vehicles use artificial intelligence (AI) to make decisions, relying on a constant stream of calculations to estimate the probability of a crash. This may be good for safety, but the problem is that these cars only make a move when the probability of an accident is very low. That causes traffic to freeze up, much like a new driver who causes congestion because they are unsure when to change lanes.
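To see why this leads to frozen traffic, here is a minimal sketch of the probability-gated decision-making described above. This is an illustration, not Intel's or any AV vendor's actual code; the function name and threshold value are invented for the example.

```python
def probabilistic_planner(crash_probability: float,
                          threshold: float = 0.0001) -> str:
    """Return the maneuver a probability-gated AV would choose.

    The car acts only when its estimated crash risk falls below a very
    conservative threshold; otherwise it waits. The threshold here is
    an illustrative made-up value, not a real calibration.
    """
    if crash_probability < threshold:
        return "proceed"
    return "wait"

# A lane change estimated at a 0.1% crash risk is refused, even though
# most human drivers would take it without hesitation.
print(probabilistic_planner(0.001))    # wait
print(probabilistic_planner(0.00005))  # proceed
```

With a threshold this strict, the car spends most of its time in the "wait" branch, which is exactly the freezing behavior the article describes.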
But Intel’s RSS system is different from the current AI systems in driverless cars, mainly because it is deterministic rather than probabilistic. The system provides the autonomous vehicle with a playbook of preprogrammed rules that define safe and unsafe driving situations. It also allows the car to make more assertive maneuvers, right up to the line separating safe from unsafe. Just like a human driver, an RSS-equipped car knows that a crash is possible even when it merges at the correct speed, but unlike a typical probability-driven AV, it won't freeze up just because it has calculated a small chance of error.
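One of RSS's published rules is a deterministic formula for the minimum safe following distance: the gap a rear car must keep so that, even in the worst case, it can stop without hitting the car ahead. The sketch below implements that formula; the parameter values (response time, acceleration, braking limits) are illustrative placeholders, not Intel's calibrated numbers.

```python
def rss_min_safe_distance(v_rear: float, v_front: float,
                          response_time: float = 1.0,  # s, rear car's reaction time
                          max_accel: float = 3.0,      # m/s^2, rear car's worst-case acceleration
                          min_brake: float = 4.0,      # m/s^2, rear car's guaranteed braking
                          max_brake: float = 8.0       # m/s^2, front car's hardest braking
                          ) -> float:
    """Minimum gap (metres) the rear car must keep to stay in a 'safe' state.

    Worst case assumed by RSS: during the response time the rear car
    accelerates at its maximum, then brakes at its minimum guaranteed
    rate, while the front car brakes as hard as physically possible.
    """
    v_after_response = v_rear + response_time * max_accel
    d = (v_rear * response_time
         + 0.5 * max_accel * response_time ** 2
         + v_after_response ** 2 / (2 * min_brake)
         - v_front ** 2 / (2 * max_brake))
    # A negative result means any gap is safe, so clamp at zero.
    return max(d, 0.0)

# Deterministic rule: the maneuver is allowed whenever the actual gap
# meets or exceeds this minimum -- no probability estimate involved.
gap_needed = rss_min_safe_distance(v_rear=25.0, v_front=25.0)  # both at ~90 km/h
print(f"{gap_needed:.1f} m")
```

The key design difference from the probabilistic approach is that the output is a hard boundary, not a risk score: the car can drive right up to that boundary assertively, because anything on the safe side of it is, by definition of the rule, allowed.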
However, the RSS system still has its problems. A crash could still happen as a result of equipment failures, sensor malfunctions and, of course, human error. One main limitation is that an RSS-equipped AV won't avoid one accident if doing so would create another, which is different from how humans would react.
“Equipment can fail, sensors can incorrectly interpret the world around them, and human drivers will certainly collide with AVs,” says Jack Weast, vice president of autonomous vehicle standards at Mobileye. "Accidents will happen," he says. "It could be the AV, it could be the human-driven car, but you need to know who did what, why, and when."
RSS is a highly vigilant system, and sometimes more assertive than you'd expect a robot to be. We expect it to make a big change in the world of autonomous vehicles soon.
What do you think about this? Leave a comment below!