In a world first, Germany has adopted a set of ethical standards for the manufacture of autonomous driving systems.
A world first
In a country known for its production of premium vehicles, new ethical guidelines have been set not for these cars’ drivers, but for the cars themselves.
Federal transport minister Alexander Dobrindt presented a report to Germany’s cabinet seeking to establish guidelines for the future programming of ethical standards into automated driving software. The report was prepared by an automated driving ethics commission composed of scientists and legal experts, and it produced 20 guidelines to be used by the automotive industry when creating automated driving systems.
Shortly after its introduction, Dobrindt announced that the cabinet had ratified the guidelines, making Germany the first government in the world to put such measures in place. Among the key guidelines:
- Autonomous driving systems become an ethical imperative if the systems cause fewer accidents than human drivers.
- Human safety must always take top priority over damage to animals or property.
- In the event of an unavoidable accident, any discrimination based on age, gender, race, physical attributes, or any other distinguishing factors is impermissible.
- In any driving situation, the party responsible, whether human or computer, must be clearly regulated and apparent.
- For liability purposes, a “black box” of driver data must always be documented and stored.
- Drivers retain sole control over whether their vehicle data is forwarded to or used by third parties.
- While vehicles may react autonomously in the event of emergency situations, humans shall regain control during more morally ambiguous events.
A work in progress
Slated for review after two years of use, these guidelines are certainly a large step in the right direction when it comes to the future programming of autonomous driving systems. That said, there is undoubtedly much more work to be done.
One potentially catastrophic problem presented by machine-learning-based autonomous systems is that, should a system wrongly interpret a lesson, it could import false biases into core functions, leading to harmful actions in the event of an emergency.
Even more significant is the challenge of predicting how these systems will react in any given scenario, since changes adopted through machine learning cannot be directly audited.
While other countries have so far taken a “wait and see” position on this type of legislation, many will most likely follow suit, since such guidelines serve governments’ interest in keeping safety practices current in a changing world.
Would you want to see these – or similar – rules applied in your country as well? Let us know what you think in the comments below.