Would you step into a driverless car?

Would you send your child off to a party in a driverless car?

It’s quite a step to put yourself into one, but what about sending someone you love off into the world in a mobile microwave?

Who is accountable when there’s a crash?

Who picks up the bill for specialist medical care when your child gets injured inside a driverless car?

Is it better or worse on balance to have an actual human being in the car to react to things?

We know taxis and taxi drivers are regulated by NZTA, but Uber drivers claim not to be taxis and are hence poorly regulated, if regulated at all. What happens when there’s not even a driver?

There is no regulatory framework in place at all for driverless vehicles carrying passengers. That means no one can be held accountable for what goes on. But before you get to rules, you need some practical ethics.

Well, the German Federal Ministry of Transport and Digital Infrastructure has been doing some thinking about this.


For those keen to know when this is going to start here: no one is predicting that fully autonomous vehicles will be rolling out this year.

As for who’s winning the driverless car race, you decide.

Skepticism is appropriate after watching the Jetsons so many years ago: I’m still waiting for my jetpack.

https://www.youtube.com/watch?v=pRKmhjZy7hw

While companies are advancing in their development of such vehicles, I’m less concerned about the technology, and more about how it is going to impact the ethics of everyday life. Here are a few of the guidelines from the German Ministry of Transport:

  1. The primary purpose of partly and fully automated transport systems is to improve safety for all road users (…)
  2. (…) The licensing of automated systems is not justifiable unless it promises to produce at least a diminution in harm compared with human driving, in other words a positive balance of risks.
  3. The public sector is responsible for guaranteeing the safety of the automated and connected systems introduced and licensed in the public street environment. Driving systems thus need official licensing and monitoring. (…)
  4. (…) The purpose of all governmental and political regulatory decisions is thus to promote the free development and the protection of individuals. In a free society, the way in which technology is statutorily fleshed out is such that a balance is struck between maximum personal freedom of choice in a general regime of development and the freedom of others and their safety.
  5. Automated and connected technology should prevent accidents wherever this is practically possible. (…)
  6. The introduction of more highly automated driving systems, especially with the option of automated collision prevention, may be socially and ethically mandated if it can unlock existing potential for damage limitation. Conversely, a statutorily imposed obligation to use fully automated transport systems or the causation of practical inescapability is ethically questionable if it entails submission to technological imperatives (prohibition on degrading the subject to a mere network element).
  7. In hazardous situations that prove to be unavoidable, despite all technological precautions being taken, the protection of human life enjoys top priority in a balancing of legally protected interests. Thus, within the constraints of what is technologically feasible, the systems must be programmed to accept damage.
  8. (…) (I)t would be desirable for an independent public sector agency (for instance a Federal Bureau for the Investigation of Accidents Involving Automated Transport Systems or a Federal Office for Safety in Automated and Connected Transport) to systematically process the lessons learned.
  9. In the event of unavoidable accident situations, any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited. It is also prohibited to offset victims against one another.

There are twenty of them, so that’s just a taste of what the Germans have been thinking. The full text is in the link above. When you boil them down, you start getting to Isaac Asimov’s Three Laws of Robotics, which he set out way back in 1942 – during the great accelerated world-mechanisation of World War Two:
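To see what rules like these mean in practice, here is a minimal sketch of how guidelines 7 and 9 might constrain an unavoidable-collision planner. Everything here is hypothetical and illustrative — the class, the function, and the scenario are my own inventions, not anything from the German report or a real vehicle system:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Outcome:
    """One possible trajectory in an unavoidable hazard (hypothetical model)."""
    humans_harmed: int      # a count only -- guideline 9 forbids weighting victims
    property_damage: float  # repair cost; only matters when human harm is equal


def choose_trajectory(outcomes: list[Outcome]) -> Outcome:
    """Pick the least-bad outcome.

    Guideline 7: human life has top priority, so human harm is compared
    first; property damage only breaks ties (the system "accepts damage").
    Guideline 9: personal features (age, gender, constitution) do not
    appear in Outcome at all, so the planner cannot discriminate on them.
    """
    return min(outcomes, key=lambda o: (o.humans_harmed, o.property_damage))


# A swerve that wrecks two parked cars beats staying on course and injuring a person.
stay = Outcome(humans_harmed=1, property_damage=0.0)
swerve = Outcome(humans_harmed=0, property_damage=20_000.0)
assert choose_trajectory([stay, swerve]) is swerve
```

The point of the sketch is structural: by leaving personal attributes out of the data model entirely, guideline 9 is enforced by construction rather than by a runtime check.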

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second laws.

Driverless trains are nothing new; in fact, Sydney is rolling them out this year – and I’m sure they will arrive in Auckland before long.


Also underway are driverless air taxis in New Zealand. Note that they are electric. Again, regulators are having to invent a regulatory system when there is no driver to regulate.


But actual road cars? Surely these need human judgement? Autonomous vehicles are not going to radically decrease congestion. They might make T3 lanes a bit more attractive and efficient, but if that’s the sum total of the revolution, I want my money back.

We’ve been waiting 30 years to figure out when and how the next wave of automation would be as revolutionary as the car itself. Of all the needless wastes of human life and time normalised by automation over the last 70 years, driving cars is the worst. Automating it is going to be a massive liberation, but with liberation comes the human need for rules. So far there are none.
