Between a bot and a hard place.
By Bird Lovegod for the Yorkshire Post. First published in January, but worth a repeat.
Electric vehicles are already well on course to replace old-fashioned internal combustion engines. Volvo is going electric and hybrid for new cars, starting next year. VW has set a 2030 target, and Tesla is already there and ramping up production of new, lower-cost models.
After electrification, the next stage in the technology of transport is autonomy: driverless taxis, shuttles, buses, haulage, and so on, guided by AI, artificial intelligence. The research into this technology is itself being driven by GM, Daimler, Ford, Alphabet, Uber, Tesla, if Elon Musk can put his phone away for long enough, and of course Amazon, dreaming of robot delivery drones.
But roads are busy places, populated by humans. Introducing robot vehicles into such an environment is extremely complicated on many levels. The behaviour of humans can be erratic, chaotic, unpredictable. The response from robots must be knowable, predictable, and consistent.
This means the rules of interaction must be coded into the vehicles. Or perhaps into the collective network they will all be integrated into. The name given to this software component is the Ethical Governor. It is the conscience of the machine.
This has posed some very interesting problems for system architects. How, for example, will the AiCar respond to a child running into the road directly in front of it? Let's explore this scenario. Here's how it might work.
A child runs into the road from between parked cars. Stopping distance is insufficient to avoid collision. The Ethical Governor algorithm must make an immediate decision.
Option one: brake and hit the child at 15mph. Option two: brake, swerve left and mount the pavement, hitting an elderly person at 15mph. Option three: brake, swerve right and mount the pavement at 15mph, hitting a crowd of three adults and a baby.
The choice must be immediately apparent. The system must know in advance what it would do, otherwise there is the risk of an unpredictable response, or even no response. Which would you want the robot car to do? Hit the child, the old person, or the crowd? It's not a decision many of us would like to take, but take it we must, and it must be codified, made into computer language. And this requires us to think about the values of human life. And, uncomfortably, to give one life a value above another. This is the kind of logic these cars will require. A baby has a life value of 100. One point for every year it is expected to live. A ten-year-old child has a life value of 90. A seventy-year-old has a life value of 30. Therefore the AiCar will choose to hit the seventy-year-old rather than the ten-year-old. It's logical. And logic is needed to make these decisions. Of course, the car would need to have already estimated the age of every pedestrian in the vicinity. Or know it for a fact, based on everyone wearing a digital ID.
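To make the point concrete, here is a minimal sketch of the life-value logic described above, written in Python. The scoring rule (one point per expected remaining year of life, against an assumed ceiling of 100) and the three options come from the scenario in this column; the function names, and the ages assumed for the three adults in the crowd, are purely illustrative.

```python
LIFE_EXPECTANCY = 100  # assumed ceiling implied by the column's scoring rule

def life_value(age: int) -> int:
    """One point for every year of expected remaining life."""
    return max(LIFE_EXPECTANCY - age, 0)

def total_harm(pedestrian_ages: list[int]) -> int:
    """Sum the life values of everyone the vehicle would hit."""
    return sum(life_value(age) for age in pedestrian_ages)

def choose_option(options: dict[str, list[int]]) -> str:
    """Pick the manoeuvre that destroys the least total life value."""
    return min(options, key=lambda name: total_harm(options[name]))

# The column's scenario: child ahead, elderly person to the left,
# crowd to the right. Adult ages in the crowd are hypothetical.
options = {
    "brake": [10],                     # hit the ten-year-old child
    "swerve_left": [70],               # hit the seventy-year-old
    "swerve_right": [40, 35, 30, 0],   # three adults and a baby
}
print(choose_option(options))  # prints "swerve_left"
```

The cold arithmetic selects the seventy-year-old: a life value of 30 against the child's 90 and the crowd's combined 295. The unease the column describes sits in exactly these few lines, where a moral judgement becomes a `min()` over a dictionary.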
The very process of making value judgements of human life is extremely problematic. Once it becomes code, it becomes true. Is an old person worth less than a child? Is the life of a man worth more or less than the life of a woman? Or are they worth the same? This is why ethics becomes so exciting when applied to technology. It demands actual answers, not just philosophical debate. It demands we look at the values we consciously and unconsciously hold. What if we don't agree with the life values of our robots? What if our robots become the ethical examples to our children? Will the Ethical Governors learn from us, or will we be learning from them? Fascinating questions, in our age of emergent artificial ethics. Women and children first. Isn't that the good old-fashioned way?