Opinion: If you think software code is ethically neutral, you're lying to yourself

By DW


Monday, 19 December 2016

It was an accident waiting to happen. Despite all efforts by Joachim Müller-Jung, the conversation slowly but surely swerved towards self-driving cars. Up until then, I had been rather bored.
So it was Wednesday, the third "Press Talk" at the 66th Lindau Nobel Laureate Meeting, and the topic was artificial intelligence (A.I.).
Müller-Jung, who's the head of science and nature at the German daily newspaper "Frankfurter Allgemeine Zeitung," had repeatedly said during his long and winding introduction, "We're not going to talk about self-driving cars or 'rogue A.I.' here!" And the audience of journalists and young scientists laughed.
He had wanted to talk about quantum mechanics and had invited quantum expert Rainer Blatt to do just that.
Photo: Google's Chief Internet Evangelist Vinton Cerf, left, and Joachim Müller-Jung of the FAZ newspaper's science and nature section, right.
But Vinton (Vint) Cerf, a Google vice president and Chief Internet Evangelist, was also on the panel. So as soon as the floor was opened to questions, self-driving cars were in pole position.
Google, as you may know, is at the forefront of research into self-driving cars and the artificial intelligence they will need to get us from A to B, if we trust them to.
Quips about car crashes
Cerf had fun regaling the audience with stories of Google's self-driving experiments: how one of its cars had hit a bus, but "only at about 3 kilometers per hour (2 mph)," and how in other cases Google's autonomous cars had been rear-ended by other cars, driven by humans. The stupid, slow-reacting things.
 
But what happens, asked young scientist Mehul Malik, when a car is faced with the choice of either hitting a child in the road or swerving to avoid the child in a way that would endanger the safety of any passengers in the car?
The evangelist's response was offhand, to put it mildly. As befits an evangelist, Cerf has little patience for people whose worldviews contradict his own.
He said there was no point in investing self-driving cars with the philosophical concerns that we face as humans. All you have to do is instruct autonomous cars, through the software and code you write, "not to hit anything."
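To see why "not to hit anything" dodges the question, consider a deliberately naive sketch (hypothetical code, not anything Google has published): the rule suffices only while a risk-free maneuver exists. The moment every option hits something, the programmer must rank outcomes, and that ranking is an ethical decision.

    # Hypothetical sketch, not Google's code: Cerf's rule,
    # "don't hit anything," expressed as a choice among maneuvers.

    def collision_risk(maneuver):
        # Illustrative stand-in: assume each candidate maneuver
        # carries a pre-computed risk score, 0 meaning "hits nothing."
        return maneuver["risk"]

    def choose(maneuvers):
        # Cerf's rule works whenever a risk-free option exists.
        safe = [m for m in maneuvers if collision_risk(m) == 0]
        if safe:
            return safe[0]
        # The rule is silent here: every option hits something.
        # Minimizing "risk" looks neutral, but deciding whose risk
        # counts, and by how much, is the philosophical choice the
        # programmer cannot avoid.
        return min(maneuvers, key=collision_risk)

    # The child-or-passengers dilemma raised at the press talk:
    options = [
        {"name": "brake straight", "risk": 0.9},  # endangers the child
        {"name": "swerve",         "risk": 0.6},  # endangers the passengers
    ]
    print(choose(options)["name"])  # the code must pick one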
Fuzzy logic
I take issue with this idea, fundamentally. I don't have half the brains Cerf does. He, after all, is one of the "fathers of the internet," having co-designed the TCP/IP protocols that underpin our global network of computers. He also won the 2004 Turing Award.
But I know for a fact there's a problem with Cerf's logic. It's boorishly utopian.
We humans are philosophical. We can empathize and get confused. We observe degrees of truth, fuzzy logic. Computers don't get confused: they either do or they don't, according to the instructions in the software that runs them. This may allow self-driving cars to react faster in dangerous situations. But it doesn't mean they will make the right decision.
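A small sketch makes the gap visible (again, purely illustrative): fuzzy logic lets us express degrees of truth, but the machine must eventually act, and every threshold that collapses a degree back into a yes-or-no is a judgment someone typed in.

    # Hypothetical sketch: a fuzzy notion of "too close" that the
    # car must still reduce to a crisp brake-or-don't decision.

    def too_close(distance_m):
        # Fuzzy membership: the degree, from 0.0 to 1.0, to which
        # a given distance counts as "too close."
        if distance_m <= 2.0:
            return 1.0
        if distance_m >= 10.0:
            return 0.0
        return (10.0 - distance_m) / 8.0

    def should_brake(distance_m):
        # A computer either brakes or it doesn't. The 0.5 cutoff
        # turns the fuzzy degree into a binary act, and picking
        # that cutoff was a human decision.
        return too_close(distance_m) >= 0.5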
To repeat, we humans are philosophical. And while we may no longer do the driving, we are still the ones who want to be driven. So it's our responsibility to invest philosophy in the programming of self-driving cars. Without philosophy, we have no ethics or morals.
The problem goes even deeper, however. No matter what Cerf insists, it is impossible not to invest philosophy or ethics in our computer code. As soon as a programmer begins to type, they make decisions. They have to. And those decisions are based on the way they see the world. That is their philosophy: their interests and visions of the future, humanitarian, aesthetic or commercial.
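To make that concrete, here is an invented fragment, a sketch rather than any real system's code: the weights below are made up, but some weights must exist in any harm-minimizing program, and whoever types them is doing ethics, not just engineering.

    # Hypothetical illustration: the numbers are invented, but any
    # real system would need numbers, and choosing them encodes a
    # worldview.
    HARM_WEIGHTS = {
        "pedestrian": 1.0,  # is a stranger's life worth more,
        "passenger":  1.0,  # the same, or less than the buyer's?
        "property":   0.1,
    }

    def cost(outcome):
        # "Neutral" arithmetic over numbers that are anything but.
        return sum(HARM_WEIGHTS[kind] * count
                   for kind, count in outcome.items())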