
Living in The Glass Cage: why our drive to automation needs an urgent re-think

By DW

Technology


Monday, 12 December 2016

The thing that stands out most for me in your book "The Glass Cage" is the idea of automating moral choices. You suggest it's impossible to automate certain processes without automating moral choices too - for instance, how should a self-driving car be programmed to react in an accident if there's a chance only one of two lives can be saved? How do you feel about companies such as Apple and Google, which are developing self-driving cars, taking on the responsibility of making these moral choices for the rest of us?

Nicholas Carr: Thanks to advances in machine learning and machine vision, we're getting to the point where technology companies have the technical ability to begin developing robots that act autonomously in the world. But as soon as a robot begins operating autonomously - and this can be a physical robot or a software robot - it will, just as people do, very quickly run into ambiguous situations. Some of these may be trivial, and others may be extremely important, even involving life-and-death decisions. And I think very few people have thought about this: what it means to program a machine to make moral choices, whose morality goes into the machine, and who gets to make decisions about those morals. As we rush forward with technical progress, it seems to me that if we don't think about these things, we cede these very important ethical decisions to the companies, and that seems to me to be a mistake.
Is there a danger that, for the sake of expedience, we will forget about morals and simply adapt to that?
We're already seeing that phenomenon. In "The Glass Cage," I talk about the robotic vacuum cleaner that sucks up insects, whereas its owner, if he or she were doing the vacuuming, might actually stop and save the insect. You can say that's trivial, but here in the United States, for instance, robotic lawnmowers are becoming more and more popular, and then you're ceding to the machine the decision whether to run over a frog or a snake - something most people would stop and not do. We're seeing expediency, efficiency, and convenience supplant the sense that "maybe I need to think about the moral implications of these choices." And will we continue on that track when we get to automated cars or automated soldiers, or drone aircraft that make their own decisions about whether to fire? It seems to me we're already on a slippery slope, in that the complexity of programming morality may simply lead us to say: "Well, I don't want to think about it."