In science fiction, some robots are portrayed as friendly, like Rosie, the Jetsons’ robotic maid, or Disney’s WALL-E. Others are programmed to exterminate, like the Daleks of “Doctor Who.”
Robots are not only found in sci-fi. Scientists in Japan created a 4-foot-tall robot that can detect emotions, and NASA uses robots on the International Space Station. Militaries use drones, and iRobot Roombas vacuum floors in homes.
Northern Illinois University communications professor Dr. David Gunkel said he believes humanity is entering a moment in history when science fiction is becoming scientific fact. In 2012, he published “The Machine Question: Critical Perspectives on AI, Robots, and Ethics,” which questions whether and to what extent man-made machines have rights and moral responsibilities.
Gunkel spoke with MidWeek reporter Katrina Milton about the reality of robots and their everyday uses in our modern world.
Milton: What led to your interest in robots?
Gunkel: I recognized that at this particular juncture in our lives, robots are becoming more and more prevalent, and algorithms are making decisions for us all the time. We start to ask questions: if something goes wrong, who is to blame? Is there a point where we can blame a machine for some action in our social world? To go even further, we are now creating machines that are taking care of the elderly. What happens when the people who are under the care of these machines start to connect with them and feel compassion for machinery? What would that even mean? It’s those kinds of questions, almost science fiction questions, that drove my interest.
Milton: Are robots only out of sci-fi?
Gunkel: Robots are very much here and now. We often think about it as futuristic because we think about the science fiction-type robots. We think of automaton, humanoid robots, or androids. We think of something like “Star Trek’s” Data, “Battlestar Galactica’s” Cylons, or something from “Star Wars.” The fact is, there are already a lot of robots in our world doing things. We just don’t recognize them.
Milton: Give an example of a modern robot.
Gunkel: On Netflix, there is an algorithm that makes decisions and recommendations for you. It suggests the movies or TV shows that you would want to watch next. What has happened over the last several years is that this algorithm has taken over the decision making for us. At one point, we were told what movie to watch by Siskel and Ebert. Now, 75 percent of all films that are seen in the U.S. are watched because a machine told us to watch them. So basically, we are allowing machines to define our cultural consumption. ...We are not watching films based on what our friends tell us or what film critics tell us. We are watching movies based on what machines tell us after they’ve analyzed our behavior. This is the kind of encroachment that we will see going on. It’s not going to be an invasion of robots with spaceships. No. It’s going to be a slow incursion of decision-making systems into all aspects of our lives, from the stock market to Netflix to what we buy on Amazon. ...It’s all going to be invisible to us. As machines become more and more sophisticated, we are going to find that they are talking to each other and making decisions without us necessarily even knowing that this is happening.
Milton: Are there any misconceptions about robots that you want to set straight?
Gunkel: The big misconception about all of this is that we see it as a problem down the road, something we should worry about in the year 2065 or 2084. ...The time to start thinking about it is now. This is not something that will only happen in the future. It is occurring at this time and the decisions that we make now will have long range and deep repercussions from this point forward. It won’t fix itself.
Milton: Could robots make decisions on their own?
Gunkel: No, they never make a decision without instructions. ...The Netflix algorithm does not have much knowledge to begin with. Its knowledge is learned based on your behavior. Then it makes decisions based on that learning.
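A minimal sketch, not from the interview, of the kind of behavior-based learning Gunkel describes: the system starts with no knowledge, builds a profile purely from what it observes the user doing, and then recommends based on that profile. The class and method names here are illustrative, not any real recommendation system’s API.

```python
from collections import Counter

# Illustrative sketch of "learning from behavior": the recommender
# starts with no knowledge and builds its profile only from what
# the user actually watches.
class Recommender:
    def __init__(self):
        self.profile = Counter()  # learned knowledge, empty at first

    def observe(self, genre):
        self.profile[genre] += 1  # each viewing updates the profile

    def recommend(self):
        if not self.profile:
            return None           # nothing learned yet, no decision
        # The "decision" is just the most frequently watched genre.
        return self.profile.most_common(1)[0][0]

r = Recommender()
for g in ["sci-fi", "comedy", "sci-fi"]:
    r.observe(g)
print(r.recommend())  # prints "sci-fi"
```

The point of the sketch is the one Gunkel makes: the instructions never change, but the decisions do, because they are driven by the data the machine has gathered about you.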
Milton: What is an algorithm?
Gunkel: It’s a complex decision-making system that supplies some type of output: either an action, like a jet airliner landing, or data, like number-crunched information from a mathematical series.
Milton: Give me an example of how robots learn.
Gunkel: In 2010, during what became known as the “flash crash,” there was a huge drop in the stock market. The Dow Jones industrial average lost 9 percent of its value. This happened because two trading algorithms were locked in a sort of trading battle. No one could anticipate that this was going to happen. The humans in charge did not know what was going on. It was something based on learned behavior that was designed into the algorithm. The algorithms are becoming more and more capable of learning. Machines can exceed the program, going above and beyond what the programmer has decided for them.
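A toy model, our assumption rather than anything Gunkel describes in detail, of how two simple trading algorithms can lock into a feedback loop: each one reacts to the other’s last quote by undercutting it, and the price spirals downward with no human decision anywhere in the loop.

```python
# Two bots, each with a trivial rule: quote 1% below the last
# price seen. Neither rule is dangerous alone; together they
# form a feedback loop that drives the price down on its own.
def undercut(price):
    return round(price * 0.99, 2)  # sell 1% below the last quote

price = 100.0
for step in range(10):
    price = undercut(price)  # bot A reacts to bot B's quote
    price = undercut(price)  # bot B reacts to bot A's quote
print(price)
```

Real trading systems are vastly more complex, but the mechanism is the same: behavior that emerges from the interaction of programs, not from any single program’s instructions.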
Milton: Humans also have this type of knowledge through learned behavior. Do you think that robots will one day become human?
Gunkel: In the movie “Bicentennial Man,” Robin Williams’ character wants to be recognized as human. That sets a bar for a moral status. I don’t think that we’re going to have a machine that is going to be human, but humanity is different from personhood.
Milton: What is the difference?
Gunkel: For a machine to be seen as a person, it would have certain rights and responsibilities. Normally, we think of a person as an individual human being. I’m a person and you’re a person, and that’s pretty non-controversial. But recently, personhood has been extended to all kinds of things that we normally don’t think of as persons. Last year, the government of India decided that dolphins are non-human persons. They did this to protect the dolphins from exploitation in zoos and dolphin shows. ...There was also a lawyer recently on “The Colbert Report” who was arguing for chimpanzees to be recognized as persons. Recently, the Supreme Court in the United States has decided that corporations are people.
Milton: What does it mean for a corporation to be a person?
Gunkel: The best example is the ruling that corporations have free speech, much like human beings do. They are a person with regard to their status. The most recent ruling, in the Hobby Lobby case, held that corporations can have a religion. They have personal rights and also religious freedom. We have extended the concept of “person” to include animals and corporations. The next question is how much of a stretch it is to see a machine as a person.
...Dolphins are people, corporations are people, so why not the elder care robot that loves your grandma as much as you do?
Milton: What do people think of when they think of robots?
Gunkel: On one hand, we think of the drones and warfare. There’s the apocalyptic vision with the robot on the battlefield that turns against its creator. On the other hand, we have all these elements taking place in our society that look really good and shouldn’t be questioned, like Facebook and Google. We have these objects available to us and we carry them around with us in our pockets. We have refrigerators that can talk to the grocery store to order food before we get there. Everything seems to be for our convenience. ...We have to ask ourselves what we are giving up in the process. What’s really happening on Facebook? What are we signing onto when we click yes?
Milton: Does that convenience become reliance?
Gunkel: What does it mean when the cloud is down? What if you can’t log into Facebook? We become so reliant on it that when that convenience is gone, we don’t know what to do with ourselves. ...We live in a time when technology is like magic in Harry Potter. You type an incantation on your keyboard, and you hope it works. If it does, you’re happy; if not, you get angry. As long as that’s our relationship with technology, it is in control. We behave like magicians and hope that our incantations are the right ones. The more reasonable position is to know what you’re doing when you type that thing in: you know why it happens, you know what you’re doing, you know what it is, and what it all means. It is crucial for the next generation of leaders to have that type of knowledge.
Milton: Do you teach about technology?
Gunkel: The field of communication has become far more technologically interesting and interested, especially in the way that print technology has been taken over by digital technology. We now find ourselves talking to machines more than we do to human beings.
Milton: How do we talk to machines more than humans?
Gunkel: We all love Facebook. It’s a free service that allows us to connect with our friends. But what’s really happening is that we are helping the machine track us. We are telling it what we buy, what we like, who we’re friends with, where we go. As a result, Facebook builds an incredibly detailed data profile of us. We are targeted with advertising and things are marketed toward us. It’s creepy. When we think that we’re talking to each other, what we are really doing is letting the machine learn about us.
Milton: Do you think that there could one day be a robot apocalypse?
Gunkel: It’s not going to look like flying saucer alien robots from outer space. It’s going to be more like the fall of the Roman Empire, where machines slowly erode our social fabric by working their way into our lives. Years later, we won’t recognize how they got there. ...The flying saucer aliens are easy to spot. While you’re looking for that, behind the scenes, algorithms are taking control of our lives. We agree to it, we click yes and sign onto it, not knowing that we are assisting in the robot takeover, if you want to put it that way.
Milton: How do we change the future?
Gunkel: You need to have informed consumers, people who know what these machines are doing for us and doing to us. ...We forget that technology is a tool. We always think of what the tool can do for us, but we forget that a tool also does something to us. Like a hammer, technology is a tool we control, and it changes how we look at things. It is a double-edged sword: it’s not only what we can do with it, but what it does to us in the process.
I have a colleague whose standpoint is that if we build robots, we should make them a race of slaves. She thinks that we should have a race of intelligent machine slaves to serve us. It sounds somewhat reasonable until you remember that every culture that has ever had slaves has been flawed. You have to think about what it would mean for us to have a future in which slavery is a component of society. I’m not saying whether it’s wrong, but we should go into it with eyes open. We should know what we are creating, what that creation will do for us, and the repercussions it will have for future generations.
Milton: Will there be another Y2K scare with all these unknown aspects of technology?
Gunkel: You mention the Y2K scare, and I think that we’re going to see a lot more of it. People are more reliant on technology, but not necessarily more knowledgeable about what technology is or how it works. The reason Y2K was a scare was that nobody knew what it meant. They were told that it was going to happen, that it would be terrible, and then nothing happened. That’s because the average user’s knowledge of their machine and of the problem was convoluted and not fully developed. ...As technology becomes more sophisticated, if our knowledge does not keep up, that can be a real problem.
The more you teach young people how to understand algorithms and code and how they work, the better positioned these young people will be as machines play a larger role in our lives. ...As the tools in our lives become more sophisticated, there is just no way around it: the user has to become more sophisticated and knowledgeable as well. Not to is just irresponsible.
Milton: Can we ever leave technology behind?
Gunkel: There is no way we can drop out. Shifting to an earlier time shows what I mean. Before money, people bartered. ...You can live in a barter economy, but it’s really difficult. At a certain point, money becomes something you have to contend with if you want to live a successful life. A lot of technology has that sort of feel to it. Yes, you could do without it, but more and more of our economic lives, our social lives, our occupations, and our free time will become bound up with it. Opting out will be a very difficult choice, if not impossible.