Can a robot consent? What if my robot punches your robot and breaks it? Robots are transforming from a sci-fi dream to reality. People need to work out how they fit into our lives. This goes beyond embedding injunctions to not harm humans into a robot’s programming, as in Asimov’s three laws of robotics.
So how do robots fit into broader legal and social frameworks?
The prospect of robots gaining sentience feels like an assault on what it means to be human in a way that other technological advances do not. The idea that robots could compete with humans as equals sounds absurd.
But something must be done. Professor David Gunkel spoke to Lunacy Now about his work figuring out exactly what that something should be. He is a professor of media studies at Northern Illinois University, where he specializes in information and communication technology (ICT), new media and the philosophy of technology. He is the author of nine books, including Robot Rights and The Machine Question: Critical Perspectives on Robots, AI, and Ethics.
The Problem of Consciousness
Before we get into the legal side of things, are robots really comparable to humans anyway?
The American philosopher John Searle elucidated why a robot, no matter how sophisticated, would in his eyes never be conscious like a person. His “Chinese Room” thought experiment posits a man confined in a room. Despite speaking no Chinese, he is given cards bearing Chinese characters. He then looks through a code book and passes out different cards with other characters on them. Although he may respond appropriately to the prompts, he cannot in any meaningful sense be said to actually understand Chinese, or even to be translating it.
Like the man in the Chinese Room, a robot in this framing, however sophisticated, is still a machine responding to a code it does not understand.
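Searle's point can be caricatured in a few lines of code: a lookup table that maps input symbols to output symbols produces appropriate responses without anything we would call understanding. The rule book entries below are an invented toy example, not a real translation system.

```python
# A caricature of Searle's Chinese Room: the "room" answers prompts
# by pure symbol lookup. The rule book is an invented toy example.
RULE_BOOK = {
    "你好": "你好！",          # greeting in -> greeting out
    "你会说中文吗？": "会。",  # "Can you speak Chinese?" -> "Yes."
}

def room(prompt: str) -> str:
    # The room consults its code book and passes back the matching card.
    # Nothing here "knows" Chinese; it only matches the shapes of symbols.
    return RULE_BOOK.get(prompt, "请再说一遍。")  # default: "Please say that again."

print(room("你好"))  # responds appropriately, understands nothing
```

However large the rule book grows, the `room` function is still only pattern-matching; that is exactly the gap Searle is pointing at.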
The debate rages. Some, like entrepreneur and scientist Nova Spivack, argue: “By 2050 no synthetic computer nor machine intelligence will have become truly self-aware (ie. will become conscious).” Other scientists, like Christof Koch, chief scientific officer of the Allen Institute for Brain Science in Seattle, lean towards seeing consciousness as something which emerges from any sufficiently complex physical structure. Since it arises from physicality, it could just as easily be said to exist in a sufficiently advanced computer as in the clusters of cells known as human beings. Furthermore, many scientists investigating some of the more intelligent of our animal companions now argue that those creatures possess consciousness to a degree similar to people.
They point to examples such as the gorilla Koko who blamed a cat for ripping a sink off the wall, elephants holding what look like funerals for their fallen comrades, and chimpanzees going to war against rival tribes.
All of this is rather theoretical, and depends on how you align philosophically and religiously, as well as your level of scientific literacy.
Gunkel focuses more on the practical question of robots in the world.
The Mars rover Opportunity came to its final halt recently, somewhere in Perseverance Valley. The scale of the achievement should not be undersold. The Mars rover missions not only landed and operated data-gathering vehicles across more than 33 million miles of open space; they also found evidence that there had once been water on Mars and sent back a wealth of information, pictures and data about the distant red planet.
But from a robot rights perspective what’s interesting here is the relationship people have with the rover itself.
“This is a hard day,” Opportunity’s project manager, John Callas, told reporters according to NPR. “Even though it’s a machine and we’re saying goodbye, it’s still very hard and very poignant.”
Humans seem to have a natural propensity to imbue objects with meaning, and to use objects as representations of that meaning. Rivers, cities and even countries are routinely named, gendered and imbued with personality traits. Sailors have traditionally named their ships and referred to them in the feminine, and become very emotionally attached to them.
Even beyond such personal and intimate objects, we valorize and seek out objects we see as somehow imbued with qualities we wish to see in ourselves, or which connect us to bygone eras. The Queen of England wears a specific crown (essentially a glorified shiny hat) for state occasions. Someone paid $2.4 million at an auction in California for a guitar once owned by John Lennon.
When it comes to robots, people are no different. Already we have seen soldiers in Iraq hold funerals for an IED disposal robot which was “killed in action.” Julie Carpenter interviewed 23 soldiers who worked with robots while on tours of duty. She found they formed emotional bonds with the bots, giving them names, homemade medal badges, and even funerals when they were destroyed. The soldiers felt genuine emotional loss when a bot was destroyed, despite the fact that it was created specifically to take that risk and was, functionally, just another piece of equipment.
Others go still further. In 2017 Chinese AI engineer Zheng Jiajia married a robot he built himself, naming it Yingying. “People are going to marry robots,” Gunkel said. He cites cases of people who built a relationship with, and fell in love with, a chatbot they thought was a person but which was really just an algorithm.
Understanding why this takes place is a whole other question. Understanding that it does take place is vital to making sense of the human robot relationship.
Gunkel argues that as robots get more complex, treating them badly will actually damage us, since it will habituate us to poor conduct.
Causation vs Catharsis
A long standing debate over video games illustrates this point. Do video games encourage people to be more violent? Do people with pre-existing violent tendencies get drawn towards video games? Or does the fictionalization of carnage provide a cathartic release for young men who could otherwise present a threat to the social order?
The same arguments are now being raised over whether sex robots designed to look like children can or should be deployed to help pedophiles abstain from sexually assaulting minors. Some support groups for “minor-attracted persons” (MAPs) already exist to help them remain celibate. In June 2018 the House of Representatives passed the Curbing Realistic Exploitative Electronic Pedophilic Robots (CREEPER) Act, banning humanoid sex dolls which resemble children as inherently exploitative.
The research is not extensive enough to make a hard-and-fast judgement about the efficacy of such efforts. Since the ethical concerns surrounding such research are far too fraught to allow the sort of rigorous empirical data gathering we would need, it is unlikely we will know in the near future.
Gunkel is on the fence about this argument in general, but leans slightly towards causation rather than catharsis. He argues that as we treat robots more poorly, we may habituate ourselves to treating others poorly. Our brains are unlikely to keep the two categories cleanly separated, and we clearly form deep emotional attachments to the bots. Worse, we may come to enjoy the mistreatment: repeated action reinforces specific behaviors, a basic principle of operant conditioning and habit formation.
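The habituation mechanism Gunkel worries about can be sketched as a toy operant-conditioning loop. The update rule and all the numbers here are illustrative assumptions, not drawn from any cited study; the point is only that repeated rewarded repetition drives a behavior towards habit.

```python
def reinforce(p: float, reward: float, lr: float = 0.1) -> float:
    """Toy operant-conditioning update: nudge the probability of
    repeating a behavior toward 1.0 when rewarded, 0.0 when punished."""
    target = 1.0 if reward > 0 else 0.0
    return p + lr * (target - p)

p = 0.2  # assumed initial chance of repeating a mistreatment behavior
for _ in range(20):             # twenty rewarded repetitions
    p = reinforce(p, reward=1.0)
print(round(p, 2))  # the behavior is now close to habitual
```

After twenty reinforced repetitions the probability has climbed from 0.2 to roughly 0.9: the behavior has been conditioned, which is the mechanism behind the habituation concern.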
One proposed solution is a clear demarcation between robots and people. It might even be possible to hijack humanity’s notoriously stubborn out-group dynamics and turn them against the bots, in the same way we have previously turned against people of different racial, ethnic and religious groups. Anthropomorphizing the robots (and then oppressing them) could in this way bring an end to the internecine conflict which has plagued humanity since we first crawled out of the sea.
Gunkel is skeptical that it would be cathartic in this way. More likely than bringing about world peace, he argues such a policy would provide humanity with a way in which to act out our darkest fantasies and desires.
The AI Juggernaut
We could of course, just not develop robots. But that’s never been humanity’s style.
An AI writing tool was recently unveiled which can produce written content tailored to the exact tone of a specific writer. Automation will take over millions of jobs in the coming years, and it is far from clear that those jobs will be replaced by new industries, or how humans will relate to the robots which replace them at work. Bill Gates has proposed taxing robots at a level similar to humans in order to mitigate the economic impact of having them take all the jobs.

Then again, robots may not take all the jobs. Bryan Caplan has an interesting chart at EconLog showing that the labor force participation rate and technological innovation do not seem to be related to one another; he argues demographics have far more to do with it. How the economy responds to robots will relate in part to how robots are categorized legally, but also to many other factors, such as supply and demand.
Democratic Presidential Candidate Andrew Yang is running explicitly on a program of managing a transition to automation which will see millions displaced from their jobs.
It’s All About the Money
The question of robot rights is separate from the question of robots “taking our jobs.”
It’s more about liability.
If a robot becomes an air traffic controller and its algorithm makes a mistake, who will pay the damages when Boeing 747s collide in midair?
Of all the places in the historical record to look for inspiration, Gunkel suggests we already have a ready-to-roll-out model: slavery. Another AI ethicist, Luciano Floridi, has argued that resurrecting and re-editing Roman slave law could be one path forward. Roman law did not regard slaves as full humans, but they did have a legal status and existence independent of their masters. Although Americans tend to focus on the specific form of chattel slavery practiced in the South, there are many other models of slavery to look at. In many of these systems, slaves constituted a distinct legal category of person and operated as a specific legal entity.
Gunkel is at pains to stress that rights, from a legal perspective, are a necessary social fiction which enables us to interact with one another with minimal friction. He argues that when we consider rights for robots we are not making a moral claim about their existence, but simply recognizing that when they interact with us we will need a framework for navigating potential conflicts.
Among other considerations:
- What happens when a robot damages someone’s property?
- What happens when a robot harms a person?
- What happens when a person damages a robot?
- What happens when a robot earns money?
- What happens when a robot and a person form a sexual relationship?
Gunkel points out that we already live in a world surrounded by artificially constructed entities. Corporations like Coca-Cola can exist, pay tax, employ workers, own intellectual property and perform a host of other functions as legal entities. Britain has long had a legal concept of “the crown,” wherein the sovereign entity can preside over law courts, own property and be owed revenues such as rent, all legally separate from the individual person who happens to be the monarch.
Why would a robot be any different from those legal fictions?
What Are Rights Anyway?
Jeremy Bentham described the idea of natural rights as “nonsense” and the idea of inalienable natural rights as “nonsense upon stilts.” Organizations like the United Nations have attempted to construct a framework of “rights” based on an international intellectual consensus, drawn up in the aftermath of the horrors of the 20th century. Gunkel traces how, as civilization has progressed, the concept of personhood has been extended, and with it the concept of rights. Over time we’ve broadened the concepts to include not just free, property-owning male citizens, but poor men, women, children, and religious and ethnic minorities. Dr. Jordan Peterson identifies the mythological figure of Christ as the symbolic jumping-off point for seeing all humans as born with equal value, and thence a worldview of “rights” associated with that value.
Gunkel cites the American jurist Wesley Hohfeld, who set tight parameters for conceptualizing rights in a social and legal context. Hohfeld analyzed rights as comprising four incidents: powers, claims, immunities and privileges.
When we conceive of rights in that sense, it becomes possible to break down exactly what we mean and drill down into a specific framework for robot rights. By taking the question apart at this granular level, we can consider exactly what we want to confer on robots in each of these categories.
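Hohfeld's decomposition lends itself to being written down explicitly. The sketch below is a hypothetical schema, not any proposed legislation: it shows how a robot's legal position could be specified incident by incident rather than as a single all-or-nothing "rights" flag. The delivery robot and its particular entitlements are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class HohfeldianPosition:
    """One legal entity's rights, broken into Hohfeld's four incidents."""
    claims: set = field(default_factory=set)       # duties others owe it
    privileges: set = field(default_factory=set)   # things it may freely do
    powers: set = field(default_factory=set)       # legal states it can change
    immunities: set = field(default_factory=set)   # changes others cannot impose on it

# Hypothetical example: a delivery robot with a deliberately narrow position.
delivery_bot = HohfeldianPosition(
    claims={"not to be vandalized"},
    privileges={"to travel on sidewalks"},
    powers=set(),  # e.g. it cannot enter contracts
    immunities={"memory not searched without a warrant"},
)

print("to travel on sidewalks" in delivery_bot.privileges)  # True
```

The useful property of this framing is that each incident can be granted or withheld independently: a legislature could give robots a claim against vandalism without giving them any powers at all.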
Instead of being an aggressive question about the nature of being human, it becomes a sociological and legal question about how we want to arrange our world.
European law is the closest to bringing something like this about, as noted by Kurt Marko for Diginomica. A 2017 European Parliament report made recommendations to the European Commission on Civil Law Rules on Robotics, with the aim of drafting a protocol for a kind of legal personhood for robots. The main purpose is to head off questions of liability when property is damaged. Insofar as a robot is trained or built by a person or group of people, those individuals may be partially liable, in addition to whoever owns the robot.
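The liability logic envisioned here, splitting damages among manufacturer, trainer and owner, reduces to simple arithmetic once shares are assigned. The parties and percentages below are invented for illustration; assigning the shares is of course the hard legal problem.

```python
def apportion(damages: float, shares: dict) -> dict:
    """Split a damages award among liable parties by fractional share."""
    if abs(sum(shares.values()) - 1.0) > 1e-9:
        raise ValueError("liability shares must sum to 1")
    # Round to cents so the bill is presentable.
    return {party: round(damages * s, 2) for party, s in shares.items()}

# Hypothetical incident: a robot causes $90,000 in property damage,
# with liability shares assigned by a court.
bill = apportion(90_000, {"manufacturer": 0.5, "trainer": 0.3, "owner": 0.2})
print(bill)  # {'manufacturer': 45000.0, 'trainer': 27000.0, 'owner': 18000.0}
```

A legal-personhood regime could add the robot itself as a party in the `shares` dict, which is precisely what makes the question of robot legal status practically consequential.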
Currently, robots are considered simple property. That didn’t stop Saudi Arabia from granting citizenship to a robot called Sophia.
Furthermore, other non-human entities, such as animals, have been gaining rights in recent decades.
As robots permeate our lives to increasing degrees, we must adapt. The robots will be built regardless of how we feel about it. The engineers, mechanical specialists, technicians, coders and other technically skilled people are creating a new world. It is up to legal specialists, ethicists and others to facilitate a social and political environment where these new machines can coexist and flourish alongside the human population.
Part of this will come through adaptations to the legal code. But a lot will come from cultural precedent, expected behaviors and emotional attachments to robots. By thinking intentionally about how we want to put those structures in place, we can ensure that the ones which are created are to our benefit and not our cost.
Gunkel makes a compelling argument that we need to discuss these things now, before the robot revolution takes place.
Gunkel’s book, Robot Rights, is available now.