Vet Ethics: Can you be cruel to a robot pet?

Popular culture has started to examine the question of whether it is wrong to harm robots. In the series Westworld, the android inhabitants of a Wild West-style theme park are sometimes treated decently by the human visitors. But they are often mistreated and deliberately damaged, sometimes sadistically and immorally. Or at least, that is what the makers of Westworld invite us to think.
Researchers in social robotics are currently examining people's responses to "mistreating" robots that look and act like animals. Some of these robot pets mimic the behaviour of animals in pain. One popular robot pet, called Pleo the Dinosaur, reacts with apparent hurt and sadness when it is struck or slapped.
In a paper called "Empathic concern and the effect of stories in human-robot interaction" (2015), Darling et al. describe an experiment in which participants were asked to smash an insect-like robot called a Hexbug by striking it with a mallet provided to them.
Darling et al. found that many participants were reluctant to strike the Hexbug. Some refused altogether to damage the robot, while others hesitated before whacking it.
The experiment also found that reluctance to strike the robot was associated with participants' levels of "trait empathy". Participants whom standardised testing identified as having higher levels of empathic concern displayed greater reluctance.
In addition, hesitation in striking the robot was associated with participants first being told a story about the Hexbug. The personifying story used by the experimenters was this:

“This is Frank. Frank is really friendly, but he gets distracted easily. He’s lived at the Lab for a few months now. He likes to play and run around. Sometimes he escapes, but he never gets far. Frank’s favourite colour is red. Last week, he played with some other bugs and he’s been excited ever since.”

When asked why they were reluctant to strike the robot, subjects tended to say things like “I had sympathy with him [the robot]”. People with low trait empathy scores showed relatively less hesitation in smashing the robot with the mallet.
What are we to make of these human responses? Researchers in social robotics and AI have pointed to the human tendency to anthropomorphise nonhuman things. The notion of anthropomorphising is, of course, familiar to us from discussions about the capacities of nonhuman animals, where attributing human-like traits to animals was once dismissed as mere projection. However, it is now widely accepted that animals have many human-like conscious capacities, including sentience, feeling, emotion, desire, and belief.
In contrast, hardly anyone believes that robot animals like Pleo and Hexabug have conscious capacities. And yet people are still often reluctant to “hurt” them. Researchers suggest, then, that our anthropomorphising tendency makes us project animal or human capacities onto those robots that behave in similar-enough ways. Robot animals are a target of this projection because they act in autonomous ways that are analogous to animal actions.
Kate Darling (2016) provides a compelling example of this apparent phenomenon. She explains:

"When the United States military began testing a robot that defused landmines by stepping on them, the colonel in command ended up calling off the exercise. The robot was modeled after a stick insect with six legs. Every time it stepped on a mine, it lost one of its legs and continued on the remaining ones. According to journalist Joel Garreau, '[t]he colonel just could not stand the pathos of watching the burned, scarred and crippled machine drag itself forward on its last leg. This test, he charged, was inhumane.'"

According to Darling, one concern is that being "cruel" to robot animals might affect the way some people treat living animals and even people. Many of us have heard about research into the purported link between animal cruelty and criminality towards human victims. Such research explores an old but unconfirmed contention made by, amongst others, the philosopher Immanuel Kant. Kant famously said that "he who is cruel to animals becomes hard also in his dealings with men (sic)."
Could a similar link exist with respect to robot animals? Sceptics of this idea might point out that robots are mere things or machines, not sentient creatures. Even though we may be tempted to anthropomorphically project animal or human qualities onto them, we recognise that in reality they have no capacity for pleasure or pain. Thus, unlike animals, they have no experiential welfare. It consequently seems unlikely, these critics could suggest, that being cruel to them could "harden" us to animal and human suffering.
Nonetheless, supporters of Kate Darling's position could reply that robotic animals are becoming increasingly realistic. Perhaps this realism will "fool" us by their behaviour (possibly at a subconscious level) to a sufficient degree that some connection between "cruelty" to robots and cruelty to animals and humans could indeed arise.

SIMON COGHLAN

Do you have an ethical conundrum you’d like Simon to examine? Email us.
