Researchers from the University of Southampton have found that robots can encourage humans to take greater risks.
In a simulated gambling scenario, the team studied whether robots would influence human behaviour. The researchers said the work would help improve understanding of the ethical, practical and policy implications of robot use.
Dr Yaniv Hanoch, associate professor in risk management at the University of Southampton, who led the study, said: “We know that peer pressure can lead to higher risk-taking behaviour.
“With the ever-increasing scale of interaction between humans and technology, both online and physically, it is crucial that we understand more about whether machines can have a similar impact.”
The study involved 180 undergraduate students taking a computer assessment that asked participants to press the spacebar on a keyboard to inflate a balloon displayed on the screen.
Every time a student pressed the spacebar, the balloon inflated slightly and 1p was added to the student’s ‘money bank’. If the balloon popped, the students lost any money they had won. The students could ‘cash in’ at any time.
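The mechanics of the task (a version of the Balloon Analogue Risk Task) can be sketched as a minimal simulation. The pop probability and per-pump value below are illustrative assumptions, not the study’s actual parameters:

```python
import random

def balloon_round(pumps, pop_probability, pence_per_pump=1, rng=random):
    """Simulate one balloon round.

    Each pump adds pence_per_pump to a provisional bank; with
    probability pop_probability (assumed constant here) the balloon
    pops and the round's earnings are lost. Returns the pence banked
    (0 if the balloon popped before the player cashed in).
    """
    bank = 0
    for _ in range(pumps):  # the player's chosen stopping point
        if rng.random() < pop_probability:
            return 0  # balloon popped: round earnings lost
        bank += pence_per_pump
    return bank  # player cashed in

# Comparing a cautious player (few pumps) with a risk-taking one
# (many pumps) over repeated rounds illustrates the trade-off the
# participants faced: more pumps mean more money per round but a
# higher chance of losing it all.
rng = random.Random(0)
cautious_total = sum(balloon_round(5, 0.05, rng=rng) for _ in range(1000))
daring_total = sum(balloon_round(30, 0.05, rng=rng) for _ in range(1000))
```

The trade-off only exists because popping wipes out the round’s earnings; cashing in early guarantees a small payout, while pumping on risks the whole bank.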
One-third of the participants took the test in a room on their own, one-third took the test alongside a robot that gave only instructions, and the final third took the test with a robot that provided both instructions and encouragement, asking questions such as “why did you stop pumping?”.
The results, published in the journal Cyberpsychology, Behavior, and Social Networking, showed that the group encouraged by the robot inflated their balloons more than those in the other groups.
The researchers said the students in the room with the encouraging robot also earned more money overall. There was no significant difference recorded in the behaviours of the students accompanied by the silent robot and those with no robot.
Hanoch added: “We saw participants in the control condition scale back their risk-taking behaviour following a balloon explosion, whereas those in the experimental condition continued to take as much risk as before.
“So, receiving direct encouragement from a risk-promoting robot seemed to override participants’ direct experiences and instincts.”
The team said further study was required to see whether similar results would emerge from human interaction with other AI systems, such as digital assistants or on-screen avatars.
Hanoch said: “With the widespread use of AI technology and its interactions with humans, this is an area that needs urgent attention from the research community.
“On the one hand, our results might raise alarms about the prospect of robots causing harm by increasing risky behaviour. On the other hand, our data points to the possibility of using robots, and AI, in preventive programmes such as anti-smoking campaigns in schools, and with hard-to-reach populations, such as addicts.”