According to a new study from the University of Tokyo, fitting autonomous vehicles (AVs) with robotic eyes can make pedestrians’ road-crossing behaviour safer.
Participants played out scenarios in a virtual reality (VR) simulation featuring a vehicle fitted with robotic eyes, in which the car either did or did not acknowledge them with its gaze, in order to test the safety of their behaviour.
“There is not enough investigation into the interaction between self-driving cars and the people around them, such as pedestrians,” said Professor Takeo Igarashi from the Graduate School of Information Science and Technology.
“We focused on the movement of the eyes but did not pay too much attention to their visual design in this particular study.
“We just built the simplest one to minimise the cost of design and construction because of budget constraints.
“So, we need more investigation and effort into such interaction to bring safety and assurance to society regarding self-driving cars.” Igarashi also said the eyes were “cute”.
Because drivers in AVs can behave more like passengers, it can be difficult for pedestrians to know whether a vehicle has registered their presence, as there may be no eye contact or other cues from those inside the car.
A self-driving golf cart was fitted with two large, remote-controlled robotic eyes, dubbed the “gazing car”, in order to test how it affected the riskiness of pedestrian actions.
In the experiment, participants had to decide whether or not the cart had noticed them and was going to stop.
The team set up four scenarios, two where the cart had eyes and two without. The cart had either noticed the pedestrian and was intending to stop or had not noticed them and planned to continue driving. When the vehicle was fitted with eyes, the eyes would either look towards the pedestrian (going to stop) or look away (not going to stop).
The team recorded the scenarios using 360-degree video cameras, and 18 participants (nine women and nine men, aged 18–49 years) then experienced them in VR.
They experienced the scenarios multiple times in random order and were given three seconds each time to decide whether or not they would cross the road in front of the cart.
“The results suggested a clear difference between genders, which was very surprising and unexpected,” said Project Lecturer Chia-Ming Chang, a member of the research team.
“In this study, the male participants made many dangerous road-crossing decisions (i.e., choosing to cross when the car was not stopping), but these errors were reduced by the cart’s eye gaze.
“However, there was not much difference in safe situations for them (i.e., choosing to cross when the car was going to stop).
“On the other hand, the female participants made more inefficient decisions (i.e., choosing not to cross when the car was intending to stop) and these errors were reduced by the cart’s eye gaze.
“However, there was not much difference in unsafe situations for them.”
Overall, the study concluded that the eyes led to smoother or safer crossings for everyone.
Some participants said they thought the vehicle’s eyes were cute, while others found them creepy or scary.
Many male participants said they felt more at risk when the eyes were not looking at them, while many female participants said they felt safer when the eyes were looking at them.
The team recognises that the study is limited by its small number of participants, each playing out just one scenario, and that people might make different choices in VR than in real life.