How individuals perceive a human-like robot


by Correspondent - July 11, 2022, 2:54 am

According to new research, when robots appear to engage with people and display human-like emotions, people may perceive them as capable of “thinking” — that is, acting on their own beliefs and desires rather than simply following their programming.

“The relationship between anthropomorphic shape, human-like behaviour, and the tendency to attribute independent thought and intentional behavior to robots is yet to be understood,” said study author Agnieszka Wykowska, Ph.D., a principal investigator at the Italian Institute of Technology. “As artificial intelligence increasingly becomes a part of our lives, it is important to understand how interacting with a robot that displays human-like behaviors might induce a higher likelihood of attribution of intentional agency to the robot.” The research was published in the journal Technology, Mind, and Behavior.

Across three experiments involving 119 participants, researchers examined how individuals perceived a human-like robot, the iCub, after socialising with it and watching videos together. Before and after interacting with the robot, participants completed a questionnaire that showed them pictures of the robot in different situations and asked them to judge whether the robot’s motivation in each situation was mechanical or intentional. For example, participants viewed three photos depicting the robot selecting a tool and then chose whether the robot “grasped the closest object” or “was fascinated by tool use.”

In the first two experiments, the researchers remotely controlled iCub’s actions so it would behave gregariously, greeting participants, introducing itself, and asking for the participants’ names. Cameras in the robot’s eyes were also able to recognise participants’ faces and maintain eye contact. The participants then watched three short documentary videos with the robot, which was programmed to respond to the videos with sounds and facial expressions of sadness, awe, or happiness.

In the third experiment, the researchers programmed iCub to behave more like a machine while it watched videos with the participants. The cameras in the robot’s eyes were deactivated so it could not maintain eye contact, and it spoke only recorded sentences to the participants about the calibration process it was undergoing. All emotional reactions to the videos were replaced with a “beep” and repetitive movements of its torso, head, and neck. The researchers found that participants who watched videos with the human-like robot were more likely to rate the robot’s actions as intentional, rather than programmed, whereas those who interacted with the machine-like robot were not. This suggests that mere exposure to a human-like robot is not enough to make people believe it is capable of thoughts and emotions; rather, human-like behavior may be crucial to being perceived as an intentional agent.