What characterizes a real human interaction

Human-machine interactions: Bots are more successful when they pretend to be humans

Study examines people's willingness to cooperate with bots

An international research team, with the collaboration of Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development, wanted to know whether it makes a difference in cooperation between humans and machines if the machine pretends to be human. To find out, the team carried out an experiment in which people interacted with bots. In the study, published in the journal Nature Machine Intelligence, the researchers show that bots are more successful than humans in certain human-machine interactions - but only if they are allowed to conceal their non-human identity.

The artificial voices of Siri, Alexa, or the Google Assistant, with their sometimes wooden answers, leave no doubt that we are not talking to a real person. But the latest technological breakthroughs, which combine artificial intelligence with deceptively realistic human voices, make it possible for bots to pass as humans on the phone. This raises new ethical questions: Is it deceptive for bots to pretend to be human? Should there be an obligation to be transparent?

Previous studies have shown that people tend to be reluctant to cooperate with intelligent bots. But if people do not even notice that they are interacting with a machine, and as a result cooperate with it more successfully, wouldn't it make sense in some cases to maintain the deception?

For the study, published in Nature Machine Intelligence, a research team from the United Arab Emirates, the USA, and Germany, with the collaboration of Iyad Rahwan, Director of the Center for Humans and Machines at the Max Planck Institute for Human Development, invited almost 700 participants to interact with either a human or an artificial partner in an online cooperation game. In the game, known as the Prisoner's Dilemma, players can either act selfishly to exploit the other player, or act cooperatively, to the benefit of both.
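To make the incentive structure concrete, here is a minimal sketch of a one-shot Prisoner's Dilemma round in Python. The payoff values are illustrative assumptions following the standard ordering (temptation > reward > punishment > sucker's payoff), not the exact parameters used in the study.

```python
# Minimal one-shot Prisoner's Dilemma sketch.
# Payoff values are illustrative assumptions (T > R > P > S),
# not the parameters used in the study.

COOPERATE, DEFECT = "cooperate", "defect"

# PAYOFFS[(my_move, partner_move)] -> my payoff
PAYOFFS = {
    (COOPERATE, COOPERATE): 3,  # mutual cooperation: both do well (R)
    (COOPERATE, DEFECT):    0,  # I cooperate, partner exploits me (S)
    (DEFECT,    COOPERATE): 5,  # I exploit a cooperating partner (T)
    (DEFECT,    DEFECT):    1,  # mutual defection: both do poorly (P)
}

def payoff(my_move: str, partner_move: str) -> int:
    """Return my payoff for one round, given both moves."""
    return PAYOFFS[(my_move, partner_move)]

# Defecting is individually tempting, but mutual cooperation
# beats mutual defection for both players:
assert payoff(DEFECT, COOPERATE) > payoff(COOPERATE, COOPERATE)
assert payoff(COOPERATE, COOPERATE) > payoff(DEFECT, DEFECT)
```

The asserts capture the dilemma: each player is individually better off defecting, yet both end up worse than if they had cooperated.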

The decisive factor in the experiment was that the researchers gave some participants false information about the identity of their game partner. Some participants who interacted with a human were told they were interacting with a bot, and vice versa. In this way, the researchers could determine whether people are prejudiced against partners they believe to be bots, and whether it makes a difference to a bot's effectiveness whether it admits to being a bot or not.
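This manipulation amounts to crossing the partner's actual identity with the identity the participant is told, yielding four conditions. The short sketch below enumerates them; the condition labels are illustrative, not the researchers' own terminology.

```python
from itertools import product

# 2x2 design: the partner's actual identity crossed with the
# identity declared to the participant. Labels are illustrative,
# not the researchers' terminology.
ACTUAL = ("human", "bot")
DECLARED = ("human", "bot")

for actual, declared in product(ACTUAL, DECLARED):
    truthful = actual == declared
    print(f"partner is {actual}, declared as {declared} "
          f"({'truthful' if truthful else 'deceptive'} condition)")
```

Comparing cooperation rates across these four cells is what lets the study separate the effect of who the partner really is from the effect of who the participant believes the partner to be.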

The results showed that bots posing as humans were more successful at persuading their game partners to cooperate. As soon as they revealed their true nature, however, cooperation rates declined. Transferred to a realistic scenario, this could mean, for example, that help desks operated by bots could actually help faster and more efficiently if they were allowed to pretend to be human. Society will have to negotiate in which cases of human-machine cooperation transparency matters more, and in which cases efficiency does, say the researchers.

Original study
Ishowo-Oloko, F., Bonnefon, J.-F., Soroye, Z., Crandall, J., Rahwan, I., & Rahwan, T. (2019). Behavioral evidence for a transparency-efficiency trade-off in human-machine cooperation. Nature Machine Intelligence. doi: 10.1038/s42256-019-0113-5