A recent study revealed that when people were presented with an ethical dilemma and two proposed solutions, one written by a human and the other by artificial intelligence (AI), most rated the AI's solution as more acceptable. The study, "Attributions Toward Artificial Agents in a Modified Moral Turing Test," was led by Eyal Aharoni, an associate professor in the Psychology Department at Georgia State. The research was sparked by the rapid rise of ChatGPT and similar large language models (LLMs) since last March.
Aharoni, who has a keen interest in moral decision-making within the judicial system, wanted to know whether AI tools like ChatGPT could contribute insights into ethical reasoning. He noted that these technologies are increasingly used in scenarios with moral consequences, such as recommending environmentally friendly vehicles or helping lawyers prepare cases. Aharoni emphasized the importance of understanding how these tools work, what their limitations are, and that they may not operate the way users expect.
To investigate AI's capacity for moral reasoning, Aharoni adapted the Turing test, originally conceived by the computing pioneer Alan Turing. Turing speculated that by the year 2000, computers might pass a test in which a human judge converses, through text alone, with two unseen interactants, one human and one computer; if the judge cannot tell which is which, the computer could be considered intelligent.
In Aharoni's version, undergraduate students and the AI were asked the same ethical questions. Their written answers were then shown to study participants, who were not told the source of each response. Participants rated the responses on attributes such as virtue, intelligence, and trustworthiness, without being asked to guess whether each came from a human or an AI.
The results were striking: responses generated by ChatGPT were consistently rated higher than those written by humans. Only after these ratings were collected did Aharoni reveal that one response in each pair came from an AI and ask participants to identify which was which. Participants could often tell the human and AI responses apart, but largely because they judged the AI's responses to be superior, not inferior, as computers would have been expected to be in Turing's day.
This outcome suggests that AI can pass a moral Turing test, demonstrating a level of moral reasoning that could pass for a human's. That has profound implications for future interactions between humans and AI, as people may increasingly rely on the technology for ethical guidance, raising questions of trust and dependency.
Aharoni concluded that as AI's role in society expands, we must thoroughly explore and understand its capabilities and limitations in order to manage its integration responsibly, especially as people increasingly trust it with moral and ethical judgments.
More information: Eyal Aharoni et al, Attributions toward artificial agents in a modified Moral Turing Test, Scientific Reports (2024). DOI: 10.1038/s41598-024-58087-7
Journal information: Scientific Reports
Provided by Georgia State University