Deciding Between Human and Algorithmic Decision-Makers

In contemporary society, algorithms have permeated various sectors, significantly impacting decision-making processes in critical areas such as criminal justice, healthcare, and finance. This growing reliance on algorithmic decision-making, however, has not been without controversy. Critics argue that it institutionalizes biases and compromises the principles of fairness. These concerns are not unfounded, given the opaque nature of some algorithms and the data on which they are trained.

To explore public perception and acceptance of algorithmic versus human decision-makers in high-stakes situations, researchers Kirk Bansak and Elisabeth Paulson conducted an extensive pre-registered study involving 9,000 participants from the United States. This study specifically focused on two scenarios: pretrial release and bank loan applications, both contexts where decision-making can have profound implications on individuals’ lives.

Participants in the study were divided into three groups. Each group was asked to choose between different pairs of decision-makers: one between two human decision-makers, another between two algorithmic decision-makers, and the third between one human and one algorithmic decision-maker. To inform their decisions, participants were provided with simulated statistics detailing the decision-makers’ past performance in terms of efficiency and fairness.
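This setup is, in essence, a paired-comparison choice experiment. As a rough illustration only (not the authors' code or data), the sketch below simulates such choices under an assumed preference weighting and shows how a simple logistic choice model could recover the relative weight participants place on efficiency versus fairness; all numbers, weights, and variable names are invented for the example.

```python
# Hypothetical sketch: simulate a paired-choice task in which each participant
# picks between two decision-makers described by an efficiency score and a
# fairness score, then recover the implied preference weights from the choices.
# Not the study's code; every value here is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_choices = 9_000  # same order as the study's sample size, purely illustrative

# Each decision-maker in a pair gets simulated past performance on two
# dimensions, efficiency and fairness (scaled 0-1 here).
eff_a, fair_a = rng.uniform(0, 1, (2, n_choices))
eff_b, fair_b = rng.uniform(0, 1, (2, n_choices))

# Assumed behaviour: participants weight efficiency at 0.7 and fairness at 0.3
# and choose probabilistically (logit noise). These weights are made up.
utility_gap = 0.7 * (eff_a - eff_b) + 0.3 * (fair_a - fair_b)
chose_a = rng.random(n_choices) < 1 / (1 + np.exp(-5 * utility_gap))

# Regressing choices on the attribute differences recovers the implied
# relative importance of efficiency versus fairness.
X = np.column_stack([eff_a - eff_b, fair_a - fair_b])
model = LogisticRegression().fit(X, chose_a)
w_eff, w_fair = model.coef_[0]
print(f"Estimated relative weight on efficiency: {w_eff / (w_eff + w_fair):.2f}")
```

Under these assumptions the fitted model puts roughly 70% of the weight on efficiency, mirroring how observed choices, rather than stated values, reveal what people actually prioritize.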

The study’s results revealed a notable trend: across all groups, participants showed a predominant preference for efficiency over fairness. This pattern held true regardless of the decision-makers’ nature—whether algorithmic or human. The inclination towards efficiency was consistent across diverse demographic lines, including race, political affiliation, education level, and personal beliefs about artificial intelligence. These findings suggest a broad, underlying preference that transcends individual demographic differences.

However, participants did show a slight but statistically significant overall preference for human decision-makers over algorithms. This preference was more pronounced among Republicans than among Democrats, pointing to a potential ideological divide in trust of algorithmic decision-making.

A paradox emerged when participants were asked about their values: a large majority claimed that fairness was their top priority. Yet this stated priority did not carry through to their choices, which predominantly favoured efficiency. The discrepancy points to a disconnect between expressed values and actual decision-making behaviour.

Bansak and Paulson suggest that these findings have broader implications for the adoption of algorithmic decision-making systems across different sectors. They hypothesize that as algorithms become demonstrably more efficient, they are likely to gain wider acceptance, potentially overcoming existing cultural and psychological barriers. According to the authors, for algorithms to be embraced by all groups, their efficiency must be demonstrated, clearly communicated, and understood by the public.

The study highlights the complex dynamics of accepting algorithmic versus human decision-makers, and it underscores the need for ongoing research into the factors that shape public trust in these systems. As algorithms become more deeply integrated into critical decision-making processes, it will be crucial to ensure that they are deployed in ways that uphold fairness and transparency.

More information: Kirk Bansak and Elisabeth Paulson, Public attitudes on performance for algorithmic and human decision-makers, PNAS Nexus. DOI: 10.1093/pnasnexus/pgae520

