Unfair AI decisions may make us indifferent to bad behavior by humans

Artificial intelligence (AI) makes important decisions that affect our everyday lives. These decisions are deployed by companies and institutions in the name of efficiency. They can help determine who gets into college, who lands a job, who receives medical treatment, and who qualifies for government assistance.

As AI takes on these roles, the risk grows of unfair decisions – or of decisions perceived as unfair by those affected. For example, automated college admissions or hiring systems may inadvertently favor certain groups or backgrounds while overlooking equally qualified but underrepresented applicants.

Or, when governments use AI in benefits systems, it can distribute resources in ways that worsen social inequality, leaving some people with less than they deserve and with the sense of having been treated unfairly.

Together with an international team of researchers, we examined how unfair distribution of resources – whether handled by AI or by a human – influences people's willingness to take action against injustice. The results were published in the journal Cognition.

As AI becomes more integrated into daily life, governments are stepping in to protect citizens from biased or opaque AI systems. Examples include the White House's AI Bill of Rights and the European Parliament's AI Act. These efforts reflect a common concern: people may feel unfairly treated by AI decisions.

So how does experiencing injustice through an AI system affect how people treat each other afterwards?

AI-induced indifference

Our article in Cognition examined people's willingness to act against injustice after experiencing unfair treatment from an AI. The behavior we studied concerned subsequent, unrelated interactions between people. A willingness to act in such situations is often called “prosocial punishment” and is considered crucial for upholding social norms.

For example, whistleblowers can report unethical practices despite the risks, or consumers can boycott companies they believe are acting harmfully. People who engage in such acts of prosocial punishment often do so to address injustices affecting others, which helps strengthen community standards.



We asked this question: could experiencing injustice from an AI rather than from a person influence people's willingness to later confront human wrongdoers? For example, if an AI unfairly assigns a shift or denies a benefit, does that make people less likely to report a colleague's unethical behavior afterwards?

In a series of experiments, we found that people who were treated unfairly by an AI were less likely to later punish human wrongdoers than participants who had been treated unfairly by a human. They showed a kind of desensitization to the bad behavior of others. We called this effect “AI-induced indifference” to capture the idea that unfair treatment from AI can weaken people's sense of responsibility towards others, making them less likely to speak out about injustices in their community.

Reasons for inaction

This may be because people assign less blame to AI for unfair treatment and therefore feel less motivated to act against the resulting injustice. The effect held whether participants experienced only unfair behavior from others or a mix of fair and unfair behavior. To check whether the relationship we uncovered was influenced by familiarity with AI, we ran the same experiments again after ChatGPT was released in 2022. The later series of experiments produced the same results as the earlier ones.

These results suggest that people's reactions to injustice depend not only on whether they were treated fairly, but also on who treated them unfairly – an AI or a human.

In short, unfair treatment by an AI system can shape how people treat one another, making them less attentive to each other's unfair actions. This points to an impact of AI on human society that extends beyond an individual's experience of a single unfair decision.

When AI systems act unfairly, the consequences extend to future interactions, influencing how people treat each other even in situations that have nothing to do with AI. We recommend that developers of AI systems focus on minimizing bias in AI training data to prevent these spillover effects.

Policymakers should also set standards for transparency and require companies to disclose where AI might make unfair decisions. This would help users understand the limitations of AI systems and challenge unfair results. Increased awareness of these impacts could also encourage people to remain alert to injustice, especially after interacting with AI.

Feelings of outrage and blame over unfair treatment are essential to recognizing injustice and holding wrongdoers accountable. By addressing the unintended social impacts of AI, leaders can ensure that AI supports, rather than undermines, the ethical and social standards necessary for a justice-based society.

Chiara Longoni, Associate Professor of Marketing and Social Sciences, Bocconi University; Ellie Kyung, Associate Professor, Marketing Department, Babson College, and Luca Cian, Killgallon Ohio Art Professor of Business Administration, Darden School of Business, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.
