As artificial intelligence becomes more integrated into everyday decision-making, a growing body of research is revealing an uncomfortable truth: humans bring their social biases with them when interacting with machines. A recent study suggests that gender-based discrimination does not stop at human-to-human interaction — it extends to AI systems as well.
The Experiment: Testing Trust and Cooperation:
Researchers explored how people behave toward AI systems when those systems are assigned gender labels. To do this, participants were asked to play the classic Prisoner’s Dilemma, a game designed to measure cooperation, trust, and self-interest.
In the game, two players must independently choose whether to cooperate or defect (the payoff structure is sketched just after this list):
- If both cooperate, both receive a good payoff, the best combined outcome.
- If one cooperates and the other defects, the defector gains more while the cooperator loses out.
- If neither cooperates, both receive a poor result.
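As a concrete illustration, here is a minimal sketch of that payoff structure in Python. The specific point values are illustrative assumptions following the standard Prisoner's Dilemma ordering, not the stakes used in the study.

```python
# Illustrative Prisoner's Dilemma payoffs (values are assumptions, not the study's).
# Each entry maps (my_move, partner_move) -> (my_points, partner_points).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: best combined outcome
    ("cooperate", "defect"):    (0, 5),  # the cooperator loses out, the defector gains most
    ("defect",    "cooperate"): (5, 0),  # the defector exploits the cooperator
    ("defect",    "defect"):    (1, 1),  # mutual defection: a poor result for both
}

def play_round(my_move: str, partner_move: str) -> tuple[int, int]:
    """Return (my_points, partner_points) for one round."""
    return PAYOFFS[(my_move, partner_move)]

# Example: defecting against a cooperator yields the highest individual payoff.
print(play_round("defect", "cooperate"))  # (5, 0)
```

The tension the game captures is visible in the numbers: defection is individually tempting, but if both players give in to it, both end up worse off than if they had cooperated.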
Participants were told they were playing either against another human or an AI partner. These partners were assigned one of four labels: female, male, nonbinary, or no gender at all.
Key Findings: Bias Shapes Behavior Toward AI:
The results were revealing:
- Participants were around 10% more likely to exploit AI partners than human ones.
- Partners labeled as female, nonbinary, or gender-neutral were exploited more often than those labeled male.
- Male-labeled partners were trusted less and therefore faced lower cooperation rates.
Interestingly, people tended to cooperate more with partners they expected to cooperate. Male-labeled partners were perceived as less likely to do so, leading to reduced trust and cooperation.
Gender Differences Among Participants:
The study also uncovered differences based on the participants’ own gender:
- Women were generally more cooperative than men.
- Female participants showed a preference for cooperating with other female-labeled agents — a phenomenon known as homophily.
- Men were more likely to exploit their partners and showed a stronger preference for cooperating with humans over AI.
- Women did not significantly differentiate between human and AI partners in their cooperation levels.
Due to limited data, the researchers could not draw firm conclusions about participants who identified outside the male–female binary.
What Does “Exploitation” Mean Here?
Not all non-cooperation is the same. Researchers identified two motivations:
- Self-protection: expecting the other player to defect, and defecting to avoid the loss.
- Exploitation: believing the other player will cooperate, and defecting to gain a higher reward at their expense.
The second behavior occurred more frequently when partners were labeled as female, nonbinary, or gender-neutral — and even more so when those partners were AI.
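To make the distinction concrete, here is a minimal sketch of how a defection could be labeled by the player's expectation of their partner's move. The function name and logic are illustrative assumptions that mirror the definitions above, not the coding scheme used in the study.

```python
# Hypothetical illustration: classify why a player defected, based on what they
# expected their partner to do. The labels mirror the article's definitions;
# the structure is an assumption, not the study's actual methodology.
def classify_defection(expected_partner_move: str) -> str:
    """Label a 'defect' choice as exploitation or self-protection."""
    if expected_partner_move == "cooperate":
        # Defecting while expecting cooperation: taking the higher reward
        # at the partner's expense.
        return "exploitation"
    else:
        # Defecting while expecting defection: avoiding the worst-case loss.
        return "self-protection"

print(classify_defection("cooperate"))  # exploitation
print(classify_defection("defect"))     # self-protection
```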
Why This Matters for AI Design:
These findings have important implications for how AI systems are designed and deployed. As the researchers noted, human biases inevitably influence interactions with automated systems, affecting trust, cooperation, and engagement. Designers must be aware of unwelcome biases in human–AI interactions and actively work to mitigate them. **Anthropomorphizing AI** (giving it names, voices, or genders) may make systems feel more relatable, but it can also unintentionally reinforce harmful stereotypes and unequal treatment.
A Broader Ethical Question:
As AI continues to take on roles in customer service, healthcare, finance, and education, understanding how people treat these systems is no longer just academic. If gendered AI agents are more likely to be exploited or trusted less, these biases could influence outcomes in real-world applications.
The study highlights a crucial takeaway:
AI is not neutral simply because it is artificial. Human psychology shapes every interaction — and without careful design, technology may end up mirroring society’s deepest biases instead of helping overcome them.
Based on research published Nov. 2, 2025.