AI systems hold great potential for supporting behavioural research and improving real-world decision-making. But they also carry risks, particularly the risk of amplifying existing human and systemic biases. Because AI learns patterns from data, it often reflects the real world's limitations, stereotypes and inequalities. This can happen in many areas, such as healthcare, hiring, education and policing, leading to results that are not only wrong but also harmful.

A Real-World Example: Bias in AI-Assisted Policing

AI is increasingly used in high-pressure jobs like policing, where officers must make quick decisions with limited information. While these tools aim to support judgement, research shows they can embed and reinforce bias through a feedback loop (a toy simulation at the end of this article makes the dynamic concrete):

- Biased data: AI tools are trained on historical data, including past arrest records or incident reports that already reflect systemic inequalities.
- Biased judgements: The AI then provides officers with skewed information, influencing how they interpret and respond to a situation.
- Biased outcomes: These responses are, in turn, documented in new reports, which feed back into future training data for the AI model.

Over time, this cycle reinforces and amplifies the original bias, even when individuals are acting in good faith. Flawed data, tools and human decision-making become entangled in a self-reinforcing loop.

[Figure: the self-reinforcing cycle of bias in AI-assisted police work. Source: How behavioral science interventions can disrupt the cycle of bias in AI-assisted police work - Andrea G. Dittmann, Kyle S. H. Dobson, Shane Schweitzer, 2025]

AI bias doesn't demand a complete rethinking of science, but it does come with special responsibilities for researchers using these tools:

- Scrutinise your data: Who does it represent, and who is missing? (A simple representation audit, sketched at the end of this article, is one place to start.)
- Disclose your methods: Be transparent about how AI was used, including prompt design and data-processing steps.
- Verify outputs: Always include human oversight, especially when using generative models.
- Engage communities: Collaborate with affected populations to identify and mitigate potential harms.
- Be accountable: Researchers are responsible for the quality and fairness of AI-informed findings.

Ultimately, researchers must use their judgement and expertise not only to apply AI effectively but also to challenge and improve it, upholding the values of rigorous, ethical and inclusive science.
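To make the feedback loop described above concrete, here is a minimal sketch in Python. Every number in it is an illustrative assumption rather than data from the article or the cited study: two districts share the same true incident rate, but one starts slightly over-represented in the historical records, and patrols are concentrated where records are densest.

```python
# Toy simulation of the bias feedback loop described above.
# Assumptions (not from the article): two districts with IDENTICAL true
# incident rates; district A starts slightly over-represented in the records;
# patrols are concentrated superlinearly on the district with more records.

TRUE_RATE = 0.5         # same underlying incident rate in both districts
PATROLS = 100           # patrols dispatched per cycle
ALPHA = 2.0             # >1 models "hotspot" concentration of attention

records = [60.0, 40.0]  # biased data: district A over-recorded at the start

for cycle in range(8):
    # Biased judgements: allocate patrols in proportion to records ** ALPHA,
    # so the district with more records attracts disproportionate attention.
    weights = [r ** ALPHA for r in records]
    share_a = weights[0] / sum(weights)

    # Biased outcomes: incidents are only recorded where patrols are present,
    # so new reports mirror the allocation, not the (equal) true rates...
    new_reports = [PATROLS * TRUE_RATE * share_a,
                   PATROLS * TRUE_RATE * (1.0 - share_a)]

    # ...and the new reports feed straight back into next cycle's "training data".
    records = [r + n for r, n in zip(records, new_reports)]
    print(f"cycle {cycle}: district A now holds "
          f"{records[0] / sum(records):.0%} of all records")
```

Even though both districts generate incidents at the same rate, district A's share of the records climbs every cycle, because the allocation rule and the recording process keep feeding each other; no individual in the loop needs to act in bad faith for the disparity to grow.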
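In the same spirit, here is a minimal sketch of the "scrutinise your data" step: comparing who appears in a sample against an external population benchmark to spot who is missing. The group labels and benchmark shares are hypothetical placeholders, not data from the article.

```python
# Minimal representation audit: who is in the sample, and who is missing?
# The group labels and benchmark shares below are hypothetical placeholders.
from collections import Counter

sample_groups = ["A"] * 70 + ["B"] * 25 + ["C"] * 5   # e.g. one entry per participant
benchmark = {"A": 0.55, "B": 0.30, "C": 0.15}          # assumed population shares

counts = Counter(sample_groups)
total = sum(counts.values())

for group, expected in benchmark.items():
    observed = counts.get(group, 0) / total
    flag = "  <- under-represented" if observed < expected - 0.05 else ""
    print(f"group {group}: sample {observed:.0%} vs population {expected:.0%}{flag}")
```

A check like this doesn't fix a skewed dataset, but it makes the skew visible before the data is used to train or prompt a model.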