Responsible AI usage in research involves multiple dimensions, from regulatory compliance to awareness of broader societal impact, as illustrated in the figure below (Knöchel et al., 2024). While this "ladder" of principles provides a comprehensive framework, this section focuses on several key considerations. We offer an overview of critical issues, including Disclosure & Transparency, Privacy, Biases, Sustainability Concerns and Key Regulations, with links to further resources if you want to learn more (e.g. the European Commission's Living guidelines on the responsible use of generative AI in research).

Disclaimer: Please note that this overview is not comprehensive and was last updated on 16 April 2025. Since AI technology and guidelines change quickly, it's important to check for the latest standards beyond what we discuss here.

[Figure: the "ladder" of responsible AI principles, adapted from Knöchel et al., 2024]

Watch the 'Responsible use of AI in behavioural research' webinar: in this webinar, speakers Dr Janna Hastings, Professor Susan Michie and Professor Robert West reviewed and discussed ethical challenges in applying AI, including generative AI, to behavioural research, and how these can best be addressed. The slide deck from this webinar is also available.

This article was published on 2025-06-27.