Section 2: Ethics, Sustainability & Responsible AI Use

Guidance on ethical and governance considerations, helping you use AI responsibly and reflectively.

Responsible AI usage in research involves multiple dimensions, from regulatory compliance to awareness of broader societal impact, as illustrated in the figure by Knöchel et al. (2024). While this "ladder" of principles provides a comprehensive framework, this section of the repository offers an overview of critical issues, including Disclosure & Transparency, Privacy, Biases, Sustainability Concerns and Key Regulations, with links to further resources if you want to learn more.

For a general introduction, the UK Research Integrity Office recently published “Embracing AI with integrity: A practical guide for researchers.” This guidance provides overviews and recommendations for researchers on using AI in research across five key areas.

For each area, the guidance sets out key questions researchers should consider before using AI at various stages. For example, before integrating AI into research, it recommends that researchers ask the following questions:

  1. What are the tangible benefits?
  2. What is the potential impact, and are there ethical concerns?
  3. Is AI the only way to achieve the desired outcome?

Other high-level guidance exists, such as the European Commission’s “Living guidelines on the responsible use of generative AI in research.”

Disclaimer: Please note that this overview is not comprehensive and was last updated on 26th August 2025. Since AI technology and guidelines change quickly, it's important to check for the latest standards beyond what we discuss here. 

Figure: a ladder labelled “Responsible AI Use in Research,” with stages ascending the rungs (Knöchel et al., 2024).

In this webinar, speakers Dr Janna Hastings, Professor Susan Michie and Professor Robert West reviewed and discussed ethical challenges in applying AI, including generative AI, to behavioural research, and how these can best be addressed.

The MIT AI Risk Repository is a comprehensive, publicly accessible database that consolidates over 1,600 AI risks extracted from 65 existing frameworks and classifications. Created by MIT FutureTech researchers, it serves as a "common frame of reference" for understanding the full spectrum of AI-related risks. 

It has three key components: 

  1. AI Risk Database: Links each risk to source information (paper titles, authors), supporting evidence (quotes, page numbers), and taxonomies.
  2. Causal Taxonomy: Classifies risks by causal factors, including how, when and why AI risks occur.
  3. Domain Taxonomy: Organises risks into seven domains: Discrimination & toxicity; Privacy & security; Misinformation; Malicious actors & misuse; Human-computer interaction; Socioeconomic & environmental; and AI system safety, failures, and limitations.

This repository provides: 

  • An accessible overview and regularly updated source of information about new risks and research. 
  • A common frame of reference for researchers, developers, businesses, evaluators, auditors, policymakers, and regulators. 
  • A resource to help develop research, curricula, audits, and policy. 

Slattery, P., Saeri, A. K., Grundy, E. A. C., Graham, J., Noetel, M., Uuk, R., Dao, J., Pour, S., Casper, S., & Thompson, N. (2024). The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks from Artificial Intelligence. https://doi.org/10.48550/arXiv.2408.12622