AI in Behavioral and Social Research: Ethical Considerations, Challenges and Opportunities

We used GenAI to test whether it could improve our efficiency by producing summaries of our webinar content. Here we describe how well it worked on this second occasion.

The application of Artificial Intelligence (AI) in behavioral and social research can have significant benefits, including increased productivity. However, its use also presents ethical risks, for example relating to bias and fairness, transparency and trust, and consent and privacy. Addressing these issues requires a multi-faceted approach, involving the development of adaptive protocols, technical solutions, human oversight, and public education. Responsible use of AI requires a values-driven approach, accountability, and collaborative efforts within the research community.

A BR-UK webinar in May 2025 about social responsibility in AI, led by Janna Hastings, Robert West and Susan Michie, engaged around 300 people.


  1. Bias and fairness

Bias in research can arise in various ways, including skewed datasets and biased model architectures. Algorithms in AI models may be biased in favour of those already privileged in society since they are based on, and may amplify, biased assumptions about attributes such as gender, ethnicity and culture. Such biases can lead to misrepresentation, discrimination, harmful decisions and perpetuation of inequalities. For example, there is evidence of occupational stereotypes in image generation models and demographic misrepresentation or under-representation in medical models, which could lead to harmful clinical decisions. To maximise equitable research outcomes, we need to address both the AI models and the people using them.

We should be proactive in developing culturally-aware AI models that recognise and avoid cultural and sociodemographic bias, and researchers should check for and minimise biases, whilst drawing attention to the possible effects of any that remain. Strategies include culturally-informed prompting of AI programs, explicit de-biasing, careful selection of appropriate models for a given application, and using structured frameworks for behavioural research. It is worth noting that, quite apart from any influence of AI, there is extensive evidence of biases that human researchers bring to the design, conduct and interpretation of research, and this needs to be more widely recognised and addressed.


  2. Transparency and trust

Building and maintaining public trust in AI-assisted research is crucial if people are to accept and act on the findings of that research. This requires transparency in methodologies, clear reporting of AI's role, and accountability for potential negative outcomes.  Evidence of influences on trust in AI-enabled research should be used to inform best practice in conducting and reporting research.  Educating the public about AI's capabilities and limitations is important for fostering critical evaluation and maintaining trust in AI-assisted research outcomes.


  3. Consent and privacy

A key concern is ensuring that research participants fully understand the nature and implications of their involvement in AI-assisted research. Procedures for obtaining informed consent need to take account of how AI applications may store and use participant data.

Human judgment and oversight remain critical when using AI. Whilst AI can enhance research productivity and analytical capabilities, it should complement, not replace, human expertise, particularly in tasks requiring nuanced interpretation and ethical judgment. AI outputs, particularly from generative models, require careful evaluation and validation. Human researchers bring valuable contextual understanding and can identify nuances missed by AI.  The idea of the ‘human-in-the-loop’ remains important.

To maximise the benefits of AI in social and behavioural research, and minimise the potential for harm, it is important to have an explicit values-driven approach to AI development and application. This includes promoting social responsibility, collective action to shape AI's development and fostering a community where researchers can share knowledge and resources.


BR-UK repository of AI tools

Inspired by the quantity and quality of engagement in the three webinars on AI hosted by BR-UK, we are pleased to announce that we are launching a BR-UK repository of AI tools and resources for the behavioural research community. We will encourage users to share their experiences of using AI applications and associated resources, comment on which are good for specific purposes, and suggest new ones to add; in this way, it will be a living repository.

The repository currently offers a living guide to using AI across each stage of the behavioural research process, with practical examples, guidance on ethical and governance considerations, and a curated set of learning resources, including webinars, courses and videos. It will also provide access to the Ask BR-UK AI Help Desk, an initiative launching in Autumn 2025, following these webinars, to support behavioural researchers in navigating this fast-evolving space.