Surfacing and Mitigating the “Hidden Risks” of AI

Nov 24 2025, 10.00 - 11.00, Online

Current AI safety discourse disproportionately centres on highly salient technical risks such as algorithmic bias and hallucinations. While these concerns merit attention, they overlook a fundamental truth learned from other safety-critical fields such as aviation and healthcare: catastrophic risks often arise from the way humans interact with technology, rather than from the technology itself.

The Cabinet Office Behavioural Science Team has developed a novel framework to anticipate and mitigate AI risks arising from human-AI interaction. Rather than relying exclusively on technical safeguards and disclaimers, this methodology expands risk mitigation beyond inherent LLM limitations (e.g. bias, susceptibility to jailbreaking) to address risks that emerge from well-intentioned AI use. The framework also shows how seemingly innocuous micro-level decisions can accumulate into systemic macro-level risks (e.g. labour market discrimination) that would remain invisible through conventional risk assessment lenses until harms had already occurred.

The framework has informed the risk-managed deployment of an internal Cabinet Office AI system and carries implications for how AI risk is managed globally. The work demonstrates that effective AI governance requires expertise beyond machine learning specialists. The contemporary AI implementation landscape, however, continues to frame challenges predominantly through technological and economic paradigms rather than socio-technical ones. While major AI organisations employ ethicists to “align” AI with human values, philosophical frameworks alone are insufficient for identifying and mitigating latent behavioural risks. This oversight risks undermining the very efficiency and impact that AI promises to deliver.

BR-UK researchers Dr Maggie Yang and Dr Amy Rodger will present an overview of the BR-UK AI Resources Repository and provide an update on BR-UK's statement on AI.

Chair: Professor Susan Michie, BR-UK Co-Director

Speakers: Holly Marquez, Senior Behavioural Scientist, Cabinet Office; Dr Moira Nicolson, Behavioural Scientist, Cabinet Office; Dr Amy Rodger, BR-UK Research Fellow, University of Edinburgh; and Dr Maggie Guanyu Yang, BR-UK Research Fellow, UCL.