AI tools raise important privacy concerns for researchers, especially when handling personal or sensitive information. To use AI responsibly, it's essential to consider privacy from two key angles:

Private Information in AI Outputs

Some generative AI tools have been known to include private or confidential information in their responses. While developers claim their training data was obtained legally and ethically, there is often little transparency about what data was used or where it came from. This makes it hard to guarantee that outputs won't contain material that infringes on someone's privacy.

How AI Inputs May Be Stored, Shared, or Reused

When you enter data into an AI tool, that input (and sometimes the output) may be saved, analysed, or reused to improve the model. Unless the tool explicitly states otherwise, you should not assume that anything you input is private or confidential.

Researchers should:

- Avoid entering personal, sensitive, or confidential data into third-party tools without strong privacy safeguards (see the sketch in the appendix at the end of this article).
- Check with your institution to ensure the tools you use meet its privacy and security standards.
- Understand the terms of any AI tool you use, especially around data handling and storage.
- Obtain informed consent from participants if you plan to use an AI tool to process their data.

This is especially important if you're working with identifiable or sensitive information.

Why Privacy Matters in AI Research

AI tools often rely on vast amounts of personal data to make predictions or generate content. As their capabilities grow, so do the risks to individual privacy, including how data is collected, shared, stored, and inferred. Yet research into AI and privacy often lags behind technological development. As a result, scholars and practitioners need to stay proactive in protecting data and building trust, particularly when working with marginalised or vulnerable populations.

Further reading:

- More on emerging AI privacy concerns in research
- Guidance on AI and data protection

This article was published on 2025-06-27.
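
Appendix: A Minimal Redaction Sketch

The Python sketch below illustrates the first recommendation above: stripping obvious identifiers from free text before it goes anywhere near a third-party tool. It is a minimal sketch, assuming simple regular-expression patterns for email addresses and phone numbers; the PATTERNS table, the placeholder labels, and the redact helper are illustrative names introduced here, not a vetted de-identification method.

```python
import re

# Illustrative patterns only: real de-identification needs vetted tooling
# and institutional approval. Each placeholder label maps to a compiled
# regular expression matching one kind of direct identifier.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace every match of each pattern with its placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(label, text)
    return text

if __name__ == "__main__":
    sample = "Contact the participant at jane.doe@example.org or +44 131 650 1000."
    print(redact(sample))
    # Prints: Contact the participant at [EMAIL] or [PHONE].
    # Note: names and indirect identifiers are NOT caught by these patterns.
```

Pattern-based redaction will miss many identifiers (names, addresses, combinations of quasi-identifiers), so even with such a filter the safest default is the one the guidance above gives: keep identifiable data out of third-party tools altogether.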