Disclosure & Transparency

As AI tools become more integrated into the research process, transparency about how they’re used is essential. Clear disclosure supports research integrity, builds trust, and ensures compliance with ethical, institutional, and publishing standards.

Generative AI tools, in particular, can shape the tone, content, and structure of research outputs and may introduce bias or inaccuracies. Without proper disclosure, it can be difficult for reviewers, collaborators, or the public to understand how AI influenced the research.

When using AI tools in research or writing:

  • Name the tool and purpose (e.g. summarising literature, drafting, translation).
  • Provide a basic audit trail (e.g. example prompts or usage summary; one lightweight approach is sketched after this list).
  • Confirm human oversight (e.g. whether outputs were reviewed or edited).
  • Check funder and publisher policies early in your workflow, as expectations vary.
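
If your journal or funder asks for prompts and outputs, a lightweight log kept as you work makes it easy to reconstruct an audit trail later. Below is a minimal sketch in Python, assuming a JSON Lines log file; the file name, record fields, and example values are illustrative assumptions, not a required format.

    import json
    from datetime import datetime, timezone
    from pathlib import Path

    LOG_FILE = Path("ai_usage_log.jsonl")  # hypothetical log file name

    def log_ai_use(tool: str, purpose: str, prompt: str, output_note: str) -> None:
        """Append one AI-usage record to a JSON Lines audit log."""
        record = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,                # name and version, e.g. "ChatGPT (GPT-4)"
            "purpose": purpose,          # e.g. "summarising literature"
            "prompt": prompt,            # the exact prompt submitted
            "output_note": output_note,  # how the output was reviewed or edited
        }
        with LOG_FILE.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

    # Example entry (illustrative values only):
    log_ai_use(
        tool="ChatGPT (GPT-4)",
        purpose="drafting",
        prompt="Summarise the main argument of this paragraph in plain English.",
        output_note="Summary was rewritten by the author before inclusion.",
    )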

Many publishers and funders provide specific guidance on using AI in research. Some require disclosure statements or audit trails, while others prohibit sharing confidential or unpublished work with AI tools. If no clear guidance is available, researchers should contact the relevant organisation directly for clarification.

Core rules for AI use in academic publishing and funding applications

  1. AI cannot be an author
    • No publisher allows AI to be listed as an author or co-author
    • Authorship requires human accountability and responsibility
    • Even if AI contributed significantly to text, humans must take full responsibility
  2. Distinguish between assistive and generative AI use
    • Assistive AI (grammar checking, language polishing): Generally does not require disclosure
    • Generative AI (content creation, substantial editing): Always requires disclosure
    • When in doubt, disclose AI use to maintain transparency
  3. Disclose generative AI use properly
    • Include the statement in the methods or acknowledgements section (check journal preference)
    • Specify the name and version of the AI tool used
    • Explain how and why it was used in your research process
    • Some journals require detailed documentation of prompts and outputs
  4. Protect confidentiality
    • Peer reviewers must not input manuscripts they're reviewing into AI tools
    • Don't share confidential research data or funding applications with AI tools
    • Be aware that text entered into generative AI tools may be stored or used for AI training
  5. Follow strict image policies
    • Most publishers prohibit using AI to create or manipulate research images/figures
    • Limited exceptions exist only when AI imaging is itself the research subject
    • When permitted, AI-generated images must be clearly labelled
  6. Check terms of use
    • Ensure AI tools don't claim ownership of your content
    • Verify that using the tool doesn't compromise your intellectual property rights
    • Consider privacy implications of data entered into AI systems
  7. Always check specific guidelines
    • Individual journals or funding agencies may have their own specific requirements
    • Some require submission of AI prompts and outputs as supplementary materials

Different publishers may have different requirements for disclosure statements. Below is an example recommended by Elsevier:

We ask authors who have used generative AI or AI-assisted tools to insert a statement at the end of their manuscript immediately above the references or bibliography entitled ‘Declaration of generative AI and AI-assisted technologies in the writing process’. In that statement, we ask authors to specify the tool that was used and the reason for using the tool. We suggest that authors follow this format when preparing their statement:

During the preparation of this work the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the publication.
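
For example, a completed statement (with a hypothetical tool and purpose filled in) might read: "During the preparation of this work the author used ChatGPT (OpenAI) in order to improve the readability and language of the manuscript. After using this tool, the author reviewed and edited the content as needed and takes full responsibility for the content of the publication."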