Here’s exactly what publishers have said about AI use

Learn about publishers’ stances on AI usage. Last updated on 16 April 2025

Springer Nature is monitoring ongoing developments in this area closely and will review (and update) these policies as appropriate.

  1. AI authorship
  2. Generative AI images
  3. AI use by peer reviewers
  4. Editorial use

AI authorship

Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably, an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs.

Use of an LLM should be properly documented in the Methods section (or, if a Methods section is not available, in a suitable alternative part) of the manuscript. The use of an LLM (or other AI tool) for “AI assisted copy editing” purposes does not need to be declared. In this context, we define “AI assisted copy editing” as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work or autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work.

Generative AI images

The fast-moving area of generative AI image creation has resulted in novel legal, copyright, and research integrity issues. As publishers, we strictly follow existing copyright law and best practices regarding publication ethics. While legal issues relating to AI-generated images and videos remain broadly unresolved, Springer Nature journals are unable to permit their use for publication.

Exceptions:

  • Images/art obtained from agencies with which we have contractual relationships and which have created the images in a legally acceptable manner.
  • Images and videos that are directly referenced in a piece that is specifically about AI; such cases will be reviewed on a case-by-case basis.
  • The use of generative AI tools developed with specific sets of underlying scientific data that can be attributed, checked and verified for accuracy, provided that ethics, copyright and terms of use restrictions are adhered to.
  • All exceptions must be labelled clearly as generated by AI within the image field.

As we expect things to develop rapidly in this field in the near future, we will review this policy regularly and adapt it if necessary.

Please note: Not all AI tools are generative. The use of non-generative machine learning tools to manipulate, combine or enhance existing images or figures should be disclosed in the relevant caption upon submission to allow a case-by-case review.

AI use by peer reviewers

Peer reviewers play a vital role in scientific publishing. Their expert evaluations and recommendations guide editors in their decisions and ensure that published research is valid, rigorous, and credible. Editors select peer reviewers primarily because of their in-depth knowledge of the subject matter or methods of the work they are asked to evaluate. This expertise is invaluable and irreplaceable. Peer reviewers are accountable for the accuracy and views expressed in their reports, and the peer review process operates on a principle of mutual trust between authors, reviewers and editors. Despite rapid progress, generative AI tools have considerable limitations: they can lack up-to-date knowledge and may produce nonsensical, biased or false information. Manuscripts may also include sensitive or proprietary information that should not be shared outside the peer review process. For these reasons, while Springer Nature explores providing our peer reviewers with access to safe AI tools, we ask that peer reviewers do not upload manuscripts into generative AI tools.

If any part of the evaluation of the claims made in the manuscript was in any way supported by an AI tool, we ask peer reviewers to declare the use of such tools transparently in the peer review report.

Editorial use

Nature Portfolio journals occasionally use internal Springer Nature-developed artificial intelligence tools to support the generation of accessory content, such as summary points. These are always edited and fact-checked by the author and/or editor to meet Nature Portfolio publication standards. Any substantive use of artificial intelligence beyond accessory content will be declared on an individual article basis.

Accessory content can include, but is not limited to, key points, editorial summaries, glossary terms, plain language summaries and social media posts.


Elsevier

For authors

The use of generative AI and AI-assisted technologies in scientific writing

Please note this policy only refers to the writing process, and not to the use of AI tools to analyze and draw insights from data as part of the research process.

Where authors use generative AI and AI-assisted technologies in the writing process, these technologies should only be used to improve the readability and language of the work. The technology should be applied with human oversight and control, and authors should carefully review and edit the result, because AI can generate authoritative-sounding output that can be incorrect, incomplete or biased. The authors are ultimately responsible and accountable for the contents of the work.

Authors should disclose in their manuscript the use of AI and AI-assisted technologies and a statement will appear in the published work. Declaring the use of these technologies supports transparency and trust between authors, readers, reviewers, editors and contributors and facilitates compliance with the terms of use of the relevant tool or technology.

Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans. Each (co-)author is accountable for ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved, and authorship requires the ability to approve the final version of the work and agree to its submission. Authors are also responsible for ensuring that the work is original, that the stated authors qualify for authorship, and that the work does not infringe third-party rights, and should familiarize themselves with our Ethics in Publishing policy before they submit.

The use of generative AI and AI-assisted tools in figures, images and artwork

We do not permit the use of Generative AI or AI-assisted tools to create or alter images in submitted manuscripts. This may include enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments of brightness, contrast, or color balance are acceptable as long as they do not obscure or eliminate any information present in the original. Image forensics tools or specialized software might be applied to submitted manuscripts to identify suspected image irregularities.

The only exception is if the use of AI or AI-assisted tools is part of the research design or research methods (such as in AI-assisted imaging approaches to generate or interpret the underlying research data, for example in the field of biomedical imaging). If this is done, such use must be described in a reproducible manner in the methods section. This should include an explanation of how the AI or AI-assisted tools were used in the image creation or alteration process, and the name of the model or tool, version and extension numbers, and manufacturer. Authors should adhere to the AI software’s specific usage policies and ensure correct content attribution. Where applicable, authors could be asked to provide pre-AI-adjusted versions of images and/or the composite raw images used to create the final submitted versions, for editorial assessment.

The use of generative AI or AI-assisted tools in the production of artwork such as for graphical abstracts is not permitted. The use of generative AI in the production of cover art may in some cases be allowed, if the author obtains prior permission from the journal editor and publisher, can demonstrate that all necessary rights have been cleared for the use of the relevant material, and ensures that there is correct content attribution.

 

View Elsevier’s generative AI author policies for books.


Generative Artificial Intelligence (AI) tools, such as large language models (LLMs) or multimodal models, continue to develop and evolve, including in their application for businesses and consumers.

Taylor & Francis welcomes the new opportunities offered by Generative AI tools, particularly in: enhancing idea generation and exploration, supporting authors to express content in a non-native language, and accelerating the research and dissemination process.

Taylor & Francis is offering guidance to authors, editors, and reviewers on the use of such tools, which may evolve given the swift development of the AI field.

Generative AI tools can produce diverse forms of content, spanning text generation, image synthesis, audio, and synthetic data. Some examples include ChatGPT, Copilot, Gemini, Claude, NovelAI, Jasper AI, DALL-E, Midjourney, Runway, etc.

While Generative AI has immense capabilities to enhance creativity for authors, there are certain risks associated with the current generation of Generative AI tools.

Some of the risks associated with the way Generative AI tools work today are:

  1. Inaccuracy and bias: Generative AI tools are of a statistical nature (as opposed to factual) and, as such, can introduce inaccuracies, falsities (so-called hallucinations) or bias, which can be hard to detect, verify, and correct.
  2. Lack of attribution: Generative AI often lacks the standard practice of the global scholarly community of correctly and precisely attributing ideas, quotes, or citations.
  3. Confidentiality and Intellectual Property Risks: At present, Generative AI tools are often used on third-party platforms that may not offer sufficient standards of confidentiality, data security, or copyright protection.
  4. Unintended uses: Generative AI providers may reuse the input or output data from user interactions (e.g. for AI training). This practice could potentially infringe on the rights of authors and publishers, amongst others.

Authors

Authors are accountable for the originality, validity, and integrity of the content of their submissions. In choosing to use Generative AI tools, journal authors are expected to do so responsibly and in accordance with our journal editorial policies on authorship and principles of publishing ethics, and book authors in accordance with our book publishing guidelines. This includes reviewing the outputs of any Generative AI tools and confirming content accuracy.

Taylor & Francis supports the responsible use of Generative AI tools that respect high standards of data security, confidentiality, and copyright protection in cases such as:

  • Idea generation and idea exploration
  • Language improvement
  • Interactive online search with LLM-enhanced search engines
  • Literature classification
  • Coding assistance

Authors are responsible for ensuring that the content of their submissions meets the required standards of rigorous scientific and scholarly assessment, research and validation, and is created by the author. Note that some journals may not allow the use of Generative AI tools beyond language improvement; therefore, authors are advised to consult with the editor of the journal prior to submission.

Generative AI tools must not be listed as an author, because such tools are unable to assume responsibility for the submitted content or manage copyright and licensing agreements. Authorship requires taking accountability for content, consenting to publication via a publishing agreement, and giving contractual assurances about the integrity of the work, among other principles. These are uniquely human responsibilities that cannot be undertaken by Generative AI tools.

Authors must clearly acknowledge within the article or book any use of Generative AI tools through a statement which includes: the full name of the tool used (with version number), how it was used, and the reason for use. For article submissions, this statement must be included in the Methods or Acknowledgments section. Book authors must disclose their intent to employ Generative AI tools at the earliest possible stage to their editorial contacts for approval: either at the proposal phase if known, or, if necessary, during the manuscript writing phase. If approved, the book author must then include the statement in the preface or introduction of the book. This level of transparency ensures that editors can assess whether Generative AI tools have been used and whether they have been used responsibly. Taylor & Francis will retain its discretion over publication of the work, to ensure that integrity and guidelines have been upheld.
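For illustration only, a statement meeting these requirements might read: “During the preparation of this work, the authors used ChatGPT (GPT-4, March 2024 release) to improve the language and readability of the manuscript, because neither author is a native English speaker. The authors reviewed and edited all output and take full responsibility for the content.” The tool name, version, and reason here are hypothetical placeholders, not wording prescribed by Taylor & Francis.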

If an author is intending to use an AI tool, they should ensure that the tool is appropriate and robust for their proposed use, and that the terms applicable to such tool provide sufficient safeguards and protections, for example around intellectual property rights, confidentiality and security.

Authors should not submit manuscripts where Generative AI tools have been used in ways that replace core researcher and author responsibilities, for example:

  • text or code generation without rigorous revision
  • synthetic data generation to substitute missing data without robust methodology
  • generation of any type of content that is inaccurate, including abstracts or supplemental materials

These types of cases may be subject to editorial investigation.

Taylor & Francis currently does not permit the use of Generative AI in the creation and manipulation of images and figures, or original research data for use in our publications. The term “images and figures” includes pictures, charts, data tables, medical imagery, snippets of images, computer code, and formulas. The term “manipulation” includes augmenting, concealing, moving, removing, or introducing a specific feature within an image or figure. For additional information on Taylor & Francis’ image policy for journals, please see Images and figures.

Utilising Generative AI and AI-assisted technologies in any part of the research process should always be undertaken with human oversight and transparency. Research ethics guidelines are still being updated regarding current Generative AI technologies. Taylor & Francis will continue to update our editorial guidelines as the technology and research ethics guidelines evolve.

Editors and Peer Reviewers

Taylor & Francis strives for the highest standards of editorial integrity and transparency. Editors’ and peer reviewers’ use of manuscripts in Generative AI systems may pose a risk to confidentiality, proprietary rights and data, including personally identifiable information. Therefore, editors and peer reviewers must not upload files, images or information from unpublished manuscripts into Generative AI tools. Failure to comply with this policy may infringe upon the rightsholder’s intellectual property.

Editors

Editors are the shepherds of quality and responsible research content. Therefore, editors must keep submission and peer review details confidential.

Use of manuscripts in Generative AI systems may give rise to risks around confidentiality, infringement of proprietary rights and data, and other risks. Therefore, editors must not upload unpublished manuscripts, including any associated files, images or information into Generative AI tools.

Editors should check with their Taylor & Francis contact prior to using any Generative AI tools, unless they have already been informed that the tool and proposed use of the tool is authorised. Journal Editors should refer to our Editor Resource page for more information on our code of conduct.

Peer reviewers

Peer reviewers are chosen experts in their fields and should not use Generative AI to analyse or summarise submitted articles or portions thereof when creating their reviews. As such, peer reviewers must not upload unpublished manuscripts or project proposals, including any associated files, images or information, into Generative AI tools.

Generative AI may only be utilised to assist with improving review language, but peer reviewers will at all times remain responsible for ensuring the accuracy and integrity of their reviews.


Sage recognizes the transformative potential of AI-powered writing assistants and tools such as ChatGPT. These technologies can support the writing and research process by providing authors with fresh ideas, alleviating writer's block, and optimizing editing tasks. While these tools can offer enhanced efficiency, it's also important to understand their limitations and to use them in ways which adhere to principles of academic and scientific integrity. As a publisher, Sage supports and believes in the value of human creativity and human authorship. Large Language Models (LLMs) cannot be listed as an author of a work, nor take responsibility for the text they generate. As such, human oversight, intervention and accountability are essential to ensure the accuracy and integrity of the content we publish.

We acknowledge that many academics and scholars are already using assistive and generative tools to enhance their productivity and assist in their academic writing. We have developed these guidelines to support authors submitting articles for Sage Journals, publishing books with Sage or Corwin, or working with us to create content for our Learning Resources products.

The distinction between Assistive AI tools and Generative AI tools

For the purposes of these guidelines, we distinguish between Assistive AI tools and Generative AI tools as follows:

Assistive AI tools

Assistive AI tools make suggestions, corrections, and improvements to content you’ve authored yourself. Tools like Google's Gmail and Microsoft's Outlook and Word have flagged spelling and grammatical errors for many years. More recently, these assistive tools have introduced features that proactively suggest the next word or phrase, or better and more concise phrasing to improve clarity. Content that you've crafted on your own but refined or improved with the help of this kind of Assistive AI tool is considered “AI-assisted”.

Generative AI tools

This term refers to tools such as ChatGPT or DALL-E which produce content, whether in the form of text, images, or translations. Even if you've made significant changes to the content afterwards, if an AI tool was the primary creator of the content, the content would be considered “AI-generated”.

Disclosure

We believe that AI-assisted writing will become more common as AI tools are increasingly embedded within tools such as Microsoft Word and Google Docs. You are not required to disclose the use of assistive AI tools in your submission, but all content, including AI-assisted content, must undergo rigorous human review prior to submission. This is to ensure the content aligns with our standards for quality and authenticity.

You are required to inform us of any AI-generated content appearing in your work (including text, images, or translations) when you submit any form of content to Sage or Corwin, including journal articles, manuscripts and book proposals. This will allow the editorial team to make an informed publishing decision regarding your submission.

Where we identify published articles or content with undisclosed use of generative AI tools for content generation, we will take appropriate corrective action.

Things to consider before using Generative AI tools

If you do decide to use AI to generate content or images in your submission, you must follow these guidelines prior to submitting your work to Sage or Corwin.

  • Disclosure: As outlined above, you must clearly reveal any AI-generated content within your submission. Detail where the AI-generated content appears, using the disclosure template found at the end of these guidelines, and provide this disclosure along with your submission.
  • Carefully verify the accuracy, validity, and appropriateness of AI-generated content or AI-produced citations: Large Language Models (LLMs) can sometimes "hallucinate" – producing incorrect or misleading information, especially when used outside of the domain of their training data or when dealing with complex or ambiguous topics. While their outputs may appear linguistically sound, they might not be scientifically accurate or correct and LLMs may produce nonexistent citations. Remember, some LLMs might only have been trained on data up to a specific year, potentially resulting in incorrect or incomplete knowledge of a topic.
  • Carefully check sources & citations: Offer a comprehensive list of resources utilized for content and citations, including those produced by AI. Meticulously cross-check citations for their accuracy to ensure proper referencing.
  • Appropriately cite AI-generated content: Where you are including content generated by AI, an appropriate citation should be included following the relevant referencing convention (for example, in Harvard style: ChatGPT. 2023. San Francisco: OpenAI); see the citation sketch after this list.
  • Avoid plagiarism and copyright infringement: LLMs could inadvertently reproduce significant text chunks from existing sources without due citation, infringing others' intellectual property. As the work's author, you bear responsibility for confirming that there is no plagiarized content in your submission.
  • Be aware of bias: Because LLMs are trained on text that includes biases, and because human design choices introduce further bias into AI tools, AI-generated text may reproduce biases such as racism or sexism, or may overlook the perspectives of populations that have been historically marginalized. Relying on LLMs to generate text or images can inadvertently propagate these biases, so you should carefully review all AI-generated content to ensure it’s inclusive, impartial, and appeals to a broad readership.
  • Acknowledge limitations: In your submission, if you have included AI-generated content, you should appropriately acknowledge the constraints of LLMs, including the potential for bias, inaccuracies, and knowledge gaps.
  • Take responsibility: AI tools like ChatGPT cannot be recognized as a co-author in your submission. As the author, you (and any co-authors) are entirely responsible for the work you submit.
  • Check for specific guidelines: If you are submitting an article to a Sage Journal, check the submission guidelines of your targeted journal, ensuring compliance with any AI-related policies they might have in place, as they may differ from these guidelines.
  • Stay updated: Follow the latest developments in the debates around AI-generated content to ensure you understand the possible ramifications and ethical challenges of using AI-generated content in your submission.
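As a companion to the citation guidance above, here is a minimal sketch of how a generative AI tool might be cited in a LaTeX manuscript using BibTeX; the entry key, version, and URL are hypothetical, and the fields should be adapted to your journal’s referencing convention:

    % Hypothetical BibTeX entry for citing a generative AI tool;
    % adjust the fields to match your target journal's citation style.
    @misc{openai2023chatgpt,
      author = {{OpenAI}},
      title  = {{ChatGPT} (March 2023 version)},
      year   = {2023},
      note   = {Large language model. Available at: https://chat.openai.com/}
    }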

Prohibited use

  • Do not use generative AI to artificially create or modify core research data.
  • Never share sensitive personal or proprietary information on an AI platform like ChatGPT, as this may expose that information or intellectual property to others. Any information that you share with AI tools like ChatGPT is collected for business purposes.
  • Editors and Reviewers must uphold the confidentiality of the peer review process. Editors must not share information about submitted manuscripts or peer review reports in generative AI tools such as ChatGPT. Reviewers must not use AI tools, including but not limited to ChatGPT, to generate review reports.

Further information

If you have questions on these guidelines or would like to discuss how you plan to use AI in your writing, please reach out to your Sage or Corwin editor or contact.

Template for disclosure of the use of Generative AI tools in your submission

Full title of your submission:

Type of submission (e.g., research article, book chapter):

Name of the Generative AI tool used:

Brief description of how the tool was used in your writing process:

Your full name:

Your primary contact at Sage or Corwin:

The name of the Generative AI Tool(s) used in your submission:

(https://www.software.ac.uk/publication/how-cite-and-describe-software)

Rationale for AI use:

Explain your reasoning for using AI and the tool(s) you selected. How was it used? What did you use AI to do?

Final prompt given:

Final response generated:

Please include all of the prompts and responses used in your submission and indicate where in your submission the AI-generated content appears.


General Guidance & Best Practice

Authors may wish to use artificial intelligence tools or technologies (“AI Technology”) when preparing a manuscript for submission to a Wiley journal. Wiley welcomes the thoughtful use of AI tools. When used responsibly, authors can maintain high editorial standards, safeguard intellectual property and other rights, and foster transparency with readers. While authors remain fully accountable for their submission, published articles, and any tools or sources they use in its creation, Wiley recognizes AI Technology’s growing role in the research process and manuscript preparation and provides the following guidance.

  • Review Terms and Conditions

    Authors should carefully review the terms and conditions, terms of use, or any other conditions or licenses associated with their chosen AI Technology. Authors must confirm that the AI Technology does not claim ownership of their content or impose limitations on its use, as this could interfere with the author’s rights or Wiley's rights, including Wiley’s ability to publish a submission following acceptance. Authors should periodically revisit the AI Technology terms to ensure continued compliance.

  • Human Oversight

    Authors may only use AI Technology as a companion to their writing process, not a replacement. As always, authors must take full responsibility for the accuracy of all content, and verify that all claims, citations, and analyses align with their expertise and research. Before including AI-generated content in a journal submission, authors must carefully review it to ensure the final work reflects their expertise, voice, and originality while adhering to Wiley's ethical and editorial standards.

  • Disclosure

    Authors should maintain documentation of all AI Technology used, including its purpose, whether it impacted key arguments or conclusions, and how they personally reviewed and verified AI-generated content. Authors must also disclose use of AI Technologies upon submission to a Wiley journal. If not provided, this may be requested during the submission and peer review process, or after publication. Transparency is essential to Wiley's commitment to ethical publishing and integrity. (Please see the additional, specific guidance regarding authorship and disclosure below.)

  • Rights Protection

    Authors must not use any AI Technology that restricts their own, Wiley’s, or any other party’s use of the submission. This includes ensuring that the AI Technology used and the provider of that AI Technology does not gain any rights over the author’s underlying content, including the right to “train” their AI Technology on the content, besides the limited right to access and use the submission to perform the service. By reviewing an AI Technology’s terms and conditions for clauses such as “ownership,” “data reuse,” or “opt out” among others, authors can prevent unintended rights transfers.

  • Responsible and Ethical Use

    Authors must use AI Technology in a manner that aligns with privacy, confidentiality, and compliance obligations. This includes respecting data protection laws, avoiding the use of AI to replicate unique styles or voices of others, and fact-checking AI-generated content for accuracy and neutrality. Authors should be aware of potential biases in AI outputs and take steps to mitigate stereotypes or misinformation. When inputting sensitive or unpublished content, authors should use tools with appropriate privacy controls to protect confidentiality.

  • Adherence to Agreements

    Authors must adhere to the terms and conditions of their publishing agreement with Wiley. Authors remain responsible for upholding the warranties in those agreements, such as ensuring their submission is original, not previously published, and that they have the right to grant the necessary permissions to Wiley as set forth in their agreement.

Wiley values authors' unique creativity and expertise and views AI Technologies as tools that enhance rather than replace creativity. Wiley remains committed to providing clear guidance that fosters trust with readers, protects authors' and Wiley’s rights, and ensures high-quality content. These guidelines will evolve as technology and author needs advance. For reference, please also see the STM general guidance for all stakeholders in scholarly publishing regarding the role of generative AI technologies.

Specific Guidance

Disclosure: If an author has used AI Technology to develop any portion of a manuscript, its use must be described, transparently and in detail, in the Methods section (or via a disclosure or within the Acknowledgements section, as applicable). The author is fully responsible for the accuracy of any information provided by the tool and for correctly referencing any supporting work on which that information depends. GenAI tools must not be used to create, alter, or manipulate original research data and results. Tools that are used to improve spelling, grammar, and general editing are not included in the scope of these disclosure guidelines. The final decision about whether use of a GenAI tool is appropriate or permissible in the circumstances of a submitted manuscript or a published article lies with the journal’s editor or other party responsible for the publication’s editorial policy.

AI & Authorship: AI Technology cannot be considered capable of initiating an original piece of research without direction by humans. Tools cannot be accountable for a published work or for research design, which is a generally held requirement of authorship (as discussed in the Authorship section in these guidelines), nor do they have legal standing or the ability to hold or assign copyright. Therefore, in accordance with COPE's position statement on Authorship and AI tools, these tools cannot fulfil the role of, nor be listed as, an author of an article.

Peer Review: AI Technology should be used only on a limited basis in connection with peer review. A GenAI tool can be used by an editor or peer reviewer to improve the quality of the written feedback in a peer review report. This use must be transparently declared upon submission of the peer review report to the manuscript's handling editor. Independent of this limited use case, editors and peer reviewers should not upload manuscripts (or any parts of manuscripts, including figures and tables) into AI Technology. AI Technology may use input data for training or other purposes, which could violate the confidentiality of the peer review process, the privacy of authors and reviewers, and the copyright of the manuscript under review. Moreover, the peer review process is a human endeavor, and responsibility and accountability for submitting a peer review report, in line with a journal's editorial policies and peer review model, sit with the individuals who have accepted an invitation from a journal to undertake the peer review of a submitted manuscript. This process should not be delegated to a GenAI tool.


Generative artificial intelligence, specifically the kind based on Large Language Models (LLMs) like ChatGPT, has become a transformative force in many fields. Scholarly writing and publishing are no different, and generative AI has begun to have an impact on scholarly work.

In response to this impact, the APA Publications and Communications Board has approved policies regarding the use of generative AI in scholarly materials. These policies (as well as APA policies on other potential issues in scholarly publishing, and additional reading on the subject) can be found on the APA Publishing Policies page and will continue to develop as we gain a better understanding of the effects of generative AI on scholarly publishing.

APA’s current policies on generative AI are:

  • When a generative AI model is used in the drafting of a manuscript for an APA publication, the use of AI must be disclosed in the methods section and cited.
  • AI cannot be named as an author on an APA scholarly publication.
  • When AI is cited in an APA scholarly publication, the author must employ the software citation template, which includes specifying in the methods section how, when, and to what extent AI was used (see the example after this list). Authors in APA publications are required to upload the full output of the AI as supplemental material.
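For example, APA Style's software citation template produces an in-text citation of the form (OpenAI, 2023) and a reference entry along these lines, where the version details and URL are illustrative rather than prescribed here: OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat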

Authors

APA policy states that authors are responsible for the accuracy of any information in their article. This means that authors must verify any information and citations provided to them by an AI tool. Authors may use, but must disclose, AI tools for specific purposes such as editing.

Please note that for the purposes of this policy, generative AI does not include grammar-checking tools, citation software, or plagiarism detectors that do not themselves employ generative AI; use of these tools does not need to be disclosed or cited in manuscripts submitted to journals.

Additionally, please note that when information is entered into generative AI, the organization which runs the generative AI will likely have access to this data. Authors should be aware of this possibility and how it may impact the privacy of participants in their studies, as well as how it may impact their own privacy and intellectual property. For this reason, journal editors and reviewers may not enter materials from submitted manuscripts into generative AI as it would constitute a violation of the confidentiality of the peer review process.


Artificial intelligence: fair use and disclosure policy

This policy covers acceptable uses of generative AI technologies such as Large Language Models (ChatGPT, Jasper) and text-to-image generators (DALL-E 2, Midjourney, Stable Diffusion) in the writing or editing of manuscripts submitted to Frontiers.

AI generated text and authorship

If AI tools have been used to generate main text, then this must be clearly disclosed in the acknowledgments. Authors should not list a generative AI technology as a co-author or author of any submitted manuscript. Generative AI technologies cannot be held accountable for all aspects of a manuscript and consequently do not meet the ICMJE criteria required for authorship.

If the author of a submitted manuscript has used written or visual content produced by or edited using a generative AI technology, this use must comply with all Frontiers guidelines and policies. Specifically, the author remains responsible for checking the factual accuracy of all content created using generative AI technology. This includes, but is not limited to, any quotes, citations, or references.

AI generated figures and images

Figures produced by or edited using a generative AI technology must be checked to ensure they accurately reflect the data presented in the manuscript. Authors must also check that any written or visual content produced by or edited using a generative AI technology is free from plagiarism.

If the author of a submitted manuscript has used written or visual content produced by or edited using a generative AI technology, such use must be acknowledged in the acknowledgments section of the manuscript and the methods section, if applicable. This explanation must list the name, version, model, and source of the generative AI technology.

We encourage authors to upload all input prompts provided to a generative AI technology and outputs received from a generative AI technology in the supplementary files for the manuscript.

AI use by editors and reviewers

Generative AI technologies should not be used to review the content of a submitted manuscript or used to make decisions as to the acceptance or rejection of a manuscript. Responsibility for the integrity of the review process must remain with the editors and reviewers who have accepted their roles as relevant experts.

The use of generative AI technologies in the writing or editing of a submitted manuscript remains at the discretion of the author, subject to the policy outlined above.


AI Contributions to Research Content

  • AI use must be declared and clearly explained in publications such as research papers, just as we expect scholars to do with other software, tools and methodologies.
  • AI does not meet the Cambridge requirements for authorship, given the need for accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge.
  • Authors are accountable for the accuracy, integrity and originality of their research papers, including for any use of AI.
  • Any use of AI must not breach Cambridge’s plagiarism policy. Scholarly works must be the author’s own, and not present others’ ideas, data, words or other material without adequate citation and transparent referencing.

Please note, individual journals may have more specific requirements or guidelines for upholding this policy.


Use of AI and generative AI software, such as Large Language Models or ChatGPT, for manuscript preparation, including drafting or editing text, must be disclosed in the Materials and Methods section (or Acknowledgments, if no Materials and Methods section is available) of the manuscript, and such software may not be listed as an author. Authors are solely accountable for, and must thoroughly fact-check, outputs created with the help of generative AI software. AI tools for creating images or graphics are not permitted unless the software is the subject of the work under consideration. Accordingly, PNAS does not allow the use of AI in cover art submissions. See Protecting scientific integrity in an age of generative AI (2024).


20 April 2023

MDPI’s Updated Guidelines on Artificial Intelligence and Authorship

The introduction of generative artificial intelligence tools creates new opportunities, while at the same time challenging the concept of authorship. As a leading scholarly publisher, we have been following the developments attentively.

Our updated guidelines on Authorship recognize that tools such as the AI chatbot ChatGPT and other large language models (LLMs) do not meet authorship criteria and thus cannot be listed as authors on manuscripts. While AI can contribute intellectually to the writing process, it is now widely accepted that it cannot take responsibility for the content it produces.

Authors are fully responsible for the originality, validity, and integrity of the content of their manuscript and must ensure that it complies with all of MDPI’s publication ethics policies.

AI technology can still be used when writing academic papers. However, this must be appropriately declared when submitting a paper to an MDPI journal. In such cases, authors are required to be fully transparent: they must state which tools were used in the “Acknowledgments” section and describe in detail how the tools were used in the “Materials and Methods” section.

Our new guideline is in line with the Committee on Publication Ethics (COPE) position statement on the use of AI and AI-assisted technology in manuscript preparation. It holds that "authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication ethics."


Declaration of generative AI in scientific writing

Authors must declare the use of generative AI in scientific writing upon submission of the paper. The below guidance only refers to the writing process, and not to the use of AI tools to analyze and draw insights from data as part of the research process.

  • Generative AI and AI-assisted technologies should only be used in the writing process to improve the readability and language of the manuscript.
  • The technology must be applied with human oversight and control, and authors should carefully review and edit the result, as AI can generate authoritative-sounding output that can be incorrect, incomplete or biased. Authors are ultimately responsible and accountable for the contents of the work.
  • Authors must not list or cite AI and AI-assisted technologies as an author or co-author on the manuscript since authorship implies responsibilities and tasks that can only be attributed to and performed by humans.

The use of generative AI and AI-assisted technologies in scientific writing must be declared by adding a statement at the end of the manuscript when the paper is first submitted. The statement will appear in the published work and should be placed in a new section after the "declaration of interests" section. An example (a LaTeX sketch of the placement follows the template below):

  • Title of new section: Declaration of generative AI and AI-assisted technologies in the writing process.
  • Statement: During the preparation of this work the author(s) used [NAME TOOL / SERVICE] in order to [REASON]. After using this tool/service, the author(s) reviewed and edited the content as needed and take(s) full responsibility for the content of the published article.
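In a LaTeX manuscript, this placement could look like the following minimal sketch; the wording comes from the template above, while the bracketed placeholders and the interests statement are placeholders for the authors to fill in:

    % Minimal sketch: the AI declaration as a new section placed
    % directly after the "declaration of interests" section.
    \section*{Declaration of interests}
    The authors declare no competing interests.

    \section*{Declaration of generative AI and AI-assisted technologies in the writing process}
    During the preparation of this work the author(s) used [NAME TOOL / SERVICE] in order
    to [REASON]. After using this tool/service, the author(s) reviewed and edited the
    content as needed and take(s) full responsibility for the content of the published
    article.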

The declaration does not apply to the use of basic tools, such as tools used to check grammar, spelling and references. If you have nothing to disclose, you do not need to add a statement.

We advise you to read Elsevier's policy for authors on the use of generative AI and AI-assisted technologies.

Please note that to protect authors’ rights and the confidentiality of their research, this journal does not currently allow the use of Generative AI or AI-assisted technologies such as ChatGPT or similar services by reviewers or editors in the peer review and manuscript evaluation process. We are actively evaluating compliant AI tools and may revise this policy in the future.

Generative AI and figures, images, and artwork

Please read our policy on the use of generative AI and AI-assisted tools in figures, images and artwork, which states:

  • We do not permit the use of Generative AI or AI-assisted tools to create or alter images in submitted manuscripts.
  • The only exception is if the use of AI or AI-assisted tools is part of the research design or methods (for example, in the field of biomedical imaging). If this is the case, such use must be described in a reproducible manner in the methods section, including the name of the model or tool, version and extension numbers, and manufacturer.
  • The use of generative AI or AI-assisted tools in the production of artwork such as for graphical abstracts is not permitted. The use of generative AI in the production of cover art may in some cases be allowed, if the author obtains prior permission from the journal editor and publisher, can demonstrate that all necessary rights have been cleared for the use of the relevant material, and ensures that there is correct content attribution.