Artificial Intelligence Policy
Introduction
The journal recognises the transformative potential of generative artificial intelligence (AI) and AI-assisted technologies (collectively, "AI Tools"), such as large language models, image generators, and deep research assistants. These tools can aid researchers in synthesising literature, identifying gaps, generating ideas, organising content, and enhancing language clarity. However, their use must be ethical, transparent, and subordinate to human expertise.
This policy outlines guidelines for authors, reviewers, and editors to ensure integrity, accountability, and trust in the scholarly record. It aligns with broader publishing ethics and will be updated periodically to reflect technological advancements.
A. Authorship
AI Tools cannot be listed as authors or co-authors, nor cited as such, as authorship requires human accountability for the work's accuracy, originality, ethical compliance, and final approval. All listed authors must meet standard criteria: substantial contributions, drafting/revising, final approval, and agreement to be accountable for all aspects of the work. Authors should consult the journal's general authorship policy for further details.
1. Use in Manuscript Preparation
Authors are permitted to use AI Tools to support the preparation of manuscripts, provided such use is conducted under human oversight and does not replace critical thinking, analysis, or original contributions. Key principles include:
- Carefully reviewing and verifying the accuracy, comprehensiveness, and impartiality of all AI-generated output (including checking the sources, as AI-generated references can be incorrect or fabricated).
- Editing and adapting all material thoroughly to ensure the manuscript represents the author’s authentic and original contribution and reflects their own analysis, interpretation, insights and ideas.
- Ensuring the use of any tools or sources, AI-based or otherwise, is made clear and transparent to readers. For the use of AI Tools, we require a disclosure statement upon submission.
- Ensuring the manuscript is developed in a way that safeguards data privacy, intellectual property and other rights, by checking the terms and conditions of any AI Tool that is used.
AI use in the research process itself (e.g., data analysis or hypothesis generation) must be detailed in the Methods section, including tool names, versions, and application specifics.
2. Responsible Use of AI Tools
Authors must check the terms and conditions of any AI Tool that they use to ensure that the privacy and confidentiality of their data and inputs, including their unpublished manuscripts, are maintained. Particular care should be taken with any personally identifiable data. Authors must not generate images that duplicate or refer to existing copyrighted images, real people, or others’ identifiable products or brands, nor generate any likeness of an individual’s voice. Authors should also check AI-generated output for factual errors and for any potential bias.
Authors should also check the terms and conditions of any AI Tool they wish to use to ensure that they grant the AI Tool only the right to use their materials to provide the service to them, and that they do not grant the AI Tool any other rights to the materials they input (including, without limitation, the right to train the AI Tool on those materials). They must also ensure that the AI Tool does not impose constraints on the use of its outputs in a way that could restrict the subsequent publication of the relevant article.
3. Disclosure
Authors should disclose the use of AI Tools for manuscript preparation in a separate AI declaration statement in their manuscript upon submission; this statement will appear in the published work. Authors should document their use of AI, including the name of the AI Tool used, the purpose of the use, and the extent of their oversight. Declaring the use of AI Tools supports transparency and trust between authors, readers, reviewers, editors and contributors, and facilitates compliance with the terms of use of the relevant AI Tool. Basic checks of grammar, spelling and punctuation need no declaration. AI use in the research process should be declared and described in detail in the Methods section.
4. Using AI Tools in Figures, Images and Artwork
We do not permit the use of AI Tools to create or alter images in submitted manuscripts. This includes enhancing, obscuring, moving, removing, or introducing a specific feature within an image or figure. Adjustments of brightness, contrast, or color balance are acceptable as long as they do not obscure or eliminate any information present in the original. Image forensics tools or specialized software may be applied to submitted manuscripts to identify suspected image irregularities.
The only exception is if the use of AI Tools is part of the research design or research methods (such as in AI-assisted imaging approaches to generate or interpret the underlying research data, for example in the field of biomedical imaging). If this is done, such use must be described in a reproducible manner in the methods section. This should include an explanation of how the AI Tools were used in the image creation or alteration process, and the name of the model or tool, version and extension numbers, and manufacturer. Authors should adhere to the AI software’s specific usage policies and ensure correct content attribution. Where applicable, authors could be asked to provide pre-AI-adjusted versions of images and/or the composite raw images used to create the final submitted versions, for editorial assessment.
The use of generative AI or AI-assisted tools in the production of artwork such as for graphical abstracts is not permitted. The use of generative AI in the production of cover art may in some cases be allowed, if the author obtains prior permission from the journal editor and publisher, can demonstrate that all necessary rights have been cleared for the use of the relevant material, and ensures that there is correct content attribution.
5. Examples of Disclosure
Transparency fosters trust and reproducibility. Authors must:
- Declare AI Use: Include a dedicated "AI Declaration" statement at submission, specifying:
(1) the AI Tool(s) name and version;
(2) purpose(s) (e.g., literature synthesis, language refinement); and
(3) extent of human oversight. This statement will appear in the published article, immediately before the references.
- Methods Section Details: For the involvement of AI Tools in core research (e.g., experimental design or data generation), provide reproducible descriptions, including tool parameters and validation steps.
- Figures and Images: If AI Tools were used in research-related image processing (excluding prohibited alterations), disclose this in the Methods section. Raw or pre-AI versions may be requested.
Below are some examples of how other authors who have utilised AI Tools have disclosed their use in their published work.
i. "During the preparation of this work the authors used Copilot (Microsoft) to reword and rephrase text. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication."
ii. "During the preparation of this work, the authors used Chat Generative Pre-Trained Transformer (ChatGPT; OpenAI, San Francisco, CA, USA) to enhance readability and language, aiding in formulating and structuring content. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication."
iii. "The author used Grammarly to enhance the writing quality during the revision process. Additionally, the author utilized other AI tools, such as SCISPACE and Elicit, to search for relevant literature while revising the manuscript. After using these tools, the author reviewed, edited, and took full responsibility for publishing the content."
iv. "During the preparation of this work the authors used SCISPACE (https://typeset.io/) and ResearchRabbit (https://researchrabbitapp.com/home) to search for relevant literature. For SCISPACE, we used prompts related to "Good Modelling Practices in Ecology", "Good Modelling Practices in Science", "Reproducibility in Science", and "Reusability in Science". The manuscripts to be read were then selected from the suggested ones by reading the title and abstract, similar to what is done when searching through Google Scholar. To ensure we found all important literature related to the theme, two fundamental manuscripts (Ihle et al., 2017; Powers and Hampton, 2019) were added to ResearchRabbit. The network of cross-referenced manuscripts created by identifying "similar work" was then used as a starting point for literature review. ChatGPT 4.0, ChatGPT 4.0o, DeepL Translator, and DeepL Write were used for edits and improvements of some sentences. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication."
B. Peer Reviewers
When a researcher is invited to review another researcher’s paper, the manuscript must be treated as a confidential document. Reviewers should not upload a submitted manuscript or any part of it into a generative AI Tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.
This confidentiality requirement extends to the peer review report, as it may contain confidential information about the manuscript and/or the authors. For this reason, reviewers should not upload their peer review report into an AI Tool, even if it is just for the purpose of improving language and readability.
The peer review process should abide by the highest standards of integrity. Reviewing a scientific manuscript implies responsibilities that can only be attributed to humans. Reviewers should not use generative AI Tools to assist in the scientific review of a paper, as the critical thinking and original assessment needed for peer review are outside the scope of this technology, and there is a risk that it will generate incorrect, incomplete or biased conclusions about the manuscript. The reviewer is responsible and accountable for the content of the review report.
Please note that the journal uses an identity-protected, AI-Tools-based screening process to conduct completeness and plagiarism checks and to identify suitable reviewers. These licensed technologies respect author confidentiality. Our programmes are subject to rigorous evaluation for bias and are compliant with data privacy and data security requirements.
C. Editors
A submitted manuscript must be treated as a confidential document. Editors should not upload a submitted manuscript or any part of it into a generative AI Tool as this may violate the authors’ confidentiality and proprietary rights and, where the paper contains personally identifiable information, may breach data privacy rights.
This confidentiality requirement extends to all communication about the manuscript including any notification or decision letters as they may contain confidential information about the manuscript and/or the authors. For this reason, editors should not upload their letters into an AI Tool, even if it is just for the purpose of improving language and readability.
Peer review is at the heart of the scientific ecosystem and should abide by the highest standards of integrity. Managing the editorial evaluation of a scientific manuscript implies responsibilities that can only be attributed to humans. Editors should not use generative AI Tools to assist in the evaluation or decision-making process for a manuscript, as the critical thinking and original assessment needed for this work are outside the scope of this technology, and there is a risk that it will generate incorrect, incomplete or biased conclusions about the manuscript. The editor is responsible and accountable for the editorial process, the final decision and its communication to the authors.
Please note that the journal uses an identity-protected, AI-Tools-based screening process to conduct completeness and plagiarism checks and to identify suitable reviewers. These licensed technologies respect author confidentiality. Our programmes are subject to rigorous evaluation for bias and are compliant with data privacy and data security requirements.