Artificial Intelligence Policy

1. Introduction

The journal recognises the transformative potential of generative artificial intelligence (AI) and AI-assisted technologies (collectively, "AI Tools"), such as large language models, image generators, and deep research assistants. These tools can aid researchers in synthesising literature, identifying gaps, generating ideas, organising content, and enhancing language clarity. However, their use must be ethical, transparent, and subordinate to human expertise. This policy outlines guidelines for authors, reviewers, and editors to ensure integrity, accountability, and trust in the scholarly record. It aligns with broader publishing ethics and will be updated periodically to reflect technological advancements.

 

2. Use in Manuscript Preparation

Authors are permitted to use AI Tools to support the preparation of manuscripts, provided such use is conducted under human oversight and does not replace critical thinking, analysis, or original contributions. Key principles include:

- Human Accountability: Authors bear full responsibility for the accuracy, originality, and integrity of all content. This includes verifying AI-generated outputs for factual correctness, completeness, and impartiality (e.g., cross-checking references, as AI may fabricate citations).

- Oversight and Editing: All AI outputs must be reviewed, edited, and integrated to authentically represent the authors' work. Basic language tools for grammar, spelling, or punctuation do not require disclosure.

- Data Privacy and Rights: Authors must review AI Tool terms to protect confidential data, avoid inputting personally identifiable information, and ensure no unintended rights (e.g., for AI training) are granted. Outputs must not infringe intellectual property, such as by duplicating copyrighted material, real individuals, or branded elements.

- Bias and Errors: Authors should mitigate risks of bias, hallucinations, or errors in AI outputs through rigorous validation.

AI use in the research process itself (e.g., data analysis or hypothesis generation) must be detailed in the Methods section, including tool names, versions, and application specifics.

 

3. Authorship

AI Tools cannot be listed as authors or co-authors, nor cited as such, as authorship requires human accountability for the work's accuracy, originality, ethical compliance, and final approval. All listed authors must meet standard criteria: substantial contributions, drafting/revising, final approval, and agreement to be accountable for all aspects of the work. Authors should consult the journal's general authorship policy for further details.

 

4. Peer Review

Peer review is a confidential, human-led process essential to scholarly rigor. Reviewers must:

- Maintain Confidentiality: Do not upload manuscripts, excerpts, or peer review reports into AI Tools, as this risks breaching author privacy, proprietary rights, and data protection regulations (e.g., the GDPR).

- Provide Original Assessments: Reviews must reflect the reviewer's independent, critical evaluation. AI cannot substitute for human judgment, and its use in drafting reports is discouraged due to risks of inaccuracy or bias.

- Accountability: Reviewers are fully responsible for the content and recommendations in their reports.

The journal may employ identity-protected AI for administrative tasks, such as plagiarism screening or reviewer matching, in compliance with responsible AI principles.

 

5. Editorial Processes

Editors uphold the confidentiality and integrity of submissions. They must:

- Avoid AI Uploads: Refrain from inputting manuscripts, decision letters, or related communications into AI Tools to prevent confidentiality breaches.

- Human Decision-Making: Editorial evaluations, recommendations, and decisions require human oversight; AI cannot perform scientific assessments.

- Handling Violations: If potential non-compliance with this policy is suspected (e.g., undisclosed AI use), editors should consult the publisher and investigate per [Journal Name]'s ethics guidelines.

Like reviewers, editors remain accountable for their processes and outputs.

 

6. Transparency and Disclosure

Transparency fosters trust and reproducibility. Authors must:

- Declare AI Use: Include a dedicated "AI Declaration" statement at submission, specifying:

(1) the name and version of the AI Tool(s);

(2) the purpose(s) (e.g., literature synthesis, language refinement); and

(3) the extent of human oversight.

This statement will appear in the published article, immediately before the references.

- Methods Section Details: For AI involvement in core research (e.g., experimental design or data generation), provide reproducible descriptions, including tool parameters and validation steps.

- Figures and Images: If AI was used in research-related image processing (excluding prohibited alterations), disclose in Methods. Raw or pre-AI versions may be requested.


Example Declaration: 

"This work utilised [AI Tool Name, Version] for [specific purpose, e.g., summarising prior studies]. All outputs were reviewed and verified by the authors for accuracy."

Failure to disclose may result in rejection or retraction.

 

7. Prohibitions

To safeguard originality and prevent manipulation, the following are strictly prohibited:

- Figure and Image Manipulation: Using AI to create, generate, enhance, obscure, or alter images, figures, artwork, or graphical abstracts in submissions. Permissible adjustments are limited to non-deceptive changes (e.g., brightness/contrast for clarity). The journal reserves the right to use forensic tools to detect irregularities.

- Direct Substitution: Presenting unedited AI outputs as original text without verification.

- Confidentiality Breaches: Uploading protected materials to AI Tools by any party.

- Cover Art: Generative AI may be used for cover art only with prior editor approval, rights clearance, and appropriate attribution.


Violations may lead to submission rejection, publication retraction, or sanctions per [Journal Name]'s misconduct policy.

 

8. Additional Guidelines

- Internal AI Use: The journal may leverage AI for supportive functions, such as manuscript screening or journal recommendations, always with human oversight and ethical safeguards.

- Evolving Policy: This policy applies to generative tools such as ChatGPT, DALL-E, and similar systems. Updates will be communicated via the journal website.