Policy on the Use of Generative Artificial Intelligence
1. Summary
This policy outlines the acceptable use of generative artificial intelligence (Generative AI) and AI-assisted technologies in the preparation, review, and publication of manuscripts submitted to Realism: Law Review. It aligns with best practices established by Elsevier to uphold the integrity and transparency of scholarly communication.
2. For Authors
2.1 Permitted Use
- Authors may use generative AI and AI-assisted tools (e.g., ChatGPT, Grammarly) to improve the readability and language quality of a manuscript.
- These tools must not be used to generate scientific content, interpret data, or draw conclusions.
- AI use must remain under human supervision; authors must critically evaluate and edit AI-generated output to ensure its accuracy and originality.
2.2 Disclosure Requirements
Authors must disclose any use of generative AI or AI-assisted tools in the manuscript. The disclosure should appear in the "Assistive Technology Use Statement" section and identify the tool used and its purpose. Example disclosure:
During the preparation of this manuscript, the author used [Tool Name] to enhance language clarity. The author reviewed and edited the content to ensure its accuracy and accepts full responsibility for the final version of the manuscript.
2.3 Authorship Attribution
- Generative AI tools cannot be credited as authors or co-authors.
- Authorship entails accountability and responsibility for the work, obligations that AI cannot fulfill.
2.4 Use in Images and Illustrations
- The use of generative AI to create or modify images, illustrations, or graphical abstracts is prohibited unless it forms part of the research methodology.
- If AI-generated images are a significant part of the research, authors must provide a detailed description in the methods section, including:
  - The AI tool used (name, version, developer).
  - The process of creation or modification.
  - Compliance with the tool's usage policy and proper attribution.
3. For Reviewers
- Reviewers must not use generative AI tools to evaluate or summarize manuscripts, as this may compromise confidentiality and the integrity of the review process.
- All evaluations must be based on the reviewer's own expertise, without AI assistance.
4. For Editors
- Editors are prohibited from using generative AI tools to make editorial decisions or generate editorial content.
- Editorial decisions must be grounded in human expertise to maintain the quality and integrity of the publication process.
5. Compliance and Accountability
- Failure to disclose the use of generative AI tools, or misuse of such technologies, may result in manuscript rejection or retraction of published articles.
- Authors, reviewers, and editors are expected to comply with this policy to uphold the ethical standards of Realism: Law Review.
References
- Elsevier: Generative AI Policy for Journals
- Elsevier: Publishing Ethics Guidelines