Artificial Intelligence policy
The University of Thi-Qar Journal of Medicine has established the following policy for authors who use generative artificial intelligence (AI) and AI-assisted technologies in the writing process. Where Large Language Models (LLMs), such as ChatGPT, are used, authors should use these technologies only to improve readability and language. The technology should be applied with human oversight and control, and authors should carefully review and edit the result, as AI can generate authoritative-sounding output that is incorrect, incomplete, or biased. AI and AI-assisted technologies should not be listed as an author or co-author, or be cited as an author.
Authors should disclose in their manuscript the use of AI and AI-assisted technologies in writing by following the instructions below. A statement will appear in the published work. Please note that authors are ultimately responsible and accountable for the contents of the work.
This policy does not apply to basic tools for checking grammar, spelling, references, etc. There is no need to add a statement if there is nothing to disclose.
AI authorship
Authorship implies responsibilities and tasks that can only be attributed to and performed by humans. Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably, an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. The use of an LLM should be properly documented in the Methods section of the manuscript (or, if a Methods section is not available, in a suitable alternative section).
Generative AI images
The fast-moving area of generative AI image creation has given rise to novel legal copyright and research integrity issues. The University of Thi-Qar Journal of Medicine strictly follows existing copyright law and best practices in publication ethics. While legal issues relating to AI-generated images and videos remain broadly unresolved, the University of Thi-Qar Journal of Medicine is unable to permit their use for publication.
Exceptions are images or artwork that have been created in a legally acceptable manner. Other exceptions include images and videos that are directly referenced in a piece that is specifically about AI; these will be reviewed on a case-by-case basis.
As we expect things to develop rapidly in this field in the near future, we will review this policy regularly and adapt it if necessary.
Please note: Not all AI tools are generative. The use of non-generative machine learning tools to manipulate, combine, or enhance existing images or figures should be disclosed in the relevant caption upon submission to allow a case-by-case review.
AI use by peer reviewers
Peer reviewers play a vital role in scientific publishing. Their expert evaluations and recommendations guide editors in their decisions and ensure that published research is valid, rigorous, and credible. Editors select peer reviewers primarily because of their in-depth knowledge of the subject matter or methods of the work they are asked to evaluate. This expertise is invaluable and irreplaceable. Peer reviewers are accountable for the accuracy and views expressed in their reports, and the peer review process operates on a principle of mutual trust between authors, reviewers, and editors. Despite rapid progress, generative AI tools have considerable limitations: they can lack up-to-date knowledge and may produce nonsensical, biased, or false information. Manuscripts may also include sensitive or proprietary information that should not be shared outside the peer review process. For these reasons, while The University of Thi-Qar Journal of Medicine explores providing peer reviewers with access to safe AI tools, we ask that peer reviewers do not upload manuscripts into generative AI tools.
If any part of the evaluation of the claims made in the manuscript was in any way supported by an AI tool, we ask peer reviewers to declare the use of such tools transparently in the peer review report.