Originality, Plagiarism, and AI Use

ARTIFICIAL INTELLIGENCE (AI) USE POLICY FOR VERBUM ET LINGUA

Verbum et Lingua acknowledges that Artificial Intelligence (AI) and Large Language Models (LLMs) are becoming increasingly integrated into academic research and scholarly writing. This policy aims to promote the ethical, transparent, and responsible use of these tools while safeguarding the integrity, originality, and confidentiality of the scholarly record.


1. Guidelines for Authors

1.1 AI Authorship is Strictly Prohibited

Under no circumstances may an AI tool (e.g., ChatGPT, Claude, or Gemini) be listed as an author or co-author.

Authorship requires legal and ethical accountability for the published work, the ability to defend the research methodology, and the capacity to manage copyright agreements—criteria that AI tools cannot fulfill.


1.2 Acceptable Uses of AI (Assistance)

Authors may use AI tools to assist in the preparation of their manuscript, provided this use is strictly limited to the following purposes:

Language Editing

  • Polishing grammar, syntax, and phrasing.

  • Translating text to improve the readability of the manuscript, particularly for non-native English or Spanish speakers.

Formatting and Organization

  • Structuring references (e.g., APA 7th edition formatting).

  • Organizing the logical flow of already written drafts.

Data Processing

  • Assisting in the coding or processing of data, provided that the algorithms and prompts used are fully transparent and verified by the authors.


1.3 Unacceptable Uses of AI (Generation)

Authors must not use AI tools to:

  • Fabricate or manipulate empirical data, research results, or statistical analyses.

  • Generate core theoretical frameworks, research hypotheses, or substantive original ideas.

  • Create visual elements (graphs, charts, or images) without explicit disclosure and methodological justification, as AI-generated images may infringe copyright or misrepresent data.

  • Generate bibliographical references, due to the high risk of AI “hallucinations” or fabricated citations.


1.4 Mandatory Transparency and Disclosure

If an AI tool was used in the preparation of the manuscript, authors must explicitly disclose it in the Acknowledgements section or in a dedicated “AI Use Declaration” section at the end of the article.

The declaration must specify:

  • the name of the tool,

  • the specific version, and

  • exactly which sections or tasks the tool was used for.

Example

“ChatGPT-4 was used solely to polish the English grammar in the Abstract and Methodology sections.”


1.5 Ultimate Human Accountability

Human authors bear full responsibility for the entire manuscript. This includes:

  • the accuracy of the data,

  • the absence of plagiarism, and

  • the validity of all citations and references.

Any errors, biases, or fabricated citations introduced by an AI tool remain the sole responsibility of the human authors and may result in an immediate desk rejection or article retraction.


2. Guidelines for Peer Reviewers and Editors

2.1 Strict Confidentiality in Peer Review

The peer-review process is strictly confidential. Reviewers and Editors are prohibited from uploading any part of an unpublished manuscript to public AI tools (such as ChatGPT) in order to:

  • generate summaries,

  • evaluate the methodology, or

  • draft peer-review reports.

Uploading an author’s unpublished work into a public large language model (LLM) violates the double-blind peer-review confidentiality agreement, infringes upon the author’s copyright, and exposes original research to third-party databases.


2.2 The Human Element in Evaluation

Reviewers may use specialized, locally hosted, or journal-approved secure AI tools to check the grammar of their own review reports. However, the critical evaluation of the manuscript, including the assessment of its scientific rigor, theoretical contribution, and alignment with the scope of Verbum et Lingua, must remain exclusively the product of human intellectual judgment and disciplinary expertise.