Generative AI has moved from the margins of academic publishing into its core infrastructure. Tools like large language models are now widely used by researchers to assist with writing, translation, and formatting. At the same time, these technologies are being misused to produce low-quality or misleading content at scale. Recent analyses reveal the scale of this shift: in biomedical publishing alone, more than 13% of 2024 abstracts likely involved AI-generated language, based on identifiable stylistic patterns.
Elsewhere, investigations into Google Scholar have uncovered papers with fabricated citations and text clearly generated by AI, often without any disclosure. Simultaneously, the number of retracted scientific papers globally has surged, fueled in part by AI-aided fraud, including text manipulation, image fabrication, and citation scams.
These developments signal a shift in responsibility for those who build and maintain publishing platforms. Editorial tools, submission systems, and peer review workflows must now account for machine-generated content and ensure that AI involvement is transparent and traceable. This is not only about managing risk—it is about preserving the credibility of the academic record.
In our work with publishers such as The BMJ, we’ve seen that adapting to this new environment requires both technical flexibility and policy awareness. The challenge is no longer whether AI will influence scholarly publishing, but how to design systems that can support its use responsibly.
We’ll explore this through the lens of software design decisions that support trust and transparency, starting with the most visible threat: AI-generated content pollution.
Unlike plagiarized or duplicated material, AI-generated text is original in form: it can appear legitimate while lacking any meaningful contribution. This creates a detection problem and a growing risk for journals and indexing services.
Many of these submissions follow accepted formats, use domain-specific language, and include plausible references. But under scrutiny, they often reveal incoherent arguments, fabricated citations, or misleading interpretations. When such content passes through standard review unchecked, the damage extends beyond individual journals to the credibility of the broader research ecosystem.
Existing editorial workflows are not designed for this kind of content. Plagiarism checks, while effective in identifying recycled material, do little against AI-written text. Manual review alone struggles to keep pace with the volume and subtlety of machine-generated submissions. This leaves editorial teams under pressure and vulnerable to reputational harm.
Addressing the issue requires practical upgrades. Submission systems should support content validation tools that flag unusual linguistic or structural patterns. Review workflows must allow editors and reviewers to pause, verify, or escalate suspicious cases. These checks should be configurable, adapting to each journal’s policies on AI use.
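As a rough illustration, the sketch below shows what a configurable check of this kind might look like. The signal names, thresholds, and escalation behaviour are assumptions made for the example, not a description of any particular journal's rules or of a production detection tool.

```typescript
// Illustrative sketch: configurable integrity checks on a submission.
// All names, thresholds, and journal policies here are hypothetical.

interface JournalCheckConfig {
  journalId: string;
  flagUnresolvableCitations: boolean; // citations that cannot be matched to a known record
  maxTemplatePhraseRatio: number;     // share of templated phrasing tolerated before flagging
  escalateOnFlag: boolean;            // route flagged submissions to an integrity editor
}

interface SubmissionSignals {
  unresolvableCitationCount: number;
  templatePhraseRatio: number;        // 0..1, produced by an upstream language-pattern check
}

type CheckOutcome =
  | { status: "pass" }
  | { status: "flagged"; reasons: string[]; escalate: boolean };

function runIntegrityChecks(
  config: JournalCheckConfig,
  signals: SubmissionSignals
): CheckOutcome {
  const reasons: string[] = [];

  if (config.flagUnresolvableCitations && signals.unresolvableCitationCount > 0) {
    reasons.push(`${signals.unresolvableCitationCount} citation(s) could not be verified`);
  }
  if (signals.templatePhraseRatio > config.maxTemplatePhraseRatio) {
    reasons.push("unusually high proportion of templated language");
  }

  return reasons.length === 0
    ? { status: "pass" }
    : { status: "flagged", reasons, escalate: config.escalateOnFlag };
}

// Example: a journal with strict escalation rules.
const outcome = runIntegrityChecks(
  { journalId: "example-journal", flagUnresolvableCitations: true, maxTemplatePhraseRatio: 0.4, escalateOnFlag: true },
  { unresolvableCitationCount: 2, templatePhraseRatio: 0.55 }
);
console.log(outcome);
```

The design point is that thresholds and escalation behaviour live in per-journal configuration, so editorial teams can tighten or relax checks without code changes.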
Without active detection and response, platforms risk becoming passive carriers of unreliable science. For technology teams, the priority is to strengthen submission pipelines against misuse while keeping them agile enough to support legitimate innovation.
Another critical dimension of the AI transition is the widespread but often undisclosed use of generative tools by legitimate researchers, and the question of what publishing platforms can do to support transparency and accountability.
While much attention has been paid to fraudulent or low-quality AI-generated content, a more pervasive issue is unfolding quietly: AI-assisted writing from legitimate authors is entering the publication pipeline without clear attribution or oversight.
A 2025 survey published by Nature found that while most researchers view AI-assisted editing and translation as ethically acceptable, few report using such tools in practice, and even fewer disclose that use during submission. The same study showed significantly lower support for AI involvement in producing results sections or in peer review, with many respondents labeling such practices as inappropriate or unethical.
This gap between accepted practice and actual disclosure raises operational risks for publishers. When authors use AI for language editing, summarization, or drafting without acknowledgment, reviewers and editors assess manuscripts without full insight into how the content was produced. This lack of transparency can obscure limitations, factual inaccuracies, or generated content that has not been adequately verified, undermining the integrity of the review process.
Current submission platforms are not equipped to close this gap. Many lack structured fields for AI disclosure, enforceable metadata policies, or prompts that guide authors through responsible declarations. Even when journals implement policies, inconsistent interfaces and the absence of automated checks make enforcement unreliable.
Simple design additions, such as submission logic that prompts for disclosure based on writing characteristics or integrated third-party checks, can strengthen compliance without slowing down workflows. Visualization tools within editorial systems can help teams track disclosure patterns over time and identify submissions that may warrant additional scrutiny.
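A minimal sketch of how structured disclosure could be captured in submission metadata follows; the categories, field names, and the simple prompting rule are illustrative assumptions rather than an established schema.

```typescript
// Illustrative sketch of a structured AI-disclosure field attached to
// submission metadata. Categories and field names are assumptions.

type AiUseCategory = "language_editing" | "translation" | "summarization" | "drafting" | "none";

interface AiDisclosure {
  categories: AiUseCategory[];
  toolNames?: string[];   // tools the author chooses to name
  statement?: string;     // free-text declaration shown to reviewers
  declaredAt: string;     // ISO timestamp, kept for the audit trail
}

interface SubmissionMetadata {
  manuscriptId: string;
  aiDisclosure: AiDisclosure;
}

// Decide whether the platform should prompt the author for a fuller declaration,
// e.g. when an integrated third-party check suggests AI-assisted text but the
// author has declared no use.
function shouldPromptForDisclosure(
  meta: SubmissionMetadata,
  detectorSuggestsAiText: boolean
): boolean {
  const declaredNone =
    meta.aiDisclosure.categories.length === 0 ||
    meta.aiDisclosure.categories.includes("none");
  return detectorSuggestsAiText && declaredNone;
}
```

Capturing disclosure as structured data, rather than free text alone, is what makes the dashboards and pattern tracking described above feasible.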
Just as authors are increasingly using AI without disclosure, reviewers are beginning to adopt similar tools, often without oversight. Peer review, long seen as a safeguard for research integrity, now faces its own transparency challenge.
Peer review remains central to academic publishing, yet its integrity is under new pressure. Reviewers increasingly use AI tools to summarize submissions or speed up assessments, typically without disclosure or clear editorial guidance. Most publishing platforms still lack the mechanisms to track or regulate this behavior.
Earlier, we referenced evidence that peer review assistance receives the lowest ethical support among common AI use cases in publishing. Still, informal use is growing. Without transparency, editors may unknowingly rely on feedback shaped by tools not designed for expert judgment, tools that can miss key flaws or introduce subtle bias.
To address this, platforms need to offer structured support: disclosure options that let reviewers declare any AI assistance, clear editorial guidance on which uses are acceptable, and mechanisms to record when and how AI tools shaped a review.
Some journals may adopt hybrid approaches, permitting AI for preliminary summaries while reserving in-depth evaluation for human reviewers. Regardless of the model, the goal is the same: enable efficiency without sacrificing trust.
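One way to express such a hybrid policy in platform configuration is sketched below. The task categories, journal identifier, and policy shape are hypothetical; a real implementation would tie these rules into the review form and its disclosure prompts.

```typescript
// Illustrative sketch of a per-journal reviewer AI policy, including a hybrid
// model: AI permitted for preliminary summaries and language polish, but not
// for the evaluative parts of a review. Names are hypothetical.

type ReviewerAiTask = "summary" | "language_polish" | "evaluation" | "recommendation";

interface ReviewerAiPolicy {
  journalId: string;
  permittedTasks: ReviewerAiTask[];
  requireDisclosure: boolean; // reviewer must declare any AI assistance with the report
}

const hybridPolicy: ReviewerAiPolicy = {
  journalId: "example-journal",
  permittedTasks: ["summary", "language_polish"],
  requireDisclosure: true,
};

function isTaskPermitted(policy: ReviewerAiPolicy, task: ReviewerAiTask): boolean {
  return policy.permittedTasks.includes(task);
}

// The review form can use this to decide which disclosure prompts to show
// and which uses to block outright.
console.log(isTaskPermitted(hybridPolicy, "summary"));        // true
console.log(isTaskPermitted(hybridPolicy, "recommendation")); // false
```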
As the roles of AI in writing, reviewing, and decision-making expand, the underlying platforms must balance usability with safeguards. Let’s look at how you can design for both innovation and editorial integrity, without compromising either.
Machine learning models increasingly handle tasks such as metadata generation, reference validation, and layout optimization. These applications can reduce delays and free up editorial capacity. But even at this operational level, automated systems require human oversight to guard against silent errors or unintended omissions.
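For example, an automated reference-validation step can be designed so that anything it cannot resolve is queued for human review rather than passed through silently. The sketch below illustrates that pattern with assumed field names and a deliberately simplified DOI lookup.

```typescript
// Illustrative sketch: guard against silent errors in automated reference
// validation by routing unresolved references to a human reviewer.
// Field names and the lookup are simplified assumptions.

interface Reference {
  rawText: string;
  doi?: string;
}

interface ValidationResult {
  reference: Reference;
  resolved: boolean;
  needsHumanReview: boolean;
  note?: string;
}

function validateReference(ref: Reference, knownDois: Set<string>): ValidationResult {
  if (!ref.doi) {
    return { reference: ref, resolved: false, needsHumanReview: true, note: "No DOI found; verify manually" };
  }
  const resolved = knownDois.has(ref.doi);
  return {
    reference: ref,
    resolved,
    needsHumanReview: !resolved,
    note: resolved ? undefined : "DOI not found in index; possible fabricated citation",
  };
}

// Example usage with a toy index of known DOIs.
const known = new Set(["10.1000/example-doi"]);
console.log(validateReference({ rawText: "Smith et al., 2023", doi: "10.9999/unknown" }, known));
```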
More consequential is the integration of AI into content creation. As noted earlier, a growing share of submissions now includes AI-generated or AI-edited language. Without proper disclosure and oversight, this material poses not only editorial risks but long-term challenges to credibility and trust.
To meet these demands, publishing infrastructure must evolve beyond enablement. Systems should actively support responsible AI use by logging where tools are applied, flagging irregularities, and giving editors the context they need to make informed decisions.
Editorial policies also need enforcement mechanisms. Whether a journal allows AI-assisted editing or prohibits AI-authored content altogether, the platform must support those distinctions through configurable forms, metadata fields, and workflow rules.
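One building block for this is an audit trail that records where AI tools were applied and whether that use was disclosed, so editors see the full picture before deciding. The sketch below is illustrative only; the event stages and fields are assumptions, not a description of any production component.

```typescript
// Illustrative sketch of an audit trail for AI involvement across the pipeline.
// Event stages and fields are hypothetical.

type AiEventStage = "submission" | "copyediting" | "metadata_generation" | "peer_review";

interface AiUsageEvent {
  manuscriptId: string;
  stage: AiEventStage;
  tool: string;            // as declared by the user or detected by a check
  declaredByUser: boolean; // true if disclosed, false if surfaced by detection
  timestamp: string;
}

class AiAuditLog {
  private events: AiUsageEvent[] = [];

  record(event: AiUsageEvent): void {
    this.events.push(event);
  }

  // Summary shown to editors alongside the manuscript: where AI was involved,
  // and whether that involvement was disclosed.
  summarize(manuscriptId: string): { stage: AiEventStage; tool: string; disclosed: boolean }[] {
    return this.events
      .filter((e) => e.manuscriptId === manuscriptId)
      .map((e) => ({ stage: e.stage, tool: e.tool, disclosed: e.declaredByUser }));
  }
}
```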
At Thinslices, we’ve responded to these needs by building modular components that capture structured AI disclosures, log where automated tools are applied, flag irregularities for editorial review, and enforce journal-specific policies through configurable forms, metadata fields, and workflow rules.
Responsible use of AI in publishing is not about avoiding innovation; it’s about implementing it with integrity.
Scientific publishing is facing a structural reset. The rise of generative AI, combined with ongoing shifts in open access models, is changing how research is produced, reviewed, and shared. For publishers, the priority is no longer whether to adapt, but how to do so in ways that reinforce trust, quality, and long-term resilience.
Platform infrastructure plays a central role in this transition. To remain viable, publishing systems must embed capabilities that go beyond efficiency. They need to provide transparency, enforce editorial policy, and support decisions grounded in both ethical standards and operational realities.
Based on our work with organizations like the BMJ and others navigating similar transitions, here are five actions that can help future-proof publishing platforms:
1. Require structured AI disclosure. Enable authors and reviewers to declare how AI was used, distinguishing between editing, summarizing, and drafting, and capture this data as part of the submission metadata.

2. Detect and flag machine-generated content. Use pattern recognition tools to flag potential AI-generated content, with configurable thresholds that match each journal's editorial standards. For example, AI detection sensitivity can be calibrated differently for high-impact medical journals versus rapid-turnaround preprint platforms, depending on review depth and content type; a configuration sketch follows this list.

3. Give editors visibility and control. Surface AI involvement through dashboards and alerts, and build workflows that allow for targeted review and enforcement where needed.

4. Monitor usage across the platform. Track patterns in AI usage, disclosure rates, and content quality, and use this data to inform both internal governance and external reporting.

5. Build for evolving policy. Policy is changing quickly, so platforms must be flexible and designed in collaboration with editorial teams, not just technical stakeholders.
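To make the second action concrete, the sketch below shows how detection sensitivity might be configured per venue. The threshold values and venue names are invented for illustration; real calibration would be agreed with editorial teams and validated against each journal's review depth.

```typescript
// Illustrative sketch of per-venue detection sensitivity. Thresholds and
// venue names are assumptions chosen only to show how configuration might differ.

interface DetectionConfig {
  venue: string;
  flagThreshold: number;  // score (0..1) above which a submission is flagged for scrutiny
  blockThreshold: number; // score above which a submission is held pending editorial review
}

const medicalJournalConfig: DetectionConfig = {
  venue: "high-impact-medical-journal",
  flagThreshold: 0.3,
  blockThreshold: 0.7,
};

const preprintConfig: DetectionConfig = {
  venue: "rapid-preprint-platform",
  flagThreshold: 0.5,
  blockThreshold: 0.9,
};

function routeSubmission(
  config: DetectionConfig,
  aiLikelihoodScore: number
): "accept_for_review" | "flag" | "hold" {
  if (aiLikelihoodScore >= config.blockThreshold) return "hold";
  if (aiLikelihoodScore >= config.flagThreshold) return "flag";
  return "accept_for_review";
}

// The same score routes differently depending on the venue's calibration.
console.log(routeSubmission(medicalJournalConfig, 0.45)); // "flag"
console.log(routeSubmission(preprintConfig, 0.45));       // "accept_for_review"
```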
What comes next will depend on the choices publishers make now. Platforms that treat AI as a tool to be managed, not just enabled, will be better positioned to protect integrity while embracing innovation. If your team is working through these changes, we are available to share our experience and support your roadmap.