
Why AI Cannot Replace Humans in Scientific Peer Review

On: February 3, 2026 4:19 PM

According to Akhil Bhardwaj, if we truly believe that science is a calling based on curiosity and careful argument, we cannot hand over judgment to machines.

Artificial Intelligence has already found its way into the heart of academic peer review. It often happens quietly. Surveys indicate that a large number of reviewers are now using AI tools to help them with their work. This is happening even though official guidelines often discourage it. These new “review-bots” promise to read papers, summarize them, and even offer critiques. For many, the offer is hard to refuse.

The reason for this shift is clear. The peer review system is under immense pressure. The number of research papers being submitted has skyrocketed, while the pool of qualified human reviewers has not kept pace. Those who are qualified are flooded with requests and constantly chasing deadlines for reviews they have already promised.

For editors, AI looks like a perfect solution to this scarcity. It offers a way to sort through papers faster. It promises quicker reports. It seems to offer a cheaper and more consistent way to make decisions.

Review is a Conversation, Not a Factory Line

However, viewing peer review as just a logistical problem misses the point. It is a mistake to treat it like a manufacturing process where speed is the only goal. Peer review is not just a production stage. It is one of the few places left where the scientific community actually talks to itself.

When we invite machines to act as reviewers, we change the nature of the process. We stop seeing it as a human conversation and start treating it as a technical service.

A “peer” is not just a machine that detects errors. A peer is a human being who lives in the same world of ideas as the author. They understand the context. They recognize that a paper is part of a live debate within a specific field. They know the current trends and the weak spots in an argument.

When a reviewer writes, “I see what you are trying to do, but…”, it is not a flaw in the system. That frustration is the system working. It signals that another human intelligence has struggled with the work long enough to care about the outcome.

Processing vs. Understanding

AI tools do not struggle with ideas. They process data. They can copy the style of a referee’s report. They can mimic its structure. But this is the appearance of understanding without any actual comprehension.

Systems built on Large Language Models (LLMs) work by statistically stitching words together. They predict which word comes next based on patterns in their training data. This does not mean they understand the science. In reality, much of what they produce is a remix of the material they were trained on.

There is a fundamental difference between a human and a bot. Humans care about knowledge. LLMs do not. An AI has no stake in whether a scientific finding is true. It does not care if a method is convincing or if a theory sheds new light on a problem. AI has not helped build the history of scientific knowledge. It does not belong to the community of scholars. Perhaps most importantly, an AI cannot be embarrassed if it is wrong.

The Culture of Argument

This distinction matters deeply. Science is not just a collection of results. It is a culture of argument. We often measure progress with data points like citation curves or replication studies. But underneath all that data sits something older and more fragile.

Science relies on the expectation that a researcher’s claims will be judged by their peers. Those peers must be open to persuasion. This back-and-forth process is often messy. It is slow. It can be emotionally draining for everyone involved. Personal bias and ego can certainly cause problems. However, this human friction accomplishes something that no automated system can replicate.

The current rush to use automated review is part of a larger trend. Academic life is becoming industrialized and commercialized. We already manage research using dashboards and performance benchmarks. AI is seen as just another tool to make workflows smoother. It is sold as a way to gain “efficiency” in deciding who gets grants or which papers get published.

But friction is not always waste. Sometimes a reviewer is late because they are genuinely unsure about the findings. Sometimes they resist a trendy claim until the authors provide more proof. That delay embodies a kind of care that efficiency metrics struggle to measure.

The Accountability Problem

Supporters of AI will argue that the technology is not meant to replace humans. They say it is there to support them. They argue that using AI can free up reviewers to focus on the bigger picture.

There is some truth to this. However, the line between assistance and authorship is very thin.

Imagine an editor who starts relying on a score generated by an AI. Or imagine a reviewer who generates a draft with a bot and pastes it into the report with very few changes. At that point, it becomes difficult to say where human responsibility ends and the machine’s suggestion begins.

This ambiguity creates a political problem: accountability. Human reviewers have biases, but they can be questioned. They can be persuaded. What does it mean to persuade a bot? Would an author even care to try? If authors stop caring about persuading the reviewer, the ability of peer review to improve manuscripts will fade away.

Furthermore, if a bot’s output decides the fate of a paper, who is responsible? Is it the editor? The publisher? The software company? Or the “ghosts” of past reviewers whose data trained the AI?

If reviewers start leaning heavily on AI, it raises another question. Why shouldn’t researchers use AI to write the manuscripts in the first place? This would take humans out of the publication loop entirely.

Conclusion: A Choice of Values

When nobody truly owns the judgment, it becomes dangerously easy for everyone to avoid responsibility. Yet, science relies on the personal responsibility of authorship.

This is not to say the current system is perfect. Peer review has flaws. It can be exclusionary. It can be petty. We have all seen reports that make no sense or delays that seem unfair. But the answer to these problems is to invest more in the human practice. We need better training, more recognition for reviewers, and realistic expectations. The answer is not to hollow out the system with automation.

Policymakers, funders, and publishers should resist the urge to view AI as inevitable. It is not something that must simply be “managed.” It is a choice. Like any choice about infrastructure, it reveals what we value.

If we value speed and volume above all else, we will automate judgment. We will accept a thinner, weaker definition of understanding. But if we still believe that science is a vocation grounded in curiosity and care, we must be honest about the limits of technology.

Let AI help with the paperwork if necessary. But the act of deciding if a piece of work belongs in the shared conversation of science is something only a peer can do.

Rowan Stormscribe
