Artificial intelligence has long since become part of everyday life. It writes emails, summarizes texts, and assists with research. For many students, it therefore seems like a perfectly normal tool. But this is precisely where the problem begins. After all, at universities it is not just about whether a usable text gets produced in the end. It is also about whether that text is truly the student’s own work. As Legal Tribune Online reports, the Kassel Administrative Court has now drawn a clear line on this: Anyone who uses AI in an exam without disclosing it may be committing a particularly serious act of academic dishonesty.
Two cases, a clear signal
The focus was on two cases under review at the University of Kassel. One case involved a term paper in the Master’s program in Public Management, specifically in the area of administrative law. The other involved a bachelor’s thesis in computer science. In both cases, the university was convinced that the papers had been written largely with the help of AI.
One student admitted to having used AI. In the other case, doubts arose due to several inconsistencies. The court cited a “discrepancy between the written and oral accounts of the plaintiff’s knowledge regarding the topic of his bachelor’s thesis.” Put simply: the thesis was very strong, but in the oral discussion his knowledge of the topic was significantly weaker.
Why AI isn't simply comparable to Google
Many people might instinctively ask: What’s the difference? After all, anyone who searches on Google is using technology. So why should AI suddenly be banned? That is precisely the question the court addressed—and the answer was clear-cut.
A Google search provides sources, results, clues, and material. But the actual work still falls to the human. They must read, select, verify, and develop something from it on their own. With AI, things are often different. It provides not only material, but also direct phrasing, structure, and sometimes even the entire line of reasoning. This is precisely where the court sees the decisive difference.
In academic work, it is not just the result that counts, but also the process of getting there. A term paper is not merely a product, but proof that someone has thoroughly explored, organized, and analyzed a topic on their own. If a machine takes over this core process, it is precisely this proof that is compromised.
These clues may reveal the use of AI
What is particularly noteworthy is that the Kassel Administrative Court also identified typical signs of AI-generated text. These include “frequently used, excessively repetitive positive phrasing in relation to neutral technical content.” What this refers to is the familiar, upbeat style in which even dry topics come across as artificially friendly or overly polished.
The court also cites repetitive summaries as a red flag. AI-generated texts often tend to reintroduce ideas multiple times instead of truly developing them further. This can come across as polished, but also strikingly empty.
Even once can be too much
Particularly harsh: According to the court, even a single instance of unattributed use of generative AI can cross the line. That’s a clear message. Anyone who thinks that a small snippet from AI won’t be noticed or isn’t that bad is walking on very thin ice.
At the same time, the court also states that a simple spell-check performed by AI does not, as a rule, constitute deception. This is important because such features are now standard in many programs.
In the end, one lesson stands out above all others: AI is not a harmless gimmick at universities, but a genuine risk to exams. The critical point, however, lies elsewhere: universities must not limit themselves to simply cracking down. Those who demand fairness from students must also make it crystal clear what is permitted and what is not. AI has long been part of everyday life. Anyone who acts as if this development can simply be banned has failed to understand the reality in the lecture hall.
Source: lto.de