In just a few years, generative AI has upended how we evaluate student work. Some universities have banned tools like ChatGPT outright; others have drawn fine lines around their use or even embraced them as learning aids. The result is a tangle of mixed messages: instructors aren't sure what to allow, students worry they'll unknowingly cross a line, and everyone questions whether a degree still reflects genuine understanding.
Navigating Campus AI Policies
Most institutions have responded by spelling out "dos and don'ts." At the University of Leeds, for example, a traffic-light system labels AI use as red (forbidden), amber (allowed only for brainstorming), or green (fully acceptable). Melbourne asks students to declare any AI support on their assignments. These rules aim to protect "assessment validity": the assurance that grades measure what students actually know, not how well they can prompt an algorithm.
Why Rules Alone Aren’t Enough
But as AI grows better at imitating human writing and even coding, policing every submission feels like chasing a moving target. Recent studies from Deakin University and the University of Sydney argue that overreliance on rule-making puts too much trust in student compliance. In practice, hunting down slip-ups drains faculty time and still leaves room for unfair advantage.
Designing Assessments for Authentic Work
Instead of tightening the rules, the researchers recommend rethinking assignments themselves. Rather than trusting students to follow guidelines about AI use, we can build assessments that naturally spotlight their own ideas and skills. For instance, instead of submitting a finished take-home essay, students might draft sections in class under supervision, then workshop their arguments over several weeks. This process-based approach makes outsourcing to AI both impractical and unnecessary.
Real-Time Demonstrations in Every Discipline
In computer science, one simple twist is to ask students to record live commentary on their coding decisions, explaining why they chose a particular algorithm or data structure as they write the code. Similar strategies can work in history, engineering, or business: think oral presentations, reflective journals, or small-group problem solving observed by an instructor. When assessment hinges on what's created in the moment, authenticity follows naturally.
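To make the computer-science version concrete, here is a minimal sketch of what such annotated work might look like. The task (counting word frequencies) and the commentary are purely illustrative, not drawn from any particular course; the point is that the student's reasoning is recorded alongside the code itself.

```python
# Hypothetical student submission: count word frequencies in a text.
# The inline commentary records the reasoning behind each design
# choice, which is what a live walkthrough would capture.

from collections import Counter

def word_frequencies(text: str) -> Counter:
    # I chose Counter over a plain dict because it handles missing
    # keys automatically and gives me most_common() for free.
    words = text.lower().split()
    # Lowercasing first so "The" and "the" count as one word; a real
    # tokenizer would be more robust, but split() keeps the example
    # focused on the data-structure decision.
    return Counter(words)

print(word_frequencies("the cat sat on the mat").most_common(2))
# [('the', 2), ('cat', 1)]
```

Whether the commentary is spoken aloud in a screen recording or written inline like this, the instructor is grading the decision-making on display, not just the finished artifact.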
Charting a Path Forward
Redesigning courses is hard work. It demands fresh thinking about learning goals, class size, and the workload of both students and professors. But if we want degrees that stand for real expertise, it’s an investment worth making. By shifting our focus from policing AI to crafting assessments that foreground student thought, we can preserve—and even strengthen—the integrity of higher education.
Conclusion
The rise of AI doesn’t have to spell the end of meaningful evaluation. On the contrary, it challenges us to renew our commitment to assessing genuine learning. With thoughtfully designed tasks that emphasize process over product, universities can ensure their graduates leave not just with a certificate, but with skills and creativity no algorithm can replicate.