Somewhere between the launch of ChatGPT and the latest 'no-AI' syllabus clause, we've arrived at an absurd impasse: students hide their AI use, professors pretend not to notice, and universities collect six-figure tuition to sustain the fiction. With some sources reporting that as many as 90% of students now use AI tools to cheat, we clearly can't keep pretending that such use is "strictly prohibited," as if a policy statement could stop a technological tsunami. Everyone knows the truth, yet here we are, insisting the emperor is fully clothed while he stands naked in the quad.
So far, the response has been dominated by largely ineffective attempts to police cheating rather than by adapting to the new reality. Research keeps showing that AI detectors are "neither accurate nor reliable," since light paraphrasing of AI-generated text breaks most systems. Vanderbilt publicly disabled Turnitin's AI detector after months of testing, and the University of Pittsburgh's teaching center followed suit, warning that current detectors produce unacceptable false positives. Inside Higher Ed reports that Montclair State, UT Austin, and Northwestern have told faculty not to rely on detectors. As University of Adelaide professors concluded: "We should assume students will be able to break any AI-detection tools, regardless of their sophistication." The main thing policing has accomplished is teaching students to reword LLM-generated text or tweak their code style until the detectors pass it.
Perhaps the greatest irony in all this is that working effectively with AI is a highly in-demand skill. Companies want employees who can leverage AI to work better and faster, finishing lengthy, complicated tasks in a fraction of the time they would otherwise take. LinkedIn's 2024 Workplace Learning Report shows strong demand for AI literacy, and the World Economic Forum forecasts major churn in skills over the coming years, tied to AI adoption. Randomized trials with the Boston Consulting Group found that AI significantly boosted both productivity and quality on complex tasks. While universities are busy forcing students to hide their AI use, the real world would rather have them weaponize it for productivity.
The only rational way forward is to stop fighting AI and assimilate it into education. Right now, we're cosplaying enforcement while the world rewards proficiency. Even if you believe students shouldn't use AI, or that it still can't produce work on par with humans, policing it remains counterproductive. The sustainable move is to raise the bar and make AI an explicit, graded collaborator. If we want graduates who can think with machines rather than be replaced by them, then classes should reward orchestration, critique, and high-level judgment: the human edges AI doesn't have.
In computer science, ditch the "fill in this function" assignments that Cursor could autocomplete in three seconds (see the sketch below). Have students build complex projects where the focus is on architecting systems, understanding code at a high level, debugging issues, and making design decisions. Grade them on their prompts, decision logs, test suites, and the quality of the final deliverable. The same logic extends across disciplines. Rather than short essays, assign longer papers designed to be written collaboratively with AI, where students are responsible for overseeing structure and development. Teach them to advocate for and defend their arguments live, sharpening the interpersonal and conversational skills that AI can't replace. Move the most foundational concepts, the ones that require memorization and real-time problem-solving, to in-person exams.
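To make the contrast concrete, here is a minimal sketch of the kind of exercise to retire. The function and its docstring are hypothetical, invented purely for illustration; any modern code assistant reproduces the body from the docstring alone, so a correct submission measures nothing about the student.

```python
# A typical "fill in this function" assignment. The stub is
# hypothetical, for illustration; a code assistant completes it
# instantly from the docstring.

def most_frequent_word(text: str) -> str:
    """Return the word that appears most often in `text`,
    ignoring case; break ties by first appearance."""
    counts: dict[str, int] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    # max() returns the first maximal key in insertion order,
    # which is first-appearance order for a dict.
    return max(counts, key=counts.get)
```

An AI-era version of the same material would instead hand students a working but subtly flawed implementation and grade the bug report, the tests that expose the flaw, and the justification for the fix.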
Let's stop this pointless theater, embrace the new reality, and redesign our courses so that using AI well is the point rather than the scandal.