Not long ago, discussions about “AI at universities” were mostly about the future: someday cheating would become a problem, someday curricula would change, someday new tools would need to be taught. Today, this is no longer hypothetical. It is the present state of affairs. Generative AI has shifted part of student work from the realm of “I do” to “I delegate and edit,” and universities—especially in technical fields—are being forced to rebuild assessment methods, teaching practices, and their expectations of graduates’ competencies.
The scale of this phenomenon can surprise even those who follow the topic closely. A study by the Higher Education Policy Institute (conducted with Kortext) found that in 2025 nearly all surveyed students reported using AI in some form (92%), and 88% admitted to using generative tools for assessed assignments. At the same time, only 36% said they had received real support from their universities in developing these skills, despite considering “AI literacy” important.
This creates a new academic reality—and a new set of risks.
First problem: projects and homework no longer measure what they used to
In the traditional model, an assignment (code, essay, report, analysis) largely tested independent work. Generative AI can now, in many cases, deliver results that are “good enough” to pass evaluation based on the final artifact alone. In IT, this applies not only to descriptions, but also to working code snippets, tests, documentation, and even “plausible-looking” architectural designs.
In practice, universities face a choice: either try to police AI use or redefine what “independence” means. International recommendations increasingly favor building policies and competencies over simple bans—partly because the pace of tool development outstrips regulatory adaptation, and institutions are often unprepared for tool validation and privacy protection.
Second problem: the “detection vs. evasion” race is a weak foundation for assessment
Many universities turn to AI-generated content detection tools. The problem is that their effectiveness varies, depends on the model and content type, and false positives are not merely theoretical. Research reviews show high variability in detector quality and indicate that even with seemingly low false-positive rates, large-scale grading can produce thousands of “suspicions” requiring explanation.
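To make the scale concrete, here is a back-of-the-envelope sketch; the false-positive rate and the submission volume below are illustrative assumptions, not figures from the cited reviews.

```python
# Back-of-the-envelope estimate: how many honest students get flagged
# when a detector with a small false-positive rate is applied at scale.
# Both numbers are illustrative assumptions, not measured values.

false_positive_rate = 0.01       # detector wrongly flags 1% of human-written work
assessed_submissions = 100_000   # submissions graded across a large institution per year

expected_false_flags = false_positive_rate * assessed_submissions
print(f"Expected wrongly flagged submissions: {expected_false_flags:.0f}")
# -> Expected wrongly flagged submissions: 1000
```

Every one of those flags is a student who has to explain work they may well have done honestly, which is why detector output alone is a weak basis for disciplinary decisions.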
From the perspective of disciplinary fairness, another research-backed observation matters: detection tools can produce inconsistent classifications, and detecting content generated by newer models is often harder than by older ones. Researchers stress that detectors should not be the sole basis for decisions. Tool providers themselves communicate this cautiously: Turnitin’s documentation explicitly warns about false positives and states that results should not be the sole basis for adverse actions, with some thresholds intentionally left “unsurfaced” to limit harm.
The conclusion for IT education is blunt: if assessment relies mainly on final code or reports, AI turns it into a test of “file delivery,” not necessarily a test of understanding.
Third problem: AI dependence deepens an older dependence on “external answers”
The argument “this isn’t new—once it was Google and Stack Overflow” is partly true, but incomplete. Previously, students usually had to perform integration work: find multiple sources, evaluate them, assemble a solution, adapt it to context. Generative AI compresses this into a single interaction, changing cognitive habits: less exploration, more acceptance of the “first reasonable answer.”
This ambivalence is visible in the developer community. Stack Overflow surveys show a growing share of people using or planning to use AI tools in software development (76% in 2024), but enthusiasm is declining, and those learning to code are more skeptical than professionals. This mirrors academia: AI is widely used but not always trusted—paradoxically increasing the risk of uncritical acceptance of errors (“it produced it, so it must know”).
Fourth problem: a generational gap in faculty and institutions
In many departments, the weakest link is not the student but the system: no coherent policy, no standard for citing AI, no secure tools for working with data, no consensus on what is allowed. At the same time, things are changing fast. In the cited HEPI study, the share of students who rated faculty as “well prepared to work with AI” rose year over year from 18% to 42%. That is still far from comfortable, but the trend is clear: institutions are learning on the fly.
In Poland, this “learning on the fly” is visible in the growing number of documents and guidelines. The University of Łódź, for example, refers to a rector’s ordinance regulating GenAI use in education and degree work, signaling formalization at the institutional level. Białystok University of Technology explicitly emphasizes that substantive content must come from the student, with GenAI playing a supporting role and being clearly marked. Warsaw University of Technology publishes recommendations for using GenAI as a learning support tool while highlighting risks and doubts. Gdańsk University of Technology combines formal guidelines with “competency infrastructure”: training, courses, a repository of good practices, a prompt library, and resources for students and staff.
This is an important signal: universities are beginning to treat AI not as a one-off “assessment incident,” but as a permanent element of the competency ecosystem.
Fifth problem: demotivation and fatalism (“AI will replace me before I graduate”)
This fear does not come from nowhere. The job market—especially in IT—currently carries two parallel narratives. One says: AI boosts productivity, so fewer people are needed for simple tasks. The other responds: AI raises the entry bar, but demand for “higher-order” skills is growing.
Public debate increasingly mentions the erosion of junior roles, which contain the most “grunt work” and are easiest to support with automation. The World Economic Forum cites data showing that 40% of employers expect staff reductions where AI can automate tasks; at the same time, technologies are expected to both create and destroy jobs at scale, and some analyses find that younger candidates perceive a decline in the value of degrees.
More cautious institutions emphasize AI’s “dual effect”: substitution of some tasks and complementarity with human work. The IMF describes AI as a technology that will often augment work, although some roles and tasks remain automation-prone. OECD analyses of GenAI use in SMEs show firms more often reporting productivity gains and growing demand for highly skilled workers than simple “replacement of people by models.”
For an IT student, the practical reality is less dramatic but more demanding: AI doesn’t have to “replace the programmer” to change the starting conditions. It’s enough that it raises expectations for juniors.
What this means for the IT job market: a shift from “writing” to “understanding”
Before GenAI, many beginners built their edge on fast delivery of simple code, repetitive tasks, and assembling integrations. With GenAI, this layer becomes cheaper, and the value shifts to skills that AI does not guarantee:
- understanding the business problem and refining requirements,
- critical evaluation of correctness (logic, edge cases, security; see the sketch after this list),
- debugging and “reading code” with full awareness of consequences,
- system design (trade-offs, reliability, cost, maintainability),
- teamwork: reviews, decision justification, risk communication.
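To make the “critical evaluation of correctness” point tangible, here is a hypothetical snippet of the kind an assistant might produce: it reads plausibly, passes a casual glance, and still mishandles edge cases. The function name and the task description are invented for this sketch.

```python
# Hypothetical example: a plausible-looking helper an AI assistant might
# generate for "compute the average rating, ignoring missing values".

def average_rating(ratings):
    # Looks reasonable, but `if r` is falsy for 0 as well as for None,
    # so legitimate zero ratings are silently discarded.
    valid = [r for r in ratings if r]
    return sum(valid) / len(valid)

# The happy path works, which is all a quick review tends to check:
print(average_rating([4, 5, 3]))      # 4.0 -- correct

# Edge cases expose the problems a grader or reviewer should catch:
print(average_rating([0, 0, 4]))      # 4.0 instead of roughly 1.33
# print(average_rating([None, None])) # ZeroDivisionError instead of a clear error
```

The lesson is not that the snippet is useless, but that an assessment based only on the delivered file cannot tell whether the student would have caught these defects.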
Notably, university documents state this explicitly. A DELab UW report warns that outsourcing tasks to generative AI can prevent students from acquiring the key competencies needed to critically evaluate results. This is the core of the “hidden shift” in computer science education: the problem is not tool use itself, but the loss of practice in assessing quality and taking responsibility for decisions.
How universities can respond: less “hunting,” more redesign
The most sensible educational responses don’t pretend AI doesn’t exist. Instead, they redesign assessment to still measure competence rather than mere delivery of text or code.
- Assess process, not just outcome: versioning (commit history), short design notes, decision justifications, analysis of alternatives, retrospectives of errors and fixes.
- Add “understanding checks”: oral defenses, live tasks, short quizzes on reading and modifying code, work on previously unseen project fragments.
- Design tasks resistant to “one-prompt” solutions: projects embedded in local context (data, requirements, constraints), where diagnosis and trade-offs matter.
- Clear rules for AI use and disclosure: what is allowed (e.g., explanations, summaries, drafts), what is not (e.g., generating substantive content), and how to document tool contribution—Polish universities increasingly formalize this.
- Detectors as signals, not judges: use them as prompts for discussion and understanding checks; research shows why this is safer.
- Teach AI literacy: tool limits, hallucinations, data privacy, legal risks, bias—aligned with UNESCO’s capacity-building approach.
And the student? Two decisions that truly change the trajectory
First: treat AI as a training tool, not a subcontractor. If AI writes for you, your learning loop disappears. If it helps you reach the point where you must evaluate, correct, and defend a solution, you gain.
Second: build evidence of competence that doesn’t dissolve into “did AI write this?” Portfolios with decision histories, tests, deployments, short architectural defenses, and honest disclosure of tool use resemble real IT work more than a “nice assignment file.” The market—regardless of replacement narratives—will reward people who can deliver responsible results, not just generate text.
AI has already changed higher education. The biggest paradox is that, for a while, it may make studying easier while simultaneously making entry into the job market harder for those who let it pass through education on their behalf.