This question sounds like a dilemma about whether it still makes sense to study “for a profession” when the profession itself may change faster than the curriculum. And honestly, the risk is real that traditional paths of “I learn X, then I do X for 30 years” will work less and less well. At the same time, it does not automatically follow that computer science stops making sense. What changes, rather, is what the market pays for and what the entry threshold will look like.
It’s worth organizing the facts first, and only then adding scenarios.
Looking at the market itself, we can see two parallel signals. First, generative tools have become part of programmers’ daily routine: in Stack Overflow’s 2025 survey, 84% of respondents said they use or plan to use AI tools in the software development process, and among professional developers about 51% use them daily. Second, these tools genuinely increase productivity on some tasks: a controlled experiment with GitHub Copilot showed an average of ~55.8% faster task completion for the group with access to the tool. At the same time, analyses by McKinsey & Company indicate that the biggest gains come in the repetitive parts of the job: drafting code, documentation, refactoring, and fixes.
That sounds like “fewer programmers will be needed.” Sometimes that will be true. But the second part of the picture is just as important: programmers do not want (and often cannot) hand over the highest-responsibility tasks to AI. The same Stack Overflow survey shows strong resistance to using AI in areas such as deployment/monitoring (76% do not plan to use it) and project planning (69% do not plan to use it). When asked about a future with highly advanced AI, the most common reason for still turning to a human is lack of trust in AI’s answers (75.3%). This is key: tools accelerate work, but they do not remove human responsibility for decisions, risk, safety, and compliance.
In Poland, the market appears to be “post-correction,” not in a state of euphoria. Data from Just Join IT shows stabilization after declines: in 2025, 110,996 job listings were published, about 8.42% more than in 2024, but clearly fewer than in the record year 2022. A No Fluff Jobs report describes a rebound in 2025 (a 44% year-over-year increase in tech job offers), while also indicating where demand lies (including backend, data, fullstack, DevOps) and noting that remote work is declining in share in favor of hybrid models. This is not a market that is “dying,” but one that has become more selective and more sensitive to the value delivered by each individual.
With this background, we can address the question of studying computer science.
Computer science makes sense if you treat it not as a course in a specific framework, but as an investment in fundamentals and in the ability to switch tools. AGI (a hypothetical artificial intelligence with broad, “general” cognitive abilities) matters here less as a date on the calendar and more as a direction: more and more work will shift from “writing” to “deciding, integrating, and controlling quality and risk.”
What does that mean for the return on studying?
First, the value of fundamentals increases—especially those that are harder to “skip” with a short course: algorithms, computational complexity, databases, operating systems, networks, security, software engineering as a process (testing, reliability, observability), mathematics, and statistics. These are areas where even an excellent tool does not replace understanding why something works and where it will break under load.
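As a minimal illustration (in Python, with an arbitrarily chosen input size) of why those fundamentals pay off: the same one-line membership test behaves very differently at scale depending on the data structure behind it, and that difference is exactly what a grounding in complexity lets you predict before it breaks under load.

```python
import time

# Minimal illustration: the same membership test is O(n) against a list
# and O(1) on average against a set; the gap only shows up once data grows.
n = 1_000_000
as_list = list(range(n))
as_set = set(as_list)

start = time.perf_counter()
_ = (n - 1) in as_list          # linear scan over a million elements
list_time = time.perf_counter() - start

start = time.perf_counter()
_ = (n - 1) in as_set           # constant-time hash lookup on average
set_time = time.perf_counter() - start

print(f"list lookup: {list_time:.6f}s, set lookup: {set_time:.6f}s")
```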
Second, the premium for “mechanically churning out code” declines. If your advantage is that you can write a CRUD app faster in yet another framework, you are competing with a tool specifically designed for that. This does not mean juniors will disappear, but the entry bar will shift toward skills previously expected at the mid-level: reading unfamiliar code, working on existing systems, understanding requirements, testing, debugging, security, cost, and maintenance. Notably, Stack Overflow data shows that one of the most frequently cited frustrations is AI output that is “almost correct, but not quite,” together with the costly debugging of AI-generated code (45.2% of responses). This rewards people who can verify, not just produce.
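To make that failure mode concrete, here is a minimal, hypothetical sketch in Python (the function and scenario are invented for illustration): a plausible-looking helper of the sort an assistant might generate, which handles neat inputs but silently drops data on an edge case, and the kind of small test a verification-minded developer writes to catch it.

```python
# Hypothetical example: an "almost correct" helper that silently drops data.
def paginate(items, page_size):
    """Split items into fixed-size pages (bug: the final partial page is lost)."""
    return [items[i * page_size:(i + 1) * page_size]
            for i in range(len(items) // page_size)]

# A verification-minded test exposes the missing final page before it ships.
def test_paginate_keeps_partial_page():
    result = paginate([1, 2, 3, 4, 5], page_size=2)
    expected = [[1, 2], [3, 4], [5]]
    if result != expected:
        print(f"FAIL: expected {expected}, got {result}")
    else:
        print("OK")

test_paginate_keeps_partial_page()
```

The specific bug matters less than the shift it illustrates: the cost moves from typing the function to noticing what it quietly gets wrong.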
Third, regulation will create new “moats”—and new obligations. In the EU, the AI Act entered into force on August 1, 2024. Obligations for general-purpose AI (GPAI) models began to apply on August 2, 2025, and full applicability is scheduled for August 2, 2026 (with exceptions and longer transition periods for certain high-risk systems). The European Commission has also publicly confirmed that it does not plan to “pause” implementation. For the labor market, this means growing demand for people who combine technical skills with compliance, auditing, security, risk management, and documentation. In these areas, “it works” is not enough—you must also demonstrate why it is safe and compliant.
Fourth, even in strongly pro-AI scenarios, computer science remains one of the few fields that allow you to work “close to the lever” of automation. The world will become ever more dependent on software, regardless of whether AGI arrives quickly or slowly. From a macroeconomic perspective, there is an expectation of major task shifts: Goldman Sachs estimated that workflow changes could expose the equivalent of hundreds of millions of jobs to task automation, while emphasizing that many professions will be partially automated rather than eliminated outright. The World Economic Forum’s Future of Jobs report also describes simultaneous job creation and displacement, listing software developers among the rapidly growing roles, alongside AI- and data-related positions. In other words: “change” is not synonymous with “the end.”
That leaves the most important question: when is the answer “yes,” and when is it “no”?
It is worth studying computer science if:
- you are interested in solving technical problems at the systems level, not just “building apps,”
- you are ready to learn new tools in 6–18 month cycles (because that is often how long a specific stack remains “current”),
- you want to combine computer science with a domain that has its own entry barriers: finance, industry, medicine, energy, law, cybersecurity, embedded/IoT, critical infrastructure,
- you accept that the beginning of your career may be more competitive and that “simple” tasks will be commoditized faster.
It makes less sense as a plan if:
- you expect a stable profession based mainly on routine coding according to specification,
- you do not want to deal with mathematics, networks, systems, databases, and security (that is, what distinguishes engineering from “gluing things together”),
- your goal is solely to enter the market as quickly as possible; in that case, a practical path (internships, projects, industry specializations) may sometimes be better, with university studies treated as a parallel option rather than the only one.
If someone asks, “What should I do so that this investment makes sense in the era of AGI?”, the most clear-headed answer is: build your advantage where AI is a tool, but risk and responsibility remain on the human side.
In practical terms, that usually means focusing on directions that tend to “hold value” longer than trends:
- security (AppSec, cloud security, secure SDLC, threat modeling),
- reliability and infrastructure (SRE, platform engineering, observability, cloud cost management),
- distributed systems and data (integrations, streaming, data quality, governance),
- embedded/edge/industrial robotics (more physics, standards, and responsibility),
- requirements engineering and architecture (making technical decisions under business constraints),
- compliance and technology auditing (especially in the EU, where regulatory pressure is increasing).
And one more note, less comfortable but necessary: if AGI in the sense of “almost everything cognitive can be automated” truly arrives quickly and broadly, then the question “Should I study computer science?” becomes part of a larger question: “How do we plan education and careers when the value of cognitive labor itself declines?” In that scenario, computer science is neither a safe haven nor a waste of time per se; rather, it is a way to understand and co-create the system that will reorganize the economy. Over a shorter horizon (several years to a dozen or so), however, a more realistic world is one in which AI increases productivity and changes the structure of work, rather than instantly wiping out an entire profession. This is suggested both by real productivity gains in selected tasks and by developers’ own caution in delegating high-responsibility stages of the process to AI.
In one sentence: computer science in the age of AGI can still be a worthwhile investment, but no longer as a “ticket to stable coding”; rather, it is an education for working at the intersection of tools, systems, risk, and responsibility, and for retraining continuously before the market forces you to.