
IT degree paths: what is resistant to automation? AI development scenarios and “moats” in the labor market

2026-02-13

The question “what should I study in IT to avoid being automated?” makes sense today, but it is often framed incorrectly. Automation rarely removes entire professions at once; more often, it cuts out sets of tasks, changes the economics of junior roles, and shifts the center of gravity toward responsibility, integration, and risk. Reports on the future of work regularly show the same pattern: the importance of AI-related skills is growing, but so too are skills in security, networking, and system resilience. The World Economic Forum in its Future of Jobs Report 2025 points to AI and big data as the fastest-growing skills area—closely followed by networks and cybersecurity.

In Poland in 2025, the job market began to rebound (more listings, less application pressure), but the structure of demand is shifting toward specialization. The No Fluff Jobs report “Rynek pracy IT 2025/2026” cites clear figures: +44% new job listings year over year and –45% in the average number of applications per offer, with the largest salary increases appearing, among other areas, in DevOps and Security.

That is a good starting point: “resilience to automation” in IT does not stem from AI being incapable, but from the fact that in the real world someone must bear responsibility, meet requirements, obtain access, sign off on decisions, pass audits, respond to incidents, or go to the factory floor.

Three AI development scenarios that change what a “safe degree” means

Below are three highly simplified scenarios. They are not forecasts—they are tools for thinking about risk.

1) AI stagnation + robotics disappointment

Language models stabilize, robotics remains expensive and unreliable, and automation continues along traditional paths (systems, scripting, low-code). In this world, fundamentals still matter: software engineering, distributed systems, databases, security. “AI skills” are useful, but not the central axis of a career.

2) Gradual AI progress (AGI in ~10–20 years) + slow robotics

This resembles “the last few years, extended further.” AI gradually takes over routine fragments of work: generating code, tests, documentation, supporting data analysis. Those who stay “above the tool” benefit: they design architecture, ensure data quality, take responsibility for production, security, and compliance.

In this scenario, research on GenAI highlights augmentation as the dominant path rather than full automation, but augmentation still changes the structure of competencies and the number of entry-level openings. The International Labour Organization pointed out as early as 2023 that GenAI’s primary impact would be task enhancement (with clerical roles most exposed), and its 2025 methodological updates continue in the same direction: what is measured is task exposure, not the “disappearance of occupations.”

3) Sudden AI breakthrough (AGI in 2–3 years) + robots everywhere within 5 years

Here, something qualitatively different happens: not just “task shifting,” but a productivity jump and pressure for massive economic reorganization. In the short term, risk for many roles increases, as the cost of producing software, analyses, content, and parts of services falls.

Paradoxically, a sudden breakthrough may be “systemically safer,” because it creates a political necessity to react. Rapid change is visible, large-scale, and harder to ignore, so the likelihood of intervention rises: regulation, redistribution, support programs, education reform, new auditing and safety obligations. With slow, gradual change, it is easier to fall out of the market quietly: rates drop first, then demand, then selectivity increases, and marginalization hits individuals one by one, without a clear political tipping point. (This is a socio-economic hypothesis, not a certainty, but the mechanism has appeared repeatedly in other technological transitions.)

“Moats” against AI: what actually makes work harder to automate

In practice, it is better to think not about an “automation-proof degree,” but about a combination: specialization + moat.

Regulatory/legal moat

Where requirements, audits, and liability are involved, technical feasibility alone is not enough for automation. In the EU this is particularly clear. The AI Act has a fixed implementation timetable, covering obligations for general-purpose AI models and high-risk systems. In addition, cybersecurity and operational resilience regulations create demand for security and compliance roles: the NIS2 Directive (transposition deadline: October 17, 2024) and the Digital Operational Resilience Act (DORA, applicable from January 17, 2025 in the financial sector).

Physical-world moat

Where work touches the material world (hardware, networks, OT, production systems, medical devices), automation is slower because it requires reliability, safety, servicing, and integration.

Reputation and relationship moat

In many organizations, access to production systems, architectural decisions, or security responsibilities is based on trust, track record, and accountability. AI may prepare a proposal; someone must defend it, sign it, and bear the consequences.

Analysis of selected IT specializations under these scenarios

There are no guaranteed “safe bets.” There are risk profiles—and ways to build moats.

1) Cybersecurity and privacy engineering

Why it holds up: attackers also use AI, while regulation and liability grow independently of model progress. NIS2 and DORA increase pressure on processes, incident reporting, risk management, and supply chains.

  • AI stagnation: stable demand; less hype, more practice (IAM, SOC, cloud security, incident response).
  • Gradual progress: AI automates parts of detection and triage, but increases threat volume and the need for oversight, testing, red teaming, and GRC.
  • Sudden progress: security becomes critical infrastructure; work shifts toward policy, supervision, autonomous system safety, and resilience at national and corporate levels.

Moats: regulation knowledge, certifications, incident experience, trusted reputation.

2) DevOps / SRE / Platform Engineering (cloud, reliability, cost control)

Why it holds up: production is not just a repository. SLA, on-call, incidents, dependencies, cloud costs, permissions, migrations, disaster recovery—all require operational responsibility.

  • AI stagnation: system and cloud complexity grows; AI as auxiliary tool.
  • Gradual progress: parts of configuration and diagnostics automate, but faster change increases risk and need for control.
  • Sudden progress: as “software everywhere” becomes cheaper, infrastructure, observability, cost control, and resilience matter even more.

Moats: infrastructure “physicality,” operational accountability, trusted access.

3) Computer science (software engineering) as a base degree

Still a good choice—but not as “I will write CRUD apps forever.” Routine applications and simple integrations face the most automation pressure.

  • AI stagnation: classic market—still selective for juniors.
  • Gradual progress: less work “from scratch,” more integration, reviews, security, testability, architecture, maintenance.
  • Sudden progress: programming splits into (a) product and responsibility roles, and (b) regulated/physical niches.

Moats: domain expertise (finance, health, industry), responsibility (tech lead, architecture), stakeholder relations.

4) Data Engineering / BI / Analytics (more pipelines than slides)

Why it holds up: AI is only as good as the data and access behind it. Organizations struggle with data chaos, legal constraints, fragmented sources, and quality demands.

  • AI stagnation: demand grows with digitization.
  • Gradual progress: AI simplifies reporting but increases demand for solid data models, governance, lineage, and quality control.
  • Sudden progress: advantage lies with those who control data and can provide it without violating law or security.

Moats: privacy and compliance, domain expertise, responsibility for data quality.

5) Electronics / Embedded / IoT / Networks / OT

Why it holds up: this is the domain of physical constraints—hardware limitations, reliability, certifications, real-world testing.

  • AI stagnation + robotics disappointment: fewer spectacular robot deployments, but embedded systems and networks still power automotive, energy, telecom, industry.
  • Gradual progress: AI becomes a component (e.g., at the network edge), but the core remains integration and safety.
  • Sudden progress: demand rises, but work shifts toward safety, reliability, and standardization.

Moats: physicality + industry regulation + practitioner reputation.

6) Automation and robotics

Highly scenario-dependent.

  • AI stagnation + robotics disappointment: risk of missing the mass market; classic PLC and industrial niches remain.

  • Gradual progress: stable industrial/logistics growth; fewer humanoids, more task-specific robots.

  • Sudden progress: major opportunity—but with heavy responsibility (safety, certification, integration). Building the robot is not enough; deploying it safely is key.

Moats: physicality + functional safety + deployment experience.

7) IT in regulated and safety-critical sectors

Often not a separate degree, but a specialization entered after computer science or electronics.

The regulatory moats here are concrete. In automotive, functional safety standards such as ISO 26262 formalize safety processes and evidence. In medical device software, standards such as IEC 62304 play a similar role. In finance, DORA increases pressure on ICT resilience and risk management.

  • AI stagnation: stable—regulation and accountability persist.

  • Gradual progress: AI enters processes, but validation, documentation, and auditability grow in importance.

  • Sudden progress: certification, testing, and the human sign-off under critical systems become even more essential.

Moats: law, audit, liability, formalized processes.

8) UX / HCI / Business Analysis / Product (at the intersection of tech and people)

Often underestimated by those choosing “hard IT,” yet potentially a strong moat: understanding problems, negotiating priorities, working with stakeholders.

  • AI stagnation: grows with product maturity.

  • Gradual progress: AI accelerates artifact production, but not decision-making about trade-offs and responsibility.

  • Sudden progress: organizations must redesign processes and products; someone must lead that change.

Moats: reputation, relationships, trust, domain insight.

What to study if you want “resilience”: a practical formula (without miracle promises)

  1. Choose a fundamentals-based degree (computer science, electronics, automation) and treat it as a base, not a finished profession.

  2. Build a moat during your studies:

  • regulation and compliance (AI Act, NIS2, DORA),

  • security and resilience (Security, SRE),

  • physical systems (embedded/OT),

  • domain expertise (finance, med-tech, industry).

  3. Collect evidence of competence, not just grades: internships, production projects, incident exposure (even via student groups/CTFs), “boring” tasks like monitoring, logging, regression testing.

  4. Train soft elements that matter in automation-heavy environments: writing technical decisions, communicating risk, negotiating scope, handling ambiguous requirements.

Finally, an uncomfortable but honest point: the slower automation progresses, the more quietly the market can rearrange itself—and the easier it is to fall out of circulation before political pressure emerges to “rescue” anyone. That is why a sensible strategy is not “find an automation-proof degree,” but rather:

enter IT through solid fundamentals, then position yourself behind a moat—regulatory, physical, or reputational—before competition turns into a crowd.