Legally Regulated Professions – The Only “Safe” Degrees in the Age of AI? A Perspective from the IT Market

2026-02-16

The claim that legally regulated professions are the “only safe” educational path in the age of AI resurfaces in waves—usually when the tech market becomes more selective and generative tools accelerate the automation of knowledge work. It sounds tempting: if access to a profession is restricted by regulations, exams, and licenses, then demand should be stable and competition lower.

The reality, however, is less romantic and more market-driven: regulation may slow down change, but it does not guarantee immunity to technological shifts, evolving business models, or public policy changes.

Below, I break the topic down into its components—what a regulated profession actually is, why it “seems safe,” how AI fits into this equation, and where the Polish IT labor market enters the picture.

1) What exactly is a “regulated profession,” and why does it matter?

Under EU law and in practical terms, a profession is "regulated" when access to practicing it (or to using a professional title) depends on meeting requirements set by the laws of a given country: specific education, professional experience, exams, entry in a register, membership in a professional body, etc. Crucially, Member States decide for themselves which professions to regulate; the same profession may be regulated in one country and not in another.

In practice, official lists and qualification-recognition procedures are published openly (in Poland, among other channels, through national information services and references to EU databases), and the European Commission maintains a database of regulated professions in EU/EEA countries.

Regulation is not a “reward” for a profession. It is a tool meant to protect the public interest (patient safety, consumer protection, financial system stability, construction safety, etc.). When the risk of error is high, there is a strong incentive to limit access to the profession.

2) Why do people believe regulated professions are “AI-resistant”?

There are three rational reasons why people associate regulation with career security:

First, barriers to entry. Years of study, internships, exams, and formal supervision limit the number of people who can legally provide services. In the short term, this stabilizes the market (you cannot "inject" thousands of new specialists within six months).

Second, liability and legal risk. Where the consequences of mistakes are severe, organizations want a licensed human being—someone with insurance, a signature, and disciplinary responsibility. AI can suggest solutions, but someone must ultimately “take responsibility.”

Third, social trust. A patient, client, or court does not have to trust a model. They are expected to trust procedures and the professional operating within established standards.

The problem is that these mechanisms relate primarily to the stability of access to a professional title, not to the stability of the scope of work or the number of available positions.

3) AI does not need to eliminate a profession to eliminate “junior roles”

The most visible effect of generative AI on the labor market does not have to involve mass elimination of entire professions. More often, it looks like this:

  • less routine work traditionally performed by juniors and trainees (research, summaries, simple analyses, first drafts of documents),
  • greater emphasis on quality control, accountability, and process integration,
  • growing advantage for those who can “manage” tools and risk.

Surveys of large employers show explicit expectations of headcount reductions where AI can automate tasks: reports by organizations such as the World Economic Forum have indicated that a meaningful share of employers anticipate job cuts for this reason.

This is particularly important for people choosing their studies: even if a profession remains regulated, the "bottom rung of the ladder" may shrink, and entry becomes harder because there are fewer training-level positions where the craft is traditionally learned.

4) Where does Polish IT fit into this—and why did the debate erupt at all?

In Poland, the narrative about the “end of the IT gold rush” mixes with data showing stabilization after declines in 2023–2024. Industry sources point to a rebound in job postings in 2025 and a more selective market: fewer mass recruitment drives, greater emphasis on specialization and experience.

In practice, this triggers two reactions:

  • Some people begin searching for “safer” degree paths—ideally state-stamped and regulated.
  • Some companies shift expectations: less “we’ll train you,” more “come ready.”

And here a simple but misleading shortcut emerges: if IT has become more competitive, perhaps “only” regulated professions guarantee security.

They do not. They simply offer a different risk profile.

5) What regulation actually protects—and what it does not

Regulation effectively protects:

  • professional titles and practice rights (only those with a license may legally perform certain tasks),
  • minimum entry standards,
  • certain segments of public demand (e.g., healthcare systems will always need defined roles).

Regulation weakly protects:

  • the number of available internships and residency spots,
  • wage levels (these still depend on market forces, budgets, demographics),
  • scope of tasks (AI can “extract” entire bundles of routine activities),
  • pace of work and productivity norms (AI often means: if you can work faster, you will be expected to deliver more).

If many people flee IT for regulated fields, the effect may be paradoxical: greater competitive pressure at entry level, longer queues for specialization, and more mid-path disappointment. Barriers to entry work both ways: they protect the profession but also cost time and flexibility.

6) Regulation in the age of AI: “regulated profession” vs. “regulated industry”

Something interesting is happening alongside traditional professions. Increasingly, it is not a specific profession that is regulated, but the market and its technological risks, and that creates demand for people who can meet legal and audit requirements.

Examples particularly relevant for IT:

  • AI Act: This EU regulation establishes a risk-based framework for AI systems. It entered into force on 1 August 2024, with obligations phased in (including prohibited practices and “AI literacy” requirements from 2 February 2025; GPAI model obligations from 2 August 2025; broader applicability of many provisions from 2 August 2026, and longer transitional periods for certain high-risk systems embedded in regulated products). This does not create a “regulated software engineer” profession, but it does create sustained demand for governance, documentation, risk assessment, compliance, testing, and security competencies.
  • NIS2: The EU cybersecurity directive strengthens organizational accountability and places oversight responsibility on management bodies. It explicitly requires approval and supervision of cyber risk management measures and allows for liability in case of breaches. This fuels demand in cybersecurity, audit, SOC, GRC, and “security by design”—areas where IT meets regulation.
  • DORA: In the financial sector, it introduces ICT operational resilience and risk management requirements, directly applicable as an EU regulation. Again: not a “regulated profession,” but regulated responsibility—creating jobs for those who can combine IT, risk, and process.
  • DPO (Data Protection Officer) in practice: The GDPR does not turn the DPO into a traditional “regulated profession,” but it formalizes the role and requires “expert knowledge.” The market effect is similar: rising demand for professionals who understand both law and systems.

The conclusion: it is not specific degrees that tend to be "safe," but intersections of competencies where the law enforces continuous processes and auditability.
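
To make this concrete, here is a minimal, illustrative Python sketch of the kind of internal tooling this demand points to: an inventory of AI systems tagged with the AI Act's risk tiers, plus a simple check for missing or stale risk assessments. Everything here (the record fields, the overdue_assessments helper, the one-year review cadence) is an assumption made for illustration; the AI Act itself prescribes none of these details.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    """The AI Act's risk-based tiers (simplified labels)."""
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    TRANSPARENCY = "limited risk / transparency obligations"
    MINIMAL = "minimal risk"


@dataclass
class AISystemRecord:
    """One entry in a hypothetical internal AI-system inventory."""
    name: str
    owner: str
    tier: RiskTier
    last_risk_assessment: date | None = None  # None = never assessed


def overdue_assessments(inventory: list[AISystemRecord],
                        today: date,
                        max_age_days: int = 365) -> list[AISystemRecord]:
    """Flag high-risk systems with a missing or stale risk assessment."""
    flagged = []
    for record in inventory:
        if record.tier is not RiskTier.HIGH:
            continue  # only high-risk systems carry the heavy obligations
        assessed = record.last_risk_assessment
        if assessed is None or (today - assessed).days > max_age_days:
            flagged.append(record)
    return flagged


inventory = [
    AISystemRecord("cv-screening", "HR", RiskTier.HIGH, date(2024, 5, 1)),
    AISystemRecord("chat-assistant", "Support", RiskTier.TRANSPARENCY),
]
print([r.name for r in overdue_assessments(inventory, date(2026, 2, 16))])
# -> ['cv-screening']
```

The code itself is trivial; the point is the shape of the work it hints at: inventories, risk classification, review cadences, and evidence that reviews actually happened. That is what "continuous processes and auditability" look like from the inside.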

7) Does this mean the best strategy is to abandon IT and move into regulated fields?

No. That is an overly simplistic prescription—and simple prescriptions are usually the most expensive in the long term.

A more reasonable interpretation:

  • regulated professions have a different risk profile (slower change, greater responsibility, more difficult entry),
  • IT has higher volatility, but also greater flexibility, portfolio-building potential, and access to international work,
  • AI shifts value toward roles involving accountability, security, compliance, and designing error-resistant systems—often roles at the intersection of regulation and technology.

If someone today asks about “safe degrees,” they are really asking where demand will exist, who will bear responsibility, and how difficult entry will be. Regulation is one factor—but not the only one.

8) How to approach choosing a field of study in the AI era—pragmatically, not magically

If you are choosing a path, it makes sense to test it with three questions:

  1. In this field, must someone bear legal and reputational responsibility? The higher the stakes of an error, the longer a “human with a signature” will remain central.

  2. Does the law or regulator require processes, documentation, and audits? Where compliance is an ongoing cost of operating (finance, healthcare, critical infrastructure, data), demand for certain competencies tends not to disappear—it only changes in profile.

  3. Can I build an advantage by combining two domains? Law + IT (legaltech, privacy engineering), medicine + data, finance + cyber, engineering + safety, AI + risk management. These intersections are often safer than “pure” specialization.

Finally, something less media-friendly: "safe degrees" are those after which you can keep learning and updating your skills faster than the market changes. Regulation may buy you time. It will not give you character.