AI Hiring Is Becoming a Definition Problem

May 11, 2026

Why more businesses know they need AI talent, but still struggle to define what they are actually hiring for

Demand for AI talent continues to grow, but the biggest challenge in the current market is not access to candidates; it is definition.

More businesses know they need AI capability in some form, whether that is building internal tools, deploying copilots, implementing automation or exploring agentic workflows, but many are still unclear on the type of expertise they actually need to make those projects successful.

As a result, hiring conversations are becoming increasingly misaligned before the process has even started.

According to Gartner, only 31% of recruiting teams currently use labour market data to help shape job design and hiring strategy, despite organisations rapidly restructuring roles around AI adoption.

The term “AI Engineer” is a good example of this.

For some businesses, that means a software engineer integrating APIs and building features on top of existing models. For others, it means someone with a machine learning background who can train, fine-tune and evaluate models directly. Both are valid roles, but they are fundamentally different skillsets.

The challenge is that they are often being grouped under the same title.

The market is moving faster than role definition

Part of the issue is how quickly the market has evolved.

Many AI-focused roles barely existed in their current form a few years ago and new specialisms continue to emerge at pace. Titles are broad, responsibilities vary significantly between organisations and expectations are often inconsistent internally.

Global demand for AI talent now exceeds supply by more than 3:1, with over 1.6 million open AI-related positions globally and fewer than 520,000 qualified candidates estimated to be available.

In some hiring processes, one stakeholder may believe a candidate is a strong fit while another disagrees entirely, not because the candidate is wrong for the role, but because there is no shared understanding of what the role is supposed to be.

That lack of clarity creates problems quickly.

Hiring processes become longer, interview feedback becomes inconsistent and strong candidates are often lost because businesses are still trying to define the requirement while hiring for it.

Years of experience are becoming less relevant

AI is also challenging traditional ideas around seniority.

In more established areas of technology, years of experience have historically been one of the clearest indicators of capability. In AI, that is becoming less reliable.

Someone who has spent three years focused entirely on AI implementation, LLM workflows or machine learning infrastructure may be significantly more relevant than someone with fifteen years of broader engineering experience but limited exposure to modern AI systems.

The strongest candidates are increasingly defined by use cases, delivery experience and technical depth rather than simply tenure.

This shift is already changing hiring behaviour. Research into AI hiring trends shows that AI skills now command a wage premium of around 23%, while formal degree requirements for AI roles have declined significantly over recent years as employers place more value on demonstrable capability.

That shift is forcing businesses to rethink how they evaluate talent.

The difference between building AI and using AI

One of the clearest distinctions emerging in the market is between businesses building models and businesses building on top of them.

This is where hiring conversations often become much more focused.

Companies building and training models typically require machine learning engineers or more research-focused AI specialists with deep expertise in training, evaluation and optimisation.

Businesses building features on top of existing models often need software engineers with strong backend capability, API integration experience and understanding of AI workflows in production environments.

Both may sit under “AI Engineering”, but they solve very different problems.

This is also why more technical hiring conversations are now centring on questions like:

  • Are you building models or building on top of existing models?
  • Have candidates shipped AI features into production?
  • Is the requirement more software engineering focused or more machine learning focused?
  • What practical AI use cases have candidates actually delivered?

The answers usually clarify the hiring need far more effectively than the job title itself.

Why AI projects are still failing

The ambiguity around hiring is also contributing to wider delivery problems.

Gartner estimates that at least 50% of generative AI projects are abandoned after proof of concept due to unclear business value, poor data quality, escalating costs or weak governance.

Other reports suggest failure rates may be even higher in practice, with many organisations still struggling to move beyond experimentation into scalable operational use cases.

In many cases, the issue is not the technology itself; it is that businesses start with the assumption that they need AI before clearly defining the actual use case.

There is often pressure to “implement AI” without enough clarity around what problem it is solving, how it integrates into existing workflows or where human oversight still needs to sit.

The organisations seeing the strongest results are typically the ones approaching AI much more practically, focusing on specific operational problems, defined outcomes and human-in-the-loop processes rather than treating AI as a standalone solution.

AI hiring is becoming more infrastructure-led

Another shift happening beneath the surface is that AI hiring is no longer just about models and applications.

Infrastructure, governance and sovereignty are becoming increasingly important parts of the conversation.

Across Europe particularly, more organisations are looking at how dependent they are on external providers, cloud platforms and non-European AI ecosystems. That is driving increased investment into local infrastructure, AI governance, security and operational resilience.

At the same time, businesses are also starting to realise that AI systems require governance in the same way human employees do. Agentic AI workflows, autonomous systems and non-human identities are creating entirely new security and access challenges that many organisations are not yet prepared for.

Gartner predicts that by 2028, 25% of enterprise generative AI applications will experience multiple security incidents every year as adoption accelerates.

This is beginning to create demand for a much broader range of AI-focused roles beyond traditional engineering positions alone.

The longer-term talent gap businesses are starting to worry about

One of the more interesting conversations now emerging in the market is what AI means for long-term workforce development.

As coding assistants and AI-powered development tools become more widely adopted, many businesses are already reducing reliance on junior engineering resource for lower-level tasks.

On the surface, that improves efficiency.

The concern is what happens long term if fewer junior developers are gaining the foundational architectural experience that historically helped them progress into senior engineering and architecture roles later in their careers.

Gartner recently warned that organisations pausing entry-level hiring due to AI adoption may face significant talent shortages and higher hiring costs later in the decade as senior capability gaps emerge.

At the same time, Gartner predicts that by 2030, half of enterprises could face irreversible skill shortages in critical roles due to AI-related skills erosion and overreliance on automation.

That is starting to shift attention towards architecture, governance and senior delivery capability as areas likely to become increasingly valuable over time.

What businesses are starting to prioritise

The companies navigating this best are usually the ones that spend more time defining the problem before hiring for the solution.

Rather than starting with generic AI job titles, they are becoming much clearer on:

  • What they are actually trying to build
  • Whether the need is product, engineering, ML or infrastructure focused
  • What success looks like commercially
  • Whether the requirement is long term or project-based
  • Where AI genuinely adds value within existing workflows

That clarity is becoming a major differentiator in the hiring process.

Because right now, the biggest challenge in AI hiring is often not finding talent.

It is defining what “good” actually looks like.

Get in touch

If you are currently hiring within AI or trying to understand how the market is evolving, we are always happy to share what we are seeing across the space.

📩 info@weareorbis.com
