5.Mar.2026

AI tools are now regulated in the EU. Staffing businesses have until August 2026 to comply.


If your business uses AI to screen, rank, or match candidates, or to support certain employment decisions, the EU now regulates those tools as high-risk systems. Here is what changed, what it means for your operating model, and what you should be doing about it.

GDPR WAS ABOUT THE DATA - THE AI ACT IS ABOUT THE TOOLS

GDPR required you to rethink how you handle personal data. The EU AI Act requires you to rethink how you use the tools that process it.

Under the EU AI Act (Regulation 2024/1689), AI systems used in employment decisions fall into the high-risk category. That covers recruitment, selection, targeted job advertising, candidate evaluation, performance monitoring, and certain decisions about compliance, contract terms or termination. And it does not matter whether you built the technology. If your business deploys it, compliance is your responsibility, even if the platform vendor says otherwise.

Starting 2 August 2026, each of those tools will need mandatory risk assessments, technical documentation, bias testing, human oversight, transparency disclosures, and continuous monitoring. For staffing businesses, Employers of Record (EORs), and workforce platforms, which sit between employers, technology, and workers, the obligations are more demanding than they are for a single corporate HR department.

WHY STAFFING BUSINESSES FACE A HARDER COMPLIANCE PROBLEM

Most commentary on the AI Act’s employment provisions is written for in-house HR teams at single employers. That misses the reality for staffing businesses.

Think about the typical staffing supply chain. A Vendor Management System (VMS) platform uses algorithmic matching to surface candidates. A Recruitment Process Outsourcing (RPO) provider runs AI-powered screening across thousands of applicants. A staffing agency deploys chatbot pre-qualification. An EOR uses AI to manage onboarding, compliance, termination and performance across multiple jurisdictions. At every stage, AI systems are making or influencing decisions about people’s livelihoods, and the Act does not distinguish between entities in the chain based on who owns the technology. It focuses on who deploys it and whose rights are affected.

Under Article 3 of the Act, a “deployer” is any legal person using an AI system under its authority. If your staffing firm selects, configures, or relies on an AI tool to inform workforce decisions, you are a deployer, even if you did not build the technology and even if the platform vendor tells you compliance is their responsibility. The Act assigns obligations to both providers (the vendors who build the systems) and deployers (the businesses that use them). You cannot pass your compliance obligations to a technology partner any more than you can under the GDPR.

EXTRATERRITORIAL REACH

The Act has extraterritorial reach. If the output of your AI system affects anyone in the EU (a candidate screened for a role in Berlin, a contractor evaluated in Dublin, a temp worker matched to an assignment in Amsterdam), the regulation applies regardless of where your company is headquartered or where the technology is hosted.

HUMAN OVERSIGHT IS NOT OPTIONAL

Every high-risk AI system must be used in a way that allows effective human oversight. No AI tool should make final placement, rejection, or evaluation decisions without a qualified human in the loop. Your recruiters and account managers need to understand how the system works, what its limitations are, and when to override its outputs. Article 14 requires the people exercising oversight to detect and correct errors, including discriminatory patterns. A policy document alone does not satisfy this.
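To make that concrete, here is a minimal sketch in Python of what a human-in-the-loop gate can look like in a placement workflow. The names (Recommendation, finalise), the score threshold, and the fields are illustrative assumptions, not taken from any particular platform or mandated by the Act; the point is simply that no outcome is final without a named, accountable reviewer, and that overrides are recorded.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Recommendation:
    candidate_id: str
    role_id: str
    ai_score: float          # model output, e.g. a match probability
    ai_rationale: str        # factors the system surfaced

@dataclass
class Decision:
    recommendation: Recommendation
    outcome: str             # "advance" or "reject"
    reviewer_id: str         # the accountable human
    override: bool           # True if the human disagreed with the AI
    reason: str
    decided_at: datetime

def finalise(rec: Recommendation, outcome: str, reviewer_id: str, reason: str) -> Decision:
    """No placement or rejection becomes final until a named human signs off.
    The reviewer's identity, reasoning, and any override of the AI output are
    recorded, which supports both oversight and later explanation requests."""
    if not reviewer_id:
        raise ValueError("A qualified human reviewer must sign off on every decision")
    ai_says_advance = rec.ai_score >= 0.5   # illustrative threshold
    return Decision(
        recommendation=rec,
        outcome=outcome,
        reviewer_id=reviewer_id,
        override=(outcome == "advance") != ai_says_advance,
        reason=reason,
        decided_at=datetime.now(timezone.utc),
    )
```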

CANDIDATES AND WORKERS MUST BE TOLD

Before deploying a high-risk AI system, Article 26(7) requires you to inform workers’ representatives and affected workers. In staffing, this extends to candidates and contingent workers. They have a right to know that AI is being used, how it functions, and what role it plays in decisions that affect them. Under Article 86, individuals subject to decisions made by high-risk AI systems can request an explanation of the main factors behind those decisions. For high-volume recruitment, your disclosure process needs to be operational and visible, not buried in a terms-of-service document.

DATA QUALITY AND BIAS MONITORING NEED REAL ATTENTION

If you exercise control over input data fed into high-risk AI systems, you must ensure that data is relevant, representative, and free from bias. Staffing agencies, whose candidate pools are often skewed by geography, language, or existing network effects, face a particularly substantive version of this obligation. You need to know what data your AI tools are trained on, how they handle protected characteristics, and whether they produce equitable outcomes across demographics.
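One widely used screen for the last point, borrowed from employment analytics rather than prescribed by the Act, is the adverse-impact ratio: compare selection rates across demographic groups and flag any group whose rate falls below roughly 80% of the best-performing group's rate (the "four-fifths" rule of thumb). A minimal sketch, with made-up numbers:

```python
def adverse_impact_ratios(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8) -> dict[str, float]:
    """outcomes maps group -> (selected, total). Returns each group's selection
    rate relative to the best-performing group and flags ratios below the
    threshold for review."""
    rates = {g: selected / total for g, (selected, total) in outcomes.items() if total}
    best = max(rates.values())
    ratios = {g: rate / best for g, rate in rates.items()}
    for group, ratio in ratios.items():
        if ratio < threshold:
            print(f"Review needed: {group} selection rate is {ratio:.0%} of the best group's")
    return ratios

# Illustrative screening outcomes by demographic group:
# group_a selects 30%, group_b selects 15% -> ratio 0.5, flagged
adverse_impact_ratios({"group_a": (120, 400), "group_b": (45, 300)})
```

Passing a screen like this is not the same as producing equitable outcomes, but failing it is a clear signal that a tool needs closer scrutiny before the August 2026 deadline.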

LOGS AND DOCUMENTATION ARE AN INFRASTRUCTURE REQUIREMENT

Deployers must keep logs generated by high-risk AI systems for at least six months. Combined with the requirement to monitor system performance on an ongoing basis, this creates an operational infrastructure need that many staffing businesses have not yet scoped.
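In practice that means your data retention policy needs an exception: routine purge jobs cannot be allowed to delete AI system logs before the minimum period has elapsed. A minimal sketch of such a retention guard, with illustrative names and a six-month period approximated in days:

```python
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)  # "at least six months"; longer periods may apply

def purgeable(log_created_at: datetime, now: datetime | None = None) -> bool:
    """A log entry may only be deleted once the minimum retention period has
    elapsed, measured from the entry's creation time."""
    now = now or datetime.now(timezone.utc)
    return now - log_created_at >= MIN_RETENTION
```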

THE TIMELINE

The Act entered into force on 1 August 2024. Certain provisions already apply. Since 2 February 2025, prohibited AI practices (including emotion recognition in the workplace and biometric categorisation) have been banned, and AI literacy obligations have applied.

The main date for staffing businesses is 2 August 2026, when the full suite of high-risk system obligations becomes enforceable for Annex III systems, including all employment-related AI.

Some industry commentary has suggested that the European Commission’s Digital Omnibus package, proposed in November 2025, will push this deadline back. But this is a proposal, not enacted law. It must still pass through Parliament and Council, and until it does, 2 August 2026 remains the operative deadline.

THE PENALTY FRAMEWORK

The Act’s enforcement structure follows a tiered model. For deployers who fail to meet their high-risk system obligations, fines can reach up to EUR 15 million or 3% of global annual turnover, whichever is higher. For use of prohibited AI practices, the ceiling rises to EUR 35 million or 7% of turnover. For providing incorrect or misleading information to regulators, up to EUR 7.5 million or 1%.
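The "whichever is higher" structure matters more than it may appear: for large groups, the percentage-based ceiling quickly overtakes the fixed amount. A short worked sketch, using the tiers above (the function name and the turnover figure are illustrative):

```python
def max_fine(turnover_eur: float, tier: str) -> float:
    """Ceiling under the Act's tiered model: the higher of a fixed amount
    and a percentage of global annual turnover."""
    tiers = {
        "prohibited_practice": (35_000_000, 0.07),
        "high_risk_obligations": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }
    fixed, pct = tiers[tier]
    return max(fixed, turnover_eur * pct)

# A staffing group with EUR 800m global turnover:
print(max_fine(800_000_000, "high_risk_obligations"))  # 24,000,000 -> 3% exceeds the fixed floor
```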

National market surveillance authorities, not a single EU-wide regulator, handle enforcement for AI systems. In January 2026, Finland became the first member state to give its market surveillance authority enforcement powers under Article 99 of the AI Act, making those powers fully operational. Other member states are following. This decentralised model means enforcement priorities and interpretive approaches may differ across member states. For multi-country staffing operations, that variation creates planning challenges and, for those who engage early with national regulators, potential advantages.

The fine itself, though, is often the wrong thing to focus on. Regulators also have the power to suspend or recall non-compliant AI systems from the market. For a staffing business whose operating model depends on technology-enabled matching and screening, that is the more commercially significant risk: a core tool pulled mid-contract, with immediate operational disruption.

SOME OF YOUR TOOLS MIGHT BE EXEMPT. MOST OF THEM PROBABLY AREN’T

There is a provision in the Act (Article 6(3)) that gets very little attention in industry commentary, and staffing operators should know about it. It sets out four conditions under which an AI system used in an employment context might fall outside the high-risk classification, even though it operates in a high-risk area.

The exemptions cover four types of system: those performing narrow procedural tasks (sorting documents, flagging duplicates), those improving the output of a completed human activity (cleaning up contract language), those detecting patterns in prior human decisions (flagging inconsistencies in past performance ratings), and those performing purely preparatory work (indexing, searching, or translating material before a human decides).

However, Article 6(3), read together with Recital 53, explicitly states that none of those exemptions apply if the AI system involves profiling within the meaning of Article 4(4) GDPR. Profiling means any automated processing of personal data that evaluates personal aspects of an individual, including analysing or predicting work performance, reliability, behaviour, or location.

Most candidate matching tools, ranking algorithms, and workforce allocation systems do exactly that. They take personal data, apply automated logic, and produce predictions about suitability or fit. That is profiling, and it renders the exemption unavailable in these cases.

The practical implication: when you run your AI inventory and start classifying systems, you may look at the Article 6(3) exemptions and assume several of your tools are outside scope. For the tools that handle procedural or preparatory tasks, that may be correct. For anything that matches, ranks, evaluates, or allocates workers based on personal characteristics, it almost certainly is not.
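The decision rule described above is mechanical enough to encode directly in an inventory tool. A minimal sketch, where the category labels are paraphrases of Article 6(3) and the profiling flag reflects the Recital 53 carve-out; this is a classification aid, not legal advice:

```python
EXEMPT_CATEGORIES = {
    "narrow_procedural_task",                    # e.g. sorting documents, flagging duplicates
    "improves_completed_human_work",             # e.g. cleaning up contract language
    "detects_patterns_in_prior_human_decisions", # e.g. flagging rating inconsistencies
    "purely_preparatory",                        # e.g. indexing, search, translation
}

def article_6_3_exemption_available(category: str, involves_profiling: bool) -> bool:
    """The exemption can only apply to the four listed categories, and never
    where the system profiles individuals within the meaning of Article 4(4)
    GDPR: profiling defeats the exemption regardless of category."""
    if involves_profiling:
        return False
    return category in EXEMPT_CATEGORIES

# A candidate-ranking tool predicts suitability from personal data -> profiling:
print(article_6_3_exemption_available("purely_preparatory", involves_profiling=True))  # False
```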

THIS IS A COMPETITIVE QUESTION, NOT JUST A COMPLIANCE ONE

The most useful way to think about the EU AI Act is as a market-shaping event that will separate operationally mature staffing businesses from those running on unexamined technology.

Enterprise clients with EU operations, particularly in regulated industries, are already building AI governance into their vendor selection criteria. Staffing businesses that can demonstrate compliant AI practices, transparent candidate processes, and documented oversight frameworks will have a measurable advantage in RFPs and preferred supplier negotiations.

There is a broader pattern here too. The EU AI Act is the first major AI regulation of its kind, but it will not be the last. Similar frameworks are taking shape in the UK, Canada, and at state level in the US. Investing in AI governance infrastructure now builds capacity that transfers across jurisdictions. The businesses treating compliance as a one-off project are solving for today. The ones building governance into their operating model are solving for the next decade.

WHAT TO DO NOW TO PREPARE YOUR BUSINESS

1. Map every AI system your business uses that touches candidate or worker decisions: screening, matching, ranking, chatbots, performance analytics, scheduling algorithms. Include tools embedded in third-party platforms you deploy. You cannot comply with obligations you do not know you have.

2. For each tool, determine whether you are the deployer, whether the system falls within Annex III’s employment category, and who the provider is. Document this in a way your compliance and operations teams can work from; a minimal record structure is sketched after this list.

3. Contact every AI vendor in your stack and ask specific questions: Are they aware of the EU AI Act? Are they pursuing conformity assessment? Can they provide technical documentation, bias audit results, and usage logs? Will they contractually commit to supporting your deployer obligations?

4. Identify who in your organisation will oversee each high-risk AI system. Make sure those people have the training and authority to understand, monitor, and override AI outputs. Document the process. AI literacy obligations are already in effect, so this is a current requirement.

5. Draft the notifications, disclosures, and explanation frameworks you will need to inform candidates and workers about AI use. In a labour market where candidate experience drives placement volume, being clear and upfront about how you use technology is a differentiator, not a burden.
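As a starting point for steps 1 and 2, here is a minimal sketch of the kind of record the inventory exercise produces. Every field name here is illustrative, one plausible shape for the documentation rather than a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in the AI inventory: what the tool does, who provides it,
    and where your business sits in the chain."""
    name: str
    purpose: str                        # e.g. "candidate matching", "chatbot pre-qualification"
    provider: str                       # the vendor who built the system
    we_are_deployer: bool               # do we use it under our own authority?
    annex_iii_employment: bool          # does it fall in the employment category?
    involves_profiling: bool            # Article 4(4) GDPR; defeats Article 6(3) exemptions
    oversight_owner: str | None = None  # named human from step 4
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="VMS matching module",
        purpose="algorithmic candidate matching",
        provider="(vendor name)",
        we_are_deployer=True,
        annex_iii_employment=True,
        involves_profiling=True,
    ),
]
```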

The staffing businesses that will handle this transition best are the ones that start now, move methodically, and treat AI governance as an operating capability rather than a legal exercise. The EU AI Act is the clearest signal yet that the era of unexamined AI deployment in workforce decisions is ending. For operators willing to get ahead of it, the reward is not just compliance but the trust of clients and candidates who increasingly expect it.
