
AI Legislation for Business: What UK and Irish SMEs Need to Know

Updated by: Ciaran Connolly
Reviewed by: Esraa Ali

AI legislation for business is no longer a concern only for large technology companies. The EU AI Act, the UK’s evolving AI regulatory framework, and updated data protection obligations now have direct implications for any business that uses AI tools, automates decisions, or processes personal data with AI systems. That includes most SMEs.

This guide explains what the current regulatory landscape actually requires, which obligations are most likely to affect small and medium-sized businesses in the UK and Ireland, and what practical steps you can take now to reduce compliance risk while continuing to use AI effectively.

What Is AI Legislation and Why Does It Affect Your Business?

Infographic: the impact of AI legislation, covering what it is, the drivers of regulation, SME impact, and the commercial case.

AI legislation refers to the laws, regulations, and official frameworks that govern how artificial intelligence systems can be developed, deployed, and used. Until recently, AI was largely unregulated in most jurisdictions. That has changed substantially since 2023.

The drivers are straightforward. AI systems now influence consequential decisions: credit scoring, recruitment screening, content moderation, medical diagnosis, predictive policing, insurance pricing. Governments and regulators have concluded that leaving these systems entirely unregulated creates unacceptable risks for individuals and society.

Why SMEs are affected more than they expect

A common assumption among small business owners is that AI regulation applies to the companies building AI, not the companies using it. This is partly true for the most stringent obligations, but it is not the whole picture.

Under the EU AI Act, obligations attach to both providers (companies that develop AI systems) and deployers (companies that use AI systems in their operations). Most SMEs are deployers. If you use an AI recruitment tool, an automated credit decision system, a customer-facing chatbot, or an AI content moderation system, you have deployer obligations under the Act.

The UK framework is less prescriptive, but the same principle applies: if you use AI in ways that affect individuals, you need to be able to demonstrate that you are doing so responsibly and with appropriate safeguards.

The commercial case for compliance

Beyond the legal requirement, there is a commercial case. Business customers, procurement teams, and enterprise clients are increasingly asking suppliers to demonstrate responsible AI use as part of vendor assessment processes. A business that cannot answer basic questions about how its AI tools handle personal data, or what safeguards are in place, is at a competitive disadvantage in enterprise sales and public sector procurement.

The EU AI Act: What It Requires and Who It Affects


The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. It was formally adopted in May 2024, published in the EU Official Journal in July 2024, and entered into force in August 2024. Its obligations apply in stages: the prohibition on unacceptable-risk AI systems applies from February 2025, obligations for general-purpose AI models from August 2025, and most remaining provisions, including the high-risk regime, from August 2026, with an extended transition to August 2027 for high-risk AI embedded in products already covered by EU product legislation.

The risk-based classification system

The Act classifies AI systems into four risk categories, each with different obligations.

Unacceptable risk (prohibited): AI systems that pose a clear threat to fundamental rights. This includes real-time biometric surveillance in public spaces (with narrow exceptions), social scoring systems, and AI that exploits psychological vulnerabilities. These systems are banned entirely.

High risk: AI systems used in areas where errors can cause serious harm to individuals. This covers AI in recruitment and employment decisions, credit scoring and insurance assessment, educational evaluation, critical infrastructure, law enforcement, and migration processing. Businesses deploying high-risk AI must comply with detailed requirements including conformity assessments, data governance documentation, human oversight mechanisms, and registration in an EU database.

Limited risk: AI systems like chatbots or deepfake generators that interact with people or generate content. The main obligation is transparency: users must be told they are interacting with AI.

Minimal risk: The majority of AI applications, including most business productivity tools, spam filters, and recommendation systems. No specific obligations beyond general good practice.
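
To make the tiers concrete, here is a minimal sketch, in Python, of how an SME might run a first-pass triage of its own AI uses. The tier names mirror the Act, but the keyword checks, function name, and example inputs are illustrative assumptions, not legal tests; Annex III and professional advice remain the authority for borderline cases.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The EU AI Act's four risk tiers and their headline obligations."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, documentation, oversight, EU registration"
    LIMITED = "transparency: tell users they are interacting with AI"
    MINIMAL = "no specific obligations beyond general good practice"

# Hypothetical starting point for an internal review, not a legal test.
HIGH_RISK_KEYWORDS = {
    "recruitment", "employment", "credit", "insurance",
    "education", "infrastructure", "law enforcement", "migration",
}

def rough_tier(use_case: str, interacts_with_people: bool) -> AIActRiskTier:
    """Very rough first-pass tier for one entry in an AI inventory."""
    if any(kw in use_case.lower() for kw in HIGH_RISK_KEYWORDS):
        return AIActRiskTier.HIGH
    if interacts_with_people:
        return AIActRiskTier.LIMITED
    return AIActRiskTier.MINIMAL

print(rough_tier("CV screening for recruitment", False).name)  # HIGH
print(rough_tier("customer service chatbot", True).name)       # LIMITED
print(rough_tier("internal analytics dashboard", False).name)  # MINIMAL
```

Anything such a triage marks as high-risk should go to a proper Annex III assessment rather than being treated as settled.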

What this means for a typical UK or Irish SME

Most SMEs will fall into the limited or minimal risk categories for the majority of their AI use. Using an AI writing assistant, a customer service chatbot, or an AI-powered analytics dashboard does not, by itself, create high-risk obligations.

However, if your business uses AI to make or materially influence decisions about people (employees, job applicants, customers, or loan applicants), you need to assess whether that use falls into the high-risk category. The threshold is lower than many businesses assume.

The UK’s Approach to AI Regulation

Following Brexit, the UK is not bound by the EU AI Act, but UK businesses that operate in or sell into the EU market must comply with it for those activities. Within the UK, the government has taken a different path.

The sector-led, principles-based framework

The UK’s 2023 AI White Paper set out a framework built on five cross-sector principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. Crucially, the UK government decided not to create a single AI regulator or a single AI law. Instead, existing regulators (the ICO for data protection, the FCA for financial services, the CQC for healthcare) apply these principles within their sectors.

This gives UK businesses more flexibility than the EU approach, but also more uncertainty. There is no single document that tells a UK business exactly what it must do. Instead, obligations flow from sector-specific guidance, existing data protection law, and the evolving positions of relevant regulators.

Ciaran Connolly, founder of ProfileTree, works with SMEs on AI implementation across Northern Ireland and the wider UK: “What we consistently find is that businesses are not short of enthusiasm for AI. What they lack is clarity on what responsible use actually looks like in practice. The regulatory frameworks are genuinely useful once you understand them, because they give you a checklist for getting AI implementation right, not just legally but operationally.”

What the ICO expects from businesses using AI

The Information Commissioner’s Office (ICO) has been the most active UK regulator in this space. Its guidance on AI and data protection is the most practically relevant for SMEs.

The ICO’s position is that most AI use involving personal data requires compliance with UK GDPR. This means having a lawful basis for processing, providing transparency to individuals about how their data is used in AI systems, and conducting a Data Protection Impact Assessment (DPIA) for AI applications that present high risks to individuals.

The ICO has specifically flagged AI recruitment tools, AI-generated profiling, and automated decision-making as priority areas for enforcement attention.

GDPR and AI: The Obligations Already in Force

For UK and Irish businesses, the most immediately enforceable AI-related obligations come not from new AI-specific laws but from existing data protection rules. GDPR (in Ireland and across the EU) and UK GDPR (in Great Britain and Northern Ireland) both contain provisions directly relevant to AI use.

Article 22: Automated decision-making

Article 22 of GDPR gives individuals the right not to be subject to solely automated decisions that produce legal or similarly significant effects. If your business uses AI to make decisions about individuals without meaningful human review, this provision applies.

In practice, this means any business using AI for credit decisions, job application screening, insurance pricing, or any other process where an algorithm makes a consequential decision without a human reviewing and being able to override it needs a lawful basis for doing so and must be able to explain the decision to the affected individual on request.
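
As a sketch of what that human review can look like inside an internal system, the following hypothetical Python gate refuses to finalise a consequential decision unless a named reviewer has seen it and can override the AI output. All names and fields here are assumptions for illustration, not a prescribed implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIDecision:
    subject_id: str      # the individual the decision is about
    recommendation: str  # e.g. "decline credit application"
    rationale: str       # plain-English logic, explainable on request

@dataclass
class ReviewedDecision:
    decision: AIDecision
    reviewer: str        # a named human with authority to override
    final_outcome: str
    overridden: bool

def finalise(decision: AIDecision, reviewer: Optional[str],
             human_outcome: Optional[str] = None) -> ReviewedDecision:
    """Refuse solely automated consequential decisions (GDPR Article 22)."""
    if not reviewer:
        raise PermissionError(
            "A consequential decision needs meaningful human review."
        )
    outcome = human_outcome or decision.recommendation
    return ReviewedDecision(decision, reviewer, outcome,
                            overridden=outcome != decision.recommendation)

ai_call = AIDecision("applicant-42", "decline credit application",
                     "income below model threshold")
result = finalise(ai_call, reviewer="J. Smith",
                  human_outcome="refer to underwriter")
print(result.overridden)  # True: the human changed the outcome
```

The design point is that the override path exists and is recorded; a reviewer who can only rubber-stamp the output does not count as meaningful review.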

Data minimisation and purpose limitation

AI systems often perform better with more data, which creates a tension with GDPR’s data minimisation principle. You should only collect and process personal data that is necessary for your specified purpose. An AI system trained on unnecessarily extensive personal data, or used for purposes beyond what individuals were told when their data was collected, creates a compliance risk.

Subject access requests and AI

When an individual makes a subject access request, they are entitled to meaningful information about any automated processing that affects them, including the logic involved. Businesses using third-party AI tools need to understand enough about how those tools process data to respond to this kind of request. Relying entirely on the vendor without understanding the basics is not a sufficient position.
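
One practical way to stay ready for such requests is to log, at the point of processing, a plain-English summary of the logic each AI tool applied. The sketch below is a hypothetical append-only log; the file name and fields are assumptions, and in practice you would fold this into your existing records of processing.

```python
import json
from datetime import datetime, timezone

LOG_FILE = "ai_processing_log.jsonl"  # hypothetical append-only log

def log_automated_processing(subject_id: str, tool: str,
                             purpose: str, logic_summary: str) -> None:
    """Record enough detail to answer a subject access request later."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,
        "tool": tool,                    # which AI system was involved
        "purpose": purpose,              # why the data was processed
        "logic_summary": logic_summary,  # the 'logic involved', in plain English
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def records_for(subject_id: str) -> list[dict]:
    """Pull every automated-processing record for one individual."""
    with open(LOG_FILE, encoding="utf-8") as f:
        return [rec for rec in map(json.loads, f)
                if rec["subject_id"] == subject_id]
```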

Ethics, Bias, and Accountability in AI Systems

Diagram: AI ethics, bias, and accountability challenges, covering lack of oversight, biased training data, and vendor assumptions.

Beyond formal legal compliance, the practical question of whether your AI systems produce fair and explainable outcomes is increasingly important.

Why bias matters for businesses

AI systems can perpetuate or amplify bias present in their training data. A recruitment tool trained primarily on historical hiring data from a workforce that was not diverse will tend to replicate that lack of diversity in its outputs. This is not a hypothetical risk; several major companies have abandoned or significantly reworked AI recruitment tools for exactly this reason.

For SMEs, the practical implication is that you cannot assume a commercially available AI tool is unbiased simply because a large vendor built it. You need to understand what training data was used, how the tool has been validated, and what oversight mechanisms are in place.

Human oversight as a practical safeguard

Meaningful human oversight is one of the most effective practical safeguards against AI errors and bias. For most SME use cases, this means ensuring that consequential decisions influenced by AI are reviewed by a person who has the authority and information to override the AI’s output if it appears incorrect or unfair.

This is not just a regulatory recommendation; it is good operational practice. AI systems make mistakes. A business that has removed human review entirely from consequential decisions has also removed its ability to catch those mistakes before they cause harm or legal exposure.

What Responsible AI Use Looks Like for an SME

Understanding AI legislation for business is one thing. Translating it into operational practice is another. The following steps are practical starting points for most SMEs.

Step 1: Audit your current AI use

List every AI tool your business currently uses. Include AI features embedded in tools you already use (many CRM platforms, HR systems, and accounting packages now include AI features that process personal data). Categorise each use by whether it involves personal data, whether it makes or influences decisions about individuals, and whether individuals are aware it is happening.
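
A spreadsheet works perfectly well for this, but even a few lines of Python make the categories explicit. The tool names and fields below are hypothetical; the point is that every entry answers the same three questions.

```python
# Hypothetical Step 1 inventory: one record per AI tool or embedded feature.
ai_inventory = [
    {"tool": "CRM lead-scoring feature",
     "personal_data": True, "influences_decisions": True,
     "individuals_aware": False},
    {"tool": "AI writing assistant",
     "personal_data": False, "influences_decisions": False,
     "individuals_aware": True},
    {"tool": "HR system CV-ranking module",
     "personal_data": True, "influences_decisions": True,
     "individuals_aware": True},
]

# Entries that touch personal data AND shape decisions go to Step 2 first.
priority = [r["tool"] for r in ai_inventory
            if r["personal_data"] and r["influences_decisions"]]
print(priority)  # ['CRM lead-scoring feature', 'HR system CV-ranking module']
```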

Step 2: Assess risk category

For each AI application, assess whether it falls into high-risk categories under the EU AI Act or whether it triggers automated decision-making obligations under GDPR. Most tools used for productivity, content creation, or internal analysis will not. Tools used in HR, customer credit assessment, or personalised pricing warrant closer review.
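
Extending the inventory from Step 1, a hypothetical first-pass flagger might look like the sketch below. The area names and field names are assumptions; a flag means "needs a proper assessment", never "is definitively high-risk".

```python
def review_flags(use: dict) -> list[str]:
    """First-pass regulatory flags for one inventory entry (not legal advice)."""
    flags = []
    if use.get("area") in {"hr", "credit", "insurance", "pricing"}:
        flags.append("check EU AI Act high-risk categories (Annex III)")
    if use.get("solely_automated") and use.get("significant_effect"):
        flags.append("GDPR Article 22 automated decision-making may apply")
    if use.get("personal_data") and use.get("profiling"):
        flags.append("DPIA likely required")
    return flags

cv_screening = {"area": "hr", "personal_data": True, "profiling": True,
                "solely_automated": False, "significant_effect": True}
print(review_flags(cv_screening))
# ['check EU AI Act high-risk categories (Annex III)', 'DPIA likely required']
```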

Step 3: Check vendor data processing agreements

If your AI tools process personal data, your contract with the vendor should include a Data Processing Agreement (DPA) that specifies the vendor’s role, the categories of data processed, retention periods, and security measures. Many vendors provide standard DPAs; if yours does not, request one.
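
If you want to check vendor paperwork systematically, a short checklist comparison is enough. The required terms in this hypothetical sketch come straight from the list above; anything it returns is a gap to raise with the vendor.

```python
REQUIRED_DPA_TERMS = {
    "vendor role",      # processor or controller
    "data categories",  # what personal data is processed
    "retention periods",
    "security measures",
}

def dpa_gaps(terms_found_in_contract: set[str]) -> set[str]:
    """Required DPA elements missing from a vendor contract."""
    return REQUIRED_DPA_TERMS - terms_found_in_contract

print(dpa_gaps({"vendor role", "security measures"}))
# {'data categories', 'retention periods'}  (set order may vary)
```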

Step 4: Document your decisions

Regulators and auditors look for documentation. If you have assessed a tool and concluded it does not require a DPIA, record that assessment. If you have implemented oversight procedures, document them. Documentation demonstrates due diligence and is the most practical protection if a regulatory question arises.
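
A documented assessment does not need special software; a dated, structured record per tool is enough. Here is a hypothetical template, with the file name and fields as illustrative assumptions.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

@dataclass
class AIToolAssessment:
    """One dated assessment record per AI tool (hypothetical template)."""
    tool: str
    assessed_on: str
    dpia_required: bool
    reasoning: str            # why you reached that conclusion
    oversight_procedure: str  # how humans review the tool's outputs

record = AIToolAssessment(
    tool="AI customer service chatbot",
    assessed_on=date.today().isoformat(),
    dpia_required=False,
    reasoning="No profiling or consequential decisions; users told it is AI.",
    oversight_procedure="Support lead samples transcripts weekly.",
)

# Append to a simple file so the record can be produced if a regulator asks.
with open("ai_assessments.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(record)) + "\n")
```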

Step 5: Train your team

Your legal obligations extend to the people in your business who use AI tools. An employee who uses an AI system to make an employment decision without understanding the system’s limitations or their oversight responsibilities creates compliance risk. Structured AI training that covers both practical use and responsible deployment is increasingly a compliance requirement, not just a development option.

ProfileTree’s AI training programmes through Future Business Academy are designed specifically for SME teams, covering both practical AI skills and the governance and compliance considerations that responsible deployment requires. Our AI adoption resources for SMEs provide current data on how businesses across the UK are approaching this.

Looking Ahead: How AI Legislation Will Develop

The current regulatory frameworks are early-stage. They will develop significantly over the next three to five years, particularly as enforcement cases generate precedents and as AI capabilities change more rapidly than legislation can track.

EU AI Act enforcement timeline

The EU AI Act’s enforcement timeline means that businesses have time to prepare, but not unlimited time. The prohibition on unacceptable-risk AI applies from February 2025. High-risk AI obligations apply from August 2026. Businesses trading with or into the EU need to be tracking this timeline and assessing which obligations apply to their specific AI use cases.

UK regulatory convergence

The UK government has indicated it will keep its approach under review, with the possibility of introducing primary AI legislation if the sector-led approach proves insufficient. The general election in 2024 and subsequent policy development suggest that some form of UK AI Act equivalent is plausible within the next parliamentary term.

International alignment

Global standards bodies, including the ISO and IEEE, are developing AI standards that are likely to underpin future regulatory requirements in multiple jurisdictions. Businesses that build compliance practices around these standards now are better positioned as regulations evolve, regardless of which specific laws apply.

Frequently Asked Questions

Does the EU AI Act apply to UK businesses?

Yes, for UK businesses that provide AI systems to EU customers or deploy AI systems whose outputs are used in the EU. The Act follows the same extraterritorial logic as GDPR: if your business interacts with people in the EU, the relevant provisions apply to those interactions, regardless of where your business is based.

What is a high-risk AI system under the EU AI Act?

High-risk AI systems are those used in specific areas where errors can cause serious harm: employment and recruitment decisions, credit and insurance assessment, educational evaluation, critical infrastructure, law enforcement, migration processing, and access to essential services. Full details are in Annex III of the Act.

Does my business need to do a DPIA for every AI tool?

No. A Data Protection Impact Assessment is required when an AI application is likely to result in a high risk to individuals’ rights and freedoms. For most standard business productivity tools, a DPIA is not required. It is required for AI that profiles individuals, makes automated decisions with significant effects, or processes sensitive personal data at scale.

What should I do if a vendor cannot explain how their AI tool makes decisions?

This is a practical risk signal. If a vendor cannot explain the logic of their AI system, cannot tell you what data it was trained on, or cannot provide a Data Processing Agreement, that vendor may not meet the transparency and accountability standards your business needs under UK GDPR. Consider whether the tool is appropriate for use cases involving personal data.

What is the difference between the EU AI Act and GDPR?

GDPR governs the processing of personal data, including by AI systems. The EU AI Act governs the safety, transparency, and accountability of AI systems themselves, regardless of whether personal data is involved. The two frameworks overlap significantly for AI applications that process personal data: an AI recruitment tool, for example, falls under both.

How can SMEs prepare for AI regulation without a dedicated compliance team?

Focus on three things: audit your current AI use and document what you find, ensure you have Data Processing Agreements with every AI vendor that processes personal data, and build human oversight into any AI-influenced decision that affects individuals. For more structured support, AI training programmes that cover governance and compliance alongside practical skills are the most efficient route for SME teams.

