The EU AI Act: What do businesses need to know?


The EU’s groundbreaking Artificial Intelligence Act entered into force in August 2024, but key provisions only become enforceable from August 2025. These rules will require companies to implement transparency, documentation, risk assessment, and local representation for systems deployed across the EU. This marks the start of serious regulatory oversight whose importance cannot be overstated.

While some major tech firms have pushed back (including Meta), the European Commission is standing firm. A voluntary code of practice was introduced to guide early compliance, but tensions remain over legal clarity. This shift signals the EU’s determination to lead in ethical AI governance, setting a global benchmark and forcing companies, whether within or outside the EU, to rethink how they build and deploy AI tools.

Maintaining security and protection

One of the key aims of legislation such as this is to ensure that AI systems are as secure as possible. This is welcomed by experts, including Ilona Cohen, Chief Legal and Policy Officer at HackerOne, who believes that: “securing AI systems and ensuring that they perform as intended is essential for establishing trust in their use and enabling their responsible deployment.”

Cohen says that she is “pleased that the Final Draft of the General-Purpose AI Code of Practice retains measures crucial to testing and protecting AI systems, including frequent active red-teaming, secure communication channels for third parties to report security issues, competitive bug bounty programs, and internal whistleblower protection policies. I also support the commitment to AI model evaluation using a range of methodologies to address systemic risk, including security concerns and unintended outcomes.”

Ilona Cohen - Chief Legal & Policy Officer, HackerOne
Martin Davies - Audit Alliance Manager, Drata

Holding developers accountable

The EU AI Act also aims to hold organisations accountable for the AI applications that they create, with a purpose to reduce any potential risks to end users. “By prohibiting a range of high-risk applications of AI techniques, the risk of unethical surveillance and other means of misuse is certainly mitigated”, explains Martin Davies, Audit Alliance Manager at Drata.

He continues: “Even in circumstances where high-risk biometric AI applications are still permitted for the purposes of law enforcement, there is still a limitation on the purpose and location for such applications, which prevents their misuse (intentional or otherwise) in this sector. This is a step in the right direction, and the proposed penalties will mean that the developers of such high-impact AI applications are rendered accountable for their outcomes. The positive impact this Act could have on creating a safe and trustworthy AI ecosystem within the EU will lead to an even wider adoption of the technology. To that extent, this regulation will encourage innovation within defined parameters, which will only benefit the AI industry at large.”

Is control becoming too complex?

However, despite the Act being welcomed by some, others are concerned about the increasingly complex regulatory minefield that businesses now face. Darren Thomson, Field CTO EMEAI at Commvault, expresses these concerns: “The EU AI Action Plan sets out a commendable vision for the future, but rather than being a positive sign of progress, this regulatory divergence is creating a complex landscape for organisations building and implementing AI systems. The lack of cohesion makes for an uneven playing field and, conceivably, a riskier AI-powered future. Organisations will need to determine a way forward that balances innovation with risk mitigation, adopting robust cybersecurity measures and adapting them specifically for the emerging demands of AI.”

Thomson highlights that business leaders will need strong protections and defences, along with well-tested disaster recovery plans: “Effectively this means prioritising the applications that really matter and defining what constitutes a minimum viable business and acceptable risk posture.”

Darren Thomson - Field CTO EMEAI, Commvault
Hugh Scantlebury - CEO and Founder, Aqilla

Hugh Scantlebury, CEO and Founder of Aqilla, echoes this apprehension, noting that “companies, individuals and governments around the world are working on an almost unimaginable range of AI-related projects. So, trying to regulate the technology right now is like trying to control the high seas or bring law and order to the Wild West. If we did attempt to introduce regulation, it would have to be global – and such an agreement seems unlikely any time soon. Otherwise, if one region, such as the EU – or one country, such as the UK – attempts to regulate AI and establish a ‘safe framework’, developers will just go elsewhere to continue their work.”

“The birth of AI is second only to the foundation of the Internet in terms of its power to fundamentally alter our lives,” adds Scantlebury, “but AI is still in its infancy, and we have only scratched the surface of what it could achieve. So, right now, no one is in a position to legislate – and even if they were, AI is developing at such a pace that the legislation wouldn’t keep up.”

As the EU’s Artificial Intelligence Act begins to take hold, it represents a bold and necessary step towards shaping a more transparent and secure AI landscape. While the legislation is far from perfect – and concerns over regulatory fragmentation and complexity are valid – it still signals a crucial shift in how governments engage with emerging technologies. By prioritising risk mitigation, system security, and ethical boundaries, the Act sets a precedent for others to follow.

Yet the challenge remains in keeping regulation both robust and adaptable, ensuring it evolves in step with the pace of innovation. Whether this marks the start of a truly global regulatory movement or creates further divergence across regions, one thing is clear: the conversation around AI governance has moved beyond speculation and into concrete action.
