EU AI Act: The First Phase Has Arrived – Are You Ready?

April 16, 2025
The EU Artificial Intelligence Act (EU AI Act) officially entered its first implementation phase on 2 February 2025, marking a major shift in how organisations across Europe and beyond must approach AI regulation, staff training, and compliance.

As the world’s first comprehensive AI law, the Act sets a precedent for responsible AI governance, ensuring that AI systems deployed in the EU are safe, transparent, and ethically sound. Additionally, as an EU regulation, it is directly and uniformly applicable across all EU member states.

Whether a business develops AI models or simply uses AI tools in day-to-day operations, these changes will have a significant impact. From stringent AI literacy requirements to strict prohibitions on certain AI practices, organisations must take proactive steps to align with these new obligations. The sooner businesses integrate compliance measures, the better positioned they will be to avoid penalties and build trust with stakeholders.

Key Provisions in Effect from 2 February 2025

The first phase of the EU AI Act introduces critical obligations, particularly focusing on AI literacy and prohibited AI practices.

1. AI Literacy Requirements

One of the most important changes introduced by the EU AI Act is the requirement for organisations developing or deploying AI systems to ensure their staff have an appropriate level of AI literacy.

According to Article 4 of the Act, companies must provide training on:

  • The risks associated with AI systems
  • Responsible deployment and ethical considerations
  • Compliance measures to ensure adherence to the AI Act

While there are no direct fines for non-compliance with AI literacy requirements, a lack of adequate training could contribute to heavier penalties if a company is found violating other parts of the Act. For example, if an organisation fails to prevent an AI system from breaching privacy laws or engaging in biased decision-making, poor staff training could be viewed as an aggravating factor in determining penalties.

This provision reflects the EU’s commitment to ensuring that AI is not just used, but understood, preventing misuse or unintentional harm due to a lack of knowledge.

2. Prohibited AI Practices

The Act also imposes an outright ban on AI systems that pose unacceptable risks to fundamental rights and public safety.

According to Article 5, prohibited practices include:

  • AI systems designed to manipulate human behaviour in a way that can cause harm
  • AI-powered real-time biometric surveillance in public spaces, except in narrowly defined security-related cases
  • AI-based emotion detection in workplaces and schools, except for medical or safety purposes

These prohibitions aim to protect individual freedoms and prevent AI-driven exploitation, particularly in sensitive environments such as employment, education, and public spaces. The Act recognises the potential dangers of AI when used to influence emotions, track individuals without consent, or manipulate decision-making processes.

Businesses that fail to comply with these restrictions could face severe penalties, including fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher. This highlights the importance of proactive risk assessments to ensure AI systems are fully compliant.

Implementation Timeline: Key Phases of the AI Act

The EU AI Act is being introduced in phases, allowing organisations time to adapt and implement necessary compliance measures.

  • February 2025 – AI literacy requirements and the prohibitions on AI practices posing unacceptable risk come into effect.
  • May 2025 – Codes of practice for general-purpose AI (GPAI) models are due to be finalised.
  • August 2025 – Obligations for general-purpose AI (GPAI) models will be enforced, requiring transparency and data governance measures.
  • August 2026 – Compliance requirements for high-risk AI systems (e.g., those used in healthcare, law enforcement, and critical infrastructure) will become mandatory.

Businesses that wait too long to prepare risk falling behind, facing legal and reputational consequences.

How Businesses Can Prepare for Compliance

To avoid regulatory penalties and maintain trust with customers and stakeholders, organisations should begin implementing compliance strategies now.

1. Establish Governance Frameworks

Strong AI governance policies should be implemented, covering risk management, data privacy, human oversight, and bias mitigation. Compliance teams should work closely with technical staff to ensure AI systems align with both legal requirements and ethical considerations.

2. Review Existing AI Practices

A full AI audit should be conducted to identify potential risks, particularly in automated decision-making, surveillance, and customer engagement. Any AI systems that collect biometric data or influence user behaviour should be carefully assessed for compliance with the Act’s restrictions.

3. Develop AI Training Programmes

Ensuring employees understand AI risks, compliance requirements, and ethical deployment is essential. Businesses should invest in ongoing AI literacy training, particularly for teams handling AI-based decision-making or customer interactions.

By taking these proactive steps, businesses can turn compliance into a competitive advantage, reinforcing their commitment to trustworthy AI practices.

Why Compliance Matters Now

The EU AI Act represents a turning point in AI regulation, setting a standard that will likely influence legislation in other jurisdictions. Businesses that fail to prepare risk operational disruption, financial penalties, and damage to their reputation. On the other hand, organisations that embrace compliance proactively will not only avoid legal trouble but also enhance their credibility and competitive standing.

At Bridgehouse, we understand the complexities of AI governance and compliance, both for EU-based companies and for UK-based companies that also operate in the EU. Our expertise in corporate governance and regulatory strategy can help businesses navigate the AI Act, implement best practices, and future-proof their operations.

If your organisation needs support in aligning with AI regulations, developing training programmes, or reviewing governance frameworks, get in touch. The time to act is now.

Discuss Your EU AI Act Governance Needs with Bridgehouse

We would be pleased to answer any queries or have an informal chat to discuss your possible governance needs.