August 15, 2025 · 11 min read · AI Regulation & Compliance

EU AI Act Phase Two: GPAI Rules and What Your AI Stack Needs Now

The EU AI Act's General-Purpose AI obligations took effect on 2 August 2025, making the EU the first jurisdiction to regulate foundation models at scale. The AI Office is operational, major companies have signed the Code of Practice, and compliance deadlines are no longer theoretical. Here is what your AI stack needs to satisfy the new rules.

EU AI Act · GPAI · AI Regulation · Compliance · AI Governance · Europe · Risk Management · Foundation Models
Giovanni van Dam

IT & Business Development Consultant

GPAI Obligations: What Changed on 2 August 2025

The EU AI Act entered its second major enforcement phase on 2 August 2025, activating obligations for providers of General-Purpose AI (GPAI) models. Any company making a GPAI model available in the EU — whether through direct API access, embedded in a product, or offered through a cloud platform — is now subject to specific regulatory requirements.

The obligations fall into two tiers:

  • All GPAI providers must maintain technical documentation, comply with EU copyright law, publish a sufficiently detailed summary of the content used to train the model, and implement transparency measures so that downstream deployers understand the model's capabilities and limitations.
  • GPAI models with systemic risk (those trained with compute exceeding 10^25 FLOPs, or designated by the AI Office) face additional requirements: adversarial testing, incident reporting, cybersecurity measures, and energy consumption disclosure.
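The two-tier split above can be sketched as a simple classification rule. This is an illustrative sketch only — the function name, field names, and return labels are hypothetical, not an official schema; the 10^25 FLOPs figure and the AI Office designation route are the two triggers described above.

```python
# Illustrative sketch: which obligation tier a GPAI model falls into.
# The function and labels are hypothetical; the two triggers (training
# compute above 10^25 FLOPs, or designation by the AI Office) come from
# the Act's systemic-risk criteria described above.

SYSTEMIC_RISK_FLOPS = 1e25  # training-compute threshold for systemic risk

def gpai_tier(training_flops: float, designated_by_ai_office: bool = False) -> str:
    """Return the obligation tier for a GPAI model offered in the EU."""
    if training_flops > SYSTEMIC_RISK_FLOPS or designated_by_ai_office:
        # Additional duties: adversarial testing, incident reporting,
        # cybersecurity measures, energy consumption disclosure.
        return "systemic-risk"
    # Baseline duties: technical documentation, copyright compliance,
    # training-content summary, transparency for downstream deployers.
    return "baseline"

print(gpai_tier(5e25))                                 # frontier-scale model
print(gpai_tier(3e23))                                 # smaller model
print(gpai_tier(3e23, designated_by_ai_office=True))   # designated regardless of compute
```

Note that designation by the AI Office can place a model in the systemic-risk tier even below the compute threshold, so the threshold alone is not a safe harbour.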

The scope is broader than many anticipated. If your business uses a GPAI model — even one provided by a third party — you may have obligations as a deployer to ensure your use complies with the Act's requirements. Ignorance of your AI supply chain is no longer a defensible position.

The EU AI Office: A New Regulatory Body

The EU AI Office, established within the European Commission, became fully operational in 2025. It serves as the central enforcement authority for GPAI model obligations, coordinates with national regulators, and manages the development of standards, codes of practice, and guidance documents.

The AI Office's role is significant because it creates a single point of regulatory authority for foundation models across the entire EU. Unlike the GDPR, which is enforced by national data protection authorities in each of the 27 member states with varying interpretations, the AI Act's GPAI provisions are centrally overseen. This provides more consistency, but also more concentrated enforcement capability.

For businesses operating across multiple EU member states, the AI Office's centralised approach simplifies compliance — one set of rules, one primary authority. But it also means that enforcement actions, when they come, will carry EU-wide impact rather than being limited to a single jurisdiction.

The GPAI Code of Practice: Voluntary but Strategic

Major AI companies — including OpenAI, Google, Anthropic, Meta, and Microsoft — signed the GPAI Code of Practice, a voluntary framework that provides detailed guidance on how to comply with the AI Act's GPAI obligations. While the Code is not legally binding, compliance with it creates a presumption of conformity with the Act's requirements — a significant legal advantage.

The Code covers transparency, copyright compliance, risk assessment, safety testing, and incident reporting. It was developed through a multi-stakeholder process involving industry, civil society, and academic experts, and it represents the most detailed guidance available on practical GPAI compliance.

For enterprise AI buyers, the Code of Practice provides a useful due diligence tool. When evaluating AI model providers, asking whether they have signed and comply with the Code is a shortcut to assessing their regulatory preparedness. Providers who have signed are more likely to maintain the documentation, testing, and transparency measures that downstream deployers need to satisfy their own obligations.

What Your AI Stack Needs: A Practical Compliance Checklist

Whether you provide or deploy GPAI models in the EU, these are the practical steps you should have in place now:

  • AI inventory: Document every AI model used in your products and operations — provider, version, training data summary, intended use, and risk classification.
  • Supplier due diligence: Verify that your AI model providers comply with GPAI obligations. Request technical documentation, model cards, and copyright compliance information.
  • Transparency measures: Ensure that AI-generated content is appropriately labelled and that users are informed when they are interacting with an AI system.
  • Risk assessment: Evaluate each AI use case against the Act's risk categories (unacceptable, high-risk, limited-risk, minimal-risk) and implement the corresponding requirements.
  • Incident response: Establish procedures for identifying, reporting, and remediating AI-related incidents, particularly for high-risk applications.
  • Documentation: Maintain records sufficient to demonstrate compliance to regulators, including model selection rationale, testing results, and deployment decisions.
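The first checklist item — the AI inventory — can be sketched as a structured record per model. This is a minimal sketch assuming a simple internal register; the class, field names, and risk labels are illustrative (the four labels mirror the Act's risk categories listed above), not a prescribed format.

```python
# Illustrative sketch of one AI inventory entry covering the checklist
# fields above: provider, version, training-data summary, intended use,
# and risk classification. The schema is hypothetical.

from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

@dataclass
class AIInventoryEntry:
    model_name: str
    provider: str
    version: str
    intended_use: str
    risk_category: RiskCategory
    training_data_summary: str = "requested from provider"   # supplier due diligence
    provider_signed_code_of_practice: bool = False            # due-diligence signal
    documentation: list[str] = field(default_factory=list)    # model cards, test results

# Example entry for a hypothetical third-party model:
entry = AIInventoryEntry(
    model_name="example-gpai-model",
    provider="Example AI Ltd",
    version="2025-08",
    intended_use="customer-support summarisation",
    risk_category=RiskCategory.LIMITED,
    provider_signed_code_of_practice=True,
)
print(entry.risk_category.value)
```

Keeping such records per model, and updating them as versions change, is what makes the later steps — supplier due diligence, risk assessment, and demonstrating compliance to regulators — tractable rather than a scramble.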

This is not a one-time exercise. The AI Act requires ongoing monitoring and documentation as models are updated, use cases evolve, and the regulatory framework matures. Discuss EU AI Act compliance for your specific AI stack.

Compliance as Competitive Advantage

The natural instinct when faced with new regulation is to treat it as a cost centre — a burden to be minimised. This is a mistake. The EU AI Act creates a competitive moat for businesses that embrace compliance early:

  • Market access: The EU's 450 million consumers represent one of the world's largest digital markets. Compliance is the price of entry — but early compliance is a first-mover advantage.
  • Customer trust: In an era of AI scepticism, demonstrable compliance with the world's most comprehensive AI regulation is a powerful trust signal for customers, partners, and investors.
  • Operational rigour: The documentation, testing, and governance requirements of the AI Act force operational disciplines that improve AI reliability and reduce risk regardless of regulatory requirements.
  • Global influence: The Brussels Effect means that EU AI Act compliance positions you well for similar regulations emerging in the UK, Canada, Australia, and other jurisdictions.

The businesses that will navigate the AI Act most successfully are those that integrate compliance into their AI development and deployment processes from the outset — not as an afterthought. Learn how embedded technology leadership builds AI governance that scales.


Giovanni van Dam

MBA-qualified entrepreneur in IT & business development. I help founder-led businesses scale through technology via GVDworks and build AI-powered SaaS at Veldspark Labs.