The EU AI Act: Don't Over(re)act

Nina Žižakić
Machine Learning Engineer

AI is evolving, and so are the rules. On 2 August 2025, the second implementation phase of the EU AI Act comes into effect. While the tech giants face intense scrutiny (and potential €35M fines), every company using AI needs to get real about transparency and risk assessment. Whether you're building chatbots or crunching numbers, here's what's actually changing and what it means for your business.

Have you ever wondered what 'large' really means in 'large language models'?

Well, the EU has an answer for it: 10^25. That's ten million, million, million, million floating-point operations, and it's the computational budget you need to burn through for the European Union to officially consider your AI model a heavyweight. This isn't just a number for bragging rights; it's the threshold set by the new EU AI Act to identify general-purpose AI models with "high-impact capabilities" or "systemic risk." Crossing this line means that, from 2 August 2025, developers face a whole new world of stringent regulatory obligations, from rigorous risk assessments to full transparency about their training data. As you might expect, this new 10^25 Floating Point Operations Club includes the usual suspects like the models from OpenAI, Google, and Anthropic. Meanwhile, a host of popular open-source models and smaller variants are, for now, left outside the regulatory velvet rope.[1][2]
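Curious how anyone knows whether a model crosses that line? A commonly used back-of-envelope rule puts total training compute at roughly 6 × parameters × training tokens. Here's a minimal sketch; the parameter and token counts are illustrative assumptions, chosen to land in the same ballpark as the Llama 3.1 405B figure in footnote [1]:

```python
# Back-of-envelope training-compute estimate using the common
# "FLOPs ~ 6 x parameters x training tokens" rule of thumb.
EU_SYSTEMIC_RISK_THRESHOLD = 1e25  # the AI Act's FLOPs threshold

def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training compute in floating-point operations."""
    return 6 * parameters * training_tokens

# Illustrative assumption: a 405-billion-parameter model trained on ~15 trillion tokens.
flops = estimate_training_flops(parameters=405e9, training_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~3.6e+25
print("Above the threshold" if flops > EU_SYSTEMIC_RISK_THRESHOLD else "Below the threshold")
```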

So, what if you're the one who actually built one of these behemoths? If you're a provider of a general-purpose AI (GPAI) model that crosses this computational threshold, you've got a busy time ahead. 

Your new to-do list includes: 

  • Conducting and documenting model evaluations
  • Assessing and mitigating potential systemic risks
  • Implementing cybersecurity protection
  • Reporting serious incidents to the EU's AI Office
  • Preparing extensive technical documentation
  • Disclosing your model's energy consumption
  • Providing detailed summaries of training content, particularly copyrighted material 

In short, it’s a significant responsibility reserved for the very biggest players in the AI world.

What This Means For Your Business: Key Questions Answered

The regulations we've outlined above have major implications for businesses using AI. Here are the most common questions we're hearing from our clients:

Do we have to disclose to our customers what we're using AI for?

Yes, but it depends on how you're using it. The EU AI Act is big on transparency. If you deploy a system that interacts with people, like a customer service chatbot, you must make it clear they are talking to an AI, not a human. Similarly, if you're generating audio, image, or video content that is a "deep fake," it must be labelled as artificially generated. Beyond the legal requirements, being open about your use of AI is simply good business. It builds trust with your customers and shows you're using technology responsibly.
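If you're wondering what that looks like in practice, here's a minimal sketch of a chatbot wrapper that always discloses the AI to the user. The function and the wording of the notice are our own illustration, not something prescribed by the Act:

```python
AI_DISCLOSURE = "Heads up: you're chatting with an AI assistant, not a human agent."

def reply_with_disclosure(user_message: str, generate_reply) -> str:
    """Wrap any chatbot backend so every conversation starts with an AI disclosure.

    `generate_reply` stands in for whatever function produces your bot's answer;
    it's a placeholder, not a specific library call.
    """
    answer = generate_reply(user_message)
    return f"{AI_DISCLOSURE}\n\n{answer}"

# Usage with a dummy reply function:
print(reply_with_disclosure("Where is my order?", lambda msg: "Your order ships tomorrow."))
```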

Does this apply to our company and our AI systems?

For most companies, the harshest rules (the ones aimed at those giant 10^25 FLOPs models) do not apply directly. Those obligations fall on the providers, like OpenAI or Google. Your responsibilities come into play based on how you deploy an AI system. The key distinction is whether your application is considered 'high-risk'[3]. This category includes AI used in critical areas like recruitment, employee management, credit scoring, critical infrastructure, and medical devices. If you're just using AI to analyse marketing data or automate internal workflows, you're likely in the clear from the most demanding rules.
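To make that distinction concrete, a first rough triage can be as simple as checking a new use case against the high-risk categories listed above. This is an illustrative sketch, not a legal assessment:

```python
# High-risk categories named above; anything else defaults to a lower tier
# in this deliberately simplified sketch.
HIGH_RISK_USE_CASES = {
    "recruitment",
    "employee management",
    "credit scoring",
    "critical infrastructure",
    "medical devices",
}

def risk_tier(use_case: str) -> str:
    """Rough first-pass triage of an AI use case (illustrative only)."""
    return "high-risk" if use_case.strip().lower() in HIGH_RISK_USE_CASES else "minimal/limited risk"

print(risk_tier("Credit scoring"))       # high-risk
print(risk_tier("Marketing analytics"))  # minimal/limited risk
```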

What kind of documentation do we need to prepare, and how detailed does it have to be?

Think of it in tiers. For a high-risk system, the documentation is extensive. You'll need to maintain records of your risk assessments, data handling procedures, and clear instructions for use for your staff. The good news is that the provider of the high-risk AI system is obligated to give you comprehensive documentation to help with this. For low-risk AI applications, the burden is far lower.

While there are no formal documentation requirements mandated by the Act, keeping good internal records of what systems you're using and for what purpose is simply good governance and highly recommended.

How will this impact our internal processes?

This is less about red tape and more about building a responsible AI culture. The EU AI Act encourages you to think before you build or deploy. You'll likely need to establish a simple internal process to assess the risk level of any new AI project from the outset. This might involve creating an 'AI register' to keep track of your systems and appointing a person or team to oversee AI governance. Adopting this "quality by design" approach doesn't just ensure compliance; it leads to safer, more effective, and more reliable AI solutions.
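As a starting point for such an AI register, a simple structured record per system already goes a long way. A minimal sketch; the fields are our suggestion, not something the Act mandates:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRegisterEntry:
    """One row in a lightweight internal AI register (fields are suggestions)."""
    system_name: str
    purpose: str
    provider: str       # vendor or internal team supplying the model
    risk_tier: str      # e.g. "high-risk" or "minimal/limited risk"
    owner: str          # person or team accountable for governance
    last_reviewed: date

register = [
    AIRegisterEntry(
        system_name="Support chatbot",
        purpose="Answer first-line customer questions",
        provider="External GPAI provider",
        risk_tier="minimal/limited risk",
        owner="Customer care team",
        last_reviewed=date(2025, 8, 2),
    ),
]

for entry in register:
    print(f"{entry.system_name}: {entry.risk_tier}, owned by {entry.owner}")
```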

Who bears responsibility when In The Pocket develops a custom AI solution for us?

In a partnership like this, the responsibilities are shared and clearly defined. 

As the developers, we are the 'provider' of the custom AI system. Our role is to build it according to the Act's requirements for its risk level, conduct the necessary conformity assessments, and provide you with all the technical documentation. 

You, as our client, are the 'deployer'. Your responsibilities include using the system as intended, ensuring proper human oversight is in place, and monitoring its performance in the real world. We make sure this is all clearly laid out in our agreements, so everyone knows exactly what they need to do.
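One common way deployers put that human oversight into practice is to route low-confidence or high-impact outputs to a person before they take effect. A minimal sketch; the threshold is an arbitrary illustration, not a value from the Act:

```python
CONFIDENCE_THRESHOLD = 0.85  # arbitrary illustrative cut-off

def apply_oversight(prediction: str, confidence: float) -> str:
    """Simple human-in-the-loop gate: escalate uncertain outputs to a reviewer."""
    if confidence < CONFIDENCE_THRESHOLD:
        return f"ESCALATE to human reviewer (confidence {confidence:.2f}): {prediction}"
    return f"AUTO-ACCEPT (confidence {confidence:.2f}): {prediction}"

print(apply_oversight("Approve refund request", 0.93))
print(apply_oversight("Reject refund request", 0.61))
```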

What are the financial implications and potential penalties for non-compliance?

The EU has established substantial penalties as effective deterrents. Fines can reach up to €35 million or 7% of a company's total worldwide annual turnover, whichever is higher, for the most serious infringements (like using prohibited AI).

Other violations, such as failing to meet the obligations for high-risk systems, can attract fines of up to €15 million or 3% of turnover. The message is clear: the investment in compliance is tiny compared to the potential cost of non-compliance.
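To make the 'whichever is higher' mechanic tangible, here's how the two fine caps scale with turnover; the turnover figure is a hypothetical example:

```python
def max_fine(annual_turnover_eur: float, flat_cap_eur: float, pct_of_turnover: float) -> float:
    """Maximum fine: flat cap or a percentage of worldwide annual turnover, whichever is higher."""
    return max(flat_cap_eur, pct_of_turnover * annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical €2 billion annual turnover

print(f"Prohibited-AI infringement:  €{max_fine(turnover, 35_000_000, 0.07):,.0f}")  # €140,000,000
print(f"High-risk obligation breach: €{max_fine(turnover, 15_000_000, 0.03):,.0f}")  # €60,000,000
```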

Ready to use AI to make your processes better and more efficient?

Our team is here to help you leverage AI while maintaining full regulatory compliance. Our expertise in strategy, design, product and platform development positions us perfectly to guide your organisation through this complex landscape.

Footnotes

[1]: The line between a "high-impact" model and a regular one is still a bit fuzzy, as calculating the exact training compute isn't always straightforward. However, based on official reports and expert analysis, here's a likely breakdown:

Models likely above the 10^25 floating point operations (FLOPs) threshold:

  • OpenAI GPT-4 (~2.15 x 10^25 FLOPs): Widely estimated by sources like Epoch AI to be just over the line.
  • Google Gemini Ultra (~5.0 x 10^25 FLOPs): Estimated by Epoch AI and others to be one of the most computationally intensive models to date.
  • Anthropic Claude 3 Opus (~1.6 x 10^25 FLOPs): Also estimated by Epoch AI to be in the high-impact category.
  • Meta Llama 3.1 405B (~3.8 x 10^25 FLOPs): Meta officially reported this figure in their research paper for their largest Llama 3.1 model.
  • Mistral Large (~1.1 x 10^25 FLOPs): Estimated by Epoch AI to just scrape into the regulated category.

Models likely below the 10^25 FLOPs threshold:

  • Meta Llama 3 70B (~7.9 x 10^24 FLOPs): The smaller, very popular version of Llama 3 falls comfortably below the threshold, according to Epoch AI estimates.
  • Mistral's Mixtral 8x7B (~7.7 x 10^23 FLOPs): This popular open-source Mixture-of-Experts model is well under the limit.
  • Google's Gemma 7B (~2.5 x 10^23 FLOPs): A smaller open model from Google, its compute is orders of magnitude below the threshold based on its parameter count and training data size.
  • xAI's Grok-1 (~2.9 x 10^24 FLOPs): The initial version of Grok is estimated by Epoch AI to be below the line, though its successors are expected to far exceed it.

[2]: An important distinction: a model itself might not have 'systemic risk', but the way it's used can still be classified as 'high-risk'. For example, using a small open-source model for credit scoring or recruitment would make the entire system subject to the strict rules for high-risk applications.

[3]: It's worth noting the different deadlines here. The rules for GPAI models with 'systemic risk' apply from 2 August 2025. The rules for the 'high-risk' systems discussed above, however, apply later: mostly from 2 August 2026, with specific cases applying from 2 August 2027.
