AI ACT: what should companies using AI solutions prepare for?

The AI Act introduces new obligations for companies using AI. Find out what levels of risk the regulation provides for and who is realistically affected.

Jakub Karonski

Introduction


The EU AI Act is the world’s first comprehensive regulation of artificial intelligence. Its first provisions began to apply as early as 2025, and the remaining obligations will phase in gradually through 2026–2027. Companies operating in the European market, regardless of industry, must prepare for the new obligations, especially if they use AI for data processing, document analysis or process automation. The regulation aims to increase user security and the transparency of system operations, and to reduce the risk of fraud.

What is the AI Act?


The AI Act introduces rules for the responsible use of artificial intelligence. Its foundation is a risk-based approach: obligations increase as the potential threat that an AI system may pose to citizens or organizations grows.

Risk levels in the AI Act


The regulation divides AI systems into four levels of risk, which can be illustrated by a pyramid.

Unacceptable risk – systems that threaten fundamental rights (e.g., mass biometric surveillance). They are banned.
High risk – systems used in areas such as health, education, transportation, recruitment and critical infrastructure. They require certification, conformity assessment, full documentation and close human oversight.
Limited risk – AI that interacts with users, such as chatbots. There is an obligation to disclose the use of AI and ensure transparency.
Minimal risk – systems that pose no meaningful risk to the user. They can be used freely.

AI Act: who is affected in practice?


Most companies use AI tools that operate at the limited or minimal risk level, which means no heavy regulatory burden. Transparency and compliance with the general principles of responsible data processing are still necessary, however.

Anonymization vs. AI Act


Anonymization of documents and data is becoming crucial for two reasons. First, it reduces legal risk: once data is anonymized, it is no longer subject to the rigors of personal data protection. Second, the AI Act requires that data used to train models be lawful, secure and minimized. Anonymization is therefore one of the most effective ways to comply with this requirement while still using the documents to build your own models.

Does Bluur meet the requirements of the AI Act?


Bluur falls into the category of minimal-risk systems, meaning it does not require the heavy compliance and certification regime characteristic of high-risk systems. Nonetheless, the solution implements best practices aligned with the AI Act that enhance transparency, process control and data security:

  • Transparency of AI and human collaboration – the user always knows when the algorithm is at work and when a human is.
  • Logging operations – the system records who worked on the document (human or AI) and when.
  • Lack of autonomous decisions – AI in Bluur never acts independently at key moments; the final decision is up to the user.
  • Full documentation of use and monitoring – allows auditing, analysis of work and compliance with regulatory requirements.
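The audit-trail idea behind the last three points can be sketched in a few lines of Python. This is an illustrative model only; the class and field names are hypothetical and are not Bluur’s actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AuditEntry:
    """One recorded action on a document: who acted (human or AI) and when."""
    document_id: str
    actor: str   # "human" or "ai"
    action: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


class AuditLog:
    """Append-only log supporting the auditing described above."""

    def __init__(self) -> None:
        self.entries: list[AuditEntry] = []

    def record(self, document_id: str, actor: str, action: str) -> AuditEntry:
        if actor not in ("human", "ai"):
            raise ValueError("actor must be 'human' or 'ai'")
        entry = AuditEntry(document_id, actor, action)
        self.entries.append(entry)
        return entry

    def for_document(self, document_id: str) -> list[AuditEntry]:
        return [e for e in self.entries if e.document_id == document_id]


# The AI proposes, the human decides - the final decision stays with the user.
log = AuditLog()
log.record("contract-17", "ai", "proposed redactions")
log.record("contract-17", "human", "approved redactions")
```

Because every entry names the actor, an auditor can later verify that no key decision was taken by the AI alone.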

The combination of these elements means that Bluur not only meets the AI Act’s requirements for minimal-risk systems, but actually exceeds them.

How Bluur supports AI Act compliance in model training


The act also introduces obligations for training datasets. If a company wants to create or train AI models based on its own documents, it must ensure that the data does not violate the privacy or rights of individuals. Bluur allows documents to be fully anonymized before they are used to train models. This allows companies to legally create their own domain-specific models while remaining in compliance with EU regulations.
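To make the idea of anonymizing before training concrete, here is a minimal sketch of pattern-based redaction. It is not Bluur’s implementation: the patterns and labels are assumptions for illustration, and real anonymization needs named-entity recognition rather than regular expressions alone:

```python
import re

# Illustrative patterns only; a production anonymizer would use NER models,
# not regexes, to catch names, addresses and other context-dependent PII.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}


def anonymize(text: str) -> str:
    """Replace each matched identifier with a placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text


doc = "Contact Jan Kowalski at jan.kowalski@example.com or +48 601 234 567."
print(anonymize(doc))
```

Note that the personal name survives this naive pass, which is exactly why document-level anonymization tooling matters before training data leaves the company.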

Summary


The AI Act is a new regulation that is expected to increase user security and bring order to the market for artificial intelligence systems. While systems operating in high-risk areas will be most affected, all companies using AI should prepare for greater transparency and accountability in data processing. Bluur is an example of a solution that fits naturally into these requirements thanks to its document anonymization, auditability and control over the process, and it can be a real help in building compliance with the Act, especially in downstream processing projects or model training.
