The EU AI Act is the world's first comprehensive legislation regulating AI systems. It imposes obligations depending on risk level — from minimal transparency requirements to outright bans. We help organizations inventory their AI systems, classify them, and implement required compliance measures.
AI system providers (companies developing or adapting AI models) — regardless of location, if the system is used in the EU
AI system deployers — organizations using AI systems in their operations
Technology companies offering AI tools as a service (SaaS, API, platforms)
Public institutions using AI systems for decision-making
Importers and distributors of AI systems on the EU market
From AI system inventory to a full management system compatible with ISO 42001
AI inventory — identifying all AI systems in the organization, including external tools, embedded models, and AI-powered processes.
Risk classification — assigning each AI system to a risk category (unacceptable/prohibited, high, limited, minimal) per AI Act criteria.
Gap analysis — assessing gaps against AI Act requirements for identified risk categories. Action prioritization.
Compliance measures — technical documentation, risk assessment, human oversight mechanisms, data quality testing, transparency procedures.
AI management system — building or integrating an AI management system compliant with ISO 42001, covering policies, procedures, and responsibilities.
Validation and monitoring — completeness verification, staff training, establishing continuous compliance monitoring processes.
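The inventory, classification, and gap-analysis steps above can be sketched as a simple data model. This is an illustrative sketch only; the record fields, system names, and prioritization rule are our assumptions, not terminology defined by the AI Act itself:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited"  # banned practices (AI Act, Article 5)
    HIGH = "high"              # Annex III use cases
    LIMITED = "limited"        # transparency obligations
    MINIMAL = "minimal"        # no specific obligations

@dataclass
class AISystemRecord:
    name: str
    source: str                # internal build or external tool/SaaS
    use_case: str
    risk: RiskCategory
    gaps: list[str] = field(default_factory=list)  # open items from gap analysis

# Hypothetical inventory entries for illustration
inventory = [
    AISystemRecord("CV screening model", "external SaaS", "recruitment",
                   RiskCategory.HIGH,
                   ["human oversight", "technical documentation"]),
    AISystemRecord("Support chatbot", "internal", "customer service",
                   RiskCategory.LIMITED,
                   ["user disclosure notice"]),
]

# Prioritize remediation: high-risk systems with open compliance gaps first
priority = [r for r in inventory if r.risk is RiskCategory.HIGH and r.gaps]
```

A structured inventory like this makes the later steps (compliance measures, ISO 42001 integration, continuous monitoring) auditable, because every system carries its classification and outstanding gaps in one place.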
The AI Act applies in phases: bans on prohibited practices (February 2025), obligations for general-purpose AI models (August 2025), and full requirements for high-risk systems (August 2026). It's worth starting preparations now.
Yes — the AI Act has extraterritorial reach, similar to GDPR. It applies to any provider or deployer of an AI system whose outputs are used within the EU.
AI systems used in areas such as: recruitment, credit scoring, medical diagnostics, critical infrastructure management, education, justice, and migration. The full list is in Annex III of the AI Act.
Yes — as a deployer, the organization has transparency obligations (informing users) and must ensure that AI system use complies with AI Act requirements for the given risk category.
ISO/IEC 42001 is the international standard for AI management systems. The AI Act explicitly references harmonized standards as a path to a presumption of conformity, and implementing ISO/IEC 42001 provides a solid foundation for meeting AI Act requirements.