# Risk-based approach

## What is the risk-based approach?
The risk-based approach is a principle that permeates all key EU regulations. It means that the level of obligations and measures corresponds to the level of risk — the higher the risk, the stricter the requirements.
This is not “one-size-fits-all” compliance, but a proportionate response to real risks.
## Risk-based approach in the AI Act
The AI Act defines four risk levels for AI systems:
| Risk level | Examples | Regulation | Effective date |
|---|---|---|---|
| UNACCEPTABLE RISK | Social scoring, manipulation | PROHIBITED (Art. 5) | Since 2 February 2025 |
| HIGH RISK | HR, healthcare, justice | STRICT REQUIREMENTS (Art. 6–49) | From 2 August 2026 |
| LIMITED RISK | Chatbots, deepfakes | TRANSPARENCY — inform users (Art. 50) | From 2 August 2026 |
| MINIMAL RISK | Spam filters, gaming AI | VOLUNTARY — recommended best practices | No deadline |
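The tiering above can be sketched as a simple lookup. This is an illustrative sketch only — the enum names and obligation strings are shorthand for the table, not wording taken from the regulation.

```python
from enum import Enum

# The four AI Act risk tiers (Art. 5, Art. 6-49, Art. 50).
class RiskLevel(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical mapping of tier -> headline obligation.
OBLIGATIONS = {
    RiskLevel.UNACCEPTABLE: "prohibited (Art. 5)",
    RiskLevel.HIGH: "strict requirements (Art. 6-49)",
    RiskLevel.LIMITED: "transparency (Art. 50)",
    RiskLevel.MINIMAL: "voluntary best practices",
}

def obligation_for(level: RiskLevel) -> str:
    """Return the headline obligation for a given risk tier."""
    return OBLIGATIONS[level]

print(obligation_for(RiskLevel.LIMITED))  # transparency (Art. 50)
```

The point of the lookup is the proportionality itself: obligations are a function of the tier, not of the technology used.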
## Why it matters
Your obligations depend on where your AI systems fall. Most companies operate AI that poses only minimal or limited risk, so lighter requirements apply to them.
But: AI literacy (Art. 4) applies to ALL risk levels without exception.
## Risk-based approach in other regulations

### NIS2
- Essential entities (energy, transport, healthcare) — stricter requirements
- Important entities (manufacturing, postal services, food) — less strict
- Risk is assessed based on the impact of an outage on society
### GDPR
- High-risk processing requires a DPIA (Data Protection Impact Assessment)
- Automated decision-making triggers special rights for data subjects
- Risk is assessed based on the impact on the rights and freedoms of individuals
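The GDPR rules above can be illustrated with a minimal trigger check. The dataclass fields are simplified assumptions for illustration — the full DPIA criteria in Art. 35 are broader than two flags.

```python
from dataclasses import dataclass

@dataclass
class Processing:
    """Simplified description of a processing activity (illustrative fields)."""
    high_risk: bool            # e.g. large-scale processing of sensitive data
    automated_decisions: bool  # solely automated decisions with legal effect

def dpia_required(p: Processing) -> bool:
    # High-risk processing requires a Data Protection Impact Assessment (Art. 35).
    return p.high_risk

def special_rights_triggered(p: Processing) -> bool:
    # Automated decision-making triggers special data-subject rights (Art. 22).
    return p.automated_decisions

screening = Processing(high_risk=True, automated_decisions=True)
print(dpia_required(screening), special_rights_triggered(screening))  # True True
```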
### Data Act
- Risk associated with data sharing between entities
- Protective measures proportional to data sensitivity
- Special regime for trade secrets
## How to apply the risk-based approach

### Step 1: Inventory
Map all AI systems in the organization — not just the “official” ones, but also Shadow AI.
### Step 2: Classification
For each system, determine:
- Purpose — what the AI is used for
- Context — in which sector/process
- Impact — what happens if the AI fails or makes a wrong decision
- Regulation — which regulations apply
### Step 3: Measures
Set measures proportional to the risk:
- Minimal risk — basic policy, employee awareness
- Limited risk — transparency, monitoring
- High risk — complete compliance program (conformity assessment, human oversight, monitoring)
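The three steps above can be combined into a small triage routine. The field names and measure strings are hypothetical — a sketch of the workflow, not a compliance tool.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """One entry in the AI inventory (Step 1)."""
    name: str
    purpose: str  # what the AI is used for
    context: str  # in which sector/process
    risk: str     # classification result (Step 2): "minimal" | "limited" | "high"

# Measures proportional to risk (Step 3) -- illustrative strings.
MEASURES = {
    "minimal": ["basic policy", "employee awareness"],
    "limited": ["transparency", "monitoring"],
    "high": ["conformity assessment", "human oversight", "monitoring"],
}

def triage(inventory: list[AISystem]) -> dict[str, list[str]]:
    """Map each inventoried system to the measures its risk tier requires."""
    return {s.name: MEASURES[s.risk] for s in inventory}

systems = [
    AISystem("support chatbot", "customer service", "sales", "limited"),
    AISystem("CV screener", "candidate ranking", "HR", "high"),
]
print(triage(systems))
```

Note that the triage is only as good as the inventory feeding it: a Shadow AI system that never enters the list is never classified and never gets measures.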
## Key deadlines
| Date | Regulation | What applies |
|---|---|---|
| 2 February 2025 | AI Act | AI literacy + prohibited practices |
| 2 August 2025 | AI Act | GPAI rules |
| 12 September 2025 | Data Act | Main provisions |
| 2 August 2026 | AI Act | High-risk AI — full applicability |
| 11 November 2026 | NIS2 | Full implementation |
## Further reading
- AI Literacy — obligation for all risk levels
- Shadow AI — risks of unauthorized AI