A detailed guide to classifying AI systems into the risk categories of the EU AI Act.
Category Overview
Prohibited Practices (Article 5)
These AI practices are PROHIBITED: such systems must not be placed on the market, put into service, or used.
List of Prohibited Practices
| # | Practice | Description | Exceptions |
|---|---|---|---|
| 1 | Subliminal manipulation | AI that materially distorts human behaviour without the person's awareness | None |
| 2 | Exploiting vulnerabilities | Exploiting vulnerabilities due to age, disability, or social/economic situation | None |
| 3 | Social scoring | Evaluating or classifying persons based on social behaviour or personal characteristics | None |
| 4 | Real-time biometric ID | Real-time remote biometric identification in publicly accessible spaces | Narrow law-enforcement exceptions |
| 5 | Predictive policing (individual) | Predicting the risk that an individual will commit a criminal offence based solely on profiling | None |
| 6 | Emotion recognition (workplace/school) | Emotion recognition in workplaces and educational institutions | Medical or safety reasons |
| 7 | Biometric categorisation | Inferring sensitive attributes (race, religion, sexual orientation) from biometric data | None |
| 8 | Facial recognition scraping | Untargeted scraping of facial images from the internet or CCTV to build databases | None |
Self-assessment Checklist
Check your system against each of the eight practices above. If any item is checked -> immediately STOP use of the system and consult Legal.
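The self-assessment can also be run as a simple screening step. The sketch below is illustrative only: the practice strings mirror the table above, and the function name and flag set are hypothetical, not part of any prescribed process.

```python
# Hypothetical screening sketch: flag a system if any Article 5 practice applies.
PROHIBITED_PRACTICES = [
    "subliminal manipulation",
    "exploiting vulnerabilities",
    "social scoring",
    "real-time biometric ID",
    "individual predictive policing",
    "workplace/school emotion recognition",
    "biometric categorisation by sensitive traits",
    "facial recognition scraping",
]

def screen_system(checked: set) -> str:
    """Return the required action given the practices checked for one system."""
    hits = sorted(checked & set(PROHIBITED_PRACTICES))
    if hits:
        return "STOP immediately and consult Legal: " + ", ".join(hits)
    return "No Article 5 practice identified; continue with risk classification"
```

Example: `screen_system({"social scoring"})` returns the STOP message, while an empty set lets classification continue.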
High-Risk AI (Article 6, Annex III)
Annex III: High-Risk Areas
| Area | Examples | Article |
|---|---|---|
| 1. Biometrics | Biometric identification, categorisation | III.1 |
| 2. Critical infrastructure | AI in energy, transport, water | III.2 |
| 3. Education | Admissions, grading, proctoring | III.3 |
| 4. Employment | Recruitment, HR decisions, performance | III.4 |
| 5. Essential services | Credit, insurance, social benefits | III.5 |
| 6. Law enforcement | Profiling, polygraph, risk assessment | III.6 |
| 7. Migration | Visa decisions, border control | III.7 |
| 8. Justice | Legal research, sentencing | III.8 |
Detailed High-Risk AI Examples
High-Risk Obligations
| Obligation | Description | Deadline |
|---|---|---|
| Risk Management | Documented risk management process | Before deployment |
| Data Governance | Data quality, bias testing, documentation | Ongoing |
| Technical Documentation | Model card, training info, limitations | Before deployment |
| Record Keeping | Audit log, min. 5 years | Ongoing |
| Transparency | Information for users | Before deployment |
| Human Oversight | Human-in-the-loop | Always |
| Accuracy & Robustness | Testing, monitoring | Ongoing |
| Conformity Assessment | Self-assessment or third-party | Before deployment |
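The Record Keeping obligation above can be supported by append-only audit logging. This is an illustrative sketch under assumptions, not a prescribed format: the field names, file name, and event vocabulary are invented for the example.

```python
import datetime
import json

def audit_record(system_id: str, event: str, actor: str) -> str:
    """Build one JSON audit-log line (retain per the table above)."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event,
        "actor": actor,
    }
    return json.dumps(entry, sort_keys=True)

# Append-only writes keep the trail chronological and tamper-evident
# at the file level; rotation and retention policy are handled elsewhere.
with open("ai_audit.log", "a", encoding="utf-8") as f:
    f.write(audit_record("AI-001", "prediction_served", "scoring-service") + "\n")
```

One JSON object per line keeps the log greppable and easy to load into any analysis tool during an audit.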
Limited Risk / Transparency (Article 50)
Systems Requiring Transparency
| System | Obligation |
|---|---|
| Chatbots | Inform users they are communicating with AI |
| Generative AI | Label AI-generated content |
| Deepfakes | Clearly label as artificial |
| Emotion recognition | Inform affected persons |
| Biometric categorisation | Inform affected persons |
Implementing Transparency
Examples of correct labelling:
- "You are communicating with an AI assistant. To speak with a human, type 'operator'."
- "This text was created using AI."
- "This content was created or modified using AI."
Minimal Risk
Systems with Minimal Obligations
| System | Examples |
|---|---|
| Spam filters | Email spam detection |
| Internal analytics | Business intelligence, dashboards |
| Recommendation | Product recommendations (no impact on rights) |
| Search | Internal search, document search |
| Automation | Workflow automation, scheduling |
Recommended (Not Mandatory) Practices
Classification Decision Tree
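The decision tree can be sketched as code; order matters, because prohibited checks come first and Annex III or safety-component use dominates transparency duties. The boolean parameters mirror the columns of the classification table below; the function itself is a hypothetical sketch, not an official test.

```python
def classify(prohibited_practice: bool,
             annex_iii_area: bool,
             safety_component: bool,
             user_facing: bool) -> str:
    """Return the AI Act risk category for one system from the inventory."""
    if prohibited_practice:
        return "Prohibited"    # -> immediately deactivate, consult Legal
    if annex_iii_area or safety_component:
        return "High-Risk"     # -> DPIA + full compliance
    if user_facing:
        return "Transparency"  # -> implement Article 50 notifications
    return "Minimal"           # -> basic documentation
```

Example: a recruitment-screening tool falls under Annex III.4, so it classifies as High-Risk even though it is also user-facing.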
Classification Table
For each AI system from the inventory:
| AI ID | Name | Annex III? | Safety component? | User-facing? | Classification | Note |
|---|---|---|---|---|---|---|
| AI-001 | | [ ] Yes [ ] No | [ ] Yes [ ] No | [ ] Yes [ ] No | | |
| AI-002 | | [ ] Yes [ ] No | [ ] Yes [ ] No | [ ] Yes [ ] No | | |
Next Steps
After classification:
- Prohibited -> Immediately deactivate, consult Legal
- High-Risk -> DPIA + Full compliance
- Transparency -> Implement notifications
- Minimal -> Basic documentation
Resources