
AI Act | Risk Classification

Detailed guide for classifying AI systems according to AI Act risk categories.


Category Overview

The AI Act distinguishes four risk categories, covered in turn below: prohibited practices, high-risk AI, limited risk (transparency obligations), and minimal risk.


Prohibited Practices (Article 5)

The practices below are PROHIBITED and must not be used, except where the table notes a narrow exception.

List of Prohibited Practices

| # | Practice | Description | Exceptions |
|---|----------|-------------|------------|
| 1 | Subliminal manipulation | AI manipulating human behaviour without awareness | None |
| 2 | Exploiting vulnerabilities | Exploiting vulnerabilities (age, disability) | None |
| 3 | Social scoring | Evaluating persons based on behaviour | None |
| 4 | Real-time biometric ID | Biometric identification in public spaces | Exceptions for law enforcement |
| 5 | Predictive policing (individual) | Predicting criminality of individuals | None |
| 6 | Emotion recognition (workplace/school) | Emotion recognition in the workplace/school | Health and safety reasons |
| 7 | Biometric categorisation | Categorisation by race, religion, orientation | None |
| 8 | Facial recognition scraping | Building databases from public sources | None |

Self-assessment Checklist

  • We do not use subliminal manipulation
  • We do not exploit vulnerabilities of persons
  • We do not perform social scoring
  • We do not use real-time biometric ID in public
  • We do not perform individual predictive policing
  • We do not recognise emotions of employees/students (unless a health-and-safety exception applies)
  • We do not categorise persons by sensitive attributes
  • We do not scrape biometric data from public sources

If any item cannot be checked -> Immediately STOP + consult with Legal
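For larger inventories, the same checklist can be screened programmatically. A minimal sketch, assuming each system records a boolean flag per practice (the flag names are illustrative, not terms from the AI Act):

```python
# Minimal sketch: the Article 5 self-assessment encoded as data, so it can be
# run against an AI inventory. Flag names are illustrative.
PROHIBITED_PRACTICES = [
    "subliminal_manipulation",
    "exploiting_vulnerabilities",
    "social_scoring",
    "realtime_biometric_id_public",
    "individual_predictive_policing",
    "emotion_recognition_workplace_school",
    "biometric_categorisation_sensitive",
    "facial_recognition_scraping",
]

def screen_system(flags: dict[str, bool]) -> list[str]:
    """Return the prohibited practices a system is flagged for."""
    return [p for p in PROHIBITED_PRACTICES if flags.get(p, False)]

# Example: one system self-reports social scoring -> stop and escalate.
hits = screen_system({"social_scoring": True})
if hits:
    print(f"STOP: prohibited practices detected: {hits} -- consult Legal")
```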


High-Risk AI (Article 6, Annex III)

Annex III: High-Risk Areas

| Area | Examples | Annex III point |
|------|----------|-----------------|
| 1. Biometrics | Biometric identification, categorisation | III.1 |
| 2. Critical infrastructure | AI in energy, transport, water | III.2 |
| 3. Education | Admissions, grading, proctoring | III.3 |
| 4. Employment | Recruitment, HR decisions, performance | III.4 |
| 5. Essential services | Credit, insurance, social benefits | III.5 |
| 6. Law enforcement | Profiling, polygraph, risk assessment | III.6 |
| 7. Migration | Visa decisions, border control | III.7 |
| 8. Justice | Legal research, sentencing | III.8 |


High-Risk Obligations

| Obligation | Description | Deadline |
|------------|-------------|----------|
| Risk management | Documented risk management process | Before deployment |
| Data governance | Data quality, bias testing, documentation | Ongoing |
| Technical documentation | Model card, training info, limitations | Before deployment |
| Record keeping | Audit log, min. 5 years | Ongoing |
| Transparency | Information for users | Before deployment |
| Human oversight | Human-in-the-loop | Always |
| Accuracy & robustness | Testing, monitoring | Ongoing |
| Conformity assessment | Self-assessment or third-party | Before deployment |
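Teams that track these duties in tooling can represent the table as structured data. A minimal sketch; the class, field names, and deadline values are illustrative, not AI Act terminology:

```python
# Minimal sketch: the high-risk obligations table as structured data,
# e.g. for a compliance tracker. Names and values are illustrative.
from dataclasses import dataclass

@dataclass
class Obligation:
    name: str
    description: str
    deadline: str  # "before_deployment", "ongoing", or "always"

HIGH_RISK_OBLIGATIONS = [
    Obligation("risk_management", "Documented risk management process", "before_deployment"),
    Obligation("data_governance", "Data quality, bias testing, documentation", "ongoing"),
    Obligation("technical_documentation", "Model card, training info, limitations", "before_deployment"),
    Obligation("record_keeping", "Audit log, min. 5 years", "ongoing"),
    Obligation("transparency", "Information for users", "before_deployment"),
    Obligation("human_oversight", "Human-in-the-loop", "always"),
    Obligation("accuracy_robustness", "Testing, monitoring", "ongoing"),
    Obligation("conformity_assessment", "Self-assessment or third-party", "before_deployment"),
]

# Example: list what must be in place before a high-risk system goes live.
pre_deploy = [o.name for o in HIGH_RISK_OBLIGATIONS if o.deadline == "before_deployment"]
print(pre_deploy)
```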

Limited Risk / Transparency (Article 50)

Systems Requiring Transparency

| System | Obligation |
|--------|------------|
| Chatbots | Inform users they are communicating with AI |
| Generative AI | Label AI-generated content |
| Deepfakes | Clearly label as artificial |
| Emotion recognition | Inform affected persons |
| Biometric categorisation | Inform affected persons |

Implementing Transparency

Examples of correct labelling:

CHATBOT:
  "You are communicating with an AI assistant. To speak with a human, type 'operator'."

GENERATED CONTENT:
  "This text was created using AI."
  "AI-generated image"

DEEPFAKE:
  "This content was created or modified using AI."

Minimal Risk

Systems with Minimal Obligations

| System | Examples |
|--------|----------|
| Spam filters | Email spam detection |
| Internal analytics | Business intelligence, dashboards |
| Recommendation | Product recommendations (no impact on rights) |
| Search | Internal search, document search |
| Automation | Workflow automation, scheduling |
Recommended baseline even for minimal-risk systems:

  • Basic documentation
  • Internal ownership
  • Basic monitoring
  • Incident response

Classification Decision Tree
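The tree follows the questions in the classification table below: check Article 5 first, then Annex III and safety-component status, then user-facing transparency duties. As a rough sketch, the same logic in Python (category labels and parameter names are illustrative):

```python
# Minimal sketch of the classification logic this guide describes.
# Parameters mirror the columns of the classification table below.
def classify(prohibited: bool, annex_iii: bool, safety_component: bool,
             user_facing: bool) -> str:
    if prohibited:
        return "PROHIBITED"    # Article 5: stop immediately, consult Legal
    if annex_iii or safety_component:
        return "HIGH-RISK"     # Article 6 / Annex III obligations apply
    if user_facing:
        return "TRANSPARENCY"  # Article 50 labelling duties
    return "MINIMAL"           # basic documentation only

# Example: an Annex III, user-facing system is classified high-risk.
print(classify(False, True, False, True))  # -> "HIGH-RISK"
```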


Classification Table

For each AI system from the inventory:

| AI ID | Name | Annex III? | Safety component? | User-facing? | Classification | Note |
|-------|------|------------|-------------------|--------------|----------------|------|
| AI-001 | | [ ] Yes [ ] No | [ ] Yes [ ] No | [ ] Yes [ ] No | | |
| AI-002 | | [ ] Yes [ ] No | [ ] Yes [ ] No | [ ] Yes [ ] No | | |

Next Steps

After classification:

  1. Prohibited -> Immediately deactivate, consult Legal
  2. High-Risk -> DPIA + Full compliance
  3. Transparency -> Implement notifications
  4. Minimal -> Basic documentation

Resources