AI Act: Compliance Checklist

Deadlines: prohibited practices and AI literacy obligations apply from 2.2.2025; GPAI obligations from 2.8.2025; full applicability 2.8.2026


Part A: AI Act Compliance

| # | Activity | Reference | Phase | Description |
|---|----------|-----------|-------|-------------|
| A1 | AI Inventory | Art. 2, 3 | Preparation | List all AI/ML systems: in-house, third-party, embedded (inventory sketch below) |
| A1.1 | — Internal AI systems | Art. 3(1) | Preparation | Data pipeline, recommender, fraud detection |
| A1.2 | — Third-party AI (GPAI, APIs) | Art. 3(63) | Preparation | Claude, ChatGPT, Perplexity, custom models |
| A1.3 | — Embedded AI | Art. 3(1) | Preparation | AI in products for end users (incl. open source) |
| A2 | Risk Classification | Art. 5, 6, Annex III | Analysis | Assign a risk tier: Prohibited / High-Risk / Limited / Minimal (triage sketch below) |
| A2.1 | — Prohibited practices? | Art. 5 | Analysis | Identify prohibited AI (social scoring, emotion recognition in the workplace or education, etc.) |
| A2.2 | — High-risk AI | Art. 6, Annex III | Analysis | Credit scoring, employment, health, education, etc. |
| A2.3 | — GPAI usage classification | Art. 51-56 | Analysis | Do you use GPAI? How? (fine-tuning, RAG, direct queries?) |
| A3 | AI Governance & QMS | Art. 16, 17 | Governance | Quality management system for AI |
| A3.1 | — AI policy document | Art. 16(a) | Governance | Policy: how we use AI, roles, approval process |
| A3.2 | — AI governance structure | Art. 17 | Governance | Responsibilities, roles, reporting lines |
| A3.3 | — AI ownership assigned | Art. 17(1)(a) | Governance | Owner for each AI system |
| A4 | Risk Management System | Art. 9 | Implementation | Risk management for high-risk AI |
| A4.1 | — Risk identification | Art. 9(2)(a) | Implementation | Identify risks for each high-risk AI system |
| A4.2 | — Risk mitigation | Art. 9(2)(b) | Implementation | Mitigation measures |
| A4.3 | — Residual risk assessment | Art. 9(4) | Implementation | Acceptance of residual risks |
| A5 | Data Governance | Art. 10 | Implementation | Training-data quality, representativeness, completeness |
| A5.1 | — Training-data quality | Art. 10(3) | Implementation | Relevant, representative, as error-free and complete as possible |
| A5.2 | — Data bias assessment | Art. 10(2)(f) | Implementation | Assessment of possible biases in the data |
| A5.3 | — Data governance procedures | Art. 10(2) | Implementation | Processes for managing training data |
| A6 | Technical Documentation | Art. 11, 18 | Documentation | Model card, training info, limitations (10-year retention) |
| A6.1 | — Model documentation | Art. 11, Annex IV | Documentation | Architecture, training process, performance |
| A6.2 | — Testing results | Art. 11, Annex IV | Documentation | Test results, benchmarks |
| A6.3 | — Incident log | Art. 18(1) | Documentation | Log of incidents and corrective actions |
| A7 | Logging & Record-Keeping | Art. 12, 19 | Implementation | Automatic logging of AI-system operations (logging sketch below) |
| A7.1 | — Automatic logging | Art. 12(1) | Implementation | Events, inputs, outputs, decisions |
| A7.2 | — Log retention (min. 6 months) | Art. 19(1) | Implementation | Log retention |
| A8 | Transparency & User Info | Art. 13, 50 | Implementation | Information for users: “We use AI” |
| A8.1 | — AI disclosure | Art. 50(1) | Implementation | Label AI interactions for users |
| A8.2 | — Instructions for use | Art. 13 | Implementation | Documentation for deployers |
| A9 | Human Oversight | Art. 14 | Implementation | Human-in-the-loop mechanisms for high-risk AI |
| A9.1 | — Override capability | Art. 14(4)(d) | Implementation | Ability to override AI decisions |
| A9.2 | — Stop mechanism | Art. 14(4)(e) | Implementation | Ability to stop the AI system |
| A10 | Accuracy & Robustness | Art. 15 | Testing | Testing, monitoring, drift detection |
| A10.1 | — Accuracy testing | Art. 15(1) | Testing | Regular accuracy testing |
| A10.2 | — Robustness testing | Art. 15(4) | Testing | Robustness testing (adversarial inputs, edge cases) |
| A10.3 | — Drift detection | Art. 15(3) | Testing | Performance-degradation monitoring (PSI sketch below) |
| A11 | Conformity Assessment | Art. 43, 47-48 | Conformity | Conformity assessment for high-risk AI |
| A11.1 | — Internal assessment | Art. 43(2) | Conformity | Internal conformity assessment (for most high-risk systems) |
| A11.2 | — CE marking | Art. 48 | Conformity | CE marking before placing on the market |
| A12 | EU Database Registration | Art. 49, 71 | Conformity | Registration of high-risk AI in the EU database |
| A12.1 | — Provider registration | Art. 49(1) | Conformity | Provider registration |
| A12.2 | — System registration | Art. 49(2) | Conformity | Registration of each high-risk AI system |
| A13 | FRIA (Deployers) | Art. 27 | Governance | Fundamental rights impact assessment |
| A13.1 | — FRIA for high-risk AI | Art. 27(1) | Governance | Assessment of the impact on fundamental rights |
| A13.2 | — FRIA notification | Art. 27(4) | Governance | Notification of FRIA results (public authorities) |
| A14 | Third-Party AI (GPAI) Audit | Art. 51-56 | Governance | Audit Claude, ChatGPT: privacy, security, ToS |
| A14.1 | — DPA with all GPAI providers | GDPR Art. 28 | Governance | Data processing agreements |
| A14.2 | — ToS review (acceptable use) | Art. 53 | Governance | Review of terms of use |
| A14.3 | — No-PII policy enforcement | GDPR + AI Act | Governance | Prohibition on sending PII without encryption |
| A14.4 | — Audit logging enabled | Art. 12 | Governance | Logging of GPAI usage |
| A15 | AI Incident Management | Art. 72, 73 | Monitoring | Procedure for reporting AI incidents |
| A15.1 | — Incident classification | Art. 73 | Monitoring | Classification of AI-incident severity |
| A15.2 | — Serious incident reporting | Art. 73(1) | Monitoring | Reporting serious incidents to authorities |
| A16 | Post-Market Monitoring | Art. 72 | Monitoring | Continuous monitoring after deployment |
| A16.1 | — Monitoring plan | Art. 72(1) | Monitoring | Post-market monitoring plan |
| A16.2 | — Feedback collection | Art. 72(2) | Monitoring | Collection of user feedback |
| A17 | Bias Testing (High-Risk) | Art. 10, 15 | Testing | Regular bias testing for high-risk AI (bias-metric sketch below) |
| A17.1 | — Bias testing setup | Art. 10(2)(f) | Testing | Setup of bias testing |
| A17.2 | — Regular bias audits | Art. 15 | Testing | Regular audits (monthly/quarterly) |
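
A practical way to start A1 is to keep the inventory as structured records rather than free-text spreadsheet rows, so A2 classification and A3.3 ownership can build on the same data. A minimal sketch in Python; the schema and every field name are our own illustration, nothing here is mandated by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One inventory entry per AI/ML system (A1). All field names are illustrative."""
    system_id: str   # internal identifier, e.g. "fraud-detect-v3"
    name: str
    kind: str        # "internal" (A1.1), "third-party-gpai" (A1.2), "embedded" (A1.3)
    provider: str    # "in-house" or the vendor, e.g. "Anthropic"
    purpose: str     # intended purpose; reused as input for the A2 risk triage
    owner: str       # accountable person or team (A3.3)
    risk_class: str = "unclassified"  # filled in during A2

inventory = [
    AISystemRecord("fraud-detect-v3", "Fraud detection", "internal",
                   "in-house", "Flag suspicious payment transactions", "risk-team"),
    AISystemRecord("support-bot", "Support assistant", "third-party-gpai",
                   "Anthropic", "Answer customer queries via the Claude API", "cx-team"),
]
print(sum(r.risk_class == "unclassified" for r in inventory), "systems awaiting A2")
```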
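
For A2, a keyword-based first pass can pre-sort the inventory, but the legal classification must always be confirmed by a human against Art. 6 and Annex III. A hedged sketch; the domain list loosely paraphrases Annex III areas and is not the legal text:

```python
# Illustrative triage only; it pre-sorts systems for review, it does not classify legally.
HIGH_RISK_DOMAINS = {  # loose paraphrase of Annex III areas, not exhaustive
    "credit scoring", "employment", "education", "health",
    "law enforcement", "migration", "critical infrastructure",
}

def triage_risk(purpose: str, prohibited: bool = False) -> str:
    """First-pass risk tier for A2; the output feeds a human review."""
    if prohibited:  # an Art. 5 practice identified in A2.1
        return "prohibited"
    text = purpose.lower()
    if any(domain in text for domain in HIGH_RISK_DOMAINS):
        return "high-risk-candidate"  # confirm against Art. 6 + Annex III
    return "limited-or-minimal"       # transparency duties may still apply (Art. 50)

print(triage_risk("Credit scoring for consumer loans"))  # -> high-risk-candidate
```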
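
For A7.1, one common implementation pattern is an append-only JSON-lines audit log with one record per model invocation; the record layout below is our assumption, not a format prescribed by Art. 12. A minimal sketch using a local file sink (a production setup would ship these lines to a retention-controlled log store to satisfy A7.2):

```python
import json
import logging
import time
import uuid

# Root logger writes raw JSON lines to an append-only file.
logging.basicConfig(filename="ai_events.jsonl", level=logging.INFO, format="%(message)s")
audit = logging.getLogger("ai_audit")

def log_ai_event(system_id: str, inputs: dict, output, decision: str) -> None:
    """Record one automatically generated event (A7.1) as a JSON line."""
    audit.info(json.dumps({
        "event_id": str(uuid.uuid4()),
        "ts": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        "inputs": inputs,   # redact or hash personal data before logging (see A14.3)
        "output": output,
        "decision": decision,
    }))

log_ai_event("fraud-detect-v3", {"amount": 912.50}, 0.87, "flagged-for-review")
```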
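
For A10.3, drift in model inputs or scores is often screened with the Population Stability Index (PSI). The Act prescribes no metric, so both PSI and the common 0.2 alert threshold are industry conventions we assume here, not legal requirements:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample and live data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.50, 0.10, 5_000)   # scores at validation time
live = rng.normal(0.58, 0.12, 5_000)       # scores observed in production
print(f"PSI = {psi(baseline, live):.3f}")  # > 0.2 would trigger an A10.3 review
```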
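
For A17, a simple screening metric is the selection-rate ratio between demographic groups (disparate impact). The four-fifths (0.8) threshold below is a US hiring heuristic we borrow purely as an alert level; the AI Act itself sets no numeric cut-off:

```python
from collections import defaultdict

def disparate_impact(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate per group, as a ratio to the most favoured group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        positives[group] += int(selected)
    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: round(r / best, 3) for g, r in rates.items()}

sample = ([("A", True)] * 80 + [("A", False)] * 20 +
          [("B", True)] * 55 + [("B", False)] * 45)
print(disparate_impact(sample))  # {'A': 1.0, 'B': 0.688} -> below 0.8, investigate
```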

Critical Path

  1. A1 AI Inventory -> complete by 31.1.2026
  2. A2 Risk Classification -> complete by 28.2.2026
  3. A3 AI Governance & QMS -> complete by 31.3.2026
  4. A5 Data Governance -> complete by 30.4.2026
  5. A4 Risk Management System -> complete by 31.5.2026
  6. A11 Conformity Assessment -> complete by 30.6.2026
  7. A12 EU Database Registration -> before placing on market
