AI Act: Compliance Checklist
Deadlines: prohibited practices and AI literacy obligations apply from 2.2.2025; GPAI obligations from 2.8.2025; most remaining provisions from 2.8.2026
Part A: AI Act Compliance
| # | Activity | Reference | Phase | Description |
|---|---|---|---|---|
| A1 | AI Inventory | Art. 2, 3 | Preparation | List all AI/ML systems (in-house, third-party, embedded) |
| A1.1 | — Internal AI systems | Art. 3(1) | Preparation | Data pipeline, recommender, fraud detection |
| A1.2 | — Third-party AI (GPAI, APIs) | Art. 3(63) | Preparation | Claude, ChatGPT, Perplexity, custom models |
| A1.3 | — Embedded AI | Art. 3(1) | Preparation | AI in product for end-users (incl. open-source) |
| A2 | Risk Classification | Art. 5, 6, Annex III | Analysis | Assign risk: Prohibited / High-Risk / Medium / Low |
| A2.1 | — Prohibited practices? | Art. 5 | Analysis | Identify prohibited AI (social scoring, emotion recognition in workplace/education, etc.) |
| A2.2 | — High-Risk AI | Art. 6, Annex III | Analysis | Credit scoring, employment, health, education, etc. |
| A2.3 | — GPAI Usage Classification | Art. 51-56 | Analysis | Do you use GPAI? How? (fine-tuning, RAG, direct queries?) |
| A3 | AI Governance & QMS | Art. 16, 17 | Governance | Quality Management System for AI |
| A3.1 | — AI Policy document | Art. 16(a) | Governance | Policy: How we use AI, roles, approval process |
| A3.2 | — AI governance structure | Art. 17 | Governance | Responsibilities, roles, reporting lines |
| A3.3 | — AI ownership assigned | Art. 17(1)(a) | Governance | Owner for each AI system |
| A4 | Risk Management System | Art. 9 | Implementation | Risk management for high-risk AI |
| A4.1 | — Risk identification | Art. 9(2)(a) | Implementation | Identify risks for each high-risk AI |
| A4.2 | — Risk mitigation | Art. 9(2)(b) | Implementation | Mitigation measures |
| A4.3 | — Residual risk assessment | Art. 9(4) | Implementation | Acceptance of residual risks |
| A5 | Data Governance | Art. 10 | Implementation | Training data quality, representativeness, completeness |
| A5.1 | — Training data quality | Art. 10(2) | Implementation | Relevant, representative, error-free |
| A5.2 | — Data bias assessment | Art. 10(2)(f) | Implementation | Assessment of potential biases in data |
| A5.3 | — Data governance procedures | Art. 10(5) | Implementation | Processes for managing training data |
| A6 | Technical Documentation | Art. 11, 18 | Documentation | Model card, training info, limitations (10-year retention) |
| A6.1 | — Model documentation | Art. 11, Annex IV | Documentation | Architecture, training process, performance |
| A6.2 | — Testing results | Art. 11(1)(c) | Documentation | Testing results, benchmarks |
| A6.3 | — Incident log | Art. 18(1) | Documentation | Log of incidents and corrective actions |
| A7 | Logging & Record-Keeping | Art. 12, 19 | Implementation | Automatic logging of AI system operations |
| A7.1 | — Automatic logging | Art. 12(1) | Implementation | Events, inputs, outputs, decisions |
| A7.2 | — Log retention (min. 6 months) | Art. 19(1) | Implementation | Retain automatically generated logs for at least six months |
| A8 | Transparency & User Info | Art. 13, 50 | Implementation | Information for users: “We use AI” |
| A8.1 | — AI disclosure | Art. 50(1) | Implementation | Label AI interaction for users |
| A8.2 | — Instructions for use | Art. 13 | Implementation | Documentation for deployers |
| A9 | Human Oversight | Art. 14 | Implementation | Human-in-the-loop mechanisms for high-risk AI |
| A9.1 | — Override capability | Art. 14(4)(a) | Implementation | Ability to override AI decisions |
| A9.2 | — Stop mechanism | Art. 14(4)(e) | Implementation | Ability to stop the AI system |
| A10 | Accuracy & Robustness | Art. 15 | Testing | Testing, monitoring, drift detection |
| A10.1 | — Accuracy testing | Art. 15(1) | Testing | Regular accuracy testing |
| A10.2 | — Robustness testing | Art. 15(4) | Testing | Robustness testing (adversarial, edge cases) |
| A10.3 | — Drift detection | Art. 15(3) | Testing | Performance degradation monitoring |
| A11 | Conformity Assessment | Art. 43, 47-48 | Conformity | Conformity assessment for high-risk AI |
| A11.1 | — Internal assessment | Art. 43(2) | Conformity | Internal conformity assessment (for most high-risk) |
| A11.2 | — CE marking | Art. 48 | Conformity | CE marking before placing on market |
| A12 | EU Database Registration | Art. 49, 71 | Conformity | Registration of high-risk AI in the EU database |
| A12.1 | — Provider registration | Art. 49(1) | Conformity | Provider registration |
| A12.2 | — System registration | Art. 49(2) | Conformity | Registration of each high-risk AI system |
| A13 | FRIA (Deployers) | Art. 27 | Governance | Fundamental Rights Impact Assessment |
| A13.1 | — FRIA for high-risk AI | Art. 27(1) | Governance | Assessment of impact on fundamental rights |
| A13.2 | — FRIA notification | Art. 27(4) | Governance | Notification of FRIA results (public authorities) |
| A14 | Third-Party AI (GPAI) Audit | Art. 51-56 | Governance | Audit Claude, ChatGPT: Privacy, security, ToS |
| A14.1 | — DPA with all GPAI providers | GDPR Art. 28 | Governance | Data Processing Agreements |
| A14.2 | — ToS review (acceptable use) | Art. 53 | Governance | Review of terms of use |
| A14.3 | — No-PII policy enforcement | GDPR + AI Act | Governance | Prohibit sending personal data in prompts to external GPAI services |
| A14.4 | — Audit logging enabled | Art. 12 | Governance | Logging of GPAI usage |
| A15 | AI Incident Management | Art. 72, 73 | Monitoring | Procedure for reporting AI incidents |
| A15.1 | — Incident classification | Art. 73 | Monitoring | Classification of AI incident severity |
| A15.2 | — Serious incident reporting | Art. 73(1) | Monitoring | Reporting serious incidents to authorities |
| A16 | Post-Market Monitoring | Art. 72 | Monitoring | Continuous monitoring after deployment |
| A16.1 | — Monitoring plan | Art. 72(1) | Monitoring | Post-market monitoring plan |
| A16.2 | — Feedback collection | Art. 72(2) | Monitoring | Collection of user feedback |
| A17 | Bias Testing (High-Risk) | Art. 10, 15 | Testing | Regular bias testing for high-risk AI |
| A17.1 | — Bias testing setup | Art. 10(2)(f) | Testing | Setup of bias testing |
| A17.2 | — Regular bias audits | Art. 15 | Testing | Regular audits (monthly/quarterly) |
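The automatic logging required by A7.1 can be sketched as a minimal append-only event logger. This is an illustrative shape only; the function name `log_ai_event` and the record fields are assumptions, not a schema mandated by the AI Act:

```python
import json
import time
import uuid

def log_ai_event(log, system_id, inputs, output, decision):
    """Append one structured record per AI call (Art. 12-style event log).
    `log` is any list-like sink; production systems would use an
    append-only store with a retention policy of at least six months."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        "decision": decision,
    }
    log.append(json.dumps(record))  # serialized so records are immutable once written
    return record

# Usage: wrap every model call so inputs, outputs, and decisions are captured.
events = []
log_ai_event(events, "credit-scoring-v2", {"income": 50000}, 0.73, "approve")
```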
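Drift detection (A10.3) is commonly implemented with the Population Stability Index over model scores. A minimal sketch; the bin count and the 0.25 alert threshold are conventional rules of thumb, not regulatory values:

```python
import math
from collections import Counter

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a live one.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 significant drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # avoid zero-width bins for constant data

    def bucket(xs):
        # Histogram as fractions per bin, floored at a tiny value so log() is defined.
        counts = Counter(min(int((x - lo) / width), bins - 1) for x in xs)
        return [max(counts.get(i, 0) / len(xs), 1e-6) for i in range(bins)]

    e, a = bucket(expected), bucket(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

An unchanged distribution scores 0; a shifted one triggers the alert threshold and would feed the post-market monitoring plan (A16).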
Critical Path
- A1 AI Inventory -> complete by 31.1.2026
- A2 Risk Classification -> complete by 28.2.2026
- A3 AI Governance & QMS -> complete by 31.3.2026
- A5 Data Governance -> complete by 30.4.2026
- A4 Risk Management System -> complete by 31.5.2026
- A11 Conformity Assessment -> complete by 30.6.2026
- A12 EU Database Registration -> before placing on market