AI Governance
Version: 1.0 | Effective from: 1 January 2026
1. Purpose
This directive defines how the organization manages the development, deployment, and use of AI systems in compliance with the EU AI Act (Regulation (EU) 2024/1689).
2. Scope
This directive applies to:
- All AI/ML systems developed internally
- Third-party AI (Claude, GPT, etc.)
- AI embedded in products
- AI used in internal processes
3. AI Classification & Risk Assessment
3.1 AI systems inventory
Every AI system must be documented:
| Field | Description |
|---|---|
| AI System ID | Unique identifier |
| Name | Descriptive name |
| Type | Internal / Third-party / Embedded |
| Purpose | Business use case |
| Responsible person | Named individual accountable for the system |
| Risk Level | Prohibited / High / Medium / Low |
| Status | Development / Testing / Production / Retired |
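The inventory fields above can be captured as a structured record. The following is a minimal sketch; the class and field names are illustrative, not a mandated schema:

```python
from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "Prohibited"
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

class Status(Enum):
    DEVELOPMENT = "Development"
    TESTING = "Testing"
    PRODUCTION = "Production"
    RETIRED = "Retired"

@dataclass
class AISystemRecord:
    """One row of the AI systems inventory (field names are assumptions)."""
    system_id: str          # unique identifier, e.g. "AI-0042"
    name: str               # descriptive name
    system_type: str        # "Internal" / "Third-party" / "Embedded"
    purpose: str            # business use case
    responsible_person: str # named accountable individual
    risk_level: RiskLevel
    status: Status
```

Using enums for risk level and status keeps inventory entries restricted to the values defined in this directive.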
3.2 Risk Classification
Step 1: Identify the AI system
Step 2: Classify by risk:
| Category | Examples | Action |
|---|---|---|
| Prohibited | Real-time remote biometric ID in public spaces, social scoring, manipulative techniques | STOP: do not use |
| High-Risk | Credit scoring, employment, health, education, emotion recognition | Full compliance (Section 4) |
| Medium-Risk | Chatbots, deepfakes, AI-generated content | Transparency (Section 8) |
| Low-Risk | Spam filters, analytics, recommendations | Minimal controls |
Step 3: Document the decision
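The three steps above can be sketched as a helper that maps a classified risk level to its required action and emits a documented decision. The function name, dictionary, and section references are assumptions for illustration:

```python
# Maps each risk level to the action required by the classification table.
RISK_ACTIONS = {
    "Prohibited": "STOP: do not use",
    "High": "Apply full high-risk controls",
    "Medium": "Apply transparency obligations",
    "Low": "Apply minimal controls",
}

def classification_decision(system_id: str, risk_level: str, rationale: str) -> dict:
    """Step 3: produce the documented classification decision."""
    if risk_level not in RISK_ACTIONS:
        raise ValueError(f"Unknown risk level: {risk_level}")
    return {
        "system_id": system_id,
        "risk_level": risk_level,
        "required_action": RISK_ACTIONS[risk_level],
        "rationale": rationale,
    }
```

Rejecting unknown risk levels ensures every system is explicitly classified before any action is recorded.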
4. High-Risk AI — Mandatory Controls
4.1 Pre-deployment
| Control | Description |
|---|---|
| Design Audit | Is AI necessary? Is there an alternative? |
| Data Assessment | Training data quality, bias check |
| DPIA | Data Protection Impact Assessment |
| Technical Documentation | Model card, limitations, testing |
| Human Oversight Design | Appeal process, override capability |
4.2 Post-deployment
| Control | Frequency |
|---|---|
| Performance Monitoring | Continuous |
| Bias Testing | Monthly |
| Accuracy Validation | Quarterly |
| Incident Review | On incident |
| Re-certification | Annually |
5. GPAI (Third-Party AI)
5.1 Permitted use
| Vendor | Permitted use cases | Restrictions |
|---|---|---|
| Claude (Anthropic) | Content, analysis, coding | No PII without encryption |
| GPT-4 (OpenAI) | Content, analysis, coding | No PII without encryption |
| Perplexity | Research, search | No confidential data |
5.2 Mandatory Controls
Before using GPAI:
- Data Processing Agreement (DPA) signed
- Terms of Service reviewed
- Data classification verified: no PII or confidential data
- Audit logging enabled
- Users informed that AI is in use
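The checklist above can be enforced as a simple gate: use of a GPAI tool is cleared only when every mandatory control is complete. The control identifiers below are assumptions mirroring the bullets:

```python
# Hypothetical control identifiers, one per mandatory pre-use control.
GPAI_PREUSE_CONTROLS = {
    "dpa_signed",
    "tos_reviewed",
    "no_pii_or_confidential",
    "audit_logging_enabled",
    "users_informed",
}

def gpai_cleared(completed: set[str]) -> bool:
    """True only when every mandatory pre-use control is completed."""
    return GPAI_PREUSE_CONTROLS <= completed

def missing_controls(completed: set[str]) -> set[str]:
    """Controls still outstanding before the GPAI tool may be used."""
    return GPAI_PREUSE_CONTROLS - completed
```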
5.3 Prohibited practices with GPAI
- Sending PII (names, email addresses, ID or phone numbers)
- Sending health data
- Sending financial data
- Sending confidential business data
- Automated decision-making without human review
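A pre-submission screen can help enforce the prohibitions above by flagging prompts that appear to contain PII before they reach a GPAI vendor. This is a minimal sketch: the regex patterns are illustrative assumptions, not an exhaustive detector, and a real deployment would use a vetted PII-detection library:

```python
import re

# Illustrative patterns only; these regexes are assumptions for the sketch
# and will miss many PII forms (names, health data, account numbers, etc.).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the PII categories detected in a prompt.

    An empty list means the prompt passed the pre-submission screen;
    any hit should block the request pending human review.
    """
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]
```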
6. AI Development Lifecycle
6.1 Development phases
7. AI Incident Management
7.1 AI incident definition
| Type | Example | Severity |
|---|---|---|
| Bias | Discriminatory outputs | High |
| Hallucination | Factually incorrect information | Medium |
| Privacy breach | PII in outputs | Critical |
| Malfunction | System not working | Medium |
| Security | Adversarial attack | Critical |
7.2 Response Process
8. User Transparency
8.1 Mandatory disclosures
| Situation | Disclosure |
|---|---|
| Chatbot | "You are communicating with an AI assistant" |
| AI recommendation | "This content is recommended by AI" |
| AI decision | "This decision was supported by AI" |
| Deepfake/synthetic | "This content was generated by AI" |
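To keep disclosure wording consistent across products, the table above can be centralized as a single lookup that fails loudly for unapproved situations. A minimal sketch, with names chosen for illustration:

```python
# Approved disclosure texts, one per situation in the table above.
DISCLOSURES = {
    "chatbot": "You are communicating with an AI assistant",
    "recommendation": "This content is recommended by AI",
    "decision": "This decision was supported by AI",
    "synthetic": "This content was generated by AI",
}

def disclosure_for(situation: str) -> str:
    """Return the approved disclosure text for a situation.

    Raising on unknown situations prevents products from shipping
    ad-hoc or missing disclosures.
    """
    if situation not in DISCLOSURES:
        raise ValueError(f"No approved disclosure for '{situation}'")
    return DISCLOSURES[situation]
```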
8.2 Right to Explanation
If AI affects decisions about a person:
- The data subject has the right to know that AI was used
- The data subject has the right to an explanation of the factors involved
- The data subject has the right to human review
9. Documentation Requirements
9.1 Model Card (for each AI system)
| Section | Content |
|---|---|
| Overview | Purpose, owner, status |
| Training Data | Sources, size, preprocessing |
| Architecture | Model type, parameters |
| Performance | Accuracy, limitations |
| Fairness | Bias testing results |
| Limitations | Known issues, edge cases |
| Intended Use | Approved use cases |
| Prohibited Use | What NOT to use it for |
9.2 Retention
| Document | Retention |
|---|---|
| Model Card | Lifetime of model + 5 years |
| Training Data Info | Lifetime of model + 5 years |
| Testing Results | 5 years |
| Incident Reports | 5 years |
| Audit Logs | 5 years |
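The retention table can be applied mechanically: fixed-period documents expire five years after creation, while model-lifetime documents expire five years after the model is retired. A sketch, assuming the document-type keys below:

```python
from datetime import date
from typing import Optional

# None marks "lifetime of model + 5 years"; an int is years from creation.
RETENTION_YEARS = {
    "model_card": None,
    "training_data_info": None,
    "testing_results": 5,
    "incident_report": 5,
    "audit_log": 5,
}

def retention_expiry(doc_type: str, created: date,
                     model_retired: Optional[date] = None) -> date:
    """Earliest disposal date per the retention table.

    Note: replace() raises for Feb 29 base dates in non-leap target years;
    a production version would round that edge case to Feb 28 or Mar 1.
    """
    years = RETENTION_YEARS[doc_type]
    if years is None:
        if model_retired is None:
            raise ValueError("model-lifetime documents need a retirement date")
        base = model_retired
        years = 5
    else:
        base = created
    return base.replace(year=base.year + years)
```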
10. Training & Awareness
| Role | Training | Frequency |
|---|---|---|
| All employees | AI basics, acceptable use | Annually |
| Data Science | Bias testing, fairness | Quarterly |
| Product | AI transparency, UX | Semi-annually |
| Legal | AI Act requirements | Annually |
| Leadership | AI governance, risk | Annually |
11. Policy Review
- Quarterly: Review AI inventory, incidents
- Semi-annually: Update per regulatory changes
- Annually: Full policy review + board approval
Next review: Q2 2026