
AI Governance

Version: 1.0 | Effective from: 1 January 2026


1. Purpose

This directive defines how the organization manages the development, deployment, and use of AI systems in compliance with the EU AI Act (Regulation (EU) 2024/1689).


2. Scope

This directive applies to:

  • All AI/ML systems developed internally
  • Third-party AI (Claude, GPT, etc.)
  • AI embedded in products
  • AI used in internal processes

3. AI Classification & Risk Assessment

3.1 AI systems inventory

Every AI system must be documented:

Field              | Description
-------------------|----------------------------------------------
AI System ID       | Unique identifier
Name               | Descriptive name
Type               | Internal / Third-party / Embedded
Purpose            | Business use case
Responsible person | Accountable owner
Risk Level         | Prohibited / High / Medium / Low
Status             | Development / Testing / Production / Retired
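
The inventory fields above map naturally onto a typed record. A minimal sketch in Python (the class name, field names, and validation rule are illustrative, not mandated by this directive; the allowed values come from the table):

```python
from dataclasses import dataclass

# Allowed values taken from the inventory table above.
TYPES = {"Internal", "Third-party", "Embedded"}
RISK_LEVELS = {"Prohibited", "High", "Medium", "Low"}
STATUSES = {"Development", "Testing", "Production", "Retired"}

@dataclass(frozen=True)
class AISystemRecord:
    system_id: str           # AI System ID: unique identifier
    name: str                # Descriptive name
    type: str                # Internal / Third-party / Embedded
    purpose: str             # Business use case
    responsible_person: str  # Accountable owner
    risk_level: str          # Prohibited / High / Medium / Low
    status: str              # Development / Testing / Production / Retired

    def __post_init__(self):
        # Reject values outside the controlled vocabularies of the table.
        if self.type not in TYPES:
            raise ValueError(f"unknown type: {self.type}")
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level}")
        if self.status not in STATUSES:
            raise ValueError(f"unknown status: {self.status}")

record = AISystemRecord(
    system_id="AI-0001",
    name="Support chatbot",
    type="Third-party",
    purpose="Customer support triage",
    responsible_person="Head of Support",
    risk_level="Medium",
    status="Production",
)
```

Making the record frozen means an inventory entry can only change via an explicit, auditable replacement.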

3.2 Risk Classification

Step 1: Identify the AI system

Step 2: Classify by risk:

Category    | Examples                                             | Action
------------|------------------------------------------------------|------------------
Prohibited  | Real-time biometric ID, social scoring, manipulation | STOP — do not use
High-Risk   | Credit scoring, employment, health, education        | Full compliance
Medium-Risk | Chatbots, deepfakes, emotion recognition             | Transparency
Low-Risk    | Spam filters, analytics, recommendations             | Minimal

Step 3: Document the decision
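
The three steps above reduce to a lookup from risk category to mandated action, plus a documented decision. A sketch (category names are from the classification table; the function itself is illustrative):

```python
# Category -> mandated action, following the classification table above.
RISK_ACTIONS = {
    "Prohibited": "STOP - do not use",
    "High": "Full compliance controls (section 4)",
    "Medium": "Transparency obligations",
    "Low": "Minimal requirements",
}

def classify(category: str) -> str:
    """Step 2: map a risk category to its mandated action.
    Step 3: return a record suitable for the documentation trail."""
    try:
        action = RISK_ACTIONS[category]
    except KeyError:
        raise ValueError(f"unknown risk category: {category}") from None
    return f"{category}: {action}"
```

An unknown category fails loudly rather than defaulting to a low-risk action, which matches the directive's classify-before-use ordering.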


4. High-Risk AI — Mandatory Controls

4.1 Pre-deployment

Control                 | Description
------------------------|-------------------------------------------
Design Audit            | Is AI necessary? Is there an alternative?
Data Assessment         | Training data quality, bias check
DPIA                    | Data Protection Impact Assessment
Technical Documentation | Model card, limitations, testing
Human Oversight Design  | Appeal process, override capability

4.2 Post-deployment

Control                | Frequency
-----------------------|-------------
Performance Monitoring | Continuous
Bias Testing           | Monthly
Accuracy Validation    | Quarterly
Incident Review        | On incident
Re-certification       | Annually

5. GPAI (Third-Party AI)

5.1 Permitted use

Vendor             | Permitted use cases       | Restrictions
-------------------|---------------------------|---------------------------
Claude (Anthropic) | Content, analysis, coding | No PII without encryption
GPT-4 (OpenAI)     | Content, analysis, coding | No PII without encryption
Perplexity         | Research, search          | No confidential data

5.2 Mandatory Controls

Before using GPAI:

  • DPA signed
  • Terms of Service reviewed
  • Data classification: NO PII/confidential
  • Audit logging enabled
  • User informed about AI use
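
The five controls above amount to a gate check before any GPAI call. A sketch (the checklist keys are paraphrases of the bullets, not official field names):

```python
# One boolean per mandatory control from section 5.2.
REQUIRED_CONTROLS = (
    "dpa_signed",
    "tos_reviewed",
    "no_pii_or_confidential",
    "audit_logging_enabled",
    "user_informed",
)

def gpai_use_permitted(checklist: dict) -> bool:
    """All five controls must hold; a missing key counts as not done."""
    return all(checklist.get(control, False) for control in REQUIRED_CONTROLS)
```

Treating a missing key as a failure keeps the default deny-by-default, which is the safer reading of "before using GPAI".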

5.3 Prohibited practices with GPAI

  • Sending PII (names, emails, numbers)
  • Sending health data
  • Sending financial data
  • Sending confidential business data
  • Automated decision-making without human review
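
A coarse automated screen can catch the most obvious violations in the list above before a prompt leaves the organization. The patterns below are illustrative only and are no substitute for proper data classification:

```python
import re

# Illustrative patterns: emails, phone-like digit runs, IBAN-style strings.
# These catch obvious cases only; names and free-text health or financial
# details require data classification, not regexes.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def screen_prompt(text: str) -> list:
    """Return the names of PII patterns found; an empty list means the
    coarse screen passed (it does NOT prove the text is PII-free)."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
```

A non-empty result should block the request and route it to human review, in line with the last bullet above.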

6. AI Development Lifecycle

6.1 Development phases
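
The directive does not enumerate the phases in this text, but the Status values in the section 3.1 inventory (Development / Testing / Production / Retired) imply a linear lifecycle. A sketch of the allowed transitions; the transition rules themselves are an assumption, not policy:

```python
# Status values from the section 3.1 inventory.
# The transition map is an assumption: failed testing returns to
# Development, and Retired is terminal.
ALLOWED_TRANSITIONS = {
    "Development": {"Testing"},
    "Testing": {"Development", "Production"},
    "Production": {"Retired"},
    "Retired": set(),
}

def advance(current: str, target: str) -> str:
    """Move a system to a new lifecycle status, rejecting illegal jumps
    (e.g. straight from Development to Production)."""
    if target not in ALLOWED_TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target
```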


7. AI Incident Management

7.1 AI incident definition

Type           | Example                          | Severity
---------------|----------------------------------|----------
Bias           | Discriminatory outputs           | High
Hallucination  | Factually incorrect information  | Medium
Privacy breach | PII in outputs                   | Critical
Malfunction    | System failure or unavailability | Medium
Security       | Adversarial attack               | Critical

7.2 Response Process
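
The response process is not detailed in this text, but the 7.1 table supports a first triage step: look up the severity, then escalate. A sketch, with the escalation targets as assumptions rather than policy:

```python
# Incident type -> severity, from the table in section 7.1.
SEVERITY = {
    "Bias": "High",
    "Hallucination": "Medium",
    "Privacy breach": "Critical",
    "Malfunction": "Medium",
    "Security": "Critical",
}

# Assumed escalation per severity (illustrative, not from the directive).
ESCALATION = {
    "Critical": "Notify DPO and leadership immediately",
    "High": "Notify system owner within 24 hours",
    "Medium": "Log and review at next incident review",
}

def triage(incident_type: str) -> tuple:
    """Return (severity, escalation action) for a reported incident type."""
    severity = SEVERITY.get(incident_type)
    if severity is None:
        raise ValueError(f"unknown incident type: {incident_type}")
    return severity, ESCALATION[severity]
```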


8. User Transparency

8.1 Mandatory disclosures

Situation          | Disclosure
-------------------|----------------------------------------------
Chatbot            | "You are communicating with an AI assistant"
AI recommendation  | "This content is recommended by AI"
AI decision        | "This decision was supported by AI"
Deepfake/synthetic | "This content was generated by AI"

8.2 Right to Explanation

If AI affects decisions about a person:

  • The data subject has the right to know that AI was used
  • The data subject has the right to an explanation of the factors involved
  • The data subject has the right to human review

9. Documentation Requirements

9.1 Model Card (for each AI system)

Section        | Content
---------------|------------------------------
Overview       | Purpose, owner, status
Training Data  | Sources, size, preprocessing
Architecture   | Model type, parameters
Performance    | Accuracy, limitations
Fairness       | Bias testing results
Limitations    | Known issues, edge cases
Intended Use   | Approved use cases
Prohibited Use | What NOT to use it for

9.2 Retention

Document           | Retention
-------------------|------------------------------
Model Card         | Lifetime of model + 5 years
Training Data Info | Lifetime of model + 5 years
Testing Results    | 5 years
Incident Reports   | 5 years
Audit Logs         | 5 years

10. Training & Awareness

Role          | Training                  | Frequency
--------------|---------------------------|---------------
All employees | AI basics, acceptable use | Annually
Data Science  | Bias testing, fairness    | Quarterly
Product       | AI transparency, UX       | Semi-annually
Legal         | AI Act requirements       | Annually
Leadership    | AI governance, risk       | Annually

11. Policy Review

  • Quarterly: Review AI inventory, incidents
  • Semi-annually: Update per regulatory changes
  • Annually: Full policy review + board approval

Next review: Q2 2026