
GPAI --- Deployer Obligations

A practical guide for every organisation that uses general-purpose AI tools.


Who is a GPAI deployer?

A deployer is any organisation that uses a general-purpose AI (GPAI) system in the course of its activities --- under its own responsibility.

Examples

Scenario | Your company’s role
Employees use ChatGPT to draft emails | Deployer
Developers write code with GitHub Copilot | Deployer
Marketing generates copy in Claude | Deployer
Customer service deploys an AI chatbot | Deployer
HR screens CVs through an AI tool | Deployer
An accountant uses Copilot in Excel | Deployer

Deployer vs. Provider --- key distinction

A provider develops a GPAI model or system and places it on the market; a deployer merely uses it under its own authority. If your employees use any AI tool at work, you are a GPAI deployer. It does not matter whether you have 5 or 500 employees, and it does not matter whether you pay for the AI or use it for free.


Timeline --- What applies and when?

Date | What applies | Article | Status
2.2.2025 | AI literacy obligation | Art. 4 | ALREADY IN EFFECT
2.2.2025 | Prohibited practices | Art. 5 | ALREADY IN EFFECT
2.8.2025 | GPAI rules, governance, penalties | Art. 53-56 | ALREADY IN EFFECT
2.8.2026 | High-risk AI full applicability | Art. 6-49 | Upcoming

Article 26 --- Deployer obligations

Article 26 of Regulation (EU) 2024/1689 defines eight key obligations for every deployer of an AI system. Below is a breakdown of each, with practical implications for your organisation.

(1) Technical and organisational measures

Deployers shall take appropriate technical and organisational measures to ensure that they use AI systems in accordance with the instructions for use.

What this means in practice:

  • Read the Terms of Service and documentation for each AI tool
  • Follow the limits and recommendations of the provider (acceptable use policy)
  • Do not use AI tools for purposes for which they are not intended
  • Create an internal AI Acceptable Use Policy (AUP) defining permitted and prohibited uses
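The AUP items above can also be captured as machine-checkable policy data, so that tooling can answer "is this tool approved?" consistently. A minimal sketch in Python (tool names, categories and rules are illustrative placeholders, not recommendations):

```python
# Minimal sketch of an internal AI Acceptable Use Policy as data.
# All entries below are illustrative examples, not a model policy.
AUP = {
    "approved_tools": {"ChatGPT Enterprise", "Copilot for Microsoft 365"},
    "prohibited_uses": {
        "uploading customer personal data",
        "automated decisions about employees",
    },
    "requires_human_review": {"customer-facing text", "legal documents"},
}

def is_tool_approved(tool: str) -> bool:
    """Check a tool against the AUP before employees may use it."""
    return tool in AUP["approved_tools"]

print(is_tool_approved("ChatGPT Enterprise"))   # True
print(is_tool_approved("Random free chatbot"))  # False
```

Keeping the policy as data rather than only as a PDF makes the approval process in Phase 2 auditable: every tool request is evaluated against the same list.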

(2) Competent persons for oversight

Deployers shall ensure that the persons responsible for overseeing AI systems have sufficient AI literacy.

What this means in practice:

  • Designate specific persons responsible for AI in your organisation (need not be full-time)
  • These persons must undergo AI literacy training (Art. 4)
  • Recommended: CTO/CISO plus heads of departments that actively use AI
  • Document competences and completed training

(3) Relevance of input data

Deployers shall ensure that input data is relevant with regard to the intended purpose of the AI system.

What this means in practice:

  • Do not enter data into AI systems that does not belong there (personal data, health data, financial data without adequate protection)
  • Define rules for what data may and may not be fed into AI
  • Pay particular attention to GDPR --- personal data in prompts constitutes processing
  • Create a data classification: public / internal / confidential / strictly confidential plus rules for AI
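The four-level classification above can be wired into a simple pre-prompt gate. A minimal sketch, with the caveat that the mapping of class to "allowed" is an assumption each organisation must decide for itself:

```python
# Illustrative data-classification gate: which classes may enter an AI prompt.
# The four classes mirror the classification suggested above; the True/False
# mapping is an example policy, not legal advice.
ALLOWED_IN_PROMPTS = {
    "public": True,
    "internal": True,              # e.g. only with an enterprise licence + DPA
    "confidential": False,
    "strictly_confidential": False,
}

def may_enter_prompt(classification: str) -> bool:
    """Unknown or unclassified data is denied by default."""
    return ALLOWED_IN_PROMPTS.get(classification, False)

print(may_enter_prompt("public"))        # True
print(may_enter_prompt("confidential"))  # False
print(may_enter_prompt("unlabelled"))    # False (deny by default)
```

The deny-by-default branch matters in practice: data that nobody has classified yet should be treated as if it were confidential.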

(4) Monitoring AI operation

Deployers shall monitor the operation of the AI system on the basis of the instructions for use.

What this means in practice:

  • Regularly check the quality of AI outputs (accuracy, hallucinations, bias)
  • Record incidents --- when AI provided incorrect, misleading or harmful output
  • Establish a process for escalation (who resolves it, how quickly)
  • Recommended: quarterly review of AI output quality

(5) Obligation to inform employees

Deployers shall inform employees and their representatives about the deployment of the AI system in the workplace.

What this means in practice:

  • Inform employees that AI tools are used in the organisation
  • Communicate which tools they are and what they are used for
  • If AI affects decisions about employees (performance reviews, planning), inform them in advance
  • Consult with trade unions / employee representatives (if applicable)
  • Recommended: internal announcement plus FAQ document

(6) Retention of logs

Deployers shall retain logs automatically generated by the AI system for a period appropriate to the intended purpose, but no less than 6 months.

What this means in practice:

  • Retain the history of interactions with AI (prompts plus responses) for at least 6 months
  • Enterprise licences (ChatGPT Enterprise, Claude for Business, Copilot for Microsoft 365) typically retain logs
  • Free tier and personal accounts typically do not retain logs sufficiently --- this is a problem
  • Consider centralised access through enterprise licences instead of individual accounts
  • Ensure logs do not contain personal data, or handle them in accordance with GDPR
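The six-month minimum can be enforced with a small retention check before any log purge. A sketch assuming date-stamped log entries; 183 days is used here as a conservative approximation of six months:

```python
from datetime import date, timedelta

# At least six months per Art. 26(6); 183 days is a conservative approximation.
MIN_RETENTION = timedelta(days=183)

def may_delete(log_created: date, today: date) -> bool:
    """A log entry may only be purged once the minimum retention has passed."""
    return today - log_created >= MIN_RETENTION

print(may_delete(date(2025, 1, 2), date(2025, 8, 2)))  # True (about 7 months)
print(may_delete(date(2025, 6, 1), date(2025, 8, 2)))  # False (about 2 months)
```

A check like this belongs in whatever job rotates or archives AI interaction logs, so that routine cleanup cannot silently violate the retention floor.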

(7) DPIA obligation

Where the use of a high-risk AI system requires a Data Protection Impact Assessment (DPIA) under Art. 35 GDPR, the deployer shall carry it out before deployment, using the information supplied by the provider.

What this means in practice:

  • If AI processes personal data on a large scale, carry out a DPIA (Art. 35 GDPR)
  • Particularly relevant for: AI in HR, customer service, marketing personalisation
  • For standard GPAI use (internal assistance without personal data), a DPIA is typically not required
  • In case of doubt, consult your DPO / GDPR specialist

(8) Cooperation with supervisory authorities

Deployers shall cooperate with the relevant supervisory authorities at their request.

What this means in practice:

  • Provide information about your AI deployment on request
  • Maintain documentation so that it can be presented (inventory, AUP, logs, training records)
  • Prepare for the possibility that the authority may request access to logs, documentation, contracts

Article 50 --- Transparency obligations

Article 50 supplements deployer obligations with transparency requirements towards persons who interact with AI.

(1) Informing about interaction with AI

If your customers or users communicate with an AI system, they must be informed that they are interacting with AI, not with a human.

Channel | Correct implementation | Wrong
Website chatbot | "You are communicating with an AI assistant. To connect with a human, type 'operator'." | No indication
Voice bot on a phone line | "This call is being handled by the AI assistant of company XY." | Playing an AI voice without warning
Email | "This email was prepared with AI assistance and reviewed by [name]." | Sending an AI response as if written by a person

(2) Labelling AI-generated content

AI-generated text, images, audio and video intended for the public must be labelled as such.

Examples of correct labelling:

  • Marketing: "Illustration created using AI" (on an AI-generated image)
  • Blog post: "This article was created with AI assistance and edited by the editorial team."
  • Social media: use available platform tools to label AI content.

(3) Automated decision-making

If an AI system makes decisions affecting natural persons, those persons must be informed.

Relevant scenarios for organisations:

  • AI-powered customer sorting systems (routing, prioritisation)
  • Automated supplier evaluation
  • AI scoring within business processes

(4) Exceptions

The transparency obligation does not apply to:

  • AI used in the context of artistic creation where it is apparent from the context
  • Satire and parody using AI
  • AI tools for internal analytics without impact on the rights of third parties

DPA requirements --- Contracts with AI providers

Using GPAI tools requires contractual data protection arrangements, especially where any company or personal data enters the AI.

Provider overview

Provider | Product | DPA / contractual document | Where to find it
OpenAI | ChatGPT, GPT API | DPA, Enterprise Agreement | platform.openai.com/policies
Anthropic | Claude, API | Terms of Service, Commercial Agreement | anthropic.com/terms
Microsoft | Copilot (M365, GitHub) | DPA (part of M365 Enterprise Agreement) | Microsoft Trust Center
Google | Gemini, Vertex AI | Cloud DPA, Enterprise Agreement | cloud.google.com/terms
GitHub | Copilot | GitHub Customer Agreement + DPA | github.com/customer-terms

What a DPA must contain

Area | Requirement | Why it matters
Purpose of processing | Clear definition of what the provider processes data for | GDPR Art. 28
Security measures | Encryption, access controls, incident response | Protection of company data
Geographic restrictions | Where data is processed (EU / US / other) | GDPR transfer rules, Schrems II
Right to audit | Ability to verify DPA compliance | Compliance demonstrability
Sub-processors | List and approval process for sub-processors | Control over data flow
Incident reporting | Deadlines and procedure for reporting security incidents | GDPR Art. 33 --- 72 hours
Retention and deletion | How long data is retained and how it is deleted | Right to erasure
Model training | Whether your data is used to train the model | Key --- enterprise licences typically do not, free tier does

Practical steps --- Implementation checklist

Phase 1: Inventory (Week 1)

  • Map all AI tools used in the organisation (formal and informal)
  • Identify who uses them, for what purpose and how often
  • Classify usage: internal assistance / customer contact / decision-making about persons
  • Check existing DPAs and contracts with AI providers
  • Identify Shadow AI --- tools used without management’s knowledge
  • Document results in the AI inventory
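The inventory built in Phase 1 can be kept as structured records, which also makes Shadow AI easy to surface. A sketch with illustrative field names and example rows (not real assessments of the tools mentioned):

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row of the Phase 1 AI inventory (fields are illustrative)."""
    tool: str
    users: str           # team or role using it
    purpose: str
    usage_class: str     # "internal" / "customer_contact" / "decisions_about_persons"
    dpa_in_place: bool
    approved: bool       # False marks Shadow AI

inventory = [
    AIToolRecord("GitHub Copilot", "Developers", "code completion",
                 "internal", dpa_in_place=True, approved=True),
    AIToolRecord("Free ChatGPT", "Marketing", "ad copy",
                 "internal", dpa_in_place=False, approved=False),
]

# Shadow AI = anything in use without approval.
shadow_ai = [r.tool for r in inventory if not r.approved]
print(shadow_ai)  # ['Free ChatGPT']
```

The same records feed Phase 4 directly: filtering on `dpa_in_place=False` yields the list of providers that still need a contract.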

Phase 2: Documentation (Week 2)

  • Create an AI Acceptable Use Policy (AUP) --- what is permitted, what is prohibited
  • Define an approval process for introducing new AI tools
  • Set up logging and archiving of AI interactions (min. 6 months per Art. 26(6))
  • Prepare a notification for employees about AI deployment (Art. 26(5))
  • Define data classification in relation to AI (what may/may not go into a prompt)
  • Document persons responsible for AI oversight (Art. 26(2))

Phase 3: Training (Week 3)

  • AI literacy training for all employees (Art. 4) --- what AI is, how it works, what the risks are
  • Specific training for active AI tool users --- prompt engineering, security, limitations
  • Training for management --- governance, responsibility, compliance
  • Documentation of training completion (attendance sheet, certificate)
  • Knowledge verification (quiz, test, practical demonstration)
  • Plan for training recurrence (min. once per year, upon tool changes)

Phase 4: Contractual arrangements (Week 4)

  • Enter into DPAs with all AI providers where a DPA is missing
  • Migrate from free tier to enterprise licences (where relevant)
  • Check whether providers use data for model training
  • Verify geographic location of data processing (EU preference)
  • Check sub-processors for each provider

Phase 5: Monitoring (Ongoing)

  • Set up regular AI inventory review (quarterly)
  • Track incidents and near-misses (poor AI outputs, data leaks)
  • Update the AUP as new tools and regulations emerge
  • Conduct AUP compliance audits (spot checks)
  • Monitor changes in provider terms (ToS updates)
  • Prepare documentation for potential supervisory audits

GPAI vs. High-Risk --- When you need more

Most organisations are GPAI deployers --- they use general-purpose AI tools to support their operations. The obligations are manageable. But if you use AI for decision-making about people, you enter the high-risk regime with substantially stricter requirements.

Comparison of obligations

Obligation | Minimal risk | GPAI Deployer | High-Risk
AI literacy (Art. 4) | Yes | Yes | Yes
AUP / internal policy | Recommended | Yes | Yes
DPA with provider | Recommended | Yes | Yes
Logging (6 months) | No | Yes | Yes (5 years)
Informing employees | Recommended | Yes | Yes
Transparency (Art. 50) | No | Yes (if user-facing) | Yes
DPIA | No | When processing personal data | Yes
Risk Management System | No | No | Yes
Conformity Assessment | No | No | Yes
FRIA | No | No | Yes
EU registration | No | No | Yes

Most common mistakes

1. “GPAI doesn’t apply to us, we don’t have our own AI”

You don’t have your own AI model? That is irrelevant. If employees use ChatGPT, Copilot or Claude --- even on a free tier, even on their personal phone for work purposes --- you are a deployer. Using is not developing. The AI Act regulates both.

2. “We have an NDA, that’s enough”

An NDA protects confidentiality of information between two parties. An NDA is not a DPA. A Data Processing Agreement (DPA) is a specific contractual document required by GDPR (Art. 28) that addresses the processing of personal data. You need both.

3. “We’ll ban AI and be done with it”

Banning AI won’t solve the problem --- it will create a new one. Employees will use AI anyway, just covertly (Shadow AI). Shadow AI is worse than regulated AI because you have no visibility into what data enters the AI. A better strategy: regulate, train and monitor.

4. “IT will handle it”

AI governance is not an IT project. It concerns the entire organisation --- HR (training, informing employees), Legal (contracts, compliance), Management (strategy, accountability), Marketing (content transparency). IT handles the technical implementation, but the owner is company leadership.

5. “We’re a small company, it doesn’t apply to us”

The AI Act has no exemption for small companies regarding deployer obligations. Art. 4 (AI literacy) and Art. 26 (deployer obligations) apply to all organisations regardless of size. SMEs have certain accommodations for high-risk obligations, but the basic deployer obligations apply universally.

6. “Free ChatGPT is enough for business use”

The free tier of most GPAI tools uses your data to train the model, does not provide a DPA, lacks enterprise-grade logging and does not allow centralised management. For business deployment, you need an enterprise or business licence that addresses these issues.


Penalties for non-compliance

Type of violation | Maximum penalty
Violation of deployer obligations (Art. 26) | up to EUR 15 million or 3% of global turnover
Violation of transparency (Art. 50) | up to EUR 15 million or 3% of global turnover
Non-fulfilment of AI literacy (Art. 4) | up to EUR 10 million or 2% of global turnover
Providing false information to an authority | up to EUR 7.5 million or 1% of global turnover

Example: A company with EUR 50 million turnover --- maximum penalty for violating Art. 26 is EUR 1.5 million (3% of turnover).
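The arithmetic behind the example can be sketched as follows. This is a simplified illustration of the turnover-based bound only; which of the two bounds in Art. 99 actually controls (the fixed amount or the percentage) depends on the undertaking's status:

```python
def art26_turnover_cap_eur(turnover_eur: float) -> float:
    """Turnover-based cap for an Art. 26 violation: 3% of global turnover.
    Simplified illustration: the fixed EUR 15 million ceiling applies
    alongside this, and Art. 99 determines which bound controls."""
    return 0.03 * turnover_eur

print(art26_turnover_cap_eur(50_000_000))  # 3% of EUR 50 million = 1500000.0
```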


Next steps

  1. Understand deployer obligations (this guide)
  2. -> Conduct an AI inventory
  3. -> Classify risks
  4. -> Go through the compliance checklist
