TECHNOMATON | Docs SAI Certified Trainers

L1 — Align Guide

What is L1 — Align

L1 — Align is the governance layer of the AI-Native Entry Framework. Its goal is to ensure your organization uses artificial intelligence in compliance with legislation and has functional internal rules, trained employees, and an audit trail.

L1 belongs to the six-level N.A.T.I.V.E. methodology:

Level | Name      | Focus
L0    | Navigate  | Context and orientation in the regulatory landscape
L1    | Align     | Governance, legislative compliance
L2    | Transform | AI-Native foundations, building organizational DNA
L3    | Innovate  | Change management, AI culture adoption
L4    | Verify    | Audit, certification, compliance verification
L5    | Execute   | Implementation, deployment, operations

Why AI governance right now

Since 2 February 2025, the AI literacy obligation under Art. 4 of the AI Act and the prohibition of certain AI practices under Art. 5 have been in effect. Every organization that uses AI tools (ChatGPT, Copilot, Gemini…) must ensure its employees understand the basic rules and risks.

By 2 August 2026, organizations must be prepared for the full applicability of the AI Act — including the classification of high-risk systems, documentation, and technical requirements.

L1 will help you meet these obligations before sanctions apply.

Who is L1 for

  • Any organization that uses AI tools in day-to-day operations
  • Companies from 1 employee to corporates — the tool adapts to your size
  • You don’t need to be a tech company — it’s enough that your people use ChatGPT

What you get

After completing L1, you have:

  1. 9 ready-made documents — policies, directives, templates (-> details)
  2. Trained employees — presentation + quiz with certified results
  3. Audit trail — structured export of all responses, decisions, and results
  4. Organization assessment — readiness scan, gap analysis, sovereignty check
  5. Implementation plan — 54 items distributed across 8 phases
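The audit trail in item 3 is easiest to picture as a list of structured event records serialized for archival. A minimal sketch in Python; the field names (`actor`, `action`, `detail`, `timestamp`) are chosen for illustration and are not the tool's actual schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    """One entry in the audit trail: who did what, and when."""
    actor: str       # illustrative field names; the tool's real export may differ
    action: str
    detail: str
    timestamp: str

def export_audit_trail(events):
    """Serialize all recorded events as a JSON array for archival."""
    return json.dumps([asdict(e) for e in events], indent=2)

events = [
    AuditEvent("jane.doe", "quiz_passed", "score 16/18",
               datetime(2025, 3, 1, tzinfo=timezone.utc).isoformat()),
]
print(export_audit_trail(events))
```

The point of a structured (rather than free-text) trail is exactly what the QA phase later relies on: every response, decision, and result can be checked and exported mechanically.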

Two delivery modes

             | DIY (Self-Service)                          | Managed (Consultant-Led)
Who drives   | You                                         | Certified consultant
Tool         | Self-Assessment Tool                        | Self-Assessment Tool + consultant
Duration     | At your own pace                            | Typically 2–4 weeks
For whom     | Smaller companies, tech-savvy organizations | Larger organizations, regulated sectors

10 process phases

L1 consists of 10 phases that build on each other logically. Each phase addresses a specific part of the compliance puzzle:

1. Setup

What it addresses: Organization identification and environment configuration.

You fill in basic details — company name, registration number, sector, number of employees. These details are reflected in all generated documents. If you have a consultant, their profile is also configured.

Why you need it: Documents must be personalized for your organization. Generic templates without company identification carry no legal weight.

2. Discovery

What it addresses: Initial mapping of your current readiness.

Includes three tools (-> details):

  • Readiness Scan (20 questions, 100 points) — quick overview
  • Gap Analysis (32 items, 6 areas) — where your gaps are
  • Sovereignty Quick Scan (5 questions) — vendor dependency

Why you need it: Without diagnostics, you don’t know what you’re missing. The gap analysis reveals whether you already have the basics and only need to formalize them (DIY), or have significant gaps and need a systematic approach (Managed).
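The Readiness Scan's arithmetic follows from the numbers above: 20 questions on a 100-point scale. A minimal scoring sketch; the equal 5-points-per-question weighting is an assumption (20 × 5 = 100), not documented tool behaviour:

```python
def readiness_score(answers):
    """Score a 20-question readiness scan on a 100-point scale.

    `answers` maps question id -> fraction achieved (0.0 to 1.0).
    Assumption: every question carries equal weight, 5 points each.
    """
    assert len(answers) == 20, "the scan has exactly 20 questions"
    return round(5 * sum(answers.values()), 1)

# Example: 14 full "yes" answers, 6 "no" answers -> 70 of 100 points
print(readiness_score({f"q{i}": 1.0 if i <= 14 else 0.0 for i in range(1, 21)}))  # 70.0
```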

3. AI systems inventory

What it addresses: An inventory of all AI tools your organization uses.

The registry includes system name, vendor, purpose of use, types of data processed, and number of users. For common GPAI tools (ChatGPT, Copilot…), it offers quick add.

Why you need it: AI Act Art. 6 requires the classification of AI systems by risk. Without an inventory, you have nothing to classify. The registry is the foundation for all subsequent steps — from assessment to documentation.
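One registry entry is essentially a record with the fields listed above. A sketch of that shape in Python; the preset vendor and purpose values for the quick-add tools are illustrative defaults, not the tool's actual presets:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI systems registry (fields mirror those listed above)."""
    name: str
    vendor: str
    purpose: str
    data_types: list   # e.g. ["customer PII", "internal documents"]
    user_count: int

# Quick-add presets for common GPAI tools, as the guide describes;
# the values here are illustrative, not taken from the tool.
GPAI_PRESETS = {
    "ChatGPT": AISystemRecord("ChatGPT", "OpenAI", "general-purpose assistant", [], 0),
    "Copilot": AISystemRecord("Copilot", "Microsoft", "productivity assistant", [], 0),
}

registry = [GPAI_PRESETS["ChatGPT"]]
print(len(registry), registry[0].vendor)
```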

4. Classification

What it addresses: Determining your role under AI Act Art. 3 and the risk level of each system.

You assign each system a role (Provider / Deployer / Downstream / GPAI Provider) and a risk level (Prohibited / High / Limited / Minimal). The tool assists with an interactive decision tree.

Why you need it: Your role and risk level determine your obligations. A deployer of a high-risk system has different obligations than a downstream user of a minimal-risk tool.
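The two classification axes can be modelled as enumerations, with obligations derived from the combination. A sketch under stated assumptions: the rule that the Annex IV assessment (phase 5) is triggered for providers and deployers of high-risk systems is this example's simplification, not a statement of the tool's or the AI Act's exact logic:

```python
from enum import Enum

class Role(Enum):
    """Roles under AI Act Art. 3, as used in this phase."""
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    DOWNSTREAM = "downstream"
    GPAI_PROVIDER = "gpai_provider"

class Risk(Enum):
    """Risk levels assigned to each system in the registry."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def needs_annex_iv(role: Role, risk: Risk) -> bool:
    """Simplified trigger for the Annex IV assessment in phase 5 (an
    assumption of this sketch, not the tool's full decision tree)."""
    return risk is Risk.HIGH and role in (Role.PROVIDER, Role.DEPLOYER)

print(needs_annex_iv(Role.DEPLOYER, Risk.HIGH))       # True
print(needs_annex_iv(Role.DOWNSTREAM, Risk.MINIMAL))  # False
```

The interactive decision tree the tool provides does the same job: it maps each (role, risk) pair to a concrete set of obligations.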

5. Assessments

What it addresses: Detailed evaluation for specific types of systems.

  • Annex IV Assessment (51 items) — for high-risk systems
  • GPAI Assessment (20 questions) — for general-purpose AI models
  • Supply Chain Assessment (11 questions) — supply chain evaluation

Why you need it: These assessments generate documentation for the regulator. If you have a high-risk system, Annex IV is mandatory.

6. Documents

What it addresses: Generating 9 documents tailored to your organization (-> details).

Documents are automatically populated based on data from previous steps. They cover AI policy, data classification, incident reporting, training materials, and more.

Why you need it: The AI Act, GDPR, and labour law all require documented rules. Without a written AI policy, you have no way to demonstrate compliance.

7. Training

What it addresses: Employee training with verifiable evidence (-> details).

24-slide presentation + 18-question quiz with an 80% pass threshold. Two modes: facilitated (presenter-led) or distributed (link for employees).

Why you need it: AI Act Art. 4 explicitly requires that providers and deployers of AI ensure “a sufficient level of AI literacy” among their personnel. The quiz serves as audit evidence.
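The 80% threshold on an 18-question quiz works out to a concrete minimum score, which a short calculation makes explicit:

```python
import math

PASS_THRESHOLD = 0.80   # 80% pass threshold, as stated above
QUESTIONS = 18          # 18-question quiz

def quiz_passed(correct: int) -> bool:
    """An attempt passes at 80% or better of the 18 questions."""
    return correct / QUESTIONS >= PASS_THRESHOLD

print(math.ceil(PASS_THRESHOLD * QUESTIONS))  # minimum correct answers: 15
print(quiz_passed(15), quiz_passed(14))       # True False
```

So an employee must answer at least 15 of 18 questions correctly; 14 correct (about 78%) falls just short.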

8. Communication

What it addresses: Internal communication templates for introducing AI governance.

Includes templates for CEO announcement, training invitation, go-live message, and more.

Why you need it: Rules without communication don’t work. Employees need to know what’s changing and why. Proper communication reduces resistance and increases adoption.

9. Implementation

What it addresses: Structured rollout of 54 items in 8 phases.

From preparation and governance through technical controls, training, and monitoring to continuous improvement.

Why you need it: Documents and training are not enough. The implementation checklist ensures that rules are actually put into practice — from IT settings to management reporting.

10. QA & Handoff

What it addresses: Quality gate, final check, and export.

Automated checks (document completeness, placeholder check, training results) + manual checks (formal requirements, delivery plan). Electronic signature and export of a complete ZIP package.

Why you need it: The QA gate ensures nothing is left incomplete. The export creates an archival copy for audit. The signature formalizes acceptance.
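The placeholder check mentioned above is conceptually a scan of each generated document for fields that were never filled in. A minimal sketch; the `{{FIELD_NAME}}` syntax is an assumption for illustration, and the tool's actual placeholder format may differ:

```python
import re

# Assumption: unfilled fields look like {{COMPANY_NAME}}.
PLACEHOLDER = re.compile(r"\{\{\s*([A-Z_]+)\s*\}\}")

def find_placeholders(document_text: str) -> list:
    """Return the names of any placeholders left unfilled in a generated document."""
    return PLACEHOLDER.findall(document_text)

doc = "AI Policy for {{COMPANY_NAME}}, registration no. 12345678."
print(find_placeholders(doc))  # ['COMPANY_NAME']
```

A document only passes the QA gate when this kind of check returns an empty list for every generated file.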

What L1 does not cover

Some areas of AI governance are intentionally not covered by L1, as they require a specialized approach:

  • DPIA for AI (GDPR Art. 35) — mentioned in the framework, but requires DPO consultation
  • ROPA for AI (GDPR Art. 30) — records of processing activities are outside L1 scope
  • Full AI inventory as PDF output — the inventory lives in the tool and is exported as part of the JSON package, not as a standalone PDF document
  • Technical implementation — L1 addresses the “what” and “why”; the technical “how” belongs to L5 — Execute

Next steps after L1

Level          | What it brings                                     | When
L2 — Transform | AI-Native foundations, building organizational DNA | After completing L1
L3 — Innovate  | Change management, AI culture adoption             | In parallel with L2
L4 — Verify    | Audit, certification, compliance verification      | Before regulatory deadlines
L5 — Execute   | Implementation, deployment, operations             | Ongoing