Skip to content
TECHNOMATON | Docs SAI Certified Trainers

NIST AI 600-1: Generative AI Profile

July 2024


1. Document Overview

AttributeValue
IdentifierNIST AI 600-1
TitleAI Risk Management Framework: Generative AI Profile
PublishedJuly 2024
InstitutionNIST (National Institute of Standards and Technology)
Legal basisExecutive Order 14110 (Biden) on Safe, Secure, and Trustworthy AI
NatureVoluntary framework for GAI risk management
Scope64 pages, cross-sector profile

2. Generative AI (GAI) Definition

Per EO 14110: “The class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.”

Foundation models (dual-use): AI models trained on broad data, using self-supervision, containing at least tens of billions of parameters, applicable across a wide range of contexts.


3. Overview of 12 GAI-Specific Risks

3.1 CBRN Information or Capabilities

  • Definition: Facilitated access to information about chemical, biological, radiological, or nuclear weapons
  • Trustworthy AI characteristics: Safe, Explainable and Interpretable

3.2 Confabulation (Hallucination)

  • Definition: Production of confidently presented but erroneous or untruthful content
  • Cause: Statistical prediction of the next token
  • Trustworthy AI characteristics: Fair with Harmful Bias Managed, Safe, Valid and Reliable, Explainable and Interpretable

3.3 Dangerous, Violent, or Hateful Content

  • Definition: Facilitated production of violent, radicalizing, threatening content
  • Risk: Jailbreaking — prompt manipulation to bypass safety controls
  • Trustworthy AI characteristics: Safe, Secure and Resilient

3.4 Data Privacy

  • Risks: Leakage of personal data from training data, inference of sensitive information
  • Trustworthy AI characteristics: Accountable and Transparent, Privacy Enhanced, Safe, Secure and Resilient

3.5 Environmental Impacts

  • Facts: High energy consumption for both training and inference
  • Trustworthy AI characteristics: Accountable and Transparent, Safe

3.6 Harmful Bias and Homogenization

  • Manifestations: Stereotypical outputs, underrepresentation of minorities, model collapse
  • Trustworthy AI characteristics: Fair with Harmful Bias Managed, Valid and Reliable

3.7 Human-AI Configuration

  • Risks: Algorithmic aversion, automation bias, anthropomorphization
  • Trustworthy AI characteristics: Accountable and Transparent, Explainable and Interpretable, Fair, Privacy, Safe, Valid

3.8 Information Integrity

  • Risks: Misinformation, disinformation, deepfakes, erosion of trust
  • Trustworthy AI characteristics: Accountable and Transparent, Safe, Valid and Reliable, Interpretable

3.9 Information Security

  • Aspects: GAI for cyberattacks (phishing) vs. attacks on GAI (prompt injection)
  • Trustworthy AI characteristics: Privacy Enhanced, Safe, Secure and Resilient, Valid and Reliable

3.10 Intellectual Property

  • Risks: Copyright infringement, memorization
  • Trustworthy AI characteristics: Accountable and Transparent, Fair, Privacy

3.11 Obscene, Degrading, and/or Abusive Content

  • Risks: CSAM, NCII
  • Trustworthy AI characteristics: Fair, Safe, Privacy

3.12 Value Chain and Component Integration

  • Issues: Non-transparent integration of third-party components, unverified datasets
  • Trustworthy AI characteristics: All characteristics

4. Risk Management Actions (GOVERN, MAP, MEASURE, MANAGE)

The document contains over 200 specific actions. Key examples:

  • GV-1.3-007: Plan for halting development/deployment of GAI system with unacceptable risk.
  • MP-2.3-005: Regular adversarial testing.
  • MS-2.6-007: Assessment of vulnerabilities to bypassing safety measures.
  • MG-2.2-009: Responsible use of synthetic data.

5. Mapping to the EU AI Act

NIST GAI ProfileEU AI Act
12 risk categoriesRisk-based classification (Annex III)
GOVERN functionArticles 9, 16, 17 (QMS, Governance)
Pre-deployment testingArticle 9 (Testing), Article 15 (Accuracy)
Content provenanceArticle 50 (Transparency obligations)
Human oversightArticle 14 (Human oversight)
Incident disclosureArticle 62 (Serious incidents reporting)