EU AI Act on AWS: A Practical Compliance Guide for High-Risk AI on Bedrock and SageMaker
Quick summary: EU AI Act compliance on AWS — risk classification, prohibited practices, GPAI obligations, the high-risk Annex III framework (enforceable 2 August 2026), and the AWS-native control mapping using Bedrock Guardrails, SageMaker Model Cards, and Audit Manager governance.

The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive AI regulation; it entered into force on 1 August 2024 and applies in stages. The next big enforcement milestone is 2 August 2026, when the high-risk Annex III obligations become binding. Penalties run up to €35M or 7% of worldwide annual turnover for prohibited practices, and up to €15M or 3% for non-compliance with the high-risk obligations.
If you deploy AI on AWS into the EU market — or build AI products that EU customers will use — the AI Act is on your critical path. This guide is for product owners, ML engineers, security architects, and compliance leads. It covers the risk classification, the GPAI obligations that apply to model providers, the eight obligation areas for high-risk Annex III systems, and the AWS-native control mapping using Bedrock Guardrails, Bedrock Model Evaluation, SageMaker Model Cards, SageMaker Clarify, and AWS Config conformance packs.
Need help with EU AI Act readiness on AWS? FactualMinds runs AI compliance engagements that map ISO 27001, SOC 2, NIST AI RMF, and EU AI Act controls onto a single evidence pipeline. See our compliance services and Cyber-Led AI, or talk to our team.
Step 1: Classify Your AI System
Risk classification is the first decision and shapes everything downstream. The Act defines four tiers.
Unacceptable-risk practices (Article 5) are prohibited from 2 February 2025:
- Subliminal or manipulative techniques that distort behaviour and cause significant harm.
- Exploitation of vulnerabilities (age, disability, socio-economic situation).
- Social scoring by public or private actors leading to detrimental treatment.
- Untargeted scraping of facial images from internet/CCTV to build face-recognition databases.
- Emotion recognition in workplace and education settings (with narrow medical and safety exceptions).
- Biometric categorisation that infers sensitive attributes (race, political opinions, trade-union membership, religious beliefs, sexual orientation).
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow, judicially-authorised exceptions).
- Predictive policing based solely on profiling.
If your system touches any of these, redesign or stop. There is no compliance pathway.
High-risk systems (Annex III) are permitted but carry the full obligations:
- Biometric identification and categorisation of natural persons.
- Management and operation of critical infrastructure (road traffic, water, gas, electricity, heating, digital infrastructure).
- Education and vocational training (admissions, examinations, monitoring of prohibited behaviour).
- Employment, workers management, and access to self-employment (CV screening, performance assessment, work allocation, monitoring).
- Access to and enjoyment of essential private services and essential public services (credit scoring, social benefits eligibility, emergency call dispatching, life-and-health insurance pricing).
- Law enforcement.
- Migration, asylum, and border control.
- Administration of justice and democratic processes.
A second route to high-risk status (Article 6(1), Annex I) covers AI used as a safety component of products under listed Union harmonisation legislation (medical devices under MDR, machinery, toys, lifts, recreational craft, automotive, civil aviation, marine equipment).
Limited-risk systems carry transparency obligations: chatbots must disclose that they are AI, AI-generated content must be labelled, and deepfakes must be disclosed (with lighter disclosure requirements for evidently artistic, creative, or satirical works).
Minimal-risk systems carry no AI Act obligations.
For most enterprise deployments, the bright lines are: HR screening, performance assessment, credit/insurance pricing, and biometric workforce systems = high-risk. Most other internal AI (productivity copilots, customer-support assistants for non-critical services, internal search) is limited or minimal.
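A first-pass triage of those bright lines can be encoded directly into an intake or architecture-review workflow. The sketch below is deliberately crude: the use-case tags are our own labels, not Act terminology, and the output is a starting point for legal review, not a classification of record.

```python
# First-pass EU AI Act triage. Tags are illustrative assumptions;
# final classification always needs legal review.
PROHIBITED_USES = {
    "social_scoring", "workplace_emotion_recognition",
    "untargeted_face_scraping", "subliminal_manipulation",
}
HIGH_RISK_USES = {
    "cv_screening", "performance_assessment", "credit_scoring",
    "insurance_pricing", "biometric_workforce",
}

def first_pass_tier(use_case: str, user_facing_genai: bool = False) -> str:
    if use_case in PROHIBITED_USES:
        return "prohibited (Article 5) -- redesign or stop"
    if use_case in HIGH_RISK_USES:
        return "high-risk (Annex III) -- full Articles 9-15/17 obligations"
    if user_facing_genai:
        return "limited -- transparency obligations"
    return "minimal -- no AI Act obligations"
```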
Step 2: Understand GPAI vs Deployer Obligations
The Act distinguishes between providers of AI systems (place them on the market) and deployers (use them under their authority). For foundation models there is a third role: General-Purpose AI model providers.
If you call Anthropic Claude, Amazon Nova, Meta Llama, or Mistral models through Bedrock and use them in your own application:
- The model provider carries the GPAI obligations (Article 53): technical documentation per Annex XI, a sufficiently detailed summary of the training content (following the AI Office template), a copyright-compliance policy, and cooperation with the AI Office. Models with systemic risk (cumulative training compute above 10^25 FLOPs) carry the heavier Article 55 obligations; see the back-of-envelope check after this list.
- You as the deployer carry the obligations attached to your specific use-case classification — minimal, limited, or high-risk.
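The 10^25 FLOP systemic-risk threshold is easy to sanity-check with the widely used ~6 × parameters × tokens training-compute approximation (our illustrative assumption here, not the AI Office's methodology):

```python
# Back-of-envelope check against the 10^25 FLOP systemic-risk threshold.
# The 6*N*D approximation is a common heuristic, not the official method.
def approx_training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

# e.g. a 70B-parameter model trained on 15T tokens:
flops = approx_training_flops(70e9, 15e12)
print(f"{flops:.2e}")          # ~6.3e+24 -> below the 1e25 threshold
print(flops >= 1e25)           # False
```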
If you fine-tune a model and put it on the market under your own name, you may become a downstream provider with your own GPAI-like obligations. The threshold is whether the fine-tuning constitutes a “substantial modification” — currently a case-by-case judgement under the GPAI Code of Practice.
Step 3: Implement the Eight High-Risk Obligations
For each Annex III system, you implement eight obligation areas (Articles 9-15, plus the Article 17 quality management system). This is the section your compliance audit will spend the most time on.
1. Risk-management system (Article 9)
A documented, lifecycle-spanning risk-management process. Identify, estimate, evaluate, and mitigate risks from intended use and reasonably foreseeable misuse. Update on change, on incident, on new threat. NIST AI RMF maps cleanly to this — if you have NIST AI RMF, you are 70% of the way to AI Act Article 9.
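In practice the Article 9 process lives in a risk register that records each risk, its estimates and mitigations, and, crucially, re-opens entries on the update triggers. A minimal sketch (field names are our assumptions, not an Act-mandated schema):

```python
# Minimal Article 9-style risk-register entry; schema is illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

REVIEW_TRIGGERS = {"model_change", "incident", "new_threat", "scheduled_review"}

@dataclass
class RiskEntry:
    risk_id: str
    description: str          # e.g. "ranker penalises CV career gaps"
    source: str               # "intended use" or "foreseeable misuse"
    likelihood: int           # 1-5 estimate
    severity: int             # 1-5 estimate
    mitigations: list[str] = field(default_factory=list)
    history: list[str] = field(default_factory=list)

    def reassess(self, trigger: str, note: str) -> None:
        """Re-open the entry on change, incident, or new threat."""
        if trigger not in REVIEW_TRIGGERS:
            raise ValueError(f"unknown trigger: {trigger}")
        stamp = datetime.now(timezone.utc).isoformat()
        self.history.append(f"{stamp} {trigger}: {note}")
```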
2. Data and data governance (Article 10)
Training, validation, and test datasets must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete in view of the intended purpose. Bias examination is mandatory. Document data sources, collection methods, intended population, and statistical properties.
AWS controls: Amazon SageMaker Data Wrangler for preparation lineage, SageMaker Feature Store for versioned feature definitions, Amazon DataZone for governance metadata, AWS Glue Data Catalog for the source-to-feature lineage, Amazon Bedrock Data Automation for ingestion lineage, and SageMaker Clarify for bias metrics across protected attributes.
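As a concrete example of the bias-examination duty, here is a sketch of a SageMaker Clarify pre-training bias job using the SageMaker Python SDK; the role ARN, bucket paths, and column names are placeholder assumptions:

```python
# Sketch: SageMaker Clarify pre-training bias report (Article 10(2)(f) evidence).
from sagemaker import Session, clarify

session = Session()
processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/ClarifyRole",   # assumption
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/hiring/train.csv",   # assumption
    s3_output_path="s3://my-bucket/clarify-reports/",       # assumption
    label="hired",
    headers=["gender", "age_band", "years_experience", "hired"],
    dataset_type="text/csv",
)

bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],           # favourable outcome: hired = 1
    facet_name="gender",                     # protected attribute to examine
    facet_values_or_threshold=["female"],
)

# Emits class imbalance, DPL, KL divergence, etc.; archive the report
# alongside the dataset documentation as Article 10 evidence.
processor.run_pre_training_bias(data_config=data_config, bias_config=bias_config)
```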
3. Technical documentation (Article 11 + Annex IV)
A system description, intended purpose, design choices, hardware and software dependencies, training methodology, validation results, monitoring metrics, risk-assessment decisions, and human-oversight measures. Maintained throughout the system’s life and provided to authorities on request.
AWS controls: SageMaker Model Cards for the structured technical card, AWS Service Catalog for deployment artefact lineage, Amazon Bedrock Studio for model-version lineage on Bedrock, an internal model registry (SageMaker Model Registry or third-party MLflow on AWS) for the approval-workflow record.
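A sketch of registering the structured card via boto3; the card name is a placeholder and the content fields follow the Model Cards JSON schema as we understand it, so verify against the current schema before relying on it:

```python
# Sketch: create a draft SageMaker Model Card as the Annex IV anchor document.
import json
import boto3

sm = boto3.client("sagemaker", region_name="eu-central-1")

card_content = {
    "model_overview": {
        "model_description": "CV-screening ranker for EU roles (Annex III, employment).",
        "model_owner": "hr-ml-platform",
    },
    "intended_uses": {
        "purpose_of_model": "Shortlisting support; a recruiter makes the final decision.",
        "risk_rating": "High",
    },
}

sm.create_model_card(
    ModelCardName="cv-screener-v3",     # assumption
    ModelCardStatus="Draft",            # promote to Approved via your review workflow
    Content=json.dumps(card_content),
)
```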
4. Record-keeping (Article 12)
Automatic logging of operations sufficient to allow post-market monitoring and traceability. For remote biometric identification systems, the logs must capture at least: the period of each use, the reference database checked against, the input data that produced a match, and the identity of the persons who verified the results; other high-risk systems must log enough to reconstruct each decision. Retain logs for at least six months, longer if other regulations apply (GDPR, HIPAA, sector-specific rules).
AWS controls: CloudTrail for API events, Amazon Bedrock model-invocation logging to S3, SageMaker Endpoint logs to CloudWatch, application-side structured logs with the user identity propagated through IAM Identity Center, S3 Object Lock with compliance mode for retention enforcement.
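Bedrock invocation logging is a one-time account-level switch; a sketch with placeholder bucket names (pair the bucket with Object Lock for the retention guarantee):

```python
# Sketch: enable Bedrock model-invocation logging to S3 for Article 12 records.
import boto3

bedrock = boto3.client("bedrock", region_name="eu-central-1")

bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "s3Config": {
            "bucketName": "ai-act-invocation-logs",   # assumption: Object Lock enabled
            "keyPrefix": "bedrock/prod/",
        },
        "textDataDeliveryEnabled": True,        # prompts and completions
        "imageDataDeliveryEnabled": False,
        "embeddingDataDeliveryEnabled": False,
    }
)
```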
5. Transparency and provision of information (Article 13)
Deployers receive instructions for use covering: provider identity, system characteristics, capabilities and limitations, intended purpose, expected accuracy and robustness levels, known and reasonably foreseeable risks, performance on specific persons or groups, training-data characteristics, expected lifetime, and human-oversight measures.
AWS controls: SageMaker Model Cards as the canonical instruction-for-use document; published as part of the provider deliverable. For Bedrock-based applications, generate the model card from the Bedrock Studio metadata + your application-side risk assessment.
6. Human oversight (Article 14)
Design measures that allow a human to monitor operation, understand capabilities and limitations, remain alert to automation bias, interpret outputs correctly, decide not to use the system or override its output, and intervene or interrupt operation.
This is partly a UX decision and partly an architectural one. AWS-side controls: human-in-the-loop endpoints (Amazon SageMaker Augmented AI / A2I), Bedrock Guardrails on every inference, observable streaming with the option to halt mid-response, IAM-gated override actions logged in CloudTrail.
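A sketch of the human-in-the-loop checkpoint: route low-confidence outputs to an Amazon A2I human loop instead of acting on them automatically. The flow-definition ARN, threshold, and payload shape are assumptions.

```python
# Sketch: Article 14-style checkpoint via Amazon A2I; names are illustrative.
import json
import boto3

a2i = boto3.client("sagemaker-a2i-runtime", region_name="eu-central-1")

FLOW_DEFINITION_ARN = (
    "arn:aws:sagemaker:eu-central-1:123456789012:flow-definition/cv-review"  # assumption
)

def decide_or_escalate(prediction: dict, confidence: float, threshold: float = 0.85) -> dict:
    """Act automatically only above the threshold; otherwise start a human loop."""
    if confidence >= threshold:
        return prediction
    a2i.start_human_loop(
        HumanLoopName=f"review-{prediction['request_id']}",
        FlowDefinitionArn=FLOW_DEFINITION_ARN,
        HumanLoopInput={"InputContent": json.dumps(prediction)},
    )
    return {"status": "pending_human_review"}
```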
7. Accuracy, robustness, and cybersecurity (Article 15)
Declared accuracy levels, robustness against errors and adversarial inputs, cybersecurity controls.
AWS controls: Bedrock Model Evaluation for accuracy benchmarks (built-in tasks plus custom datasets), Bedrock Guardrails contextual grounding for grounding accuracy, Bedrock Automated Reasoning checks for math-validated factuality (~99% on the public benchmark), adversarial red-team testing using garak / PyRIT, IAM least-privilege on inference endpoints, KMS-CMK encryption with ML-KEM hybrid TLS for long-lived prompt logs, AWS WAF in front of inference APIs, and Inspector v2 vulnerability scanning across the deployment chain.
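For the guardrail layer specifically, the standalone ApplyGuardrail API lets you screen outputs from any model before they reach the user. A sketch with a placeholder guardrail ID:

```python
# Sketch: standalone guardrail check on a model response via ApplyGuardrail.
import boto3

runtime = boto3.client("bedrock-runtime", region_name="eu-central-1")

def screen_output(model_output: str) -> str:
    resp = runtime.apply_guardrail(
        guardrailIdentifier="gr-abc123",    # assumption: your guardrail ID
        guardrailVersion="1",
        source="OUTPUT",
        content=[{"text": {"text": model_output}}],
    )
    if resp["action"] == "GUARDRAIL_INTERVENED":
        # Log the intervention for the Article 12 trail; return the safe text.
        return resp["outputs"][0]["text"]
    return model_output
```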
8. Quality management system (Article 17)
A formal QMS covering design control, testing, post-market monitoring, document control, training, and incident management. ISO 9001 or ISO 13485 (medical-device QMS) maps closely; ISO/IEC 42001 (AI management systems) is the AI-specific equivalent.
Step 4: CE Conformity Assessment + Post-Market Monitoring
High-risk Annex III systems require a conformity assessment before market entry. For most Annex III categories this is an internal-control assessment by the provider, with no notified body needed. The exceptions are biometric identification systems (where harmonised standards are not applied in full) and safety components regulated under Annex I legislation, which take the third-party notified-body route.
After deployment, post-market monitoring (Article 72) is mandatory: collect data on system performance, document drift, log incidents, and feed lessons back into the risk-management system. SageMaker Model Monitor is the AWS-native data-drift and model-drift detection service; pair it with custom CloudWatch metrics for the AI Act-specific metrics (false-positive rate by demographic, accuracy by region).
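Those AI Act-specific metrics are plain custom CloudWatch metrics; a sketch with placeholder namespace and dimensions:

```python
# Sketch: publish a post-market monitoring metric (Article 72 evidence).
import boto3

cw = boto3.client("cloudwatch", region_name="eu-central-1")

cw.put_metric_data(
    Namespace="AIAct/PostMarketMonitoring",       # assumption
    MetricData=[{
        "MetricName": "FalsePositiveRate",
        "Dimensions": [
            {"Name": "System", "Value": "cv-screener-v3"},
            {"Name": "DemographicGroup", "Value": "age_50_plus"},
        ],
        "Value": 0.042,
        "Unit": "None",
    }],
)
```

Alarm on the metric so drift in any group triggers the risk-management review loop.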
Serious incidents (Article 73) are a death, serious damage to health or property, irreversible disruption of critical infrastructure, or a breach of fundamental rights. They must be reported to the market surveillance authority within 15 days (sooner for some types). Build the incident-classification logic into your AI observability layer, as sketched below.
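A sketch of that classification logic; the category labels are our shorthand for the Article 73 definitions, and the 15-day default (shorter for some incident types) is the only hard-coded figure from the Act:

```python
# Sketch: serious-incident triage for the observability layer (Article 73).
from datetime import datetime, timedelta, timezone

SERIOUS_CATEGORIES = {           # shorthand for the Article 73 definitions
    "death_or_serious_harm",
    "serious_property_damage",
    "irreversible_critical_infra_disruption",
    "fundamental_rights_breach",
}

def triage_incident(category: str) -> dict:
    if category in SERIOUS_CATEGORIES:
        return {
            "report_required": True,
            # 15-day default; some types carry shorter deadlines.
            "report_deadline": datetime.now(timezone.utc) + timedelta(days=15),
            "recipient": "market surveillance authority",
        }
    return {"report_required": False}
```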
Step 5: Map to NIST AI RMF, ISO 42001, and Existing Compliance
Most teams already operate under one or more adjacent regimes. The good news: in our experience, roughly 60-80% of the EU AI Act's high-risk obligations are already covered by those adjacent frameworks.
- NIST AI RMF — Govern, Map, Measure, Manage functions cover Articles 9, 10, 13, 14, 15, 17. Use NIST AI RMF as the operational framework and add the EU-specific documentation (Annex IV technical doc, conformity assessment) on top.
- ISO/IEC 42001:2023 (AI management systems) — formal QMS that covers Article 17 directly.
- ISO/IEC 23894:2023 (AI risk management) — process guidance covering Article 9.
- GDPR — already covers data-subject rights, lawful basis, and DPIAs that overlap with AI Act data governance.
- EU regional certifications (ENS, C5, IT-Grundschutz) — provide infrastructure-layer evidence.
Common Pitfalls
- Assuming GPAI obligations apply to you. If you call a foundation model through Bedrock, you are a deployer, not a GPAI provider. Do not adopt obligations that do not apply.
- Mis-classifying employment AI. A CV-screening tool, a candidate-ranking model, an automated promotion-decision system — all are Annex III. Treating them as limited-risk is a fast path to enforcement action.
- Missing the post-market monitoring loop. Many teams build the launch artefacts (technical documentation, conformity assessment) and stop. Post-market monitoring is mandatory and the cheapest way to prove ongoing compliance.
- Insufficient human oversight design. “We can pull the plug” is not an Article 14 control. The system needs designed-in checkpoints.
- Treating bias examination as optional. Article 10(2)(f) is explicit — you must examine for bias and document the results. SageMaker Clarify produces the metrics; the documentation is on you.
Where to Go Next
- Read the EU AI Act consolidated text and the implementation timeline.
- Study the GPAI Code of Practice if you provide or substantially modify foundation models.
- Implement Bedrock Guardrails (guide) and Automated Reasoning (guide).
- Browse the AWS Security & Compliance hub, the AI Security subtopic, and the Compliance Frameworks subtopic.
The EU AI Act is set to become the global benchmark for AI governance the way GDPR did for data protection: non-EU companies serving EU markets are bound by it, and many adjacent jurisdictions (UK, Brazil, South Korea, several US states) are adopting compatible frameworks. Building the controls now means you are not retrofitting them in 2027 under enforcement pressure.