Working with AWS European Sovereign Cloud (ESC): Terraform, IaC, and what’s different



If you manage AWS infrastructure with code, the European Sovereign Cloud adds a new partition to think about. Different endpoints, separate IAM, its own console. This guide covers what works out of the box, what needs changes, and the patterns that help when you deploy across both ESC and commercial AWS.

Why This Exists

AWS has had European regions since 2007. Ireland came first, then Frankfurt, London, Paris, Stockholm, Milan, Zurich, Spain. Eight regions across Europe. Data stays in Europe. GDPR compliant. Problem solved, right?

Not quite.

Here’s the thing about eu-central-1 (Frankfurt) — your data sits in Germany, sure. But AWS operations? Support tickets? Billing metadata? That stuff flows through global systems. American employees can access it. The control plane lives in the US. When you call support at 3am, someone in Seattle might answer.

For plenty of companies, that’s fine. You’re running a SaaS product, your customers don’t care where the ops team sits. But for German government agencies processing citizen data? French hospitals handling patient records? Banks under BaFin scrutiny? They’ve been asking harder questions.

The US Cloud Act made it worse. Passed in 2018, it lets American authorities compel US companies to hand over data, even if that data sits on servers in Frankfurt. Doesn’t matter where the bits are physically stored — if an American company controls them, American courts can demand them. AWS has always pushed back on these requests, but “trust us, we’ll fight it” isn’t the same as “technically impossible.”

Then came Schrems II in 2020, when the EU Court of Justice invalidated Privacy Shield. Suddenly every European company using American cloud providers had to justify why their data transfers were legal. Standard contractual clauses helped, but the legal uncertainty never fully went away.

That’s the gap ESC fills. Not just “data in Europe” but “everything in Europe” — operations, support, billing, leadership, legal jurisdiction.

What’s Actually Different

The European Sovereign Cloud is a separate partition entirely. Not a region — a partition. Like how AWS GovCloud is separate from commercial AWS, or how China regions are isolated. Different domain (amazonaws.eu instead of amazonaws.com), different IAM system, different control plane.

The region code is eusc-de-east-1, sitting in Brandenburg, Germany. The partition identifier is aws-eusc. When you construct ARNs, it’s arn:aws-eusc: not arn:aws:.

AWS set up a new German parent company to run it — AWS European Sovereign Cloud GmbH — with three subsidiaries handling infrastructure, certificates, and employment. The managing directors are Stéphane Israël (former CEO of Arianespace) and Stefan Hoechbauer (VP of AWS Germany), both EU citizens based in the EU. The board includes independent third-party representatives specifically for sovereignty oversight. Not Amazon employees — actual independent oversight.

Only EU residents work there. Not just “based in Europe” — actually residing in the EU with EU contracts. And going forward, they’re only hiring EU citizens. The transition is gradual, but the end state is clear: EU citizens only, no exceptions. No “follow-the-sun” support routing your ticket to Virginia at 3am.

When AWS says the infrastructure has “no critical dependencies on non-EU infrastructure”, they mean it literally. The system can keep running even if someone cuts the transatlantic cables. Billing systems, metering engines, security operations center — all contained within the EU. Metadata created in ESC stays in ESC. Your usage data doesn’t flow to a US billing system.

The Security Foundation

This matters more than the org chart stuff, honestly. Legal structures can change. Technical architecture is harder to undo.

ESC runs on the Nitro System, same as regular AWS. But the Nitro architecture is what makes the sovereignty claims credible. It’s not just policy — it’s hardware design.

The Nitro System was built with zero operator access as a design goal. There’s no SSH into the hypervisor. No console access. No mechanism for AWS employees — or anyone — to access EC2 instance memory or customer data on encrypted storage. When they say “no backdoors”, it’s not a policy promise, it’s a constraint enforced by the silicon.

Administrative access happens through authenticated, authorized, and logged APIs that provide no path to customer data. You can audit operations without giving operators data access. These restrictions are built into the Nitro firmware itself. Not a software toggle someone can flip during an emergency or under legal pressure.

NCC Group, an independent security firm, validated these claims in an audit published May 2023. They specifically looked for gaps that would let someone access customer data or memory. Found none. That audit applies to Nitro everywhere, including ESC.

For ESC specifically, AWS added the Sovereignty Reference Framework (ESC-SRF). It’s an independently validated framework with third-party auditor reports documenting the sovereignty controls. Your compliance team can hand these reports to regulators instead of trying to explain AWS architecture themselves.

The Catch (There’s Always a Catch)

You can’t just add ESC to your existing AWS Organization and call it a day. This is a separate cloud, and that separation creates friction.

Separate console, separate login. ESC has its own management console on the amazonaws.eu domain, separate from console.aws.amazon.com. Different URL, different accounts, different credentials. You can’t switch between ESC and commercial AWS with the account dropdown — they’re completely separate consoles. Bookmark both if you work in both.

No cross-partition IAM. Can’t assume roles from your regular AWS account into ESC. If you have workloads in both places, you need separate identity management. Set up federation through a third-party IdP like Okta or Azure AD, maintain separate credentials, design your CI/CD to handle both partitions. Your developers need two sets of AWS credentials.

No VPC peering. Want to connect eu-central-1 to ESC? Treat it like connecting to on-premises infrastructure. VPN, Direct Connect, or application-level APIs. You’re bridging two clouds, not two regions. Network architects used to multi-region deployments need to reset their mental model.

Separate accounts entirely. Different accounts, different Organizations, different invoices, different cost allocation tags. If your finance team tracks cloud spend by AWS account ID, they need new processes. Your existing FinOps dashboards won’t see ESC spend.

ECR isolation. You can’t pull container images from your existing ECR repos in eu-central-1. ESC’s isolation means no cross-partition image pulls. Push your images to ECR in eusc-de-east-1, use a public registry, or set up replication through your CI/CD pipeline.

Terraform works, but check your version. Terraform 1.14+ and AWS provider 6.x support ESC natively — endpoints resolve correctly without manual configuration. Just set the region:

provider "aws" {
  region = "eusc-de-east-1"
}

If you’re on an older version, you’ll need to upgrade or configure endpoints manually. The S3 backend for state storage also requires Terraform 1.14+.

What Services Are Available

AWS didn’t launch this with five services and a “coming soon” page. You get 90+ services from day one. That matters because previous sovereign cloud offerings often meant accepting a skeleton service catalog.

Containers: ECS, EKS, ECR. Full Fargate support. If you’re running containers anywhere on AWS today, same capabilities.

Compute: EC2 with multiple instance families, Lambda for serverless. Enough instance types for most workloads.

AI/ML: Bedrock, SageMaker, Amazon Q. All available from day one.

Database: Aurora (MySQL and PostgreSQL compatible), DynamoDB, RDS for managed databases. All the usual engines.

Storage: S3 with full feature parity, EBS for block storage.

Networking: VPC, Direct Connect, Route 53 for private hosted zones. Transit Gateway for complex topologies.

Security: KMS for encryption keys, Secrets Manager, Private CA, IAM with all the normal features.

If you’re running containers on Fargate in Frankfurt today, you can run the same workloads on ESC. Same task definitions, same service configs, just different region and endpoints.

What’s Missing

90 services sounds good until you remember AWS has 240+. Some gaps matter more than others:

CloudFront — No CDN at launch. If your architecture relies on edge caching, you’ll need alternatives. Expected end of 2026.

IAM Identity Center — The modern way to manage SSO across an Organization isn’t there yet. You can still use IAM with external identity providers, but you’ll configure it per-account instead of centrally. Expected Q1 2026.

Shield Advanced & Firewall Manager — DDoS protection and centralized firewall rules aren’t available. Basic Shield is included, but advanced protections aren’t.

Amazon Inspector — No automated vulnerability scanning for workloads yet.

GuardDuty — Available but limited. No Organization-level management, missing some newer detection capabilities.

IoT Services — IoT Core, Greengrass, and related services aren’t included. If you’re running IoT workloads, ESC isn’t ready for them.

Organizations features — You get AWS Organizations, but delegated administration isn’t supported. StackSets and other governance tools must run from the Management Account.

Also worth noting: S3 Block Public Access isn’t enabled by default like it is in commercial AWS. Enable it manually.
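If you manage buckets with Terraform, turning it on is one extra resource per bucket. A minimal sketch, with the bucket name as a placeholder:

resource "aws_s3_bucket" "data" {
  bucket = "my-sovereign-data-bucket" # placeholder name
}

# ESC does not enable this by default, so declare it explicitly
resource "aws_s3_bucket_public_access_block" "data" {
  bucket                  = aws_s3_bucket.data.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}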

Pricing: Expect 10-15% premium over Frankfurt (eu-central-1) for comparable services.

Deploying Containers — The Practical Bits

The patterns are identical to regular AWS. I’m not going to paste hundreds of lines of Terraform — you know how to deploy ECS. The differences are configuration, not architecture:

  1. Region: eusc-de-east-1
  2. ARNs use aws-eusc partition: arn:aws-eusc:iam::aws:policy/...
  3. ECR images must come from ESC or public registries
  4. Tag resources with compliance markers for your auditors (see the default_tags sketch below)
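For the last point, one low-effort option is provider-level default_tags, so every taggable resource carries the markers without per-resource tag blocks. A sketch, with tag keys and values as placeholders for whatever your auditors expect:

provider "aws" {
  region = "eusc-de-east-1"

  # Applied automatically to every taggable resource this provider creates
  default_tags {
    tags = {
      DataClassification = "sovereign" # placeholder values
      ComplianceScope    = "esc-srf"
    }
  }
}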

A minimal ECS task definition:

resource "aws_ecs_task_definition" "app" {
  family                   = "my-app"
  network_mode             = "awsvpc"
  requires_compatibilities = ["FARGATE"]
  cpu                      = "256"
  memory                   = "512"
  execution_role_arn       = aws_iam_role.execution.arn

  container_definitions = jsonencode([{
    name  = "app"
    # Image must live in ESC's ECR or a public registry (placeholder URI)
    image = "your-ecr.eusc-de-east-1.amazonaws.eu/app:latest"
    portMappings = [{ containerPort = 80 }]
    logConfiguration = {
      logDriver = "awslogs"
      options = {
        "awslogs-group"         = "/ecs/my-app"
        "awslogs-region"        = "eusc-de-east-1"
        "awslogs-stream-prefix" = "app" # required for the awslogs driver on Fargate
      }
    }
  }])
}
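The execution role referenced above is where the aws-eusc partition shows up in practice. A minimal sketch that uses the aws_partition data source (covered below) instead of hard-coding the prefix:

data "aws_partition" "current" {}

resource "aws_iam_role" "execution" {
  name = "my-app-execution"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ecs-tasks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Resolves to arn:aws-eusc:iam::aws:policy/... inside ESC
resource "aws_iam_role_policy_attachment" "execution" {
  role       = aws_iam_role.execution.name
  policy_arn = "arn:${data.aws_partition.current.partition}:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}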

VPC setup is standard — public subnets for load balancers, private subnets for tasks, NAT gateways for outbound traffic. Security groups, ALB config, service definitions — all identical to what you’d write for Frankfurt.

Infrastructure as Code: The Real Story

If you’re managing infrastructure with code (and you should be), here’s what actually works with ESC right now.

Terraform and OpenTofu

As mentioned, Terraform 1.14+ handles ESC out of the box. But there’s more to it than just setting the region. The aws_partition data source correctly returns aws-eusc, which is useful when you’re building partition-aware modules:

data "aws_partition" "current" {}

# Returns "aws-eusc" in ESC, "aws" in commercial
output "partition" {
  value = data.aws_partition.current.partition
}

For multi-partition deployments, use provider aliases:

provider "aws" {
  alias  = "esc"
  region = "eusc-de-east-1"
}

provider "aws" {
  alias  = "commercial"
  region = "eu-central-1"
}

# Deploy to ESC
resource "aws_s3_bucket" "sovereign_data" {
  provider = aws.esc
  bucket   = "my-sovereign-bucket"
}

# Deploy to commercial
resource "aws_s3_bucket" "public_assets" {
  provider = aws.commercial
  bucket   = "my-public-bucket"
}

OpenTofu 1.11+ also supports ESC natively, including the S3 backend in eusc-de-east-1. Confirmed working by community testing in December 2025. If you’ve switched to OpenTofu, same patterns apply.

AWS CDK

CDK has supported ESC since August 2025. Region registration for eusc-de-east-1 and VPC endpoint handling were added in PR #34860. No workarounds are needed; just set the region:

import * as cdk from 'aws-cdk-lib';

const app = new cdk.App();
const stack = new cdk.Stack(app, 'EscStack', {
  env: {
    account: '123456789012',
    region: 'eusc-de-east-1',
  },
});

ARNs, service endpoints, and partition references resolve correctly out of the box.

CloudFormation

Works as expected. CloudFormation is partition-aware by design, so templates deploy without modification. The AWS::Partition pseudo parameter returns aws-eusc automatically.

Resources:
  MyRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: ecs-tasks.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        # Automatically uses aws-eusc partition
        - !Sub "arn:${AWS::Partition}:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"

One exception: Landing Zone Accelerator doesn’t work. LZA maps to a single AWS Organization and can’t span partitions. You’ll need separate LZA deployments for ESC and commercial, with duplicated configurations.

Multi-Partition Patterns

Running workloads in both ESC and commercial AWS? Here are patterns that work:

Shared modules with partition-aware variables:

variable "partition" {
  description = "AWS partition (aws or aws-eusc)"
  type        = string
}

variable "region" {
  description = "AWS region"
  type        = string
}

locals {
  is_sovereign = var.partition == "aws-eusc"

  # Adjust for service availability
  enable_cloudfront    = !local.is_sovereign  # Not available in ESC yet
  enable_guardduty_org = !local.is_sovereign  # Limited in ESC
}
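Those locals then gate anything that only makes sense in one partition. A sketch, with var.security_account_id as a hypothetical input:

# Organization-level GuardDuty administration stays in the commercial partition
resource "aws_guardduty_organization_admin_account" "this" {
  count            = local.enable_guardduty_org ? 1 : 0
  admin_account_id = var.security_account_id # hypothetical variable
}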

Separate state files per partition:

# ESC backend
terraform {
  backend "s3" {
    bucket = "my-tfstate-esc"
    key    = "infrastructure/terraform.tfstate"
    region = "eusc-de-east-1"
  }
}

Don’t try to share state across partitions. The isolation is the point.

CI/CD branching strategy:

Some teams run completely separate pipelines per partition. Others use a single pipeline with partition as a variable. The right choice depends on how different your ESC and commercial configurations are. If they’re mostly identical, one pipeline with environment variables works. If they diverge significantly, separate pipelines prevent accidents.
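If you go the single-pipeline route, a lightweight pattern is one variable file per partition that the pipeline passes via -var-file. File names and values here are illustrative:

# esc.tfvars
partition = "aws-eusc"
region    = "eusc-de-east-1"

# commercial.tfvars
partition = "aws"
region    = "eu-central-1"

The plan and apply steps then select esc.tfvars or commercial.tfvars depending on the target.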

Planning Your Architecture

If you’re considering ESC, think about workload segmentation early. Not everything needs sovereignty guarantees, and putting everything in ESC when it doesn’t need to be there adds cost and complexity.

Tier 0 — Sovereign (ESC): Sensitive data requiring sovereignty guarantees. Patient health records, citizen personal data, financial records under regulatory requirements, classified government workloads. This is your ESC tier.

Tier 1 — Standard (Commercial AWS or ESC): Business data without special regulatory requirements. Internal tools, development environments, public-facing websites, marketing systems.

The hard part is the boundary. Your sovereign tier probably needs data from the standard tier sometimes. Options:

API gateways at the boundary. ESC workloads call commercial AWS through a controlled API layer. Strict authentication, audit logging, minimal data exposure. The API becomes your compliance checkpoint.

Data diodes for one-way flow. ESC can pull data from commercial AWS on a schedule. Commercial can’t push to ESC. Useful for reference data that needs to be in ESC but originates elsewhere.

Message queues with encryption. Async communication through something like SQS or external message brokers. Decouples the systems while maintaining the boundary.
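For the queue option, a minimal sketch of an encrypted queue on one side of the boundary. Names are placeholders, and the consumer in the other partition needs its own credentials for this partition, since cross-partition roles don't exist:

resource "aws_kms_key" "boundary" {
  description = "CMK for messages crossing the partition boundary"
}

resource "aws_sqs_queue" "boundary" {
  name              = "partition-boundary-events" # placeholder name
  kms_master_key_id = aws_kms_key.boundary.id
}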

Don’t try to architect this like multi-region. It’s multi-cloud, practically speaking. Your eu-central-1 workloads can’t directly call your ESC workloads over private networking. Plan for that from day one, not as an afterthought.

Migration Path

If you’re moving existing workloads to ESC, here’s a rough sequence:

Phase 1: Assessment. Which workloads actually need sovereignty? Many teams discover only 20-30% of their infrastructure handles truly sensitive data. Don’t move everything just because you can.

Phase 2: Identity setup. Get your IAM structure in ESC before anything else. Set up federation, create roles, establish your permission model. Test authentication flows.

Phase 3: Network foundation. VPC, subnets, NAT gateways, security groups. If you need connectivity back to commercial AWS, set up the VPN or Direct Connect tunnel.

Phase 4: Container registry. Push your images to ECR in ESC. Update your CI/CD to build and push to both registries if you’re running in both partitions.
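Creating the registry itself is a single Terraform resource per repository. A sketch using the provider aliases from earlier:

resource "aws_ecr_repository" "app" {
  provider             = aws.esc
  name                 = "my-app"
  image_tag_mutability = "IMMUTABLE" # avoids silently overwriting tags during migration
}

Your build job then logs in against the eusc-de-east-1 registry endpoint and pushes the same image it already pushes to the commercial registry.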

Phase 5: Workload deployment. Start with non-critical workloads to validate your Terraform and deployment pipelines. Work through the endpoint configuration issues before touching production.

Phase 6: Data migration. This is usually the hardest part. How do you move data without downtime? Often involves running parallel systems temporarily, with replication from source.

Phase 7: Cutover. Switch traffic to ESC workloads. Keep the old deployment running until you’re confident, then decommission.

Cost Reality

ESC pricing follows standard AWS models — you pay for what you use. But the isolation adds costs:

NAT Gateways: ~€0.045/hour each plus data processing. High availability means two gateways, roughly €65/month before data charges. You’re paying this in Frankfurt too, but now you’re paying it twice if you have workloads in both partitions.

Data transfer between partitions: Not free internal transfer. Treat it like cross-region or internet egress. If your architecture involves heavy data movement between ESC and commercial AWS, model those costs.

Operational overhead: Managing two partitions means duplicated effort. Two sets of IAM policies, two CI/CD pipelines, two monitoring dashboards, two on-call rotations if you have partition-specific issues. That’s engineering time.

Compliance tooling: You’ll probably want separate security scanning, compliance monitoring, and audit tooling for ESC. Or tools that understand both partitions. Either way, cost.

AWS has confirmed a 10-15% pricing premium over Frankfurt for comparable services — what they call the “sovereignty premium.” Combined with the hidden costs above, budget accordingly.

Who Should Actually Use This

Move to ESC if:

  • You handle data under strict EU sovereignty requirements — not just GDPR, but sector-specific rules that mandate operational control
  • Regulators or auditors have specifically asked about US Cloud Act exposure
  • You’re in public sector, healthcare (especially in Germany with patient data), or finance with explicit data residency mandates
  • Your contracts require EU-only operations and personnel — government contracts often do
  • You need to demonstrate sovereignty compliance with third-party validated reports

Stick with regular EU regions if:

  • Standard GDPR compliance is sufficient for your use case
  • You need services that haven’t launched in ESC yet
  • Cost optimization is priority over sovereignty guarantees
  • You’re already running multi-region and partition complexity doesn’t fit your operating model
  • Your compliance requirements don’t specifically call out operational sovereignty or personnel location

ESC isn’t “better” than Frankfurt. It solves a specific problem. If you don’t have that problem, you’re adding complexity and cost for no benefit. Frankfurt with proper encryption and access controls is fine for most workloads.

The Competitive Landscape

AWS isn’t alone here. Microsoft announced sovereign cloud offerings for EU customers. Google has Sovereign Controls for GCP. But the approaches differ.

Microsoft’s approach involves partnerships with local operators — like T-Systems in Germany running Azure infrastructure. Google focuses on software controls and key management.

AWS went further with complete partition isolation. New legal entities, new domain, separate IAM, the whole stack. Whether that matters depends on what your regulators care about.

The 90+ service catalog at launch also sets AWS apart. Competitors often launch sovereign offerings with limited services and catch up over time. ESC starts nearly feature-complete.

What’s Coming

AWS announced expansion plans. Local Zones in Belgium, Netherlands, and Portugal — same sovereignty model, lower latency for users in those countries. These extend ESC’s footprint without requiring new full regions.

The workforce transition continues. Current staff are EU residents; future hires will be EU citizens only. Over time, the entire operation shifts to citizen-only. That’s a commitment you can point to in RFPs.

More regions within ESC are likely but not announced. If demand justifies it, a second ESC region (France? Italy?) would add redundancy options.

The €7.8 billion investment through 2040 signals this isn’t an experiment. Amazon is building parallel infrastructure for the next fifteen years.

Bottom Line

The European Sovereign Cloud answers three questions that every regulated European organization has been asking. Where exactly is my data? Who can access it? What happens when a foreign government asks for it?

For workloads where those questions have regulatory or contractual weight, ESC provides answers backed by legal structure, organizational isolation, and hardware-level security design. The ESC-SRF gives you auditor reports to prove it.

For everything else, eu-central-1 works fine and doesn’t require rethinking your account structure, identity model, and network architecture.

Just remember: ESC is a different cloud, not a different region. The isolation that provides sovereignty guarantees also creates operational boundaries. That’s the point — but it’s also the cost.

Need Help?

Working on ESC infrastructure? If you need support with Terraform configurations, multi-partition architectures, or migration planning, tecRacer can help. Get in touch to discuss your requirements.

