How To Create an S3 Bucket Using Ansible and CloudFormation

AWS CloudFormation is a service that helps you model and set up your Amazon Web Services resources so that you can spend less time managing those resources and more time focusing on your applications that run in AWS. You create a template that describes all the AWS resources that you want (like Amazon EC2 instances or Amazon RDS DB instances), and AWS CloudFormation takes care of provisioning and configuring those resources for you. You don’t need to individually create and configure AWS resources and figure out what’s dependent on what; AWS CloudFormation handles all of that.

AWS CloudFormation gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion. Ansible is a radically simple IT automation engine that automates cloud provisioning, configuration management, application deployment, intra-service orchestration, and many other IT needs. Ansible uses no agents and no additional custom security infrastructure, so it's easy to deploy, and it uses a very simple language that lets you describe your automation jobs in a way that approaches plain English.

Many applications using Amazon Web Services (AWS) will interact with the Amazon Simple Storage Service (S3) at some point, since it’s an inexpensive storage service with high availability and durability guarantees, and most native AWS services use it as a building block.

In this post, I’ll go over a few of the configuration settings that you can use to secure your S3 resources, with a base CloudFormation template at the end that you can play with and extend.

The examples in this post are in YAML, though CloudFormation also accepts JSON; which one you use is largely a matter of personal preference.

It’s a good idea to encrypt your data wherever it’s stored so that only those with access to the keys can read it. Any sensitive data should always be encrypted, and it’s usually only acceptable to leave data unencrypted if it’s intended to be readable by everyone, for all time.

AWS S3 supports several mechanisms for server-side encryption of data:

  • S3-managed AES keys (SSE-S3)
    • Every object that is uploaded to the bucket is automatically encrypted with a unique AES-256 encryption key.
    • Encryption keys are generated and managed by S3.
  • KMS-managed keys (SSE-KMS)
    • Objects are encrypted with keys managed in AWS KMS, using either the default account key for S3 or a custom key that you create.
    • KMS provides auditing of key usage and lets you rotate or disable keys.
  • Customer-provided keys (SSE-C)
    • You supply the encryption key with each request; S3 uses it to encrypt or decrypt the object but does not store the key.

It usually makes sense to use SSE-S3 or SSE-KMS unless you have a good reason to do otherwise. SSE-S3 is very simple to use since all the details are taken care of for you, while SSE-KMS provides additional auditing and credential rotation capabilities.

The decision for which one to use usually depends on your security requirements and support by the services that will be interacting with S3. Many AWS services natively support KMS encryption, while a few services only support SSE-S3.

If both SSE-S3 and SSE-KMS are options for you, then I’d recommend using SSE-KMS with custom keys generated in KMS, since this provides you with auditing by default and allows you to disable or rotate encryption keys with minimal effort.

Enabling encryption by default
You can enable encryption by default for your S3 bucket with either SSE-S3 or SSE-KMS.

S3 bucket properties for SSE-S3 encryption:

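A minimal sketch of what this can look like (the logical resource name `EncryptedS3Bucket` is a placeholder):

```yaml
Resources:
  EncryptedS3Bucket:          # placeholder logical name
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          # AES256 selects SSE-S3: S3 generates and manages the keys.
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
```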

S3 bucket properties for SSE-KMS encryption using the default account KMS key:

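A sketch of the same bucket using SSE-KMS with the account's default KMS key for S3; specifying `aws:kms` without a `KMSMasterKeyID` falls back to the AWS-managed key (resource name is a placeholder):

```yaml
Resources:
  EncryptedS3Bucket:          # placeholder logical name
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          # aws:kms with no key ID uses the default account KMS key for S3.
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: 'aws:kms'
```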

S3 bucket properties for SSE-KMS encryption using a custom KMS key:

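A sketch that creates a custom KMS key in the same template and points the bucket at it (resource names and the key policy are illustrative; the key policy here simply grants the account root full control):

```yaml
Resources:
  EncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      Description: Custom key for encrypting S3 objects
      EnableKeyRotation: true
      KeyPolicy:
        Version: '2012-10-17'
        Statement:
          - Sid: AllowAccountAdministration
            Effect: Allow
            Principal:
              AWS: !Sub 'arn:aws:iam::${AWS::AccountId}:root'
            Action: 'kms:*'
            Resource: '*'

  EncryptedS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: 'aws:kms'
              KMSMasterKeyID: !Ref EncryptionKey
```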

I’d suggest setting up a custom KMS key if you want to use KMS by default since this allows you to disable and rotate your key as needed, which is a helpful security capability.

Encrypting uploaded files

The following S3 bucket policy statement ensures that PutObject requests for uploading files to your S3 bucket use server-side encryption:

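A sketch of such a statement inside an AWS::S3::BucketPolicy resource (names are placeholders). The `Null` condition matches when the `x-amz-server-side-encryption` header is absent, so the deny fires on unencrypted uploads:

```yaml
  S3BucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref EncryptedS3Bucket   # placeholder bucket reference
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Sid: DenyUnencryptedUploads
            Effect: Deny
            Principal: '*'
            Action: 's3:PutObject'
            Resource: !Sub '${EncryptedS3Bucket.Arn}/*'
            Condition:
              # True when no server-side encryption header is supplied.
              'Null':
                's3:x-amz-server-side-encryption': 'true'
```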

Sid stands for "statement identifier" and can be set to anything you like; it is primarily a label for the statement within the policy. Sid values must be unique within a given policy, but they can be repeated across different policies.

Using a specific encryption mechanism

The following S3 bucket policy statement requires SSE-KMS to be used if a server-side encryption header is provided:

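One way to express this as a statement in the bucket policy's `Statement` list (names are placeholders). The extra `Null: false` condition restricts the deny to requests that actually include the header, matching the "if provided" wording:

```yaml
          - Sid: DenyNonKmsEncryptionHeader
            Effect: Deny
            Principal: '*'
            Action: 's3:PutObject'
            Resource: !Sub '${EncryptedS3Bucket.Arn}/*'
            Condition:
              # Only evaluate when the encryption header is present...
              'Null':
                's3:x-amz-server-side-encryption': 'false'
              # ...and deny when it is anything other than SSE-KMS.
              StringNotEquals:
                's3:x-amz-server-side-encryption': 'aws:kms'
```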

The following S3 bucket policy statement requires either SSE-S3 or SSE-KMS to be used if a server-side encryption header is provided:

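A sketch of the more permissive variant: the same statement, but with the `StringNotEquals` condition given a list of acceptable values so either SSE-S3 (`AES256`) or SSE-KMS (`aws:kms`) passes:

```yaml
          - Sid: DenyUnsupportedEncryptionHeader
            Effect: Deny
            Principal: '*'
            Action: 's3:PutObject'
            Resource: !Sub '${EncryptedS3Bucket.Arn}/*'
            Condition:
              'Null':
                's3:x-amz-server-side-encryption': 'false'
              # Deny any header value that is not in this list.
              StringNotEquals:
                's3:x-amz-server-side-encryption':
                  - AES256
                  - 'aws:kms'
```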


Any sensitive data that is being stored in S3 should be uploaded and retrieved using encrypted connections, otherwise it’s possible for the data to be read and modified between endpoints.

Encrypting connections when accessing resources

The following S3 bucket policy statement requires encrypted connections when uploading or reading S3 resources:

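A sketch of this statement for the bucket policy (names are placeholders). The `aws:SecureTransport` condition key is false for plain HTTP requests, so the deny covers every action over unencrypted connections:

```yaml
          - Sid: DenyInsecureConnections
            Effect: Deny
            Principal: '*'
            Action: 's3:*'
            Resource:
              - !GetAtt EncryptedS3Bucket.Arn          # bucket-level actions
              - !Sub '${EncryptedS3Bucket.Arn}/*'      # object-level actions
            Condition:
              Bool:
                'aws:SecureTransport': 'false'
```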


While encrypting your data at rest and in transit is important, controlling who can view and download sensitive files is essential. Misconfiguring S3 buckets to allow public read or write access is a serious security risk if you're working with confidential data.

Using an Access Control List (ACL) that grants limited permissions

You can optionally set the AccessControl bucket property to one of a set of predefined values, which applies a "canned" access control list that you can then build on with IAM and S3 bucket policies. See the S3 documentation on canned ACLs for the underlying permissions granted by each value.

By default, the most locked-down base ACL, Private, is used; it grants only the account owner full control over the bucket and its resources.

BucketOwnerFullControl grants both the bucket owner and the object owner full control over an object (e.g. a file) that has been uploaded to the bucket, which may be helpful for some applications.

ACLs that grant public read or write access should be avoided for any buckets that store sensitive data.
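As a sketch, the property sits alongside the bucket's other properties; `Private` is shown here, though it is already the default:

```yaml
  EncryptedS3Bucket:          # placeholder logical name
    Type: AWS::S3::Bucket
    Properties:
      # Canned ACL; Private grants only the account owner full control.
      AccessControl: Private
```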

Blocking public access by default

S3 bucket properties for blocking public access by default:

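A sketch of the PublicAccessBlockConfiguration properties, which turn off all four public-access pathways (bucket name is a placeholder):

```yaml
  EncryptedS3Bucket:          # placeholder logical name
    Type: AWS::S3::Bucket
    Properties:
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true        # reject requests that set public ACLs
        BlockPublicPolicy: true      # reject bucket policies that allow public access
        IgnorePublicAcls: true       # ignore any public ACLs already present
        RestrictPublicBuckets: true  # limit public-policy buckets to AWS principals
```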

S3 bucket policy statements for preventing S3 requests that grant public access to resources:

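One way to express this as a bucket policy statement (names are placeholders): deny uploads and ACL changes that request a canned ACL granting public or all-authenticated-user read access:

```yaml
          - Sid: DenyPublicReadAcls
            Effect: Deny
            Principal: '*'
            Action:
              - 's3:PutObject'
              - 's3:PutObjectAcl'
            Resource: !Sub '${EncryptedS3Bucket.Arn}/*'
            Condition:
              StringEquals:
                's3:x-amz-acl':
                  - public-read
                  - public-read-write
                  - authenticated-read
```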


Below is a starter CloudFormation YAML template which applies the discussed policies to

  • enforce encryption at rest,
  • enforce encryption in transit,
  • block public access by default, and
  • block access control list changes that grant public read permissions to resources.

This example uses SSE-S3 as the default encryption algorithm and allows either SSE-S3 or SSE-KMS encryption to be used when specified. You can substitute the alternative values from the previous sections to use SSE-KMS by default or to restrict resources to a single encryption mechanism.

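A starter template along these lines might look as follows; all logical names are placeholders, and the policy statements combine the patterns discussed above:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Starter template for a locked-down S3 bucket (names are placeholders)

Resources:
  SecureBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: Private
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          # SSE-S3 by default; swap in aws:kms plus KMSMasterKeyID for SSE-KMS.
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true

  SecureBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref SecureBucket
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          # Enforce encryption at rest: deny uploads with no encryption header.
          - Sid: DenyUnencryptedUploads
            Effect: Deny
            Principal: '*'
            Action: 's3:PutObject'
            Resource: !Sub '${SecureBucket.Arn}/*'
            Condition:
              'Null':
                's3:x-amz-server-side-encryption': 'true'
          # If an encryption header is provided, allow only SSE-S3 or SSE-KMS.
          - Sid: DenyUnsupportedEncryptionHeader
            Effect: Deny
            Principal: '*'
            Action: 's3:PutObject'
            Resource: !Sub '${SecureBucket.Arn}/*'
            Condition:
              'Null':
                's3:x-amz-server-side-encryption': 'false'
              StringNotEquals:
                's3:x-amz-server-side-encryption':
                  - AES256
                  - 'aws:kms'
          # Enforce encryption in transit: deny all plain-HTTP requests.
          - Sid: DenyInsecureConnections
            Effect: Deny
            Principal: '*'
            Action: 's3:*'
            Resource:
              - !GetAtt SecureBucket.Arn
              - !Sub '${SecureBucket.Arn}/*'
            Condition:
              Bool:
                'aws:SecureTransport': 'false'
          # Block ACL changes that would grant public read permissions.
          - Sid: DenyPublicReadAcls
            Effect: Deny
            Principal: '*'
            Action:
              - 's3:PutObject'
              - 's3:PutObjectAcl'
            Resource: !Sub '${SecureBucket.Arn}/*'
            Condition:
              StringEquals:
                's3:x-amz-acl':
                  - public-read
                  - public-read-write
                  - authenticated-read

Outputs:
  BucketName:
    Value: !Ref SecureBucket
```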

You can add additional policy statements to whitelist specific IAM users to perform specific actions on specific resources. Depending on your needs, you can also set up auditing and alarming for when S3 resources are accessed or when access policies are modified, update your bucket configuration to prevent data from being overwritten, and set up additional mechanisms to control where S3 queries can be initiated from.



Notes on the Ansible cloudformation module:

  • CloudFormation features change often, and this module tries to keep up. That means your botocore version should be fresh. The version listed in the requirements is the oldest version that works with the module as a whole. Some features may require more recent versions, and a minimum version is not pinpointed for each feature. Instead of relying on the minimum version, keep botocore up to date; AWS is always releasing features and fixing bugs.
  • If parameters are not set within the module, the following environment variables can be used, in decreasing order of precedence:
    • AWS_URL or EC2_URL
    • AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY or EC2_ACCESS_KEY
    • AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY or EC2_SECRET_KEY
    • AWS_SECURITY_TOKEN or EC2_SECURITY_TOKEN
    • AWS_REGION or EC2_REGION

Step 1: Install Ansible for AWS Management

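One common way to install Ansible together with the AWS SDK libraries that its AWS modules require (package names are the usual ones; adjust for your environment, e.g. use a virtualenv or your OS package manager):

```shell
# Install Ansible plus boto3/botocore, which the AWS modules depend on.
pip install ansible boto3 botocore

# Verify the installation.
ansible --version
```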

Step 2: Create a CloudFormation template

Example for cloudformation.j2:

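A minimal sketch of what cloudformation.j2 could contain: a CloudFormation YAML body with a Jinja2 variable (`bucket_name` is a hypothetical variable name supplied by the playbook):

```yaml
# cloudformation.j2 -- illustrative template; bucket_name comes from the playbook.
AWSTemplateFormatVersion: '2010-09-09'
Description: S3 bucket created via Ansible and CloudFormation

Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: "{{ bucket_name }}"
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: AES256

Outputs:
  BucketName:
    Value: !Ref S3Bucket
```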

Step 3: Create an Ansible playbook

Example playbook using the Ansible cloudformation module:

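A sketch of a playbook that renders the template and creates the stack; the stack name, bucket name, and region are placeholders, and on older Ansible versions the module is available as bare `cloudformation` rather than `amazon.aws.cloudformation`:

```yaml
# playbook.yml -- illustrative; names and region are placeholders.
- name: Create S3 bucket via CloudFormation
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    bucket_name: my-example-bucket-123456   # must be globally unique
  tasks:
    - name: Render the CloudFormation template from cloudformation.j2
      template:
        src: cloudformation.j2
        dest: /tmp/s3-bucket.yml

    - name: Create or update the CloudFormation stack
      amazon.aws.cloudformation:
        stack_name: ansible-s3-bucket
        state: present
        region: us-east-1
        template: /tmp/s3-bucket.yml
```

Running `ansible-playbook playbook.yml` with AWS credentials configured (via the environment variables listed above, for example) creates the stack and the bucket.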

About The Author

Sarathsankar RS

Cloud DevOps Engineer