
I understand this is a common question, but none of the existing solutions worked for me.

I have two websites hosted on S3:

  • bucket.ca (Allows *.bucket.ca)
  • dev.bucket.ca (Allows dev.bucket.ca)

They are publicly accessible, but I do not want anyone to access their S3 URLs directly, so I added a StringEquals condition to the bucket policy.

One for each of the buckets:

  • arn:aws:s3:::bucket.ca/*
  • arn:aws:s3:::dev.bucket.ca/*

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::dev.bucket.ca/*",
      "Condition": {
        "StringEquals": {
          "aws:UserAgent": "random-hash"
        }
      }
    }
  ]
}

Here is the part that gets frustrating: every time I make a change to CloudFront or to a bucket, the behavior is highly inconsistent.

dev.bucket.ca appears to be working as intended: its S3 URL results in Access Denied, and I can access any of its child paths (dev.bucket.ca/*). So I replicated this configuration for bucket.ca, but the result is always 403.

Checking back with dev.bucket.ca, I am no longer able to access its child paths; dev.bucket.ca/404 now results in 403.

Is there a reliable way to test S3 and CloudFront configurations? Each time CloudFront is edited, the changes are slow to propagate. Should I be wiping my browser cache and reopening an incognito window each time to get a more reliable result?
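One way to take the browser cache out of the equation entirely is to test with a scripted client that sends the User-Agent header explicitly on every request. A minimal sketch in Python (the URL and header value below are placeholders, not my real secret):

```python
import urllib.error
import urllib.request


def build_request(url: str, user_agent: str) -> urllib.request.Request:
    """Build a GET request with an explicit User-Agent header,
    so no browser cache or default UA gets in the way."""
    return urllib.request.Request(url, headers={"User-Agent": user_agent})


def fetch_status(url: str, user_agent: str) -> int:
    """Return the HTTP status code for the request (200, 403, ...)."""
    req = build_request(url, user_agent)
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        # urlopen raises on 4xx/5xx; the code is still what we want to see.
        return err.code


if __name__ == "__main__":
    # Placeholder values -- substitute the real site and secret.
    req = build_request("https://dev.bucket.ca/404", "random-hash")
    print(req.get_header("User-agent"))  # prints: random-hash
```

Each `fetch_status` call is a fresh, uncached request, so results only change when CloudFront or the bucket policy actually changes.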

  • The normal method is to Restrict Access to Amazon S3 Content by Using an Origin Access Identity - Amazon CloudFront. Commented Jul 7, 2019 at 7:38
  • 3
    I explored this method. It doesn't work for Static Websites hosted on S3 according to the docs. I don't understand why either, so it would be nice to know. Commented Jul 8, 2019 at 14:36
  • 1
    @Dan I think this is because accesses through S3 endpoints are direct and under control of IAM while accesses through S3 Web HTTP Endpoints are public HTTP accesses like any other, preventing IAM from being able to determine whether they come from a given AWS service. Commented Aug 18, 2020 at 14:35

1 Answer


To debug this issue, I reverted the policy to allow full public access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::dev.bucket.ca/*"
    }
  ]
}

This policy works for both bucket.ca and dev.bucket.ca, so it looks like my condition is the problem:

"aws:UserAgent": "random-hash"

It turns out the two names are intentionally different: the CloudFront origin custom header must be named User-Agent, while the S3 policy condition key is aws:UserAgent (no hyphen). So CloudFront should send User-Agent as its custom header, and the S3 policy should match on aws:UserAgent.
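As I understand it, S3 takes the value of the incoming HTTP User-Agent header and compares it to the policy value with an exact, case-sensitive match. A rough simulation of that evaluation (not AWS's actual code, just to illustrate the matching rules):

```python
def string_equals_user_agent(request_headers: dict, expected: str) -> bool:
    """Rough simulation of the policy's StringEquals condition on
    aws:UserAgent: look up the HTTP User-Agent header and compare it
    to the policy value exactly."""
    # HTTP header *names* are case-insensitive, so normalize the lookup.
    normalized = {k.lower(): v for k, v in request_headers.items()}
    actual = normalized.get("user-agent")
    # StringEquals compares *values* exactly and case-sensitively.
    return actual == expected


# The values on both sides must match character for character:
print(string_equals_user_agent({"User-Agent": "RandomHash"}, "RandomHash"))  # True
print(string_equals_user_agent({"User-Agent": "randomhash"}, "RandomHash"))  # False
print(string_equals_user_agent({}, "RandomHash"))                            # False
```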

I tested this after the CloudFront distribution finished deploying, and both websites worked correctly. The S3 URLs can no longer be accessed directly either, which is great.


The working policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::dev.bucket.ca/*",
      "Condition": {
        "StringEquals": {
          "aws:UserAgent": "RandomHash"
        }
      }
    }
  ]
}

With the following CloudFront origin custom header:

Header Name: User-Agent

Value: RandomHash (this must exactly match the value in the policy's StringEquals condition)
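For reference, here is roughly where that setting lives in the distribution's origin configuration (a sketch only; the origin Id and DomainName below are placeholders, field names as in CloudFront's DistributionConfig):

```json
{
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "S3-Website-dev.bucket.ca",
        "DomainName": "dev.bucket.ca.s3-website-us-east-1.amazonaws.com",
        "CustomHeaders": {
          "Quantity": 1,
          "Items": [
            {
              "HeaderName": "User-Agent",
              "HeaderValue": "RandomHash"
            }
          ]
        }
      }
    ]
  }
}
```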


3 Comments

What was your full (redacted) policy that got this working?
@JohnathanElmore Updated above to what I am currently using for S3 and CloudFront.
Did you block public access to the bucket? (Using the Block Public Access checkbox)
