101

I created a CloudFront distribution using my files on S3. It worked fine and all my files were available. But today I updated my files on S3 and tried to access them via CloudFront, and it still served the old files.

What am I missing?

3
  • 2
    The old files may still be cached in CloudFront, depending on their expiry headers. How long do you expect CloudFront to cache them? Commented May 10, 2015 at 17:40
  • 3
    Caches cache files. The length of time is configurable. Commented May 10, 2015 at 18:53
  • 5
    For those who come across this now, you have to invalidate /, not index.html. If you do index.html, it'll only work if someone goes to example.com/index.html. Since you want it to work on example.com, / will give you what you want. Commented Jun 30, 2020 at 14:30

14 Answers

112

Just ran into the same issue. At first I tried setting the Cache-Control on the updated files in my S3 bucket to 0 and to max-age=0, but that didn't work.

What did work was following the steps from @jpaljasma. Here are the steps.

First go to your AWS CloudFront service.

Then click on the CloudFront distribution you want to invalidate.

Click on the Invalidations tab, then click on "Create Invalidation".

In the "object path" text field, you can list the specific files ie /index.html or just use the wildcard /* to invalidate all. This forces cloudfront to get the latest from everything in your S3 bucket.

Once you have filled in the text field, click "Invalidate". After CloudFront finishes invalidating, you'll see your changes the next time you visit the web page.


Note: if you want to do it via the AWS command line interface, you can run the following command:

aws cloudfront create-invalidation --distribution-id <your distribution id> --paths "/*"

The /* will invalidate everything; replace it with specific file paths if you only updated a few.

To find the list of CloudFront distribution IDs, you can run aws cloudfront list-distributions.
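
If you only need the IDs and domain names from that output, a JMESPath --query can trim it down. This is just a minimal sketch of one way to do it:

aws cloudfront list-distributions --query "DistributionList.Items[].[Id,DomainName]" --output table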

Look at these two links for more info on those 2 commands:

https://docs.aws.amazon.com/cli/latest/reference/cloudfront/create-invalidation.html

https://docs.aws.amazon.com/cli/latest/reference/cloudfront/list-distributions.html


4 Comments

You rock the roll! I like this answer!
Can you leave this invalidation after using it?
Do you have to create a new invalidation every time your files update? I already have an invalidation item in CloudFront which worked the first time, but updates to newer files are not picked up.
Just to answer the comment questions above: yes, every time you update files you need to re-run the invalidation. This tells CloudFront 'as of now, the files need re-caching'. Each time they're cached and then later updated, you'll need to re-invalidate them.
25

You should invalidate your objects in the CloudFront distribution cache. Back in the old days you had to do it one file at a time; now you can use a wildcard, e.g. /images/*

https://aws.amazon.com/about-aws/whats-new/2015/05/amazon-cloudfront-makes-it-easier-to-invalidate-multiple-objects/
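
For reference, the same wildcard invalidation can be issued from the CLI. A minimal sketch, with the distribution ID as a placeholder:

aws cloudfront create-invalidation --distribution-id EDFDVBD6EXAMPLE --paths "/images/*"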

Comments

13

How to change the Cache-Control max-age via the AWS S3 Console:

  • Navigate to the file whose Cache-Control you would like to change.
  • Check the box next to the file name (it will turn blue)
  • On the top right click Properties
  • Click Metadata
  • If you do not see a Key named Cache-Control, then click Add more metadata.
  • Set the Key to Cache-Control and the Value to max-age=0, where 0 is the number of seconds you would like the file to remain in the cache. You can replace 0 with whatever value you want. (A CLI equivalent is sketched after these steps.)

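If you have more than a handful of files, the same header can also be set from the command line by copying an object over itself with replaced metadata. This is a minimal sketch with a placeholder bucket and key; note that --metadata-directive REPLACE rewrites all of the object's metadata, so re-specify anything you want to keep, such as the content type:

aws s3 cp s3://my-bucket/index.html s3://my-bucket/index.html --metadata-directive REPLACE --cache-control "max-age=0" --content-type "text/html"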

2 Comments

While this is definitely possible, it won't change the behavior of already cached files. Furthermore, CloudFront has an option to override the minimum TTL.
This is a good way, but I also want the performance benefits of caching. Can we call an API to set this Cache-Control to 0 and then back to 1 day (or whatever it was earlier) after a day?
12

The main advantage of using CloudFront is that it fetches your files from an origin (S3 in your case) and stores them on edge servers to respond to GET requests faster. CloudFront will not go back to the S3 origin for each HTTP request.

To have CloudFront serve the latest files/objects, you have multiple options:

Use CloudFront to Invalidate modified Objects

You can use CloudFront to invalidate one or more files or directories, manually or using a trigger. This option has been described in other responses here. More information at Invalidate Multiple Objects in CloudFront. This approach comes in handy if you update your files infrequently and do not want to lose the performance benefits of cached objects.

Setting object expiration dates on S3 objects

This is now the recommended solution. It is straightforward:

  • Log in to AWS Management Console
  • Go into S3 bucket
  • Select all files
  • Choose "Actions" drop down from the menu
  • Select "Change metadata"
  • In the "Key" field, select "Cache-Control" from the drop down menu.
  • In the "Value" field, enter "max-age=300" (number of seconds)
  • Press "Save" button The default cache value for CloudFront objects is 24 hours. By changing it to a lower value, CloudFront checks with the S3 source to see if a newer version of the object is available in S3.

I use a combination of these two methods to make sure updates are propagated to the edge locations quickly and to avoid serving outdated files from CloudFront.

AWS, however, recommends changing object names by including a version identifier in each file name. If you are using a build command that compiles your files, that option is usually available (as with React's npm build command).
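
As a rough sketch of combining the two approaches for a typical single-page app (the bucket name and build folder are placeholders): the content-hashed build assets get a long max-age, while index.html, which keeps the same name, gets a short one so the edge revalidates it quickly.

# long cache for content-hashed build assets
aws s3 sync build/ s3://my-bucket/ --exclude "index.html" --cache-control "max-age=31536000"
# short cache for index.html, which always keeps the same name
aws s3 cp build/index.html s3://my-bucket/index.html --cache-control "max-age=60"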

2 Comments

The object expiration approach works well for files that can't be versioned, like index.html
It didn't work for me; on iPhone I am still getting the old image.
10

For your changes to be reflected immediately, you have to invalidate objects in CloudFront: Distribution list -> Settings -> Invalidations -> Create Invalidation.

This will clear the cached objects and load the latest ones from S3.

If you are updating only one file, you can also invalidate exactly one file.

It will just take a few seconds to invalidate objects.

Distribution List -> settings -> Invalidations -> Create Invalidation

Distribution List - Invalidations Tab

1 Comment

Do note that once you add an invalidation you won't be able to delete that invalidation docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/…
6

I also faced a similar issue and found out it's really easy to fix in your CloudFront distribution.

Step 1.

Log in to your AWS account and select your target distribution.


Step 2.

Select Distribution Settings and open the Behaviors tab.


Step 3.

Select Edit and choose the option All.


Step 4.
Save your settings and that's it

Comments

3

I also had this issue and solved it by using versioning (not the same as S3 versioning). Here is a comprehensive link to using versioning with CloudFront:

Invalidating Files

In summary:

When you upload a new file or files to your S3 bucket, change the version and update your links as appropriate. Per the documentation, the benefit of using versioning vs. invalidating (the other way to do this) is that there is no additional charge for making CloudFront refresh via version changes, whereas there is with invalidation. If you have hundreds of files this may be cumbersome, but it's possible that adding a version to your root directory, or to your default root object (if applicable), makes it a non-issue. In my case, I have an SPA; all I have to do is change the version of my default root object (index.html to index2.html) and it instantly updates on CloudFront.
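
A rough sketch of the version-in-the-filename variant, with the bucket and file names as placeholders: the new build is uploaded under a new key, so CloudFront treats it as a brand-new object and no invalidation is needed.

# upload the new build output under a versioned key; cached copies of app.v1.js are untouched
aws s3 cp build/app.js s3://my-bucket/js/app.v2.js --cache-control "max-age=31536000"
# then point your HTML at the new key, e.g. <script src="/js/app.v2.js"></script>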

Comments

2

Thanks tedder42 and Chris Heald

I was able to reduce the cache duration on my origin, i.e. the S3 objects, and deliver the files sooner than the default 24 hours. For some of my other distributions I also set "forward all headers to origin", in which case CloudFront doesn't cache anything and sends every request to the origin.

thanks.

2 Comments

Forwarding all headers to your origin tells CloudFront not to cache anything. Why not, instead, set your Cache-Control headers to something like 5 minutes?
I've got hundreds of files inside 2 folders, and changing the Cache-Control headers has to be done per file. I didn't find any option to apply it in bulk, so for the time being I chose the above option to send all requests to the origin.
2

Please refer to this answer; it may help you:

What's the difference between Cache-Control: max-age=0 and no-cache?

Adding a Cache-Control header set to 0 on the selected file in S3 worked for me.

1 Comment

It's probably a late comment, but add the metadata as max-age=0; adding just 0 doesn't make any sense and is not effective.
1

How to change the Cache-Control max-age via the AWS S3 Console:

  1. Go to your bucket
  2. Select all files you would like to change (you can select folders as well; it will include all files inside them)
  3. Click on the Actions dropdown, then click on Edit Metadata
  4. On the page that will open, click on Add metadata
  5. Set Type to System defined
  6. Set Key to Cache-Control
  7. Set Value to max-age=0 (or whatever max-age you would like to set)
  8. Click on Save Changes

Comments

1

Invalidate all distribution files:

aws cloudfront create-invalidation --distribution-id <dist-id> --paths "/*" 

See the docs if you need to remove a file from CloudFront edge caches before it expires.
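
If you want a deploy script to block until the invalidation has finished, the CLI also has a waiter. A minimal sketch, using the invalidation ID returned by the create-invalidation call above:

aws cloudfront get-invalidation --distribution-id <dist-id> --id <invalidation-id>
aws cloudfront wait invalidation-completed --distribution-id <dist-id> --id <invalidation-id>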

Comments

1

For CDK you can do this to create an invalidation:

import { aws_s3_deployment } from "aws-cdk-lib";

new aws_s3_deployment.BucketDeployment(this, "MyBucketDeployment", {
  sources: [aws_s3_deployment.Source.asset("assetFolder")],
  destinationBucket: myBucket,
  distribution: myCloudFrontDistribution,
  distributionPaths: ["/*"],
});

Or you can just set the cache for each object in the bucket:

new aws_s3_deployment.BucketDeployment(this, "MyBucketDeployment", {
  sources: [aws_s3_deployment.Source.asset("assetFolder")],
  destinationBucket: myBucket,
  cacheControl: [aws_s3_deployment.CacheControl.noCache()],
});

Comments

0

The best practice for solving this issue is probably using the Object Version approach.

The invalidation method can also solve this problem, but it brings some side effects, such as increased cost if you exceed 1,000 invalidation paths per month, and some objects may not be removable via this method.

I hope the official doc on "Why CloudFront is serving outdated content from Amazon" can help.

1 Comment

I did my best to turn this into an answer. It was not strictly "Not An Answer" before. I suspect that this will not be perceived as a helpful answer, but I cannot find a flagging reason to use. Please consider editing to make this as much as possible according to How to Answer.
0

My issue was: first I deployed a ReactJS build with the dev API URL, and then redeployed with the prod API URL. In this situation, the React app was still calling the old dev API URL, which caused the issue.

So I learned about AWS CloudFront caching and added /* in the Invalidations section under CloudFront.

That way, I invalidated all the files and made CloudFront drop its cached copies of them.

Thanks

Comments
