I have a file named sdk.js. The file name is not versioned or hashed in any way because we don't control the sites where it is embedded, so the name must remain consistent.

The browser should cache this file but continually revalidate through CloudFront before using its copy, which is the behavior specified by the Cache-Control: no-cache directive as I understand it.

I am uploading the file to S3 with the Cache-Control: no-cache header so CloudFront implements this behavior.
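For reference, the upload looks roughly like this (a minimal sketch assuming the AWS SDK for JavaScript v3; the region, bucket name, and local path are placeholders):

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

async function uploadSdk(): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-sdk-bucket",               // placeholder bucket name
      Key: "sdk.js",                         // stable, un-versioned file name
      Body: await readFile("dist/sdk.js"),   // placeholder local path
      ContentType: "application/javascript",
      CacheControl: "no-cache",              // the header CloudFront and the browser currently see
    })
  );
}

uploadSdk().catch(console.error);
```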
The problem I encounter is documented here: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/Expiration.html#stale-if-error:~:text=Origin%20adds%20Cache%2DControl%3A%20no%2Dcache%2C%20no%2Dstore%2C%20and/or%20private%20directives%20to%20the%20object
Basically, it says that the presence of this directive will also make the CDN revalidate with the origin every single time before serving the file, even if the CloudFront Minimum TTL is > 0.
In my research I noticed there are Cache-Control directives to control the TTL for the browser and the CDN independently: s-maxage and max-age. So are there directives or settings that let me control the revalidation behavior of the browser and the CDN separately?
2 Answers
The solution is 's-maxage=31536000, max-age=1200, must-revalidate'.

CloudFront will store the object for 's-maxage', which is arbitrarily high, and the browser will store it for the lower 'max-age'.

Once an item is stale, 'must-revalidate' forces the cache to check the origin before it is allowed to use its stored copy. Since the browser loses freshness more rapidly than CloudFront, the browser will ask CloudFront every 'max-age', but CloudFront won't ask S3 until 's-maxage' expires.
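A minimal sketch of applying that header at upload time, again assuming the AWS SDK for JavaScript v3 with placeholder bucket, region, and path names:

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { readFile } from "node:fs/promises";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

async function uploadSdk(): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: "my-sdk-bucket",               // placeholder bucket name
      Key: "sdk.js",
      Body: await readFile("dist/sdk.js"),   // placeholder local path
      ContentType: "application/javascript",
      // CloudFront may keep the object for up to a year (s-maxage);
      // browsers keep it for 20 minutes (max-age) and then must revalidate.
      CacheControl: "s-maxage=31536000, max-age=1200, must-revalidate",
    })
  );
}

uploadSdk().catch(console.error);
```

Also check the distribution's cache policy: CloudFront clamps origin-supplied TTLs to its configured Minimum/Maximum TTL range, so that range has to allow the s-maxage value to take effect.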
You can definitely achieve that by using both the s-maxage and max-age Cache-Control directives. Per the HTTP RFC, s-maxage applies only to shared caches, which means reverse proxies/caches like Varnish, CloudFront, and Cloudflare can use a cache lifetime different from the browser's. Set s-maxage to a higher value than max-age and the CDN will cache the object for a longer period than the browser.
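To confirm the directives actually come through the CDN, a quick check like this can help (Node 18+ global fetch; the URL is a placeholder for wherever sdk.js is served). A repeat request within the s-maxage window should report a cache hit from CloudFront even though the browser only trusts its copy for the shorter max-age:

```typescript
// Inspect the headers CloudFront returns for the SDK file.
async function inspectHeaders(url: string): Promise<void> {
  const res = await fetch(url);
  console.log("cache-control:", res.headers.get("cache-control"));
  // CloudFront reports hits/misses in the x-cache response header,
  // e.g. "Hit from cloudfront" or "Miss from cloudfront".
  console.log("x-cache:", res.headers.get("x-cache"));
}

inspectHeaders("https://example.cloudfront.net/sdk.js").catch(console.error);
```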