Is it good practice for node.js service containers running under AWS ECS to mount a shared node_modules volume persisted on EFS? If so, what’s the best way to pre-populate the EFS before the app launches?
My front-end services run a Node.js app launched as AWS Fargate tasks. The app depends on a large set of node_modules. Does each task's container need to install the entire body of node_modules itself, or can they all mount a shared EFS filesystem containing a single copy of the node_modules?
I've been migrating to AWS Copilot for orchestration, but the docs are pretty fuzzy on how to pre-populate the EFS volume. At one point they say, "we recommend mounting a temporary container and using it to hydrate the EFS, but WARNING: we don't recommend this approach for production." (Storage: AWS Copilot Advanced Use Cases)
2 Answers
Whether it's "good" practice is a matter of opinion, but it is a fairly common practice in ECS. You do have to be very cognizant of the IOPS your application will generate against the EFS volume, however: once an EFS volume runs out of burst credits it can slow down dramatically and hurt your application's performance.
That said, I have never seen an EFS volume used to store `node_modules`. In all honesty it seems like a bad idea to me. Dependencies like that should always be bundled into your Docker image; otherwise, upgrading those dependencies on the EFS volume becomes difficult and may require downtime.
As for pre-populating the volume: you would have to create the initial EFS volume, mount it somewhere like an EC2 instance or another ECS container, and then run whatever commands are necessary on that EC2/ECS instance to copy your files onto the volume.
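For example, a rough sketch of that one-off hydration from an EC2 instance in the same VPC might look like this (the filesystem ID and paths are placeholders, and it assumes amazon-efs-utils, Node.js, and npm are already installed on the instance):

```sh
# Mount the EFS filesystem over TLS (fs-12345678 is a placeholder ID).
sudo mkdir -p /mnt/efs
sudo mount -t efs -o tls fs-12345678:/ /mnt/efs

# Install production dependencies locally from the app's lockfile...
cd /tmp/my-app            # contains package.json and package-lock.json
npm ci --omit=dev

# ...then copy them into the shared location on the EFS volume.
sudo mkdir -p /mnt/efs/node_modules
sudo cp -R node_modules/. /mnt/efs/node_modules/
```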
The quote in your question isn't present on the page you linked, so it's difficult to say exactly what other approach the Copilot team would recommend.
Thanks for the question! This is pointing out some gaps in our documentation that have opened up as we've released new features. There is actually a manifest field, `image.depends_on`, which mitigates the issue called out in the docs about production usage.
To answer your question specifically about hydrating EFS volumes prior to service container start, you can use a sidecar together with the `image.depends_on` field in your manifest.
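For example, here's a sketch of what the manifest could look like (the service name, sidecar name, and ECR image URI are placeholders; double-check the field names against the current Copilot storage docs):

```yaml
# manifest.yml (other fields such as http, cpu, memory, count omitted for brevity)
name: frontend
type: Load Balanced Web Service

image:
  build: Dockerfile
  depends_on:
    bootstrap: success        # don't start the service until the sidecar exits successfully

storage:
  volumes:
    common:
      path: /var/copilot/common
      read_only: true
      efs: true               # Copilot-managed EFS filesystem

sidecars:
  bootstrap:
    image: 123456789012.dkr.ecr.us-west-2.amazonaws.com/bootstrap:latest
    essential: false          # the sidecar is allowed to exit after hydrating the volume
    mount_points:
      - source_volume: common
        path: /var/copilot/common
        read_only: false      # the sidecar needs write access to copy files in
```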
On deployment, you'd build and push your sidecar image to ECR. It should include either your packaged data or a script to pull down the data you need, then move it into the EFS volume at `/var/copilot/common` in the container filesystem.
Then, when you next run `copilot svc deploy`, the sidecar will start first, hydrate the EFS volume, and exit; because of the `depends_on` condition, your service container won't start until the sidecar has finished successfully, so the data is in place before your app launches.
Hope this helps.