
I recently deployed my first website on Amazon EC2: a React site with a Node backend. Both services are containerized with Docker, along with a MongoDB service. My question is, what's the best way to make changes after deployment? I assume the answer is to work on my local machine, push the changes, and then pull them on the virtual machine, but I'm not sure how to properly update the Docker images, or whether this approach is even correct. Right now I am shelling into the container and using nano to edit my code, which obviously needs to change.

My second question: when I make such updates, I delete my Docker images and then regenerate them after updating my code. Somewhere, though, this accumulates space on the device, because after doing this about five times on my EC2 instance, I got an error that the device was out of storage. So I'm unsure whether my approach is wrong, or whether there is a cache or other files somewhere that aren't being removed automatically when I regenerate the images.

Thank you so much for your patience, as this is my first deployment and my first post here. I appreciate your time.

I tried using git pull to update my code instead, but I assume I'd still have to regenerate my Docker images. I'm also new to Git, which doesn't help.

2 Answers


  1. I guess you have some flavor of Linux installed on your EC2 instance, and that you run docker build on the instance every time you want to deploy a new version.

    If that’s the case, you’re probably generating a lot of intermediate images which you never clean up. They might take up a lot of space.

    To solve your immediate problem, read up on docker prune, and make sure you understand how it works and what it does before running anything. I deliberately don't post any exact cleanup commands here, because it's really easy to destroy your server's data with them if you don't know what you're doing.
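
    In the meantime, these inspection commands are read-only and safe: they only report usage and delete nothing, so you can use them to see where the space is going.

        # Safe, read-only inspection; nothing here deletes data.
        docker system df      # space used by images, containers, volumes, and build cache
        docker images -a      # all images, including dangling intermediate ones
        docker ps -a --size   # all containers and the size of their writable layers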

    Building your Docker images is something that should be done in your CI/CD pipeline (GitHub Actions, GitLab CI, AWS CodePipeline, etc.), or at least on the laptop you use for development.

    Once you or your pipeline has built the Docker image, you push it to a registry (Docker Hub, AWS ECR, Artifactory, etc.). When it's time to deploy your code, you have your Docker host pull it from the registry.
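
    A minimal sketch of the build-and-push half, where the image name myapp and the registry URL are placeholders you'd substitute with your own (and note that private registries like ECR require a docker login first):

        # Build the image on your laptop or in CI, not on the server.
        docker build -t registry.example.com/myapp:v2 .
        # Push it to the registry (authenticate with docker login beforehand).
        docker push registry.example.com/myapp:v2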

    Using self-managed EC2 instances just to host Docker containers has its uses, but for your task it's probably overkill. AWS offers managed hosting for Docker containers as a service called ECS. It costs the same as a vanilla EC2 instance with the same configuration, but you don't have to administer the EC2 instances; AWS does that for you.

  2. Your first paragraph does in fact describe the standard approach to updating a deployed Docker image (a command-level sketch follows the list):

    1. Build and test your code locally, possibly without Docker.
    2. docker build a new image out of it.
    3. docker push it to a registry (maybe Docker Hub, in an AWS context maybe ECR).
    4. On the remote system, docker pull the updated image.
    5. docker stop and docker rm the old container, and docker run a new one with the new image.
    6. Optionally docker rmi the old image.
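
    On the remote host, steps 4-6 might look like the following, where the image name myapp, the registry URL, and the container name web are all placeholders:

        # 4. Pull the updated image from the registry.
        docker pull registry.example.com/myapp:v2
        # 5. Replace the running container with one using the new image.
        docker stop web
        docker rm web
        docker run -d --name web -p 3000:3000 registry.example.com/myapp:v2
        # 6. Optionally remove the old image to reclaim disk space.
        docker rmi registry.example.com/myapp:v1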

    Often you will set up a continuous-integration system to build and test the image and push it to a repository, so you just need to commit your changes to source control and wait for the CI system to run.

    A good practice is to use a unique Docker tag for each build; a timestamp or a source-control commit ID both work well here. This has a couple of advantages, most notably that it's very easy to roll back to a previous version just by redeploying the older image tag. If you (eventually) use Kubernetes, unique tags are all but required; if you're using Docker Compose, you can use an environment variable to configure the tag and run a single command to do the update, as sketched below.
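
    As a sketch of this tag-per-build approach with Docker Compose, assuming a compose file whose image: line references an IMAGE_TAG variable (all names here are hypothetical):

        # Hypothetical docker-compose.yml excerpt, shown as comments:
        #   services:
        #     web:
        #       image: registry.example.com/myapp:${IMAGE_TAG}

        # Tag each build with the current source-control commit ID.
        TAG=$(git rev-parse --short HEAD)
        docker build -t registry.example.com/myapp:"$TAG" .
        docker push registry.example.com/myapp:"$TAG"

        # Deploying (or rolling back) is then one command with the right tag.
        IMAGE_TAG="$TAG" docker compose up -d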

    You should almost never use docker exec to modify the code in a container, unless you’re trying to pick apart the file contents of the container locally. Tools like nano, vim, emacs, git, hg, … that only developers use probably shouldn’t be included in your image at all. Avoid using a bind mount to replace the image’s code (and for Node, don’t store the node_modules directory in a Docker volume), even if you’re using Docker locally for integration testing.

    This sequence of pushing and running new images does in fact cause the old images to stick around on the remote system, and you can see them with docker images. docker rmi can delete an individual image.
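
    For example, with placeholder names:

        docker images                             # list local images, old tags included
        docker rmi registry.example.com/myapp:v1  # remove one specific old image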
