
First of all, I am a bit new to dealing with Docker and whatnot.

So I have Postgres databases running in 3 different containers, plus pgAdmin.

To do this I have a docker-compose.yml and I perform a

docker compose up -d

Good!

But let’s say that I want to run a container on a different machine with the current state of the container, so I save the container as a TAR file by running

docker save -o ...

And in the target PC I load the TAR file

docker load -i ...

The image is loaded correctly, but I am not able to start the container again. I tried docker run, docker start, docker compose up, … As I said, I’m a newbie with Docker, and what I found on the internet doesn’t solve my problem.

Could you shed some light on this?

Thanks!

2 Answers


  1. You need to export the database data and restore it in some form. Don’t try to move the containers; that approach won’t work.

    For example, if you’ve started the database with its data stored on the host system via a bind mount, maybe your Compose file looks like

    version: '3.8'
    services:
      database:
        image: postgres:15   # or whichever Postgres image/tag you actually run
        volumes:
          - ./pgdata:/var/lib/postgresql/data
    

    then if you copy the ./pgdata directory to the remote system, it will have the same database data. You don’t need to directly copy the images or try to persist the containers.

    here$ docker-compose down
    here$ scp -r docker-compose.yml pgdata there:
    here$ ssh there
    there$ docker-compose up
    

    The PostgreSQL pg_dump and pg_restore tools could be used in a similar way: take a local database dump, copy it, start the database on the remote system, and restore it.
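
    A sketch of that flow, assuming a hypothetical database mydb, the default postgres superuser, and the database service name from the Compose file above; it also assumes mydb already exists on the remote side (for example, created via POSTGRES_DB). Adjust the names to your setup.

    here$ docker-compose exec -T database pg_dump -U postgres -Fc mydb > dump.pgdata
    here$ scp dump.pgdata there:
    here$ ssh there
    there$ docker-compose up -d
    there$ docker-compose exec -T database pg_restore -U postgres -d mydb < dump.pgdata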

    Moving the raw files gets more complicated if you’re using named volumes for storage: you need to run a temporary container to export the volume content on one system, and a similar temporary container to restore it on the other. The Docker documentation has instructions on doing this. If you think you’re going to be doing this sort of migration often, and your host OS doesn’t have a significant performance penalty for Docker bind mounts, this could be a reason to prefer a bind mount (host directory) over a named volume.
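
    A minimal sketch of that temporary-container approach, assuming a hypothetical named volume called pgdata (check docker volume ls for the real name, which Compose usually prefixes with the project name):

    here$ docker run --rm -v pgdata:/data -v "$PWD":/backup alpine tar czf /backup/pgdata.tgz -C /data .
    here$ scp pgdata.tgz there:
    here$ ssh there
    there$ docker run --rm -v pgdata:/data -v "$PWD":/backup alpine tar xzf /backup/pgdata.tgz -C /data

    Docker creates the pgdata volume on the target automatically the first time it’s mounted.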


    A container is a wrapper around a single process, with some kernel-level isolation features. In the same way that you don’t usually move a running process from one computer to another, you don’t usually move a container.

    On top of this, the container filesystem itself is intended to be temporary. In the same way that you can ^C a process and run it again, and it will lose its in-memory state but can usually reconstruct it, you should generally be able to docker stop && docker rm a container and then docker run it anew, without losing things.

    To support that, any data that needs to be persisted across container runs needs to be stored in some sort of storage outside the container-temporary filesystem. This is the bind-mount and named-volume mechanism described above. Volumes are managed separately from containers, and there aren’t dedicated commands to manage volume contents.
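
    For instance, you can list and inspect the volumes themselves, but there is no built-in command to browse or edit what’s inside them (volume name hypothetical):

    here$ docker volume ls
    here$ docker volume inspect pgdata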

    While there are Docker commands that try to export a container filesystem, they ignore volume contents. docker export and docker commit (almost never best-practice commands) don’t see volumes at all. docker save only saves an image, not the container filesystem or volumes (and it’s only needed to transport images between machines without using a registry, as in an air-gapped environment).

    Conversely, since containers are intended to be kind of temporary, moving the volume is enough. If you docker rm && docker run a container with the same mounted storage, it should recover its state. But that also means the two commands don’t need to be on the same system: if you copy the state (the host directory or volume contents) to the remote system, and docker run a new container there, it should come back in the same state.
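
    For example, on a single machine (the image, container name, and path here are hypothetical):

    here$ docker run -d --name db -e POSTGRES_PASSWORD=secret -v "$PWD/pgdata":/var/lib/postgresql/data postgres:15
    here$ docker stop db && docker rm db
    here$ docker run -d --name db -e POSTGRES_PASSWORD=secret -v "$PWD/pgdata":/var/lib/postgresql/data postgres:15

    The second docker run comes back with the same data, and the same pair of commands works across two machines once ./pgdata has been copied over.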

  2. First of all, let’s be clear that docker save just packs an image together with its base layers. No container data is shifted whatsoever.

    If you are encountering this issue, make sure you use the actual image name and tag when you save it, like

    docker save -o container.tar <image name:tag>
    

    Example

    docker save -o container.tar microservice:1.23.5
    

    and not like this

    docker save -o container.tar <image id>
    docker save -o container.tar 48993212
    

    as that will leave the image unnamed when you load it back into Docker on the other system, and since your Compose file might reference the image by name, that may be what is causing the issue. Could you share the container names, the YAML you created, and the name of the image you see on the second computer by typing

    docker images
    
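
    If you have already loaded an unnamed image, you can also re-tag it by ID so it matches what your Compose file expects (using the hypothetical ID and name from above):

    docker tag 48993212 microservice:1.23.5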

    Hope that answers your question!
