
I have a large file on my laptop (localhost). I would like to copy this file to a docker container which is located on a remote server. I know how to do it in two steps: I first copy the file to my remote server, and then I copy the file from the remote server to the docker container. But, for obvious reasons, I want to avoid this.
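
For reference, the two-step approach looks roughly like this (my_container and the paths are placeholders):

    # step 1: laptop -> remote server
    scp /path/to/bigfile you@remote_host:/tmp/bigfile
    # step 2: remote server -> container
    ssh you@remote_host docker cp /tmp/bigfile my_container:/data/bigfile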

A similar question which has a complicated answer is covered here: Copy file from remote docker container

However, in that question the direction is reversed: the file is copied from the remote container to localhost.

Additional request: is it possible to do this upload piecewise, so that in case of a network failure I can resume the upload from where it stopped instead of having to upload the entire file again? I ask because the file is fairly large, ~13 GB.

3 Answers


  1. From https://docs.docker.com/engine/reference/commandline/cp/#corner-cases and https://www.cyberciti.biz/faq/howto-use-tar-command-through-network-over-ssh-session/ you would just do:

    tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | ssh you@host docker exec -i CONTAINER tar Cxf DEST_PATH -
    

    or

    tar Ccf $(dirname SRC_PATH) - $(basename SRC_PATH) | ssh you@host docker cp - CONTAINER:DEST_PATH
    

    Or (untested, no idea if this works):

    DOCKER_HOST=ssh://you@host docker cp SRC_PATH CONTAINER:DEST_PATH
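
    As a concrete example of the first variant (an untested sketch; /home/me/payload.tar.gz, the container name app, and /data are placeholders):

    # stream the file over a single SSH hop and unpack it inside the container
    tar Ccf /home/me - payload.tar.gz | ssh you@host docker exec -i app tar Cxf /data -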
    
  2. This will work if you are running a *nix server and a Docker container with an SSH server in it.

    You can create a local tunnel on the remote server by following these steps:

    mkfifo host_to_docker
    netcat -lkp your_public_port < host_to_docker | nc docker_ip_address 22 > host_to_docker &
    

    The first command creates a named pipe, which you can check with file host_to_docker.

    The second command is netcat, the greatest network utility of all time. It accepts a TCP connection on your_public_port and forwards it to another netcat instance, which relays the underlying SSH traffic to the SSH server running in the container and writes its responses back to the pipe we created.

    The last step is:

    scp -P your_public_port payload.tar.gz user@remote_host:/dest/folder
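
    To address the resumable-upload part of the question, you could use rsync over the same tunnel instead of scp (an untested sketch, assuming rsync is available in the container; --partial keeps partially transferred files so a rerun can pick up where it left off):

    rsync --partial --progress -e "ssh -p your_public_port" payload.tar.gz user@remote_host:/dest/folder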
    
  3. You can use the DOCKER_HOST environment variable and rsync to achieve your goal.

    First, you set DOCKER_HOST, which causes your docker client (i.e., the docker CLI util) to connect to the remote server's docker daemon over SSH. This probably requires you to create an ssh-config entry for the destination server.

    export DOCKER_HOST="ssh://<your-host-name>"
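
    Such an ssh-config entry might look like this (a sketch; the alias, hostname, user, and key path are assumptions for your setup):

    # ~/.ssh/config
    Host <your-host-name>
        HostName server.example.com
        User you
        IdentityFile ~/.ssh/id_ed25519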
    

    Next, you can use docker exec in conjunction with rsync to copy your data into the target container. This requires you to obtain the container ID via, e.g., docker ps. Note that rsync must be installed in the container.

    rsync -ar -e 'docker exec -i' <local-source-path> <container-id>:/<destination-in-the-container>
    

    Since rsync is used, this also allows you to resume interrupted uploads later, provided the appropriate flags are used.
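
    For example (a sketch; --partial keeps partially transferred files so that rerunning the same command can resume an interrupted upload):

    rsync -ar --partial --progress -e 'docker exec -i' <local-source-path> <container-id>:/<destination-in-the-container>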
