
There may very well be a duplicate; however, I couldn't really find an answer on the web that addresses the issue from this perspective.

Issue

Suddenly, Docker has "lost" all my containers. I have looked here and there and found that this is a relatively common phenomenon. I can recall it happening to me two or three times over the past years as well, but I cannot recall how I managed to fix it back then.

Yesterday, running docker container ls worked just fine and output a list of all the containers I have created over the past years. This morning I woke up only to choke on my double espresso: all the containers had vanished. Running docker container ls outputs an empty list, as if I had never created a container before, even though the /var/lib/docker/containers directory still contains all the (latest) container data (and everything else one level up in /var/lib/docker is intact).

Context

  • OS: Kali GNU/Linux Rolling
  • Kernel version: 5.19.0-kali2-amd64
  • Docker version: 19.03.15
  • Docker daemon version: 20.10.19+dfsg1

What I Tried

I tried restarting the Docker daemon, which resulted in no change.
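
For anyone following along, "restarting the daemon" here means restarting the systemd service:

# Restart the Docker daemon and confirm it came back up
sudo systemctl restart docker
sudo systemctl status docker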

I also tried duplicating the /var/lib/docker directory, deleting the original and recreating it from the contents of the duplicate, since someone said this resolved their issue. It didn't work for me, though.

I tried explicitly stating the path to the data dir in the /etc/docker/daemon.json configuration file by setting data-root to /var/lib/docker and restarting the daemon, but that didn’t help either.
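
For reference, this is roughly what that step looked like. Note that the tee below overwrites daemon.json, so if you already have other keys in there, merge them in instead of copying this verbatim:

# Point the daemon explicitly at the default data root (overwrites any existing daemon.json!)
sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "data-root": "/var/lib/docker"
}
EOF
sudo systemctl restart docker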

I tried reinstalling Docker by running:

sudo apt-get install --reinstall docker-ce

But that didn’t change anything.

Finally, I tried rebooting the system, which made no difference either.


2 Answers


  1. Chosen as BEST ANSWER

    Update

    I searched the web for a while and came across this comment on SO that suggested running docker container ls with the --all flag:

    docker container ls --all
    

    This resulted in a list populated with every container I have ever created in Docker (yippy).

    The next problem I encountered: when I tried to start any of these containers, the following error was thrown:

    Error response from daemon: NanoCPUs can not be set, as your kernel does not support CPU cfs period/quota or the cgroup is not mounted
    Error: failed to start containers: <...>
    

    I can trace this error back to the logs of service docker status and to the output I see when I run dockerd directly:

    Your kernel does not support swap memory limit
    Your kernel does not support memory reservation
    Your kernel does not support oom control
    Your kernel does not support memory swappiness
    Your kernel does not support kernel memory limit
    Your kernel does not support kernel memory TCP limit
    Your kernel does not support cgroup cpu shares
    Your kernel does not support cgroup cfs period
    Your kernel does not support cgroup cfs quotas
    Your kernel does not support cgroup rt period
    Your kernel does not support cgroup rt runtime
    Unable to find blkio cgroup in mounts
    

    That seems like a lot of unsupported features... I cannot really come up with a cause for this, as everything worked just fine a few days ago, nor can I recall touching anything related to cgroups or at the kernel level. I decided to check the logs of the Docker daemon with journalctl -xu docker and could not find any misbehaviour before the moment I decided to restart the daemon (that was when I choked on my espresso).
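
    If you want to reproduce that check, something along these lines works (the time window is just an example):

    # Docker daemon logs with extra context, limited to a recent window
    journalctl -xu docker.service --since "yesterday"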

    Either way, searching further I found this answer, which gave me a clue about a potential cgroup v1/v2 support incompatibility between the kernel and Docker. Using this answer I verified that my kernel supports both cgroup v1 and v2 by running mount | grep cgroup; however, it was v2 that was mounted at /sys/fs/cgroup.
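
    For context, this is roughly what the check looked like on my machine; the exact mount options may differ on yours:

    # If the unified cgroup v2 hierarchy is in use, you should see something like:
    mount | grep cgroup
    # cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)

    # Another quick check: this prints "cgroup2fs" on a cgroup v2 system
    stat -fc %T /sys/fs/cgroup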

    The version of Docker I have running is 19.03.15 (docker version), and it turns out that support for cgroup v2 was only added in Docker 20.10.0, according to the release notes (under the "Runtime" section).

    So, I decided to update Docker to the latest version (24.0.0 in my case) following the documentation for Debian (since I'm on Kali). By the way, if you're on Kali too, you should replace $VERSION_CODENAME from the docs with bookworm, since that's what Kali is based on.
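
    For convenience, here is a rough sketch of what those Debian instructions boil down to, with bookworm substituted for $VERSION_CODENAME (double-check the official docs, since the exact steps may change):

    # Add Docker's official GPG key and apt repository
    sudo apt-get update
    sudo apt-get install ca-certificates curl
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

    # Install the current Docker Engine packages
    sudo apt-get update
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin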

    I updated successfully, and the Your kernel does not support ... logs no longer appear. I can verify that cgroup limits actually work by running the following as an example:

    docker run --cpus 1 hello-world
    

    This no longer throws the NanoCPUs can not be set error. However, Docker no longer reads any data from /var/lib/docker: running docker container ls --all, docker images, and docker volume ls gives me only the default containers, images, and volumes of a clean Docker install, despite the fact that /var/lib/docker still contains all my previous data.

    Running docker info I noticed this line:

    Docker Root Dir: /var/snap/docker/common/var-lib-docker
    

    So, without further ado, I opened Docker's unit file at /lib/systemd/system/docker.service and changed the ExecStart option from this:

    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    

    To this:

    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --data-root=/var/lib/docker
    

    In other words, I added --data-root=/var/lib/docker to the command that starts the daemon.
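
    Since the unit file changed, systemd needs to be reloaded and the service restarted for the new ExecStart line to take effect:

    sudo systemctl daemon-reload
    sudo systemctl restart docker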

    Now, running docker container ls --all, docker images, and docker volume ls will output all my containers, images, and volumes!

    Oh, and, I can even start them.

    Without errors.


  2. The contents of the /var/lib/docker directory vary depending on the driver Docker is using for storage.

    By default this will be aufs, but Docker can fall back to overlay, overlay2, btrfs, devicemapper or zfs depending on your kernel support. In most places this will be aufs; Red Hat went with devicemapper.

    You can manually set the storage driver with the -s or --storage-driver= option to the Docker daemon.
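
    As an illustration only (overlay2 is just an example driver name), that looks like this, either on the command line or persistently in the daemon config:

    # One-off, when starting the daemon by hand
    sudo dockerd --storage-driver=overlay2

    # Or persistently, via /etc/docker/daemon.json (restart the daemon afterwards):
    # { "storage-driver": "overlay2" }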

    /var/lib/docker/{driver-name} will contain the driver specific storage for contents of the images.
    /var/lib/docker/graph/<id> now only contains metadata about the image, in the json and layersize files.

    In the case of aufs:

    /var/lib/docker/aufs/diff/<id> has the file contents of the images.
    /var/lib/docker/repositories-aufs is a JSON file containing local image information. This can be viewed with the command docker images.

    In the case of device-mapper:

    /var/lib/docker/devicemapper/devicemapper/data stores the images
    /var/lib/docker/devicemapper/devicemapper/metadata stores the metadata
    Note that these files are thin-provisioned "sparse" files, so they aren't as big as they seem.
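
    You can see the thin provisioning for yourself by comparing the apparent size with the actual disk usage:

    # ls shows the apparent size; du shows the blocks actually allocated on disk
    ls -lh /var/lib/docker/devicemapper/devicemapper/data
    du -h /var/lib/docker/devicemapper/devicemapper/data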
