
I have two volumes attached to my EC2 instance: /dev/sda1, the 8 GB root volume, and /dev/sdb, a second volume of 500 GB. I can see both volumes when I run sudo fdisk -l. A Django server is running in a Docker container on this EC2 instance, and when I upload some data to the server, Docker fails with “I/O error, no space left on device”. How can I fix this problem?

EDIT

Here is my docker-compose.yml:

# Copyright (C) 2018-2020 Intel Corporation
#
# SPDX-License-Identifier: MIT
#
version: "2.3"

services:
  cvat_db:
    container_name: cvat_db
    image: postgres:10-alpine
    networks:
      default:
        aliases:
          - db
    restart: always
    environment:
      POSTGRES_USER: root
      POSTGRES_DB: cvat
      POSTGRES_HOST_AUTH_METHOD: trust
    volumes:
      - cvat_db:/var/lib/postgresql/data

  cvat_redis:
    container_name: cvat_redis
    image: redis:4.0-alpine
    networks:
      default:
        aliases:
          - redis
    restart: always

  cvat:
    container_name: cvat
    image: cvat
    restart: always
    depends_on:
      - cvat_redis
      - cvat_db
    build:
      context: .
      args:
        http_proxy:
        https_proxy:
        no_proxy:
        socks_proxy:
        TF_ANNOTATION: "no"
        AUTO_SEGMENTATION: "no"
        USER: "django"
        DJANGO_CONFIGURATION: "production"
        TZ: "Etc/UTC"
        OPENVINO_TOOLKIT: "no"
    environment:
      DJANGO_MODWSGI_EXTRA_ARGS: ""
      ALLOWED_HOSTS: '*'
    volumes:
      - cvat_data:/home/django/data
      - cvat_keys:/home/django/keys
      - cvat_logs:/home/django/logs
      - cvat_models:/home/django/models

  cvat_ui:
    container_name: cvat_ui
    restart: always
    build:
      context: .
      args:
        http_proxy:
        https_proxy:
        no_proxy:
        socks_proxy:
      dockerfile: Dockerfile.ui

    networks:
      default:
        aliases:
          - ui
    depends_on:
      - cvat

  cvat_proxy:
    container_name: cvat_proxy
    image: nginx:stable-alpine
    restart: always
    depends_on:
      - cvat
      - cvat_ui
    environment:
      CVAT_HOST: ""
      ALLOWED_HOSTS: "*"
    ports:
      - "8080:80"
    volumes:
      - ./cvat_proxy/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./cvat_proxy/conf.d/cvat.conf.template:/etc/nginx/conf.d/cvat.conf.template:ro
    command: /bin/sh -c "envsubst '$$CVAT_HOST' < /etc/nginx/conf.d/cvat.conf.template > /etc/nginx/conf.d/default.conf && nginx -g 'daemon off;'"

volumes:
  cvat_db:
  cvat_data:
  cvat_keys:
  cvat_logs:
  cvat_models:

2 Answers


  1. There are several possibilities. Check out this article and see if it helps: https://www.maketecheasier.com/fix-linux-no-space-left-on-device-error/

    It says things like

    Deleted File Reserved by Process

    Occasionally, a file will be deleted, but a process is still using it.
    Linux won’t release the storage associated with the file while the
    process is still running. You just need to find the process and
    restart it.
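
    A quick way to check for this is lsof (a sketch, assuming lsof is
    installed; +L1 lists open files whose link count is below one, i.e.
    deleted files still held open by a process):

    sudo lsof +L1
    # then restart or stop the process shown holding the deleted file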

    Not Enough Inodes

    There is a set of metadata on filesystems called “inodes.” Inodes
    track information about files. A lot of filesystems have a fixed
    amount of inodes, so it’s very possible to fill the max allocation of
    inodes without filling the filesystem itself. You can use df to check.

    sudo df -i /

    Compare the inodes used with the total inodes. If there’s no more
    available, unfortunately, you can’t get more. Delete some useless or
    out-of-date files to clear up inodes.
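
    If you do run out of inodes, GNU du can show which directories hold
    the most files (a sketch; the --inodes option needs coreutils 8.22
    or newer):

    sudo du --inodes -x -d 1 / | sort -n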

    Bad Blocks

    The last common problem is bad filesystem blocks. Filesystems get
    corrupt and hard drives die. Your operating system will most likely
    see those blocks as usable unless they’re otherwise marked. The best
    way to find and mark those blocks is by using fsck with the -cc flag.
    Remember that you can’t use fsck from the same filesystem that you’re
    testing.
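
    On an ext4 filesystem that can be unmounted, that would look roughly
    like this (a sketch; /dev/sdb1 is a placeholder for the actual
    partition, and the filesystem must stay unmounted while fsck runs):

    sudo umount /dev/sdb1
    sudo fsck -cc /dev/sdb1   # non-destructive read-write badblocks scan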

    You might also want to post the question on a Linux-focused site like https://unix.stackexchange.com/ or https://serverfault.com/

  2. I think it’s not related to your machine; it’s related to your container’s volume size limit, as you can limit those. Can you inspect your container using docker container inspect <container-Id> and check the volumes associated with it? You can also inspect each volume by its ID using the docker volume inspect <volume-Id> command.
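
    For example (a sketch; the container name cvat comes from the compose
    file above, and since compose normally prefixes volume names with the
    project name, cvat_cvat_data is a guess):

    docker container inspect cvat \
      --format '{{ range .Mounts }}{{ .Name }} -> {{ .Destination }}{{ "\n" }}{{ end }}'
    docker volume inspect cvat_cvat_data
    docker system df -v   # per-container and per-volume disk usage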

    After some research I’m more confident that this might be the issue. Check this thread, it’s somewhat related: https://github.com/moby/moby/issues/5151
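
    If it turns out that Docker’s data root (/var/lib/docker by default)
    lives on the 8 GB root volume, a common remedy is to mount the 500 GB
    volume and move the data root onto it. A sketch, assuming the disk is
    empty and really appears as /dev/sdb (on Nitro-based instances it may
    show up as /dev/nvme1n1 instead):

    sudo mkfs -t ext4 /dev/sdb            # WARNING: erases the disk
    sudo mkdir -p /data
    sudo mount /dev/sdb /data             # add an /etc/fstab entry to persist
    sudo systemctl stop docker
    sudo rsync -a /var/lib/docker/ /data/docker/
    # point Docker at the new location in /etc/docker/daemon.json:
    #   { "data-root": "/data/docker" }
    sudo systemctl start docker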
