I have an Ubuntu EC2 instance running nginx and Jenkins. There is no space left to install updates, and every command I try to free up space fails. On top of that, when trying to reach Jenkins I'm getting a 502 Bad Gateway.
When I run sudo apt-get update
I get a long list of errors but the main one that stood out was E: Write error - write (28: No space left on device)
I have no idea why there is no space left or what caused it, but df -h
gives the following output:
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G     0  2.0G   0% /dev
tmpfs           394M  732K  393M   1% /run
/dev/xvda1       15G   15G     0 100% /
tmpfs           2.0G     0  2.0G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop1       56M   56M     0 100% /snap/core18/1988
/dev/loop3       34M   34M     0 100% /snap/amazon-ssm-agent/3552
/dev/loop0      100M  100M     0 100% /snap/core/10958
/dev/loop2       56M   56M     0 100% /snap/core18/1997
/dev/loop4      100M  100M     0 100% /snap/core/10908
/dev/loop5       33M   33M     0 100% /snap/amazon-ssm-agent/2996
tmpfs           394M     0  394M   0% /run/user/1000
I tried to free up space by running sudo apt-get autoremove
and it gave me E: dpkg was interrupted, you must manually run 'sudo dpkg --configure -a' to correct the problem.
I ran sudo dpkg --configure -a
and got dpkg: error: failed to write status database record about 'libexpat1-dev:amd64' to '/var/lib/dpkg/status': No space left on device
Lastly, I ran sudo apt-get clean; sudo apt-get autoclean
and it gave me the following errors:
Reading package lists... Error!
E: Write error - write (28: No space left on device)
E: IO Error saving source cache
E: The package lists or status file could not be parsed or opened.
Any help to free up space and get the server running again will be greatly appreciated.
2 Answers
For a server running Jenkins & Nginx (as opposed to a setup you're only testing), you should manage the disk space more carefully. The following are a few possible ways to fix your issue.
Expand the existing EC2 root EBS volume from 15 GB to a higher value in the AWS EBS console.
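For example, after increasing the volume size in the console (or with aws ec2 modify-volume), you still need to grow the partition and filesystem from inside the instance. This is only a sketch, assuming an ext4 root filesystem on /dev/xvda1 as shown in your df output:
# confirm the volume now shows the new size
lsblk
# grow partition 1 of /dev/xvda to fill the extra space (growpart comes from cloud-guest-utils)
sudo growpart /dev/xvda 1
# grow the ext4 filesystem to fill the resized partition (use xfs_growfs -d / instead if the root filesystem is XFS)
sudo resize2fs /dev/xvda1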
OR
Find out which files are consuming the most disk space and remove them if they are not required. Most probably log files are consuming the disk space. You can execute the following commands to find the locations taking up the most space.
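For example (a sketch only; it assumes the default Jenkins home at /var/lib/jenkins, so adjust the paths to what you actually find):
# biggest top-level directories on the root filesystem
sudo du -xh / --max-depth=1 2>/dev/null | sort -hr | head -20
# logs and Jenkins build data are common culprits
sudo du -sh /var/log/* /var/lib/jenkins/* 2>/dev/null | sort -hr | head -20
# shrink the systemd journal once you know it is large
sudo journalctl --vacuum-size=100M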
OR
In my case, I have an app with nginx, postgresql and gunicorn, all containerized. I solved my issue with the following command, which removes stopped containers, unused networks, and all unused images and build cache:
docker system prune -a
I was able to reclaim about 4.4 GB at the end!