Disk Space Clearing on Ubuntu Server: Tips and Tricks

22 Sep, 2015 20:49

Managing disk space and log files while also preserving your audit trail for OSSEC, CloudTrail, etc.

22 Sep, 2015 20:51

Managing Disk Space on Ubuntu 14.04 (and probably other *nixes)

Running out of disk space on your VM and wondering where it's all going? You probably need a logging solution like Loggly, but that isn't much help if you're already in a bind. Guess what? I got myself into a bind, so I did some research for you on how to clean up disk space on a remote server so that you CAN do things like zip a folder for download. Please learn from my pain.

I'd like to thank the people who designed rm for this pain: deleting a file only removes the directory entry, and the space isn't actually freed until every process holding the file open closes it. In their honor I give you a cheesy shortcut first. Most likely your /tmp folder is emptied on reboot, so it may just be easier to move things like "mv blah.log /tmp/" and reboot than to deal with all of this. But I can't let stuff go, so this was my journey.

Let's start with the easy stuff - APT.

sudo apt-get update && \
sudo dpkg --configure -a && \
sudo apt-get upgrade && \
sudo apt-get dist-upgrade && \
sudo apt-get clean && \
sudo apt-get autoclean && \
sudo apt-get autoremove

This thing - ncdu - I can't believe I had not heard of it before. It is slow, but it's the greatest thing since sliced bread for figuring out disk use from the terminal. It just takes a while.

sudo apt-get install ncdu

Go find large files (-type f skips directories; 2>/dev/null hides permission-denied noise)

sudo find / -type f -size +1G 2>/dev/null

If you need the file to stay in place - so you don't have to recreate it and reset the permissions - but do NOT need the contents, you can write (cat) /dev/null into it with something like this:

root@li532-239:/var/log/postgresql# cat /dev/null > postgresql-9.1-main.log
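As a sketch on a throwaway file (the path below is my example, not the postgres log from above):

```shell
# Throwaway demo: truncating a file in place keeps the file and its
# permissions but drops the contents, freeing the space.
touch /tmp/truncate-demo.log
echo "old log data" >> /tmp/truncate-demo.log
cat /dev/null > /tmp/truncate-demo.log   # same idea as: truncate -s 0 file
wc -c < /tmp/truncate-demo.log           # prints 0
rm -f /tmp/truncate-demo.log             # clean up the demo file
```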

If you don't need the file and truly want it destroyed - shred it. To shred a file: -f forces permissions, -u removes (and frees) it after overwriting, -v is verbose.

shred -f -u -v filename
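To see it in action on a throwaway file (the path is my example, not from the post):

```shell
# Demo: shred overwrites the file's blocks, then -u unlinks it.
echo "sensitive" > /tmp/shred-demo.txt
shred -f -u /tmp/shred-demo.txt
test ! -e /tmp/shred-demo.txt && echo "file is gone"
```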

To get the total size of a directory and all children

du -sh directory/                 # one grand total
du -h --max-depth=1 directory/    # per-subdirectory breakdown - this one is cleaner
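For example, on a throwaway tree (the paths are mine, not from the post):

```shell
# Build a tiny directory tree, then measure it both ways.
mkdir -p /tmp/du-demo/sub
head -c 4096 /dev/zero > /tmp/du-demo/sub/blob.bin
du -sh /tmp/du-demo               # one grand total
du -h --max-depth=1 /tmp/du-demo  # one line per immediate child
rm -rf /tmp/du-demo               # clean up
```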

Empty the current user's trash folder (apt-get update was already covered above; the Trash path is relative to your home directory)

sudo rm -rf ~/.local/share/Trash/*

Kill old Linux kernel versions lying around in your system (never remove the running kernel - check uname -r first)

dpkg --get-selections | grep linux-image

# sudo apt-get remove --purge linux-image-X.X.XX-XX-generic
# example call:
sudo apt-get remove --purge linux-image-3.2.0-40-generic-pae
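One hedged refinement: filter the running kernel out of the listing so you never purge the one you are booted on (the grep pipeline is my addition, assuming a Debian/Ubuntu box):

```shell
# List installed kernel image packages, excluding the running kernel.
# Anything left is a candidate for apt-get remove --purge.
dpkg --get-selections | grep linux-image | grep -v "$(uname -r)"
```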

Find users on the system you may not need (/etc/passwd is world-readable, so no sudo required).

cat /etc/passwd

Delete a user and their home folder

sudo deluser --remove-home bob

Clean up orphaned packages, although this doesn't really work that well. I'd run apt-get autoremove first, and review deborphan's list before piping it into a purge.

sudo apt-get install deborphan
deborphan                                        # review this list first
sudo deborphan | xargs sudo apt-get -y remove --purge

Get a clear view of your partition disk use

df -Th

Find all of your trash folders

sudo find / -type d -name '*Trash*' -exec du -sh {} + 2>/dev/null | sort -h

Now get legitimate data that can wait off of there! How? Glad you asked.

We can take some stuff off of the server using scp. Google for better examples. Just NOTE THE TRAILING "/" on directories or you might overwrite a directory with a file, deleting everything in it. Hypothetically.

scp user@example.net:foobar.txt /some/local/directory/
scp user@dev1.example.net:/home/ubuntu/documents/zipped-old-stuff.tar.gz /Users/ubuntu/backups/
scp user@example.net:/home/ubuntu/documents/README.md you@example.net:/Users/ubuntu/backups/
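If you have many small files, it's usually faster to bundle them first so a single scp call moves everything. A minimal sketch (the paths are throwaway examples, not from the post):

```shell
# Bundle a directory into one compressed tarball before transfer.
mkdir -p /tmp/scp-demo/documents
echo "example content" > /tmp/scp-demo/documents/notes.txt
tar -czf /tmp/zipped-old-stuff.tar.gz -C /tmp/scp-demo documents
tar -tzf /tmp/zipped-old-stuff.tar.gz    # lists documents/ and its files
rm -rf /tmp/scp-demo /tmp/zipped-old-stuff.tar.gz
```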

Create an S3 Bucket to offload them onto

Cloud-to-cloud transfers are much faster, of course, if you have another VM you can use. Or offload to an Amazon S3 bucket. First add s3cmd - start with the s3tools key and repo, from http://s3tools.org/repositories

wget -O- -q http://s3tools.org/repo/deb-all/stable/s3tools.key | sudo apt-key add -
# debian stable is here if that is easier. http://s3tools.org/repo/deb-all/stable/
sudo wget -O/etc/apt/sources.list.d/s3tools.list http://s3tools.org/repo/deb-all/stable/s3tools.list
sudo apt-get update && sudo apt-get install s3cmd -y

Locked files? We can fix that.

First, let's admit it's probably easier to just reboot the VM. But sometimes you can't, so here are a few more options. To clean up running processes - ps aux lists them all with their PID

ps aux

or to get the pid of a specific process

pgrep bash

tree view of ps aux

ps axjf

to kill a process you can do one of these. The second one escalates to SIGKILL, which the process cannot catch or ignore

kill PID_of_target_process
kill -KILL PID_of_target_process
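A quick self-contained demo (the sleep process is a stand-in for whatever is holding your file open):

```shell
# Start a throwaway background process, kill it politely, verify it died.
sleep 300 &
pid=$!
kill "$pid"                 # SIGTERM - the process gets a chance to clean up
wait "$pid" 2>/dev/null     # reap it; ignore the terminated exit status
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```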


I read a ton of different man pages and was all over the Ubuntu site. When it comes to recovering lost space, this help file was probably the most helpful, even though I don't have a GUI - just scroll down. Major credit goes to this source:

22 Sep, 2015 20:52

Another option - install wipe and run it with sudo (no need for the chmod 777 hack; root ignores the permissions anyway)

sudo apt-get install wipe
sudo wipe -r directoryname