Which directory is taking up all the space?
When I run df -H, it shows this result:
[opc@arm /]$ df -H
Filesystem Size Used Avail Use% Mounted on
devtmpfs 13G 0 13G 0% /dev
tmpfs 13G 0 13G 0% /dev/shm
tmpfs 13G 161M 13G 2% /run
tmpfs 13G 0 13G 0% /sys/fs/cgroup
/dev/mapper/ocivolume-root 39G 19G 20G 49% /
/dev/sda2 1.1G 527M 537M 50% /boot
/dev/sda1 105M 6.2M 99M 6% /boot/efi
/dev/mapper/ocivolume-oled 11G 148M 11G 2% /var/oled
tmpfs 2.5G 0 2.5G 0% /run/user/0
tmpfs 2.5G 0 2.5G 0% /run/user/987
tmpfs 2.5G 0 2.5G 0% /run/user/1000
The ocivolume-root filesystem has only 19GB used out of 39GB, i.e. 20GB is available. I want to find which folder is taking up so much space, so I ran sudo du -hs * | sort -rh | head -5
and this is the result:
[opc@arm /]$ sudo du -hs * | sort -rh | head -5
du: cannot access 'proc/1715139/task/1715139/fd/3': No such file or directory
du: cannot access 'proc/1715139/task/1715139/fdinfo/3': No such file or directory
du: cannot access 'proc/1715139/fd/3': No such file or directory
du: cannot access 'proc/1715139/fdinfo/3': No such file or directory
3.6G usr
2.6G tmp
1.4G var
469M boot
153M run
The total of the above only comes to around 8.2GB. Could the files/directories that du cannot access (the errors above) be taking up the rest of the space?
I rebooted, checked the drive space again, and this is the result:
[opc@arm ~]$ df -H
Filesystem Size Used Avail Use% Mounted on
devtmpfs 13G 0 13G 0% /dev
tmpfs 13G 0 13G 0% /dev/shm
tmpfs 13G 26M 13G 1% /run
tmpfs 13G 0 13G 0% /sys/fs/cgroup
/dev/mapper/ocivolume-root 39G 15G 24G 38% /
/dev/mapper/ocivolume-oled 11G 145M 11G 2% /var/oled
/dev/sda2 1.1G 557M 507M 53% /boot
/dev/sda1 105M 6.2M 99M 6% /boot/efi
tmpfs 2.5G 0 2.5G 0% /run/user/0
tmpfs 2.5G 0 2.5G 0% /run/user/987
tmpfs 2.5G 0 2.5G 0% /run/user/1000
As you can see, used space is now 15GB, down from 19GB, i.e. the reboot freed 4GB. I ran the following command again, but the totals still don't add up to 15GB.
[opc@arm /]$ sudo du -hs * | sort -rh | head -5
du: cannot access 'proc/7166/task/7166/fd/3': No such file or directory
du: cannot access 'proc/7166/task/7166/fdinfo/3': No such file or directory
du: cannot access 'proc/7166/fd/3': No such file or directory
du: cannot access 'proc/7166/fdinfo/3': No such file or directory
3.6G usr
1.4G var
497M boot
27M etc
25M run
By the way, the tmp folder has at least 20MB occupied, but the command above is not showing it.
Comments
head -5 shows only the top 5 entries. Remove it to see more.
sudo apt install ncdu
Yeah, removing head -5 shows all the folders, but they are not much, hardly 30MB each, including tmp. I believe I am in the root folder, which is /.
This did the trick. It is showing an 8GB swapfile!!!
Shall I delete this file? Great utility, by the way.
A swapfile is an alternative to a swap partition; the OS uses it when you run out of RAM. If it's not active, you can disable it and delete it, I guess.
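For reference, a minimal sketch of how that removal might look. The path /swapfile here is an assumption, not taken from the thread; swapon --show prints the actual one (on some cloud images it is a hidden file, which would also explain why the glob missed it):

```shell
# Sketch only: check which swap is active, then deactivate and remove it.
# /swapfile below is an assumed path; replace it with what swapon reports.
swapon --show                            # list active swap devices/files

sudo swapoff /swapfile                   # deactivate (needs enough free RAM)
sudo rm /swapfile                        # delete the file to reclaim space
sudo sed -i '\|/swapfile|d' /etc/fstab   # drop its fstab entry so it stays off after reboot
```

The fstab edit matters: without it, many images recreate or re-enable the swap file on the next boot.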
The glob in du -hs * excludes hidden files; instead do du -hd1 /
Many small VPSs need a relatively big swap because they run out of RAM when they update.
DirectAdmin recommends 2GB RAM and 4GB swap; CustomBuild uses a lot of memory when it builds server components.
So check your software's requirements before you delete or resize the swap.
Sometimes I'll follow the path to where I find more space usage than I expect. So manually, on Linux, the process might look something like this:
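A sketch of that drill-down (the specific directories below are just an illustration): -x keeps du on one filesystem, so /proc, /sys, and other mounts are skipped, which also silences the "cannot access" noise from earlier.

```shell
# Start at the top and repeatedly descend into the largest directory.
# -x stays on one filesystem, -d1 limits output to one level,
# and 2>/dev/null hides permission/transient-file noise.
sudo du -xhd1 / 2>/dev/null | sort -rh | head
sudo du -xhd1 /var 2>/dev/null | sort -rh | head      # suppose /var was largest
sudo du -xhd1 /var/log 2>/dev/null | sort -rh | head  # ...and so on, one level deeper
```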
And if you're using disk swap space regularly, you might want to consider getting more RAM if you can. Using disk swap space is probably killing overall system performance.
This is an Oracle 24GB Arm instance, so can I safely delete the swap?
The only one that can answer that question is you, because you are the only one who knows your software requirements and the server's usage.
The server is empty. Only nginx and mariadb are installed, running one WordPress blog which is not live yet.
Here's a favourite of mine:
FS='/'; clear; date; df -h $FS; echo "Largest Directories:"; du -hcx --max-depth=2 $FS 2>/dev/null | grep '[0-9]G' | sort -grk 1 | head -15; echo "Largest Files:"; nice -n 19 find $FS -mount -type f -print0 2>/dev/null | xargs -0 du -k | sort -rnk 1 | head -n 20 | awk '{printf "%8d MB\t%s\n", ($1/1024), $NF}'
It lists the top 15 largest directories (those of 1GB or more) and the top 20 largest files.
I didn't think there's any benefit to having swap > RAM. If for whatever reason there were, you simply don't have enough RAM for normal use and "you're doing it wrong".