New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Best practice to add more disk space to a CentOS/WHM installation
Hello.
I have taken over control of a CentOS web server with WHM/cPanel. There are some hundred customers on this server.
The disk is beginning to be full, and I need to add more disk space.
Here is the disk setup:
What would be best practice here? Should I just take a backup of the whole WHM setup and move it to a new server with a bigger disk, or is there an easy way to increase the disk space?
The server is hosted, and there are no problems adding more disk space to it.
Just never done this before.
Comments
Looks like you are using LVM for your /home; you can extend it using this guide: https://www.tecmint.com/extend-and-reduce-lvms-in-linux/
Be careful so you don't lose your data.
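A minimal sketch of the extend procedure from that guide, wrapped in a function so nothing runs by accident. The device name (/dev/sdb) and the VG/LV names (vg0, lv_home) here are placeholders, not your real ones: read yours from `lsblk`, `vgs` and `lvs` first, run as root, and only after a backup.

```shell
# Sketch only: /dev/sdb, vg0 and lv_home are assumed names, not yours.
extend_home_lv() {
    pvcreate /dev/sdb                       # initialise the new disk as an LVM physical volume
    vgextend vg0 /dev/sdb                   # add it to the existing volume group
    lvextend -l +100%FREE /dev/vg0/lv_home  # hand all the new free extents to the logical volume
    resize2fs /dev/mapper/vg0-lv_home       # grow the ext3/ext4 filesystem to match the LV
}
```

Note the last step: growing the LV alone is not enough, the filesystem on it has to be resized too.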
Add a second disk, set as /home2 and create new accounts there?
Is it as simple as that? Where in WHM do I set that all new sites should use /home2?
https://documentation.cpanel.net/display/ALD/Rearrange+an+Account
WHM will pick it up and create new accounts on a different partition using a "load balancing" method
Beat me to it
They're all trolling you. You made the mistake all failed hosts do. Never exceed 10% usage of disk space. You'll need to change your name because when it gets full, it will crash and never start up ever again. You can't slow it or migrate either. It just doesn't work like that. This isn't like in the movie Tron where you wave your hands and a city is created.
But how to solve this then, if what you are saying is true.
@myhken
It was a joke
You can use the WHM rearrange function to move accounts over to another partition, e.g. /home2. I've seen hosts with 100% disk usage, and it didn't crash the server (although it was slow as molasses).
As said, add it to a pool with LVM. Also, why did you block out your mount point? Lawl.
It's just a logical volume: add the second disk, run pvcreate on the new disk, then vgextend volgroupname /dev/new-disk-name.
Then shut down the server, boot into recovery, lvextend, and done.
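For whoever follows along: it helps to check what LVM actually sees before and after each of those steps. These are read-only status commands (run as root); nothing here changes the disks, and /home is an assumption about where the LV is mounted.

```shell
# Read-only LVM sanity checks; safe to run at any point in the procedure.
lvm_status() {
    pvs           # physical volumes: is the new disk listed after pvcreate?
    vgs           # volume groups: did VFree grow after vgextend?
    lvs           # logical volumes: did LSize grow after lvextend?
    df -h /home   # filesystem size as cPanel and applications see it
}
```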
"Where'd my XXXX go??"
OK, my first try failed, so using my test server, what do I do next? When I first tried, I got the space from sdb into /home, but cPanel did not see the space. df -h did not show the space either, but with lsblk I could see the new space.
So here is where I am after running pvcreate and lvextend; as you can see, lsblk does show the correct new size, but df -h does not:
Nobody? I'm sure I'm just missing a simple command or something? I have tried both in rescue mode and without rescue mode, with the same result.
you will probably need to use resize2fs?
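That is almost certainly it: lvextend grows the block device (which is what lsblk reports), but not the filesystem sitting on it (which is what df and cPanel report). For ext3/ext4, resize2fs closes that gap. A sketch, assuming the device-mapper path and the /home mount point from the thread:

```shell
# Assumed names: adjust the mapper path and mount point to your own.
finish_resize() {
    resize2fs /dev/mapper/vg0-lv_home   # grow the filesystem to fill the enlarged LV
    df -h /home                         # should now report the new size
}
```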
That did the trick. Now... the next thing is to do this on the production server... it worked fine on my test server, but the production server has around 200 sites on it...
Of course, I have a backup and a snapshot of the server... still, will it actually be that simple on the production server too?
It will be fine. If you're not confident, you could always just boot it up with GParted instead and have it do the work for you.
On my test server unmounting the home partition worked fine, but what will happen on the production server? Will people just lose access to their sites until it's mounted again? And when it's mounted again, will everything just work?
You would be best scheduling the work as a maintenance window, and stopping certain services beforehand to prevent things writing to /home mid-resize.
If it's a live server, you would be better getting someone to do it for you, or, simply, mount the new space as /home2.
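If you go the /home2 route instead, one way it could look is below, wrapped in a function so nothing runs by accident. The device name /dev/sdc is an assumption (check lsblk for yours), and mkfs destroys anything already on that disk.

```shell
# Sketch only: /dev/sdc is an assumed device name. Run as root.
setup_home2() {
    mkfs.ext4 /dev/sdc                                      # format the new disk (wipes it!)
    mkdir -p /home2                                         # create the mount point
    echo '/dev/sdc /home2 ext4 defaults 0 2' >> /etc/fstab  # persist the mount across reboots
    mount /home2                                            # mount via the new fstab entry
}
```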
Also - you left the hostname in the PuTTY window title bar.
Alexander
You signed your post.
Alexander
Sorry,
WSS
That's better.
Also - you left the hostname in the PuTTY window title bar.
Fixed, and I just did it on the production server. I rebooted the server first, then ran the commands, and it just worked; I had to force a umount, but no issues besides that.
And now we have 50% free space, instead of 7% free space.
The commands were:
pvcreate /dev/sdd
vgextend vg_cpanel1 /dev/sdd
lvextend -L+74G /dev/vg_cpanel1/lv_home
umount -f /dev/mapper/vg_cpanel1-lv_home
e2fsck -f /dev/mapper/vg_cpanel1-lv_home
resize2fs /dev/mapper/vg_cpanel1-lv_home
mount /dev/mapper/vg_cpanel1-lv_home
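For anyone finding this later: the umount/e2fsck detour is usually avoidable. lvextend has a -r (--resizefs) flag that runs the filesystem resize for you, and growing ext4 works while the filesystem is mounted. A hedged one-step equivalent of the commands above, wrapped in a function so nothing runs by accident:

```shell
# Same VG/LV names as in the thread; growing ext4 online needs no umount.
grow_in_one_step() {
    lvextend -r -L +74G /dev/vg_cpanel1/lv_home  # -r calls resize2fs after growing the LV
}
```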