Virtualization setup
Hello LETers,
Until now we've mainly been reselling VPSes and dedicated servers, but the moment to set up our own VPS environment is coming closer and I want to prepare myself.
We will not be catering to the LET community... But I could use some advice. We're not looking to buy a (cheap) dedicated server and virtualize it, as we would like a professional hardware/software setup.
We would prefer to use KVM virtualization and will only need Linux VPSes. On the hardware side it needs to be full SSD, or SSD-cached at the least.
The initial setup we're looking for would be for 50 VPSes with each VPS (on average) having the following resources:
3 cores (vcores)
4 GB RAM
100GB Disk
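Multiplying out the per-VPS averages gives the total resources the initial setup has to cover. A rough calculation, assuming no overselling of any resource:

```python
# Total resources for the initial 50-VPS setup (no overselling assumed).
vps_count = 50
vcores_per_vps = 3
ram_gb_per_vps = 4
disk_gb_per_vps = 100

total_vcores = vps_count * vcores_per_vps        # 150 vcores
total_ram_gb = vps_count * ram_gb_per_vps        # 200 GB RAM
total_disk_tb = vps_count * disk_gb_per_vps / 1000  # 5 TB of disk

print(total_vcores, total_ram_gb, total_disk_tb)  # 150 200 5.0
```

The 5 TB figure is why the pure-SSD suggestions below land on something like 10x 1TB in RAID10 (which yields ~5 TB usable).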
Summarizing:
1. What hardware setup would you suggest?
2. Which virtualization software/panel to use?
Thanks in advance!
Comments
How much load do you anticipate each VM having?
An E3 is out of the question for VMs of this size. Go straight to dual E5s; probably start with 128GB RAM and upgrade if needed.
@quadhost on average a load of 1 with peaks to 2-3
You are also looking at 10x 1TB SSDs in RAID10 if going for pure SSD storage. KVM images need dedicated space and can't be oversold, even if no one actually uses it.
@AlexBarakov thanks for your insight
It can be oversold (thin provisioning, nothing new); if I recall correctly, it's technically possible with Solus as well.
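The overselling in question works because a thin-provisioned image only consumes physical space as the guest actually writes data. A minimal sketch of the same idea at the filesystem level, using a sparse file (qcow2 and thin LVM behave analogously; the filename and sizes here are just illustration):

```python
import os
import tempfile

# Create a "100 GiB" disk image as a sparse file: the apparent size is
# 100 GiB, but the host allocates blocks only when data is written.
path = os.path.join(tempfile.mkdtemp(), "vm-disk.img")
with open(path, "wb") as f:
    f.truncate(100 * 1024**3)  # set apparent size without writing data

st = os.stat(path)
apparent = st.st_size            # what the guest would see: 100 GiB
allocated = st.st_blocks * 512   # what the host actually spends: near zero

print(apparent, allocated)
```

The trade-off the thread touches on: this lets you sell more disk than you have, but if guests fill their images faster than you add capacity, the node runs out of real space.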
SSD caching through CacheCade works well and reduces cost, provided these VMs aren't doing intensive reads/writes. RAID10 with 8x HGST drives, plus RAID10 with 4x 128/256GB SSDs for CacheCade on an LSI 9361/CV controller, and you have a beast of a node.
50 x 4G is 200GB RAM, how much would actually be used on average/peak?
That's not exactly true, though overselling does need closer control.
@kt > 50 x 4G is 200GB RAM, how much would actually be used on average/peak?
I would say 160GB
I guess you'd need to lay out provision/corporate policies to start with.
1/ Don't use Virtualizor; use SolusVM or another panel
2/ Don't use raw/qcow2 files or thin LVM; use standard LVMs
3/ Don't underestimate the I/O required for the number of VMs running
4/ KVM by default will allocate all guest memory, though KSM will save a small amount by merging duplicated pages
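A consequence of point 4: since guest RAM can't meaningfully be overcommitted, RAM is what drives the node count. A quick sketch, where the 64GB-per-node size and the 4GB host reservation are assumptions for illustration, not figures from the thread:

```python
import math

total_guest_ram_gb = 50 * 4   # 200 GB of guest RAM in total
node_ram_gb = 64              # assumed RAM per physical node
host_reserved_gb = 4          # assumed headroom for host OS and caches

usable_per_node = node_ram_gb - host_reserved_gb   # 60 GB usable
nodes_needed = math.ceil(total_guest_ram_gb / usable_per_node)

print(nodes_needed)  # 4 nodes at 64 GB each, if RAM is not overcommitted
```

Bigger nodes (e.g. 128GB or 256GB) shrink that count, which is the trade-off between the dual-E5 and smaller-server suggestions elsewhere in the thread.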
Do you plan to place the first 50 VPS on one physical server or split across multiple smaller servers?
@century1stop could you elaborate on what you mean exactly?
@AshleyUk thanks for your thoughts. That's exactly what I am trying to get the community's advice on:
1. What hardware setup would you suggest?
2. Which virtualization software/panel to use?
I would think the "larger" VPS providers have various smaller servers for the CPU & RAM part of the VPSes, and a big storage server they connect to over a very fast link (not the regular network) for the storage part of the VPSes.
If you're looking at headless nodes with shared storage, then you will either need a larger budget to find this in a rented format, or look to buy the hardware upfront and colo, due to the specific hardware required for the setup.
I would suggest going for smaller, less dense servers: a decent E5 and 64GB RAM with 4x SSDs in RAID10 would give you good performance and also allow you to invest only as you require, instead of a straight-up purchase of a large server able to handle the 50-VPS target you have set.
We have some new packages with 20 cores / 40 threads (2x Intel Xeon E5-2650 v3), which should be good for your setup, in our facility in Romania.
If you're interested in something like that, please let me know.
@Andreix please PM me with what you have in mind. I don't know if Romania is a good location for me.
@AshleyUk please let me know what you would advise/propose
As per my earlier comment, I would suggest SolusVM on a couple of E5s with 64GB RAM and 4x SSDs in RAID10.
Exact specs, core counts, and SSD sizes would come down to the resources you wish to place on each server.
However, if you were looking to sell 4GB RAM servers, then taking 60GB of available RAM per server, you could host 15x 4GB VMs.
You spoke about a CPU load of 1.0, so if you wish to guarantee a CPU thread per VM and allow users to burst for periods up to the 3 threads available, you would need at least 2x E5-2620 v3s.
This would give you 24 threads, allowing 15 dedicated to the VMs and 9 available for VM bursting, along with some CPU for the host OS. If you wanted to dedicate all CPU to the VMs, you would need to look at something like 2x E5-2660 v3s.
Disk-wise, if we also take 100GB per VM, that's 1500GB required, so 4x 1TB SSDs in RAID10 would give you a large enough VG to hold all the VMs, with some room for growth and the host OS filesystem.
Again the above is just an example of hardware resources required.
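The per-node sizing above can be checked in a few lines. The 6-core/12-thread figure for the E5-2620 v3 is from Intel's public spec; the 4GB host-OS RAM reservation is an assumption to arrive at the 60GB usable mentioned earlier:

```python
# Per-node sizing from the example above: 64 GB RAM, 2x E5-2620 v3, 4x 1TB RAID10.
node_ram_gb = 64
host_reserved_gb = 4                                  # assumed host OS headroom
vms_per_node = (node_ram_gb - host_reserved_gb) // 4  # 15x 4GB VMs

# E5-2620 v3: 6 cores / 12 threads per socket, so a dual-socket node has 24 threads.
threads = 2 * 6 * 2
burst_headroom = threads - vms_per_node               # 9 threads for bursting + host

# RAID10 of 4x 1TB yields half the raw capacity usable; 15 VMs x 100GB need 1500GB.
usable_disk_gb = 4 * 1000 // 2
disk_needed_gb = vms_per_node * 100

print(vms_per_node, threads, burst_headroom, usable_disk_gb - disk_needed_gb)
```

So one such node covers 15 of the 50 target VPSes, which is the "invest as you grow" argument: roughly four of these instead of one large dual-E5 box.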
@AshleyUk as I see you're a provider, could you make an offer in the EU?
We don't sell dedicated servers directly, just happy to provide you with some idea of what resources you should look for while looking for a provider.
If you have any further questions feel free to pop me a PM.
Well, as you're obviously not setting these up to sell, use VMware ESX; that makes far more sense and has the greatest flexibility and RAM density while still running your own kernel. With that you could get away with an E3 with 32GB RAM and pure-SSD hardware RAID10.
Or get an E3v5 and go with 64GB DDR4
We can also do similar deployments in London.
@quadhost please pm proposal
We have sent some base pricing over; feel free to let us know your thoughts and we can adjust the system configs accordingly.
Sent you a PM.
Any other proposals?
Dual E5s with SSD RAID10; hope it's within your startup budget.
Corporate policies?
What's your reasoning for picking standard LVM over qcow2 or raw?
That's what I am looking at!
Guarantees no overprovisioning on the disk layer
Better performance, from experience, due to one less layer for the I/O to travel through
Better management and lower overhead than qcow2/raw
All just personal experience and recommendations
That's right; every organization needs to lay out policies on how it should be run, and hosting's no exception.