xen or kvm - provider's point of view?
Which would a provider pick when offering a real virtualization service, Xen or KVM?
My question is based on the questions below:
In terms of signup, you can ignore the issue with CentOS 6;
In terms of management,
In terms of node overhead,
In terms of stability,
In terms of customer satisfaction?
Your inputs and explanations are highly appreciated.
Comments
Why would xen-pv be better than kvm for clients?
Yeah, SVM released it a few days ago in the stable version, was in beta for a month or so too.
iftop
Xen and KVM both use bridged networking; maybe read the man page so you start it with the proper switches.
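For example, since guest traffic flows over the bridge rather than the physical NIC, you'd point iftop at the bridge interface (the name br0 below is an assumption; check your own with `ip link`):

```shell
# List bridge interfaces on this host (commonly br0, virbr0, or xenbr0)
ip -brief link show type bridge

# Watch per-connection bandwidth on the bridge instead of eth0
iftop -i br0
```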
Xen in my opinion, though granted that's biased towards Xen because I have little experience with KVM.
Xen because, even if it is old, it's stable and "just works" most of the time. The node overhead is the RAM you have to set aside for the hypervisor; how much you need depends on the node size.
Customer satisfaction has been good for 2 years now that we have been using it. We still get sales and recommendations for using Xen.
Just my 2 cents.
Can you give an example please?
Both are good and stable imho :P
http://wiki.xen.org/wiki/Tuning dom0_mem
I'm with the "it just works" thing as well. At my other company we run an OpenVZ + Xen combination. Xen on CentOS 6 pretty much "just works" for us, since I kinda know someone who runs purely Xen and shares custom compiled kernels with me. We initially used Xen as a solution for customers who wanted Windows VPS, as well as some customers wanting to run VPN or other custom kernel stuff. I've probably run it from CentOS 5.3 till date, so that's about 3-4 years of Xen.
For OneAsiaHost I took the OpenVZ + KVM route since it pretty much covers most of the ground. Xen still has its place beside OpenVZ and KVM, but when its pricing is so close to KVM's I might as well just offer KVM alone.
xen.. it's a ram eater, but i like xen. just set dom0_mem to 512mb
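For reference, a minimal sketch of pinning dom0 memory at boot on a GRUB2 system (this assumes a Debian/Ubuntu-style /etc/default/grub; the file and variable names differ across distros):

```shell
# /etc/default/grub -- reserve a fixed 512MB for dom0 so memory
# ballooning can neither shrink nor grow it
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M,max:512M"

# Regenerate the bootloader config and reboot for it to take effect:
# update-grub && reboot
```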
i vote for xen
Got a question: let's say on a server with 64gb ram I want to set up 62 VPS. Will the 2gb of ram left over be more than enough for xen?
it should be enough. when i maxed out my node there was only 512mb left
@Jack let's say something like 4x 1tb raid 10, and a xeon (forgot the model), one of those 2.5ghz 8-12 core ones?
Plenty of CPU, but not even close to enough IO; for 64gb of ram you'll want a 12-16 disk array. Sure, it might work well if you get lucky and load the node with inactive users, but how likely is that? Also, 64gb of ram over 2tb of usable disk is 32gb of disk per user if everyone gets 1gb ram, which is not much space at all.
@miTgib
Well, I'm just trying to get an estimate. Node configs are most likely going to be:
E3-1230, 16GB ECC, 4x 1TB RAID 10,
loading that with 25 VMs (which will use around 14GB of RAM), using KVM.
What would be the drawback here?
I usually use 10 IOPS per VM as a safe gauge, 5 if I'm feeling stingy. 4x SATA drives in RAID10 deliver about 200 IOPS (100 per drive, x2 since the other 2 are mirrored).
So 200/10 = 20 VMs, or 200/5 = 40 VMs. If your VMs are less active you can get by with maybe 50, but 4 drives is really cutting it close unless you're talking about 512MB each x 30VMs. That'll be pretty comfortable.
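The rule of thumb above can be sketched as follows (the 100 IOPS per SATA drive and the per-VM allowances are the post's rough estimates, not measured figures):

```python
# Rough IOPS budgeting per the post: in RAID10, half the drives mirror
# the other half, so count half the spindles at ~100 random IOPS each.
def raid10_iops(drives, iops_per_drive=100):
    """Usable random IOPS for a RAID10 array, per the rule of thumb above."""
    return (drives // 2) * iops_per_drive

def max_vms(array_iops, iops_per_vm=10):
    """How many VMs fit at a given per-VM IOPS allowance."""
    return array_iops // iops_per_vm

array = raid10_iops(4)  # 4x SATA in RAID10
print(array, max_vms(array, 10), max_vms(array, 5))  # 200 20 40
```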
What about sas15k? That should give enough breathing space
This is the config of my E3 nodes; for KVM I use the e3-1270 or e3-1230v2, and that will be great for your plans.
Have you priced 15k SAS yet? Please be seated if not... Look at SSD caching over 15k SAS2
So I guess this would be a nice setup for a KVM node if I decide to use the e3-1230v2 and load up around 30 VMs?
I will most likely be using 300GB 15k SAS drives. For small VMs, that would be enough disk space.
Tbh, I have never done SSD caching, so I'm not sure how to set it up.
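As one illustration (not necessarily what @miTgiB runs), bcache on Linux pairs an SSD cache device with a spinning backing device; the device paths below are placeholders for your own array and SSD:

```shell
# Requires bcache-tools and a kernel with bcache support.
# /dev/sdb = HDD array (backing), /dev/sdc = SSD (cache) -- placeholders!
make-bcache -B /dev/sdb     # format the backing device
make-bcache -C /dev/sdc     # format the cache device; prints the cache set UUID

# Attach the cache set to the backing device using that UUID
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach

# Then mkfs and mount /dev/bcache0 as usual
```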
Well, I base my loading of nodes on RAM sold, so on 16gb I sell 14.5gb, on 24gb I sell 22gb, and I've never put 32gb into an E3 node as I see the CPU getting too close to max with 24gb in it.
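Put as arithmetic (the host reserves here are inferred from the figures quoted above; adjust for your own overhead):

```python
# RAM-based node loading per the post: sell the total minus a host reserve.
# Reserves inferred from the quoted figures: 1.5GB on a 16GB node, 2GB on 24GB.
HOST_RESERVE_GB = {16: 1.5, 24: 2.0}

def sellable_ram_gb(total_gb):
    """GB of guest RAM to sell on a node of the given size."""
    return total_gb - HOST_RESERVE_GB[total_gb]

print(sellable_ram_gb(16), sellable_ram_gb(24))  # 14.5 22.0
```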
Makes a lot of sense.
I don't think it would be bad to put 32gb into a node, but knowing me I'd try to sell all of it, and the customer experience would not be what I would want. The caching Linux does naturally with spare RAM surely would be helpful, though.
My first KVM node was an AMD X6 1090T, and when I moved from Rock Hill to Charlotte the difference in datacenter ambient temps was causing it to overheat in Charlotte, so I converted it to an E3. The majority of my KVM nodes are E3s, either E3-1270 or E3-1230v2, with WD RE3/4s, Hitachi UltraStars and Seagate Constellations and 3ware 9650 raid cards. I have 1 older KVM node with dual L5520s and 48gb ram, 8 WD RE4s and a 9650 card; the newest KVM node is dual E5-2620s with 12 Toshiba 1tb SAS2 drives, an LSI 9266-4i raid card with an Intel SAS expander (LSI based), 128gb ram and a pair of Samsung 830s for SSD caching. I have a couple other E3 KVM nodes I used either the Toshiba or Seagate Constellation SAS2 drives in as well. All my nodes use Kingston or SuperTalent ram.
My prices are SG based, so they may be a bit different. Last year I bought nodes with 12x 300GB SAS 15k Seagates. Each costs S$400. Today's enterprise 1TB SATA drive costs S$130. 15k RPM drives do about 150-200 IOPS, 7.2k RPM about 100.
Assuming 4 SAS drives, we're looking at 600GB capacity with 400 IOPS max and a S$1600 hole in the pocket. With 8 SATA drives we're looking at 4TB capacity with 400 IOPS and a S$1040 hole. If you need to squeeze into a 1U node, then SAS is your only choice. But assuming you can fit 8-12 drives, the 2U option, ideally with E5s, looks a lot more solid in the long run especially since people today are talking about capacity. 128GB servers aren't really expensive when you go the E5 route, but then again I'm talking about owned servers and not rented so it may differ for other providers.
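The comparison above works out as follows (the Singapore-dollar prices and per-drive IOPS are the poster's rough figures, not vendor specs):

```python
# SAS vs SATA comparison using the post's figures: in RAID10, half the
# drives are mirrors, so capacity and effective spindles are both halved.
def raid10(drives, size_gb, iops_each, price_each):
    return {
        "capacity_gb": drives // 2 * size_gb,
        "iops":        drives // 2 * iops_each,
        "cost_sgd":    drives * price_each,
    }

sas  = raid10(4, 300, 200, 400)   # 4x 300GB 15k SAS at S$400 each
sata = raid10(8, 1000, 100, 130)  # 8x 1TB 7.2k SATA at S$130 each
print(sas)   # {'capacity_gb': 600, 'iops': 400, 'cost_sgd': 1600}
print(sata)  # {'capacity_gb': 4000, 'iops': 400, 'cost_sgd': 1040}
```

Same IOPS either way, but the SATA build gives nearly seven times the capacity for about two thirds of the price, which is the poster's point.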
Assuming the 300GB 15k SAS Seagate prices have dipped over the year, I can now get the Intel 520 240GB SSD at a similar price point (S$330). If I wanted to go the low storage capacity route, I'd rather do SSD and will probably never need to worry about IOPS. The 8-12 drive route is a no brainer for capacity + IOPS. Add the SSD caching as @miTgiB said to boost the IOPS while keeping capacity and you get a nice balance. You can refer to my other thread where I posted the benchmarks on SSD as well as SSD caching.
Seagate Constellation 7.2k 1tb 2.5" and SuperMicro makes an 8 bay 1U chassis just to add that much needed monkey wrench
I'd put 8x SSDs instead and blow everyone away.
Ehhhhm. How much?
Selective quoting, evil.
@Taz_NinjaHawk I would say that in order to be competitive you should offer both. We are running the entire shop, including internal servers on Xen PV. We have used KVM in the past for our shared hosting, but we found Xen to be easier to manage and it gave us more granular control over CPU usage.
Here would be a short list of pros and cons for both:
Xen PV pros:
Xen PV cons:
KVM pros:
KVM cons:
If I left out anything, please don't be too hard on me.
Customers will pick whatever is easier to use and provides the best performance for them. If you can offer an easy-to-use and fast KVM VPS, then more power to you; that way you can advertise FreeBSD and Windows as well.