Comments
Sorry I missed your hardware question. Xen supports Itanium and ARM architectures, and older CPUs than KVM does; KVM also constantly comes up short on BSD on a lot of hardware, and the bug list for that is significant.
Anyway, Xen and KVM will co-exist for years, but dismissing Xen because you assume KVM can do all the same things is not wise, IMO.
You are entering the eternal KVM vs Xen debate, which leads to a topic hundreds of pages long. Both are great and super fast. With a good admin you can do whatever you want with both. Personal preference is one thing and reality is another. Both are great, both are proven, both are popular. Maybe KVM is a little better marketed, as Xen seems to be marketed as high end. Why? I don't know, but that is my point of view.
Honestly speaking, my Xen experience is very limited; I'm talking about the things I learned when I was researching it. I'd love to set up a testing bed for all the common virtualization platforms and generate a report if someone helps me out.
Well, aside from the fact that everything else was not a one-liner, your link is from 2010 and based on Xen 3.2.
Look, I was not expecting to have this conversation, nor do I sit and benchmark a server with KVM and then Xen before install, so I don't have any benchmarks to hand. I have done it in the past, though, and will make a point of recording it for the future.
But if we were having this discussion back in 2010, at the same time as your link, I would have responded with: http://www.linux.com/news/enterprise/systems-management/327628-kvm-or-xen-choosing-a-virtualization-platform
linux.com official quote: "KVM isn't entirely on par with Xen"
We have digressed into an HVM vs KVM argument here, though, when the reality is that Xen also offers PV, PVHVM, XCP, and very soon PVM; KVM has no answer for any of those.
You know, perhaps we should actually do this once and for all on a friendly competition basis: we lease a server, give the details to someone independent, and find a VMware expert. The community decides which benchmarks are relevant, and we each have 10 days to configure and benchmark.
Maybe CC would even provide the server.
After the initial set-up, the independent person obviously does the benchmarks.
I'm sure they are doing great introducing new technologies, because now they have a great competitor, like you said. But I honestly cannot see a real-world difference between Xen HVM and KVM. That's why I said there are no practical advantages of Xen HVM over KVM. If it performs significantly better, then let's benchmark them when we both have free time, build reports, and publish them so everyone can see the difference and stop asking the same questions all over again.
I think we must do it on the same server, one after another, to see the real difference, since I know there are performance differences even between otherwise identical builds.
I can provide the servers.
We need something completely independent, and we need to decide what must stay common so as not to give an edge one way or the other that is not specific to the virt type: the filesystem, the BIOS remains unchanged, guest OS versions must be the same, stuff like that.
Could be a fun and really interesting exercise and blog post; maybe @mpkossen would get involved to publish the results and run the benchmarks.
Would be nice to also include VMware and OpenVZ for a fully rounded report.
Also, no control panels installed, as they all add their own crap (VMware being the oddball).
Of course. I'd really love this. I'm sick and tired of the same topics getting discussed over and over. And honestly, I doubt anyone has extensive experience with both systems like you do, which is why everyone is going the KVM route: it's popular.
ESXi has specific requirements that are a pain in the butt to buy for. Every version (major and minor) has a different set of whitebox compatibility lists. And then, on top of that, while the vSphere downloadable client is nice - albeit sluggish at times - they go and change the newest VM format to require the web client. A client that requires vCenter Server which is a paid product. Ugh..
I had enough of it and pretty much burned down my ESXi lab. Technically, I still like VMware the best, but they change it so often on a whim to satisfy upstream changes in the vCenter suite that it's a no-go. It's doable if you're an enterprise that was already paying for support and can keep rolling your deployments every so often; not so for everyone else.
One of my VPSes is with OVH, part of their 2013 VPS Cloud range. I specifically got it because it runs on VMware ESXi, on the basis that it is enterprise-grade virtualisation, and I wanted to see the benefits.
I think the panel and automation are the problem. My friend's project team developed a customized VMware ESXi panel which uses some internal APIs obtained directly from VMware. It has nearly everything the SolusVM customer panel has.
What panel is that if you don't mind sharing?
It's a commercial panel made by his team, so I can't share it, sorry. I'm just saying that the panel and automation are the main problems stopping people from using ESXi and XenServer as a commercial VPS deployment solution.
I am not asking for a download link for the panel, just the name, so that I can search for it and compare its features with SolusVM.
It doesn't have a name yet; it's still in development.
I talked to the VMware sales team today; they told me about another version. You must sign up for the VSPP program for service providers.
XenServer should be nice; it offers all the enterprise features for free, and it can be managed through the cluster API. ESXi cannot really be used for this: its API is read-only, so you have no external access to it.
Oh yes, I would definitely help out with this as much as I can and would definitely make a blog post out of it.
If you guys want a server for this, tag @jbiloh ;-) I doubt he searches for 'Jon' here to see if he's mentioned. Anyway, even if there's no free server for this, I'm sure we can come up with a good deal for one and I'd be willing to contribute some $$ towards this. I think all we need is a server with a CPU that supports hardware virt, some raid and a bunch of RAM?
Yup, anything will do really, as they will both be run on the same physical server, which is the important bit.
I would suggest we set ground rules like: no one gets BIOS access, EXT4, CentOS 6 for the host, no control panel, LVM2-based storage, and the same PE size (32 or 64MB).
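Just to illustrate the ground rules above (same filesystem, host OS, storage type, PE size): the parity check between the two test installs could be as simple as a script like this. The keys and values here are placeholders taken from the proposed rules, not output from a real tool.

```python
# Sketch: verify two host configurations match on the agreed ground rules.
# The keys mirror the proposed rules; the values are illustrative placeholders.

AGREED_KEYS = ["filesystem", "host_os", "control_panel", "storage", "pe_size_mb"]

def config_mismatches(host_a, host_b):
    """Return the ground-rule keys on which the two hosts differ."""
    return [k for k in AGREED_KEYS if host_a.get(k) != host_b.get(k)]

xen_host = {"filesystem": "ext4", "host_os": "CentOS 6",
            "control_panel": None, "storage": "LVM2", "pe_size_mb": 32}
kvm_host = {"filesystem": "ext4", "host_os": "CentOS 6",
            "control_panel": None, "storage": "LVM2", "pe_size_mb": 64}

print(config_mismatches(xen_host, kvm_host))  # prints ['pe_size_mb']
```

In practice the values would be collected on each box (e.g. from `df`, `vgdisplay`, `/etc/redhat-release`) before any benchmarks are accepted.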
Then a series of tests needs to be run, or perhaps just three ServerBear runs with the average taken; then allocate 100% of the resources, set all the VMs off running ServerBear, and ServerBear the host node while they are running.
So the CPU does not matter so much; 4 cores would be nice.
Maybe 8GB of RAM.
HW RAID would be nice but not really needed; if we use mdadm, they all need to use the same stripe size and settings, no turning off NCQ, etc.
Even a single disk would be fine actually.
Initial set-up to be done by the representative of each virt type, and then everything handed over to someone such as yourself, @mpkossen, for validation and to run the benchmarks.
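A minimal sketch of the "run each benchmark a few times and take the average" step proposed above. The scores below are invented numbers standing in for ServerBear-style results, and the platform names are just examples:

```python
# Sketch: average repeated benchmark runs per platform and rank the results.
# All numbers here are made up for illustration, not real measurements.
from statistics import mean, stdev

def summarize(runs):
    """Mean and sample standard deviation of a list of benchmark scores."""
    return mean(runs), stdev(runs)

scores = {
    "Xen PV": [412, 405, 419],  # e.g. three UnixBench-style index runs
    "KVM":    [398, 407, 401],
}

# Rank platforms by average score, highest first.
for platform, runs in sorted(scores.items(),
                             key=lambda kv: mean(kv[1]), reverse=True):
    avg, sd = summarize(runs)
    print(f"{platform}: avg {avg:.1f} (stddev {sd:.1f} over {len(runs)} runs)")
```

Reporting the standard deviation alongside the average would also show whether a difference between platforms is bigger than the run-to-run noise.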
That's what I think. Maybe @qps can hook us up with something that has IPMI/KVMoIP if Jon can't.
We also need an OpenVZ long-termer to take part, really; maybe @Nick_A?
It's not the one @kyaky mentioned, but you can use CloudStack or OpenStack with ESXi via vCenter.
How can I help?
@AnthonySmith wants you (ColoCrossing) to provide the server for testing the performance of different virtualization platforms, if possible.
Hi.
Sure we can do that.
That would be nice of you, settling these virtualization debates on the basis of the same actual benchmarks. Would be interesting to see how KVM does against Xen HVM or VMware.
@jbiloh nice one. Nothing special needed: just a CPU that supports full virt (even a single disk is fine), 8GB RAM. It does need IPMI/KVMoIP that is remote-media capable, though, with the BIOS locked down (don't give us access to that), and it needs to be available for around 30 days.
Got anything that would fit?
If you are including VMware, note that it doesn't support software RAID.
Single disk or hardware RAID.
I think we will probably skip VMware for this round; if there is time left I will configure the VMware node too, though. I have enough experience to give it a fair crack.
One thing to keep in mind when installing VMware: always use the hardware vendor's version if available.
I am dying to do this. PM me if you need any help.