Virtualization Tech Feedback
Hi all!
We've been working on our own cloud-based environment for the past two years or so. It's a very custom implementation sitting on top of OpenStack, and we're very close to completion (currently in bug fixing/QA). I'm not exactly thrilled with the complexity of it all, which seems to stem primarily from OpenStack and all the additional overhead it adds.
I'm considering nixing it, since we may be going down the wrong path, and wanted to get some feedback on virtualization technologies.
I have been comparing KVM, LXD, OpenVZ, and a bunch of others. We would build an extensive automation layer on top of it. I wanted some feedback on your favorite virtualization tech before I commit to any decisions.
Must have features:
- Native support for snapshots/backups
- Native support for live migrations
- Native support for IPv4/IPv6
- Support for ISO mounting
- Support for Linux, BSD, Windows
Nice to have (but not required):
- Firewalling
- Port forwarding (IPv4/IPv6 sharing)
- Less overhead, more performance
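For reference, most of the must-have list maps directly onto existing hypervisor tooling. Here's a rough sketch of what an automation layer over KVM/libvirt could look like, shelling out to virsh; the guest names and paths are made up, and the commands are built as argument lists (dry-run) so they can be inspected rather than executed:

```python
import shlex

def virsh(*args, dry_run=True):
    """Build (and optionally run) a virsh invocation."""
    cmd = ["virsh", *args]
    if dry_run:
        return cmd  # return the argument list instead of executing
    import subprocess
    return subprocess.run(cmd, check=True, capture_output=True, text=True)

# Snapshot a guest (must-have: snapshots/backups)
snap = virsh("snapshot-create-as", "guest1", "--name", "pre-upgrade")

# Live-migrate to another host (must-have: live migrations)
mig = virsh("migrate", "--live", "guest1", "qemu+ssh://node2/system")

# Swap in an ISO (must-have: ISO mounting)
iso = virsh("change-media", "guest1", "sda",
            "--source", "/isos/debian.iso", "--insert")

print(" ".join(shlex.quote(a) for a in mig))
```

The same pattern works for any of the CLI-driven options discussed below; the point is that the automation layer is mostly command plumbing, not hypervisor internals.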
Comments
KVM all the way. It's the standard now for good reason. A good VZ cloud like Wable used to be though, that would be interesting to see again.
Any reason why you wouldn't choose LXD over KVM? There seems to be major performance benefits with a very similar feature set.
Fucking hell, Wable, there’s a blast from the past.
Well... ISO mounting (and open source/free) = KVM, pretty much. Especially since you already chose OpenStack.
If you went Canonical's LXD way, you could have both worlds (QEMU VMs and LXC containers) with an API exposed for everything.
Edit:
Please note LXD = "Hypervisor" ; LXC = Containers
Admittedly, I'm only familiar with the latter.
This list only means a lot of work has yet to be started.
I don't think you fully read my post.
Well, in my opinion ISO mounting is a must, especially to be able to encrypt partitions. KVM would be my choice.
I haven't used LXD; there might be a limitation with regard to ISO mounting.
But not everyone encrypts their VMs. In my opinion it is a must, as good security practice. Generally speaking, if your disk is encrypted, you don't have to worry about the hypervisor making a mistake (a bug) and assigning your previous disk to a new user.
My thought is KVM. The benefits of LXC that you're talking about only matter if you're willing to limit your customers to Linux-based OSes (and all the fun things that come with container-based paravirtualization).
KVM has pretty much become the industry standard and, in my opinion, mostly for its feature set and how "complete" it is. Full virtualization lets your clients be agnostic about the guest operating system they want to run. Since those other operating systems are part of your critical requirements, I'd say the only real options for you are KVM and Xen HVM (as well as VirtualBox or VMware). Most people write off VirtualBox because... well... it's useful as a tool but not as a production-ready environment. VMware is not open source (and requires high licensing fees). Xen HVM is an option; while I can't comment on the actual Xen technology, I will say the market seems to be moving away from Xen and settling snugly into KVM.
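To illustrate the guest-OS-agnostic point: a fully virtualized KVM guest is defined the same way whichever OS is on the install ISO. A sketch using virt-install (the guest name and paths are placeholders), again built as an argument list so it can be inspected without a hypervisor present:

```python
def define_guest(name, iso_path, memory_mb=2048, vcpus=2, disk_gb=20):
    """Build a virt-install command for a fully virtualized KVM guest.
    Switching the guest OS (Linux, BSD, Windows) only changes the ISO,
    not the shape of the call -- that's the full-virtualization benefit."""
    return [
        "virt-install",
        "--name", name,
        "--memory", str(memory_mb),
        "--vcpus", str(vcpus),
        "--cdrom", iso_path,
        "--disk", f"size={disk_gb}",
        "--graphics", "vnc",
    ]

cmd = define_guest("win-guest", "/isos/windows-server.iso")
```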
I will say you probably do have some overhead from OpenStack, though I can't really say I'm knowledgeable about the difficulties or benefits it brings. However, I know there are also some solutions built around the Proxmox API (with their own benefits and drawbacks, especially in relation to scaling).
I think what's more important here is not to nix your OpenStack system completely. KVM and OpenStack are simply tools to an end. I'd suggest using the system you've already invested in significantly, trialing it, and gaining customers. See what your customer feedback is, then use those data points to inform your next decision. If the feedback is that it's too clunky and bloated, you have your answer and should look into an alternative solution. However, if your customers are more focused on which locations your panel/solution supports, then you know your solution is good enough and diversity of destinations is the real question to answer.
My 2 cents. cheers mate.
Thanks for the feedback, I mentioned LXD, not LXC.
I agree 100%, ISO mounting is a must have feature.
If Windows is a must-have, then KVM is the only one of these with support for it.
I believe LXD supports Windows.
Oh, I thought you meant LXC; I've never heard of LXD.
I think KVM is probably the most desired by customers these days, but any other “full” virtualization will also suffice.
Again.. LXD = Hypervisor ; LXC = Container tech.
LXD supports QEMU/KVM and LXC.
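For those unfamiliar: the same lxc client drives both instance types, and a single flag selects a QEMU/KVM virtual machine instead of a container. A rough sketch (the image alias and instance names are examples), built as argument lists for inspection:

```python
def lxd_launch(image, name, vm=False):
    """Build an `lxc launch` invocation; the --vm flag asks LXD for a
    QEMU/KVM virtual machine rather than an LXC container."""
    cmd = ["lxc", "launch", image, name]
    if vm:
        cmd.append("--vm")
    return cmd

container = lxd_launch("ubuntu:22.04", "c1")            # LXC container
vm        = lxd_launch("ubuntu:22.04", "vm1", vm=True)  # KVM virtual machine
```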
Oh shoot apologies for misreading it. I'm not familiar with LXD and I'd be happy to read more into it and give further assessment based on what I find.
However, my first gut-reaction (and others can feel free to correct me) is that it's new. It hasn't been tested yet. You can try using it in a production capacity, but we don't know how stable it'll be and where it'll go. I'm optimistic that LXD will be a major player in the future but for now, I'd rather suggest moving with KVM and keeping an eye on LXD (with an option to support LXD in the future).
LXD is pretty mature, LTS is already 4 major revisions in.
Fair enough, but how much of that is in actual production server use? From a short read of their technology overview, it seems like it was built for the user experience and for compatibility with LXC. It seems like you can accomplish the same thing with KVM and LXD, but KVM has more server-focused tools and use cases in mind.
However, you definitely have a much better understanding of the technology than me. So I won't talk much about the actual technology, but instead from a marketing perspective.
I think on the surface, it might be interesting to see another new technology as an option for clients to choose and use. Simply speaking, you're offering the market another technology option to choose from and building your own uniqueness. I mean hell I'd probably try using an LXD server if given the chance.
Could work.
If you're looking at LXD, you'd be remiss not to consider Proxmox. It meets all of your requirements, has a mature API, LXC & KVM support, clustering and live migration that are super painless to set up, etc.
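For context on the API point: Proxmox VE exposes a REST API where most operations are a single authenticated HTTP call under the /api2/json scheme. A sketch of the URLs an external panel might build (host, node, and VM ID are placeholders; authentication headers are omitted):

```python
from urllib.parse import urlencode

def pve_url(host, path, params=None):
    """Build the URL for a Proxmox VE API call (auth omitted)."""
    url = f"https://{host}:8006/api2/json{path}"
    if params:
        url += "?" + urlencode(params)
    return url

# Start guest 100 (issued as a POST in a real client)
start = pve_url("pve1.example.com", "/nodes/pve1/qemu/100/status/start")

# Live-migrate guest 100 to another node
migrate = pve_url("pve1.example.com", "/nodes/pve1/qemu/100/migrate",
                  {"target": "pve2", "online": 1})
```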
Does it have kernel limitations like OpenVZ?
I'm not trying to defend one over the other, but people are making statements without much experience with anything outside of KVM.
I agree that Proxmox is easy to set up and has an API. However, if the project ever takes a turn down a different road and they want to start charging for something, then people will move.
My guess is that @MrRadic wants to fully build something from scratch for the most part, in which case Proxmox would not be ideal.
The only part that kinda pokes that "hmm, I'm not sure" spot is that LXD is written by Canonical.
Personally, I would say go KVM on top of Debian.
The project has been around for many years, is open source, and is just Debian/QEMU under the hood anyway, so I don't think this poses that much of a risk.
Clear, concise explanation and suggestion on a difficult and complex deployment pipeline.
That's a wrapper around OpenStack.
KVM plus a custom lightweight kernel with virtio drivers in the guest.
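Concretely, that setup boots QEMU with the guest kernel passed in directly and paravirtual virtio devices for disk and network. A sketch of the invocation (the kernel image and disk paths are placeholders), built as an argument list for inspection:

```python
def qemu_cmd(kernel, disk, mem_mb=1024):
    """Build a QEMU/KVM command line that boots a custom kernel
    directly and uses virtio disk and network devices."""
    return [
        "qemu-system-x86_64",
        "-enable-kvm",
        "-m", str(mem_mb),
        "-kernel", kernel,                        # custom lightweight guest kernel
        "-append", "console=ttyS0 root=/dev/vda", # virtio disks appear as /dev/vdX
        "-drive", f"file={disk},if=virtio",       # virtio block device
        "-netdev", "user,id=n0",
        "-device", "virtio-net-pci,netdev=n0",    # virtio network device
        "-nographic",
    ]

cmd = qemu_cmd("/boot/guest-bzImage", "/vms/guest1.img")
```

Skipping the emulated BIOS/bootloader path and using virtio throughout is where most of the "less overhead, more performance" comes from.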
Yeah, KVM is pretty much what's selling here, so I assume it's the most popular. At least for me, I ignore OpenVZ now even though I loved it previously, so I feel a bit guilty, since I like to keep my roots as best I can.
It's good to be passionate about building a custom solution on top of open-source virtualization tech, but everything on your list is already there in Proxmox, along with an extensive API (already mentioned by @HalfEatenPie). It's tried and tested, so why reinvent the wheel?
And of course, KVM is the way to go currently.
Definitely KVM, as per my experience. You can also build a wrapper around Proxmox using the API; it's very complete.
Anyone experienced with LXD willing to chime in and give a technical explanation or a comparison to KVM?
I think that would make for a more helpful and interesting discussion.