Citrix Xen and Proxmox familiarity?

pubcrawlerpubcrawler Banned
edited December 2012 in General

Have an old Citrix Xen install we use for "internal" machines (development, MySQL, a few appliance apps). It was my first toe dip into virtualization.

Haven't been a fan of Citrix Xen: the GUI controller they put out is Windows-only at last check, and the licensing on the free version lacks a bunch of features. Never really resonated with how their solution works, but it does work fairly well.

Interested in an alternative: Proxmox. It seems Debian-friendly and OpenVZ-compatible (which isn't important now, but is something I'd like to learn more about in the future).

Can anyone with real-world Proxmox experience tell me how simple the install and basic administration are? Am I correct that Proxmox ships with a web-based controller? Not interested in Windows-only controllers or tackling new command-line complexity at first.

Comments

  • AnthonySmithAnthonySmith Member, Patron Provider

    Yes, it ships with a very nice, detailed, well-planned web-based GUI, and it supports KVM and OpenVZ out of the box.

    It is not just a click-click-done system -- you do need to have some idea what you are doing -- but if you have used any other panel and know basic networking, you will be fine.

  • It's not "too" bad.

    Yeah, folks will scream "where's the SolusVM access?", but if you have some managed clients (i.e., clients you would not want having SolusVM access to mess things up), it works fine.

  • I have worked on it; it's very nice :)
    Also, with the new versions, user management and an API are there too :)
    The only problem I've had is that I have seen my Windows VMs hang on it -- dunno why. It supports both KVM and OpenVZ.

  • geekalotgeekalot Member
    edited December 2012

    @pubcrawler your timing is funny. I decided to bite the bullet and migrate a bunch of personal VMs from VMware Server v1 to Proxmox 2.2-31 over the last few days. It has been very interesting, to say the least:
    1) Yes, non-OS-specific web-based administration out of the box (the console needs a Java runtime)
    2) Supports OpenVZ & KVM
    3) Nice built-in conversion path from vmdk to qcow2
    4) Took me a bit to figure out the bridging (compared to the VMware HostOnly, NAT, & bridged model)
    5) All Linux VMs migrated relatively seamlessly
    6) An old Win2k VM gave me a hell of a time; got it to run, but one application on it has not been stable -- not sure why, but all indicators point to some sort of virtual disk corruption
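
    The vmdk-to-qcow2 conversion mentioned in (3) can also be done by hand with qemu-img on the Proxmox host. A minimal sketch -- the filenames and VM ID (101) are hypothetical:

    ```shell
    # Convert a VMware disk image to qcow2 (paths are examples)
    qemu-img convert -f vmdk -O qcow2 /root/old-vm/disk0.vmdk \
        /var/lib/vz/images/101/vm-101-disk-1.qcow2

    # Sanity-check the converted image
    qemu-img info /var/lib/vz/images/101/vm-101-disk-1.qcow2
    ```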

    BTW, I set it up the "stubborn" way -- installed it as a package on top of a clean, running Debian 6 64-bit install rather than doing their "bare-metal" install. That way I set up software RAID, LVM, and the other bare-minimum packages I find necessary on the host.
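
    For anyone curious about that "on top of Debian" route, the rough shape for the Squeeze-era Proxmox 2.x this thread is about looked like the following; the repository line and exact package names are from memory and should be verified against the Proxmox wiki before use:

    ```shell
    # Add the Proxmox VE repository and signing key (Squeeze-era)
    echo "deb http://download.proxmox.com/debian squeeze pve" \
        > /etc/apt/sources.list.d/proxmox.list
    wget -O- http://download.proxmox.com/debian/key.asc | apt-key add -
    apt-get update && apt-get dist-upgrade

    # Install the Proxmox kernel first, reboot into it, then the VE stack
    apt-get install pve-kernel-2.6.32-17-pve
    apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon vzprocps
    ```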

    So far I like it.

    I still think VMware (even the free Server version) is more sophisticated in some ways (e.g., sound, virtualizing COM ports, etc.), but the promise of the backup capability and the "live migration" in Proxmox was too interesting to pass up.

    One free tip I'll give you (the rest will cost you :) ): if migrating a windoze VM to Proxmox, the VMware VGA driver is awesome -- much better performance than the other display drivers I tried.

  • :) good info geekalot. Thanks.

    I don't do Windows. :) It has its place, but not in my small operation.

  • I don't (usually) do windoze either ... but I had to run something that was only available there. I finally found a way to do it using Linux, but it involves cobbling together a bunch of open-source stuff ... and a lot of time/testing; and time = $.

  • That little Windows something, let me guess @geekalot, didn't run in Wine either?

    Lots of headaches with the winDozers lately. Glad I don't need that OS for any reason -- well, other than the Citrix admin tool I want rid of. The open-source version someone created is very alpha-quality and quite broken.

  • MaouniqueMaounique Host Rep, Veteran

    I migrated from VMware to XCP (the free XenServer) long ago at the job I used to have; we used VMware Server before it was discontinued.
    XCP did a great job and is still in production. Great networking: we had a very weird setup across machines with a lot of bridging and internal routing, and it was flawless.
    KVM is good too; however, the bridging is not as versatile, and if you have multiple servers talking to each other on different bridged LANs it can quickly become a nightmare.
    Don't get me wrong, I like Proxmox a lot, but I like Xen more than OVZ, and with no PV choice on Proxmox, OVZ doesn't really cut it.
    I also feel that XCP is leaner and runs "rounder" than Proxmox.
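
    For reference, bridging on a Proxmox host is just a standard Linux bridge defined in /etc/network/interfaces; a minimal sketch, with placeholder addresses:

    ```
    # /etc/network/interfaces on the Proxmox host (example addresses)
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.2
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
    ```

    Guests then attach their virtual NICs to vmbr0; there is no built-in NAT/HostOnly preset like VMware's, which is most of the learning curve.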

  • Yup, no dice with Wine for what I had to run. It wasn't all that bad running windoze in a VM (the only way I will run it). Moot point now, since it didn't come over to Proxmox all that well (plus it needs COM port access).

    @pubcrawler said:

    That little Windows something, let me guess @geekalot, didn't run in Wine either?

    Lots of headaches with the winDozers lately. Glad I don't need that OS for any reason, well other than the Citrix admin tool I want rid of. The open source version someone created is very alpha quality and quite broken.

  • Yeah, IMHO the networking setup for Proxmox was a bit of a pain for exactly this reason; VMware beats it for ease of setup there. I was quite content running the "old" VMware because it was rock-solid stable (and I always heard that v1 was better than v2).

    @Maounique said:

    KVM is good too; however, the bridging is not as versatile, and if you have multiple servers talking to each other on different bridged LANs it can quickly become a nightmare.

    Don't get me wrong, I like Proxmox a lot, but I like Xen more than OVZ, and with no PV choice on Proxmox, OVZ doesn't really cut it.

  • MaouniqueMaounique Host Rep, Veteran

    @geekalot said: I was quite content running the "old" VMware because it was rock-solid stable (and I always heard that v1 was better than v2).

    Well, v2 wasn't bad either -- I really liked it. Too bad they decided to kill it, but it's no wonder: it was such a good free product that it was good enough for most small companies.
    Now they've lost that fan base to XCP, Proxmox, and similar products.
    Running such an old product anywhere other than behind a firewall for internal purposes is suicide without security updates.

  • geekalotgeekalot Member
    edited December 2012

    This -->
    @Maounique said:

    behind a firewall for internal purposes

    The host was never, ever exposed to the "outside world."

    BTW, I still think VMware was the best (that I tried, anyway) for running "desktop" distributions -- on a laptop or netbook, for example.

    But I am loving Proxmox for "server" stuff (except the COM port support).

    This move by VMware, i.e., "killing" a great product (from a technical perspective), is so reminiscent of some companies of yore -- IBM comes to mind.

  • Thanks everyone.

    Finally have Proxmox up and running remotely on a development server. Doing Debian installs in containers now.

    It works well and easily out of the box. Now I have to get up to speed on making my own custom images for quick deployment (if that's possible). ISO installs take too long and are unnecessary for what I am working on.
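
    For ISO-free deployment, the OpenVZ side works from prebuilt OS templates dropped into the host's template cache. A sketch of the flow -- the template filename and CTID are examples; current template names are listed in the Proxmox GUI and download area:

    ```shell
    # Put a prebuilt OS template where Proxmox looks for them
    cd /var/lib/vz/template/cache
    wget http://download.proxmox.com/appliances/system/debian-6.0-standard_6.0-4_i386.tar.gz

    # Create, configure, and start a container from it (CTID 101 is arbitrary)
    vzctl create 101 --ostemplate debian-6.0-standard_6.0-4_i386
    vzctl set 101 --ipadd 10.0.0.101 --hostname test01 --save
    vzctl start 101
    ```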

  • Anyone seen network throughput issues within Proxmox containers?

    I am able to get throughput of 39.2 MB/s in a shell on the controller IP to a remote server.

    Within a Debian instance inside Proxmox, I am seeing considerably less: < 10 MB/s.
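
    To narrow down where the slowdown is, it may help to take disk and encryption out of the picture and measure raw TCP throughput with iperf (assuming it is installed on both ends; the address is a placeholder):

    ```shell
    # On the Proxmox host (or the remote server):
    iperf -s

    # Inside the container, pointing at the host's IP:
    iperf -c 10.0.0.1 -t 30
    ```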

  • geekalotgeekalot Member
    edited December 2012

    Which network model did you use for the VM (Intel E1000, Realtek, VirtIO)?

    I believe (according to this) that VirtIO should give the best performance. I am using the Intel E1000, though, for consistency with some other requirements.

    FWIW, in a quick test using the E1000 on server-class hardware, I only got 18 MB/s copying from a KVM guest to the host using scp (via a virtual IP -- not bridged -- on the host's NIC; factor in scp's encryption overhead).

    EDIT: the guest is running Debian Squeeze

  • @geekalot,

    Bridged mode: vmbr0
    Model: VirtIO (although it was set to Realtek originally ... perhaps I need to restart the container)

    Doing a new container install now to see if anything is messed up with the original container.

    It's a big, noticeable slowdown for the network inside the container. If it were 10% or something it would be alright, but I've seen 32 MB/s from the controller and about 1 MB/s from inside the container...

  • geekalotgeekalot Member
    edited December 2012

    Yeah, it has some issues recognizing hardware changes within the VM.

    I believe you have to STOP the container for any changes to take effect -- not just reboot it. The only hardware change that can take effect immediately is the ISO image.

    IMHO the documentation isn't exactly great or detailed. And yes, I am trawling their wiki and everything else I can get my hands on.

    And yes, the performance you indicate is not great.
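
    From the host shell, the full stop/start cycle (as opposed to a reboot from inside the guest) looks roughly like this; 101 is a placeholder ID, and the tool depends on the guest type:

    ```shell
    # KVM guest: power off completely so hardware changes (e.g. a new
    # NIC model) are picked up, then start again
    qm stop 101
    qm start 101

    # OpenVZ container equivalent
    vzctl stop 101
    vzctl start 101
    ```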

  • geekalotgeekalot Member
    edited December 2012

    FYI, after some brief testing, VirtIO definitely makes a difference in disk speed!
    SATA: 3340 iops, 13.0 MB/s
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync -> 40.4 MB/s

    VirtIO: 4261 iops, 16.6 MB/s
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync -> 71.4 MB/s
    (keep in mind the host is running "blue"/"eco" type drives to reduce heat, as this is not outward-facing stuff; software RAID 1)

    VirtIO did not make much of a difference for the network driver:
    just got 19 MB/s scp-ing from the Debian container up to the host on a virtual IP ... no change from E1000

  • Appears we are hitting some limit with CPU units and/or disk-related units (if such a thing exists).

    Odd, since top/htop doesn't show anything significant for CPU load.

    It's an empty box, so I expected no resource contention issues. Odd that virtualization software would limit resources at this point. Doh!

    Trying to find what to tweak. Boosted the CPU and core count to 50% of the server. Upped the CPU units to 450,000 (the max is 500,000, supposedly).

    Still trudging along, so I'm missing something :)
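
    For an OpenVZ container, the limits in play can be inspected and adjusted from the host; a sketch, with CTID 101 and the values as examples:

    ```shell
    # Look for failcnt > 0 -- a hit resource limit shows up here
    vzctl exec 101 cat /proc/user_beancounters

    # Raise CPU weight and limits; --save persists them to the CT config
    vzctl set 101 --cpuunits 450000 --cpulimit 200 --cpus 4 --save
    ```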

  • Still lagging...

    I actually mirrored our main static file pile over to the Proxmox controller instance :)

    Now I'm trying to copy that data from the main instance into the container. Is there some way to do this by popping the container open as the admin and copying the files directly? Doing it the network way via another rsync job is horridly slow. Unsure what else is locked down / limiting throughput by default.
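
    If the target is an OpenVZ container, its filesystem lives directly on the host, so the network can be bypassed entirely (CTID and paths are examples; this does not apply to a KVM guest, whose disk is an opaque image file):

    ```shell
    # A container's on-disk root filesystem is visible from the host
    # (for a running CT, /var/lib/vz/root/101 is the live mounted view)
    ls /var/lib/vz/private/101/

    # Copy the file pile straight in -- no network, no ssh overhead
    rsync -a /srv/filepile/ /var/lib/vz/private/101/srv/filepile/
    ```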
