Proxmox for high availability

danitfk Member

Hello, fellows,

I want to start a new generation of services for my business: high-availability VMs. (It's not exactly new, but for me and my clients it is.)

I have decided to buy new dedicated servers from OVH, so I will have a 2x1 Gbps internal network with the vRack option there.

Sadly, Virtualizor has no HA option, so I am looking at alternatives such as 1) Proxmox, 2) OpenStack, etc.

Is Proxmox's HA feature free? If not, how much does it cost?
OpenStack or Proxmox?

Which one is more suitable for small-business scale?
At most 10 servers and 100 VMs.
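
A cluster of that size is typically formed with Proxmox's pvecm tool; a rough sketch (the cluster name and IP below are placeholders, and the commands must be run on the actual nodes):

```shell
# On the first node: create a new cluster (name is a placeholder)
pvecm create democluster

# On each remaining node: join the cluster via the first node's IP
pvecm add 10.0.0.1

# Verify membership and quorum from any node
pvecm status
```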


  • How do you plan to handle the shared storage and IP failover? (Although I guess with vRack that may not be a problem.)

  • danitfk said: So I have 2x1Gbps Internal Network with vRack option in there.

    There is no guaranteed performance over their internal network; it's just a VLAN across a shared network. Those internal links can get congested, and if you are running heavy volumes of storage traffic (iSCSI/NFS) you will find your VMs crashing or dropping to read-only mode.

    Where is your client based? Are you delivering to Europe/US/Asia/Middle East/Africa?

  • Proxmox supports HA out of the box
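
    In recent Proxmox VE releases this is handled by the built-in ha-manager; a minimal sketch (VM ID 100 is a placeholder, and the commands must be run on a cluster node):

    ```shell
    # Put VM 100 under HA management; it will be restarted on another node on failure
    ha-manager add vm:100

    # Show the state of all HA-managed resources
    ha-manager status
    ```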

  • @MarkTurner said:

    I'm new to HA solutions, but what if the OP uses something as simple as local storage on each node, replicated with DRBD to a second node? Then, if the internal network gets congested, the VMs on node 1 only suffer slowed replication; they are not stopped completely and crashed.

    But that depends on whether the client can afford that to happen. It is always better to have cables pulled directly.

    As far as I understand, this type of solution will work with less bandwidth than, for example, a virtual SAN, which constantly relies on the network.

    You may configure DRBD to suspend the ongoing replication in this case, causing the primary's data set to pull ahead of the secondary. In this mode, DRBD keeps the replication channel open (it never switches to disconnected mode) but does not actually replicate until sufficient bandwidth becomes available again.
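
    A sketch of that behaviour in DRBD 8.4 configuration syntax (the resource name and thresholds are placeholders; pull-ahead is normally combined with asynchronous protocol A):

    ```
    resource r0 {
      net {
        protocol A;               # asynchronous replication
        on-congestion pull-ahead; # let the primary run ahead instead of blocking I/O
        congestion-fill 1G;       # suspend replication once 1G of data is buffered
        congestion-extents 1000;  # ...or once this many activity-log extents are dirty
      }
    }
    ```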

  • Clouvider Member, Patron Provider

    @coolice what you describe is nowhere near an HA infrastructure setup. That's a standard setup with a more up-to-date backup, no automatic failover, etc.

    Also, since your SAN and Internet connections aren't dual and diverse, you have a single point of failure at the edge switch.

  • coolice Member
    edited April 2015

    What I wrote assumes using Proxmox with HA; I think you either did not understand me or do not know what Proxmox is capable of. No SAN is involved: the Proxmox cluster nodes are set up with DRBD between their local storages, and the VMs run on that. When a node fails, the VM is restarted on the second node using the DRBD copy on its local storage.

    It's an HA setup.

    EDIT: here is the exact setup manual with DRBD

    Proxmox supports VMs on NFS, DRBD, Ceph, GlusterFS, Sheepdog, iSCSI SANs, and more. You can choose different storage setups for different VMs, and DRBD is one of the older ones supported.

    The question was whether DRBD, with the possibility of the primary's data set pulling ahead of the secondary, is good enough for the OP's needs. There are also secondary questions: how slow does the OVH vRack get, and how far ahead will the split be in that scenario? My third point was that this type of storage setup requires less network capacity than, for example, Ceph or NFS, and so may handle possible internal-network congestion better.

    P.S. I did mention I'm new to HA. 33% of my hosting infrastructure runs on Proxmox hypervisors; I just don't use HA in production, which is why I said I'm new to it. I use the other Proxmox features to make my life easier, and I plan to go up to 80% Proxmox in the next few months.
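
    For reference, a minimal two-node DRBD resource between the nodes' local disks, as described above, looks roughly like this (hostnames, device paths, and addresses are placeholders):

    ```
    resource r0 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      meta-disk internal;
      on node1 { address 10.0.0.1:7788; }
      on node2 { address 10.0.0.2:7788; }
    }
    ```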
