Clustering Proxmox

agoldenberg Member, Host Rep

Their documentation isn't that great and I am looking for someone who's done this before to maybe lend me a hand.

Both Nodes are online but when I add the second node to the cluster, it shows as offline.

Can anyone lend a hand?

Comments

  • Shared storage or DRBD?

  • I just set up 3 new Proxmox nodes yesterday to test Ceph/gluster drives.
    All the nodes need to be on the same subnet to cluster.
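    For future readers, the basic commands look roughly like this (a sketch, assuming both nodes already reach each other on the same subnet; the cluster name and IP are just examples):

    ```shell
    # On the first node: create the cluster
    pvecm create testcluster

    # On the second node: join, pointing at the first node's IP
    pvecm add 192.168.1.10

    # On either node: verify that both members show up
    pvecm status
    pvecm nodes
    ```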

  • agoldenberg Member, Host Rep

    Ohhhhhh! That's probably the issue. Is there any way to connect 2 nodes together over the internet? So I can manage both from one ui?

  • @agoldenberg said:
    Ohhhhhh! That's probably the issue. Is there any way to connect 2 nodes together over the internet? So I can manage both from one ui?

    Perhaps a VPN of some sort?

  • tomle Member, LIR
    edited February 2014

    @agoldenberg said:
    Ohhhhhh! That's probably the issue. Is there any way to connect 2 nodes together over the internet? So I can manage both from one ui?

    Try setting up a VPN tunnel and clustering on the VPN IPs.

  • FrankZ Veteran
    edited February 2014

    I was not able to get VPN to work to cluster Proxmox. I used GRE tunnels; the down and dirty is...


    On node #1

    iptunnel add gre1 mode gre local <IP4 #1> remote <IP4 #2> ttl 255

    ip addr add 10.0.1.1/30 dev gre1

    ip link set gre1 up

    route add -host 10.0.1.2 dev gre1


    On node #2

    iptunnel add gre1 mode gre local <IP4 #2> remote <IP4 #1> ttl 255

    ip addr add 10.0.1.2/30 dev gre1

    ip link set gre1 up

    route add -host 10.0.1.1 dev gre1


    'ip tunnel show' will verify that the tunnel is up
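    To survive a reboot, one common Debian pattern (my assumption, not part of the original recipe) is to define the tunnel in /etc/network/interfaces, e.g. on node #1:

    ```
    # Hypothetical persistent GRE tunnel stanza; <IP4 #1>/<IP4 #2> are the nodes' public IPs
    auto gre1
    iface gre1 inet static
        address 10.0.1.1
        netmask 255.255.255.252
        pre-up iptunnel add gre1 mode gre local <IP4 #1> remote <IP4 #2> ttl 255
        post-down iptunnel del gre1
    ```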

  • agoldenberg Member, Host Rep

    Ah ok well it's not super important. I'm just messing around with it for now, but it sounds like what I want isn't what would happen even with a cluster. So I will keep them separate for now.

  • fileMEDIA Member
    edited February 2014

    Proxmox uses multicast, so your tunnel must support multicast too. Tinc works fine. For a small cluster you can switch corosync to unicast.

    Proxmox clusters work fine with up to 32 nodes. We use that many nodes for our CloudVM.
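    For reference, on Proxmox VE 3.x (current at the time of this thread) switching corosync to unicast meant setting the udpu transport on cman in /etc/pve/cluster.conf, roughly like this (a sketch based on the Proxmox wiki, not fileMEDIA's exact config; cluster and node names are placeholders):

    ```xml
    <?xml version="1.0"?>
    <cluster name="mycluster" config_version="3">
      <!-- transport="udpu" = UDP unicast, avoids the multicast requirement -->
      <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
      <clusternodes>
        <clusternode name="node1" votes="1" nodeid="1"/>
        <clusternode name="node2" votes="1" nodeid="2"/>
      </clusternodes>
    </cluster>
    ```

    Remember to bump config_version when editing, so the new config is actually activated.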

  • Thanks @fileMEDIA, any other clustering tidbits you feel like sharing?
    Are you using gluster or Ceph for your cloud?

    and since this is a Proxmox thread (Hi Jack) .. Here are the ISO install options for future readers

    linux ext4 – sets the partition format to ext4. The default is ext3.

    hdsize=nGB – this sets the total amount of hard disk to use for the Proxmox installation. This should be smaller than your disk size.

    maxroot=nGB – sets the maximum size to use for the root partition. This is the max size so if the disk is too small, the partition may be smaller than this.

    swapsize=nGB – sets the swap partition size in gigabytes.

    maxvz=nGB – sets the maximum size in gigabytes of the data partition. Again, this is similar to maxroot and the final partition size may be smaller.

    minfree=nGB – sets the amount of free space to remain on the disk after the Proxmox installation.

    http://www.jamescoyle.net/how-to/261-proxmox-advanced-install-settings
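    Putting a few of those together, you type them on one line at the installer boot prompt (the sizes here are just examples, not recommendations):

    ```
    linux ext4 hdsize=100 maxroot=20 swapsize=8 minfree=4
    ```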

  • fileMEDIA Member
    edited February 2014

    CloudVM normally uses local storage at the moment, because Ceph is still beta. But we are testing it in our environment for a few customers who want it, and for our internal VMs.

    It works really well with Infiniband QDR, because the local drives on all nodes create one big distributed and replicated storage pool. I think we will release it for everyone when Proxmox releases 3.2.
