Highly Available Hosting

Jasonhyperhost Member, Patron Provider
edited December 2022 in General

Interested to see: does any host on here actually offer highly available hosting, or not? Trying to scout whether it's worth me investing in kit to do ours.

Comments

  • @Jasonhyperhost said:
    Interested to see: does any host on here actually offer highly available hosting, or not? Trying to scout whether it's worth me investing in kit to do ours.

    Not yet, but we actually plan on setting up our infrastructure like this. We should have it soon.

  • risharde Patron Provider, Veteran

    I do not offer any hosting like this as yet, though I did previously as a sideline project that I never got around to advertising. But I must say, I suspect that doing it while keeping it cheap would still not leave one the cheapest, and thus I'm not sure the LowEnd market would jump at the idea.

    With that being said, I've been coding my sites with high availability in mind, which will in some way trickle down to my clients.

    And also with that being said: many, many years ago Media Temple was one of the first to claim a sort of highly available approach. I haven't used them in a while, but I think they did some grid-like approach to website hosting.

  • There are hosts that offer this, but the market itself is a bit varied (with various levels of marketing and nomenclature). This is mostly because each implementation is unique and heavily shaped by the client's requirements and intended use of the service.

    I think a great place to get some inspiration is Azure's Reliability Infographic Marketing document here: https://azure.microsoft.com/files/Features/Reliability/AzureResiliencyInfographic.pdf

    In addition to letting clients do it themselves based on where they deploy their VMs, Azure also has an add-on you can purchase that handles auto-deployment or redeployment if things go offline.

  • Not really something you are likely to find in the low end market. Traditionally this involves setting up multiple stateless application frontends across many geographical locations behind failover DNS, and then a geographically replicated database behind that.

    Nowadays, Kubernetes deployed across multiple availability zones of AWS or GCP, plus their replicated relational database offerings, will cover that. But you have to pay far more than buying servers and setting things up yourself, especially in terms of egress and cross-AZ traffic.
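
    For a rough idea of the client-facing half of that, here's a minimal failover sketch in Python (the hostnames are made up and the requests library is assumed; in a real deployment, failover DNS or a health-checked load balancer does this for you):

    ```python
    import requests

    # Hypothetical pool of stateless frontends; in a real setup these
    # would sit behind failover DNS or a health-checked load balancer.
    FRONTENDS = [
        "https://fe1.example.com",
        "https://fe2.example.com",
    ]

    def fetch(path):
        """Try each frontend in order, failing over on any error."""
        last_error = None
        for base in FRONTENDS:
            try:
                resp = requests.get(base + path, timeout=3)
                resp.raise_for_status()
                return resp  # first healthy frontend wins
            except requests.RequestException as err:
                last_error = err  # down or unhealthy; try the next one
        raise last_error  # every frontend failed

    print(fetch("/health").status_code)
    ```

    Because the frontends are stateless, any of them can serve the request; the hard part, as noted above, is the replicated database behind them.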

  • Francisco Top Host, Host Rep, Veteran

    Are you hoping to find shared/reseller hosting with this?

    If so, you're kinda hooped since that won't protect from Litespeed/Apache taking a crap, or the node locking up/panic'ing. At best you'd just be able to prevent the hardware dropping offline, but the VM will still have to boot back up on new metal.

    cPanel had a roadmap for adding HA; not sure if that's still a go or if they scrapped it (again).

    Francisco

    Thanked by 2: raza19, HalfEatenPie
  • raza19 Veteran
    edited December 2022

    @Francisco said:
    Are you hoping to find shared/reseller hosting with this?

    If so, you're kinda hooped since that won't protect from Litespeed/Apache taking a crap, or the node locking up/panic'ing. At best you'd just be able to prevent the hardware dropping offline, but the VM will still have to boot back up on new metal.

    cPanel had a roadmap for adding HA; not sure if that's still a go or if they scrapped it (again).

    Fran, if one were to deploy simple high availability through haproxy (I can think of a simple scenario where files, folders, and sessions rsync both ways), how would one go about creating an in-sync database like MySQL on 2 servers, where one transaction would replicate across nodes and the system would recover if one went offline and came back up later? Or does this require one to design an application with high availability in mind in the first place, like using a scalable object-oriented DB instead of a classical relational DB setup? I've always been curious how one would go about doing this. I have experience working with haproxy successfully, but only for static websites.

  • crunchbits Member, Patron Provider, Top Host

    @raza19 said:
    Fran, if one were to deploy simple high availability through haproxy […] how would one go about creating an in-sync database like MySQL on 2 servers, where one transaction would replicate across nodes and the system would recover if one went offline and came back up later? […]

    I had been looking into HA clusters with Proxmox for some internal stuff. It can get fairly tedious (especially sourcing proper hardware for fencing), and from my perspective I mostly just wanted HA across different physical locations for some core apps, which it wasn't really well-suited for given our constraints. The most cost-efficient option internally (and for customers) has been things like redundant PSUs, multiple network ports, ECC RAM, RAID arrays, and monitoring, plus just being proactive and responsive. Hardware will break and be replaced, fiber will get cut and be repaired, but the biggest thing I wanted to avoid was unwanted data loss. We can always rack another identical (or better) server for a customer and migrate disks in an emergency. It's that final ~10% of true high availability that accounts for something like 90% of the additional cost and complexity.

    Thanked by 1: raza19
  • @crunchbits said:
    I had been looking into HA clusters with Proxmox for some internal stuff. […] It's that final ~10% of true high availability that accounts for something like 90% of the additional cost and complexity.

    Spot on. Unfortunately I've had to learn this the hard way. Last year I lost all server data due to disk crashes; the backup was fried too! It was just the worst bout of luck. Since then I've been experimenting with haproxy to make sure as much data as possible is backed up in an active-sync kind of situation, where another node could be brought back as quickly as possible. But I'm still far away from actually setting two A records pointing to two different servers for redundancy; that would require in-sync replication and takeover at the database level. This is where it gets confusing.

    Thanked by 1: crunchbits
  • Francisco Top Host, Host Rep, Veteran

    @crunchbits said: I mostly just wanted HA across different physical locations for some core apps, which it wasn't really well-suited for given our constraints.

    This sounds like a bad time once an SQL database gets involved. Writes are going to have to wait to be replicated to both locations, meaning every query is going to take $query_time + the latency between POPs.
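
    To put rough numbers on that (both figures below are assumptions for illustration, not measurements):

    ```python
    # Back-of-envelope cost of synchronous cross-POP replication,
    # using assumed numbers: a 2 ms local write and a 40 ms RTT.
    query_time_ms = 2.0
    pop_rtt_ms = 40.0

    commit_ms = query_time_ms + pop_rtt_ms  # every write pays the RTT
    print(f"per-write latency: {commit_ms:.0f} ms")                      # 42 ms
    print(f"max serial writes/sec: {1000 / commit_ms:.0f}")              # ~24
    print(f"same node, no replication: {1000 / query_time_ms:.0f}/sec")  # 500
    ```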

    @raza19 said: Fran, if one were to deploy simple high availability through haproxy […] how would one go about creating an in-sync database like MySQL on 2 servers […] or does this require one to design an application with high availability in mind in the first place? […]

    I wouldn't even try unless the application is designed for it.

    There's no way to integrate this with shared hosting very easily. You could:

    • Use MySQL Percona to handle the DB replication, but it's very easy for things to fall out of sync, and you need a minimum of 3 members (see the probe sketched below).
    • Use Gluster for your data store, but Gluster is very easy to bottleneck.
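
    As an illustration of why that three-member minimum and sync state matter, a quick health probe against a Galera-style cluster (the technology Percona's clustering is built on) might look like this; PyMySQL is assumed, and the host and credentials are placeholders:

    ```python
    import pymysql

    # Hypothetical monitoring credentials on one cluster member.
    conn = pymysql.connect(host="db1.example.com", user="monitor", password="secret")
    with conn.cursor() as cur:
        cur.execute("SHOW GLOBAL STATUS LIKE 'wsrep_cluster_%'")
        status = dict(cur.fetchall())  # rows are (Variable_name, Value) pairs

    size = int(status.get("wsrep_cluster_size", 0))
    state = status.get("wsrep_cluster_status", "UNKNOWN")

    # A healthy 3-node cluster reports size 3 and status 'Primary';
    # anything else means a member dropped out or quorum was lost.
    if size < 3 or state != "Primary":
        print(f"DEGRADED: size={size}, status={state}")
    else:
        print("cluster healthy")
    ```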

    This doesn't fix the issue of config files getting synced around, though. Slapping haproxy in front doesn't fix anything either, and enabling caching can lead to users getting access to areas they shouldn't due to admin cookies getting cached.

    VMware has/had some 'constant replication' stuff where a VM is constantly 'in live migration', so you can just drop a hypervisor and the VM shouldn't go down. You still need some sort of SAN for storage though.

    The VMware route doesn't fix 'cPanel/Apache/LSWS/PHP/SQL shat the bed' though.

    Francisco

    Thanked by 1: raza19
  • crunchbits Member, Patron Provider, Top Host

    @Francisco said:
    This sounds like a bad time once an SQL database gets involved. Writes are going to have to wait to be replicated to both locations, meaning every query is going to take $query_time + the latency between POPs.

    Yeah, it isn't a solution for that at all. Even if you just had a sort of 'write to database first, then database live rsync elsewhere' setup, it gets too messy and convoluted, and will absolutely end up with some level of corruption. I also realized that we were trying to solve for an extremely unlikely scenario where all of the diverse fiber paths we use get cut and our backup non-fiber management network goes out, which would make this system very unlikely to function properly anyway.

    Thanked by 1: raza19
  • Francisco Top Host, Host Rep, Veteran
    edited December 2022

    @crunchbits said: then database live rsync elsewhere

    Everyone who's ever had to do an InnoDB recovery is going through PTSD at the moment.

    Francisco

    Thanked by 2: crunchbits, raza19
  • Several projects that we help manage are set up with round-robin DNS, floating IPs, several haproxy servers, nginx servers, a MySQL cluster, and a Gluster backend for data storage. I can tell you this is in no way a cheap solution either. Everything runs as VMs on OnApp servers for live migration should a host node start reporting problems. This is also cloned to another datacenter should one datacenter go completely offline.
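
    For anyone curious what the round-robin DNS piece looks like from a client, here's a small Python sketch (the hostname is a placeholder; any name with several A records behaves this way):

    ```python
    import random
    import socket

    # Round-robin DNS hands back several A records for one name.
    infos = socket.getaddrinfo("www.example.com", 443, type=socket.SOCK_STREAM)
    addresses = list({info[4][0] for info in infos})  # unique IPs in the answer
    random.shuffle(addresses)  # mimic resolver rotation

    # Try each address until one accepts a connection.
    for addr in addresses:
        try:
            sock = socket.create_connection((addr, 443), timeout=3)
            print(f"connected to {addr}")
            sock.close()
            break
        except OSError:
            print(f"{addr} unreachable, trying next")
    ```

    Note that round-robin alone knows nothing about health; that's what the floating IPs and haproxy layers in the stack above are for.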

  • Merakith Barred
    edited December 2022

    I am trying out an HA setup right now for WP sites: DNS geo-balancing, 5x VPS with the Caddy web server (config shared over NFS), plus an InnoDB Cluster. I manually update WordPress, plugins, and themes, since I don't use a real-time sync. I am using Unison, which is a bidirectional sync and pushes changes with some lag. I am thinking of pushing wp-content/uploads to some S3-like solution.
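
    A sketch of that uploads-offload idea (boto3 assumed; the bucket name, endpoint, and paths are placeholders, and a real WP setup would likely use a media-offload plugin or a sync tool rather than a one-shot script):

    ```python
    import os
    import boto3

    # Hypothetical S3-compatible target for wp-content/uploads.
    s3 = boto3.client("s3", endpoint_url="https://s3.example.com")
    docroot = "/var/www/html"
    uploads = os.path.join(docroot, "wp-content", "uploads")

    for dirpath, _dirnames, filenames in os.walk(uploads):
        for name in filenames:
            local = os.path.join(dirpath, name)
            key = os.path.relpath(local, docroot)  # wp-content/uploads/...
            s3.upload_file(local, "wp-uploads-bucket", key)  # bucket is made up
    ```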

    But when a host needs to offer such solutions, a lot of other factors come into play. It becomes a really messy setup to use with a control panel, or when multiple people are managing it. Those who really want such a setup will prefer to go to AWS, Google Cloud, Azure, Oracle, IBM, DigitalOcean, Vultr, Linode, or Jelastic. It also comes down to what applications are offered. WP in its current state is not ready for such an architecture. There are more cons than pros.
