Providers with High Availability VPS

randvegeta Member, Host Rep

As far as I can tell, the majority of VPS providers are 're-sellers'. And by re-seller, I mean they rent dedicated servers from other providers, possibly direct from a data center.

How many of you have High Availability clustering and how do you do it? I think for a lot of dedicated server providers, a good portion of their dedicated server sales must be to VPS providers.

Mostly, VPS providers tend to ask for 4 or more disks in a RAID 10 configuration. That makes sense, since it's obviously intended to provide a greater level of performance and reliability. But wouldn't most VPS providers prefer a more scalable, dynamic and fault-tolerant solution?

If you're a VPS provider, would you not prefer to rent a dedicated server with iSCSI storage and just very basic local storage on the node itself? With an additional node, you can mount the same iSCSI storage and configure it for High Availability. Or if you need to migrate to a new node, it becomes a lot faster as you don't need to transfer any data. Or do VPS providers prefer to have full control over their environments, including storage, and not want to take any chances with any kind of 'shared' resource?
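
To give a concrete picture of the idea (just a sketch: the portal address and target IQN below are made up, and it assumes the open-iscsi initiator tools, i.e. iscsiadm, are installed on each node):

    # Minimal sketch: attach the same iSCSI LUN on a hypervisor node.
    # The portal address and target IQN are hypothetical examples.
    import subprocess

    PORTAL = "192.0.2.10:3260"                    # hypothetical storage portal
    TARGET = "iqn.2018-05.com.example:vps-store"  # hypothetical target IQN

    def attach_shared_lun():
        # Ask the portal which targets it exports.
        subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets",
                        "-p", PORTAL], check=True)
        # Log in to the target; the LUN then shows up as a local block device
        # that a cluster-aware layer (e.g. a Xen storage repository) can use.
        subprocess.run(["iscsiadm", "-m", "node", "-T", TARGET,
                        "-p", PORTAL, "--login"], check=True)

    if __name__ == "__main__":
        attach_shared_lun()

Run the same steps on a second node and a migration no longer has to copy any disk data; only the VM's memory and configuration move.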

I'm a little surprised that A.) Dedicated Server providers don't offer this. And B.) VPS providers don't ask for this.


Comments

  • Clouvider Member, Patron Provider

    OnApp

  • randvegeta Member, Host Rep

    Not going to touch that. Also, not sure how OnApp or even Virtuozzo would actually resolve the HA issue when it comes to storage. You still need a 10G+ storage network.

    Thanked by 1 BlaZe
  • Clouvider Member, Patron Provider

    More than 10G if you need good performance, but it works on 1G as well. Depending on what your project is, that might be enough.

    Although OnApp storage had a bumpy road initially, it is working great now and we’re very satisfied with it.

  • drserver Member, Host Rep

    We have a cloud setup in preparation, based on CloudStack, with a 40Gbps Mellanox storage network and C7000s. We have enough hardware for 2 AZs, so we will start with that.

    It is in the final testing stage. It will be deployed in the EU and it will not be low end pricing.

    Thanked by 2 BharatB, vimalware
  • letbox Member, Patron Provider
    edited May 2018

    We will deploy this for testing by 10th May; it will be a free test for everyone.

    We set it up with 2x 10Gbps ports and we will add 2x 10Gbps more if everything goes smoothly, so 40Gbps in total.

  • Radi Host Rep, Veteran

    Private cross-connects between nodes cost money. New NICs would need to be involved in the process as well. It simply isn't viable to do this and sell High Availability at LET pricing.

  • randvegeta Member, Host Rep

    Radi said: Private cross-connects between nodes cost money. New NICs would need to be involved in the process as well. It simply isn't viable to do this and sell High Availability at LET pricing.

    I realize I posted this on LET, but I was making a general observation. Being based in HK, there is little that can be 'LET pricing'. In general, it's something I'm just surprised VPS providers are not that interested in.

    You're right that an XCON fee is not so cheap, and that may make it unfeasible if you had to pay that fee, but if you're a dedicated server provider, especially with your own facility, it's not a huge cost. You don't have monthly XCON fees, only the initial cost of the switches and cables, which I think, if you were catering to VPS providers, would be offset by the reduced number of disks required within the nodes.

    Clouvider said: More than 10G if you need good performance.

    Well, I did say 10G+, but actually 10G of throughput is still pretty decent. 1.25GB/s is twice as fast as any SATA disk, even in RAID 10. The storage cluster could probably use multiple 10G links, bonded, but for the nodes themselves, for the most part, 10G is probably enough. It's certainly enough to compete with SATA performance.

    Most 10G NICs seem to come with 2 interfaces anyway, so 20G is easily achievable per node, assuming you have a sufficient number of ports on your switch. The switches are expensive, so that actually makes the switch ports valuable too. We do single 10G links by default.

    Clouvider said: but it works on 1G as well.

    Does it work well? I've got a couple of mini clusters running Virtuozzo on 1G NICs. It works, and I've been able to achieve read/write speeds of over 100MB/s, which is about as good as a cheap consumer HDD. But I wouldn't put more than 2 nodes in that kind of setup.
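
    Quick back-of-envelope numbers for those link speeds (a sketch, line rate only; real iSCSI/Ethernet overhead shaves a bit off):

        # Convert raw link speed to storage throughput (line rate, no overhead).
        def gbps_to_mb_per_s(gbps: float) -> float:
            return gbps * 1000 / 8   # 1 byte = 8 bits

        print(gbps_to_mb_per_s(10))  # 1250.0 MB/s on a single 10G link
        print(gbps_to_mb_per_s(20))  # 2500.0 MB/s with two bonded 10G ports
        print(gbps_to_mb_per_s(1))   # 125.0 MB/s on 1G -- why ~100MB/s is the practical ceiling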

  • drserver Member, Host Rep
    edited May 2018

    randvegeta said: You don't have monthly XCON fees, only the initial cost of the switches and cables, which I think, if you were catering to VPS providers, would be offset by the reduced number of disks required within the nodes.

    You are talking about hundreds of thousands of dollars of investment and you expect a cloud server with HA for 7 USD. And that is your right. But please don't ask why no one is offering an HA service then. Basically, the idea of 7 USD stops any serious cloud from being in this kind of market.

    Also, to answer your question, AWS has free tier services; you can get HA across 4 AZs behind an LB. Check there, it will cost you nothing.

    You can get a reseller package, integrate it with your billing, and sell AWS Cloud. Most low end providers do that as a "Premium service".

  • Radi Host Rep, Veteran
    edited May 2018

    @randvegeta As far as I know, you run a VPS provider as well(2). Go ahead and do it; I promise that I will sign up for at least a month for a $7 HA VPS. :)

  • randvegeta Member, Host Rep

    drserver said: You are talking about hundreds of thousands of dollars of investment

    I don't think so. Switches, NICs and cables do not cost that much. 10s of thousands, MAYBE, depending on the type of gear you buy. But the main thing is it's not going to cost 10s of thousands MORE. I think most hosting providers are probably already moving over to multiple 10G inter-rack cabling. Fiber cables are not that expensive (modules are a little expensive) and Gbit switches are not that cheap either. So I don't think there is a massive financial burden for any serious provider.

    drserver said: for 7 USD.

    I am not referring to the LET market. There are plenty of hosts here who do not only cater to LET. In fact, I do not know any host here that caters exclusively to the LET market. I did clarify this in my previous post.

    drserver said: But please don't ask why no one is offering an HA service then

    The question is not why no one is providing it. The question is why VPS providers are not more interested in it, and why dedicated server providers are not offering the tools to VPS providers that would make this easier.

    If the reason for VPS providers not to offer it is cost, then I don't see why it should cost so much. Yes, XCON fees are expensive in most DCs, and if you have many servers scattered throughout a DC that you need to interconnect with 10G, AND maintain your own storage cluster, then yes, I can see why that would be expensive. But if dedicated server providers (who own the infra) had these types of clients in mind, then putting in adequate cabling and switching capacity should not be that difficult or expensive. There is a shift towards SSDs over HDDs, and I am pretty sure the cost of having a 'centralized' storage cluster with 10G+ networking would be significantly cheaper than putting 4x SSDs in every box.

    The other benefit of using network storage is that you can normally 'upgrade' as needed, so you don't need to over-provision. This is especially true if you have thin provisioning. A very good chunk of allocated disk space goes unused; it's hard to know exactly, but probably less than 30% of it is actually used. So the upfront cost to the dedicated server provider may actually be less, and they can add storage as and when needed. You can also have different classes of storage: SSD based, HDD based, HDD with SSD cache, etc. All of which can be provisioned on the fly for the VPS providers.
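
    To put a rough number on the thin provisioning point (a sketch only; the VM count, quota and the ~30% utilisation figure are assumptions for illustration):

        # Rough thin-provisioning maths; all figures are illustrative assumptions.
        vms = 200                  # hypothetical number of VPSes on the cluster
        quota_gb = 100             # hypothetical disk quota sold per VPS
        utilisation = 0.30         # assumed share of allocated space actually used

        allocated_gb = vms * quota_gb             # 20,000 GB of quota sold
        physical_gb = allocated_gb * utilisation  # ~6,000 GB needed up front
        print(f"Sold {allocated_gb} GB, need roughly {physical_gb:.0f} GB of real disk to start")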

    Of course, the downsides of centralized storage are the potential for centralized failure, the dedicated provider overselling too much, poor management, one bad neighbor using all the available disk I/O or bandwidth, and, technically, if you're comparing to 1 or 2 local disks (particularly HDDs), it's actually more expensive. Actually, in whatever configuration you choose, if the disks are 100% utilized, it should actually be more expensive. But when are HDDs and SSDs ever fully utilized?

    Radi said: @randvegeta As far as I know, you run a VPS provider as well(2). Go ahead and do it; I promise that I will sign up for at least a month for a $7 HA VPS. :)

    I'll probably have something ready in the next week or so. We've got a Xen based cluster in HK. At the budget end, it's only running on Gbit (for the storage network) but it's fine for LET :P. Especially since it's also in HK.

  • lion Member

    @randvegeta said:
    I am not referring to the LET market. There are plenty of hosts here who do not only cater to LET. In fact, I do not know any host here that caters exclusively to the LET market. I did clarify this in my previous post.

    Why ask here then?

  • drserver Member, Host Rep

    randvegeta said: I don't think so. Switches, NICs and cables do not cost that much. 10s of thousands, MAYBE, depending on the type of gear you buy.

    2x IBM InfiniBand switches will be around 30K each for 40Gbps.

    Cables will be around 50 USD each, and you need about 60 of them (depends on the setup).

    Mezzanine cards will be 150 USD each, depending on your deployment; we have 192 of them.

    Storage back-end: 6x HP DL380p with 25 drives each; we have 3 SSD arrays and 3 spinning arrays of 10K RPM drives. Approx 110K USD, I don't remember exactly.

    4x C7000 configured, each was 42K USD.

    Juniper in-rack aggregation was around 2K.

    you do the math.

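    Roughly tallied (a quick sketch of the figures quoted above, taking 60 cables and 192 mezzanine cards as stated):

        # Rough tally of the costs listed above, in USD (approximate).
        costs = {
            "2x InfiniBand switches": 2 * 30_000,
            "~60 cables":             60 * 50,
            "192 mezzanine cards":    192 * 150,
            "storage back-end":       110_000,
            "4x C7000":               4 * 42_000,
            "Juniper aggregation":    2_000,
        }
        print(sum(costs.values()))  # ~371,800 -- i.e. hundreds of thousands
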
    OR

    Just take DL380s and fill them with VPSes; money-wise it will be the same.

    Thanked by 2 Clouvider, vimalware
  • jsg Member, Resident Benchmarker

    I guess many think it's not worth it. Probably reason number 1: VPS is per se a market segment that is largely about being cheaper. Reason number 2: most clients interested in high availability don't even look at VPS but get dedicated machines. Reason number 3: da cloud which is seen as somehow high availability by many.

    And I think you are wrong about higher end equipment not really being more expensive. 10 Gb NICs, for example, are still quite a bit more expensive than 1 Gb, plus the more expensive switches etc., plus of course the upstream bandwidth.

  • randvegeta Member, Host Rep

    drserver said: 2x IBM InfiniBand switches will be around 30K each for 40Gbps

    Well, this is the majority of the cost, and it is probably overkill. You can pick up some 10G switches for less than $2K.

  • randvegeta Member, Host Rep

    lion said: Why ask here then?

    This is not a question for end users, but for hosting providers. And most hosts here do not cater exclusively to LET, so it's as good a place as any to ask. And I'm banned from WHT, so I can't ask there!

  • FHR Member, Host Rep

    Setups with centralized and possibly HA storage are viable for larger providers. No way in hell is it going to be profitable if you are small, even if we are talking used equipment (InfiniBand stuff is extremely cheap second hand).

  • MikePT Moderator, Patron Provider, Veteran

    @randvegeta said:

    drserver said: 2x IBM InfiniBand switches will be around 30K each for 40Gbps

    Well, this is the majority of the cost, and it is probably overkill. You can pick up some 10G switches for less than $2K.

    It's not overkill. You either do it right or you won't be able to provide a superb service.

  • drserver Member, Host Rep

    @randvegeta said:
    Well, this is the majority of the cost, and it is probably overkill. You can pick up some 10G switches for less than $2K.

    Can you please send me a link to those? I need to see that.

  • Clouvider Member, Patron Provider

    @FHR said:
    Setups with centralized and possibly HA storage are viable for larger providers. No way in hell is it going to be profitable if you are small, even if we are talking used equipment (InfiniBand stuff is extremely cheap second hand).

    It is very profitable for us, but the investment was quite steep.
    Notice though that we don't advertise Cloud at LET prices at all.

  • FHR Member, Host Rep

    @Clouvider said:

    @FHR said:
    Setups with centralized and possibly HA storage are viable for larger providers. No way in hell is it going to be profitable if you are small, even if we are talking used equipment (InfiniBand stuff is extremely cheap second hand).

    It is very profitable for us, but the investment was quite steep.
    Notice though that we don't advertise Cloud at LET prices at all.

    Yes, but I would call you a larger provider :) I remember looking at your Cloud/VPS prices; they were also quite steep. A different product for a different market, certainly not low end.

    Thanked by 1 Clouvider
  • Clouvider Member, Patron Provider

    AWS is large. When you compare with AWS you feel... Small. ;-)

  • randvegeta Member, Host Rep

    FHR said: viable for larger providers

    What is a larger provider? I don't see HA as something that expensive to set up. You can get a pretty decent little cluster with 5x dual Xeon machines and a 10G switch running on Virtuozzo (or in @Clouvider's case, OnApp). Buying brand-new gear, a decent setup (128GB RAM, 20 cores, 3x HDDs and 2x SSDs per node) will set you back about $20 - $30K. Use second hand gear that's a couple of generations old and it would probably cost you 1/3 - 1/2 of that.

    If you use Gbit networks and consumer hardware, it's even less.

    Heck, I built a mini HA cluster using nothing but surplus hardware which is essentially worthless.

    • 3x i3 Servers with 4GB RAM, 1TB HDD and 128GB SSD - for the Storage Cluster
    • 2x i7 Servers with 16GB RAM, and 128GB SSD - For the Virtualization Nodes

    Using Xen for virtualisation, connect the storage via iSCSI and you have high availability storage + hypervisors.

    It's only a tiny cluster, but it works, and you can get about 100MB/s of read/write throughput. Yadda yadda yadda! Find that gear second hand, and it would probably cost you around $500!

    drserver said: Can you please send me a link to those? I need to see that.

    Plenty of branded (Cisco/Dell/IBM) switches are available on eBay (2nd hand). Or if you're interested in trying out something new and cheap, the Ubiquiti ES-16-XG is about $1K and has 16 SFP 10G ports. Never used it though. I've tried their 1G 48 port switches, and those were actually pretty good!

    Or for a little more, you can get a Huawei S6700 for about $4K new. It's not hard to find decently priced 10G switches.

    MikePT said: It's not overkill

    It depends what you're comparing it to and what you're trying to compete against. RAID 10 SATA HDDs/SSDs are the most common, and you can easily match that performance with a 10G network, especially if you bond 2 NICs. And since I'm talking about matching the performance of onboard disks, I think anything more is probably overkill. Especially if you're talking about a setup that costs over 100K versus something that is <20% of that.
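
    For a rough comparison (a sketch only; the per-disk figures below are assumed ballpark SATA numbers, not measurements):

        # Rough comparison: bonded 2x10G storage link vs. a local 4-disk RAID 10.
        # The per-disk throughput figure is an assumed ballpark SATA SSD number.
        link_mb_per_s = 2 * 10 * 1000 / 8                # 2500 MB/s for two bonded 10G ports

        disks = 4
        per_disk_mb_per_s = 500                          # assumed sequential throughput per SATA SSD
        raid10_read = disks * per_disk_mb_per_s          # ~2000 MB/s: reads can hit every disk
        raid10_write = (disks // 2) * per_disk_mb_per_s  # ~1000 MB/s: mirroring halves the write stripes
        print(link_mb_per_s, raid10_read, raid10_write)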

  • Clouvider Member, Patron Provider

    In our case the cost spikes because we do everything N+1 or 2N on the Cloud setup, and that includes the switches etc. Scale is a problem as well, as you need an interconnect when you expand beyond one switch, ideally 100G in our case, so yeah. It can be cheap, it can be expensive. It depends what you need really.

    Thanked by 1 FHR
  • drserver Member, Host Rep

    randvegeta said: If you use Gbit networks and consumer hardware, it's even less.

    I have nothing more to say. For a moment I was actually thinking that you were serious. I am out.

    Thanked by 3 FHR, BharatB, doghouch
  • randvegeta Member, Host Rep

    Clouvider said: Scale is a problem as well, as you need an interconnect when you expand beyond one switch, ideally 100G in our case, so yeah. It can be cheap, it can be expensive. It depends what you need really.

    Definitely! At some point, scaling the cluster actually ends up costing much more; as in, doubling capacity more than doubles the cost. In which case I find it more cost effective to run more clusters rather than bigger clusters. The downside is you end up having more to manage, which in itself has a cost, so that needs to be factored in.

    drserver said: I have nothing more to say. For a moment I was actually thinking that you were serious. I am out.

    What's wrong with consumer gear or Gbit switches?

    Consumer hardware is cheap, has high performance, and is no less reliable than 'enterprise' hardware. Consumer gear has its limitations: less RAM and fewer cores supported, and it doesn't scale out as well. But for a small system, particularly for testing purposes, why not? I find it especially useful for determining how feasible a system is performance-wise. If it runs well on a cheap-as-chips consumer grade HA setup, it must run much better when using proper equipment :-).

    Besides, I'm not recommending anyone actually use consumer gear to sell VPSes in a commercial/production environment. I was simply saying that it could be done on a budget, and not with unreasonable compromises.

  • mksh Member

    I for one like the spirit behind DIY low budget approaches. The result is likely going to be, well... DIY low budget, but as long as it somewhat works and you can find someone willing to make the compromise, who cares?

    Thanked by 1 Shazan
  • raindog308 Administrator, Veteran

    randvegeta said: What's wrong with consumer gear or Gbit switches?

    Consumer hardware is cheap, has high performance, and is no less reliable than 'enterprise' hardware.


    You are high as a kite with that statement.

    In general, SAN-type setups (which is what you're describing) for web hosting are too expensive to do right and worthless if not done right, which is why they're rarely done in this market.

    Most providers simply don't need a SAN to meet their customers' needs. And that's the end of the story, because SANs are much more complicated to manage.

    And large providers who have practically unlimited funds and a huge customer base and hundreds of engineers...nope! Neither Amazon nor Microsoft Azure nor Google Cloud use SANs or storage networks. All storage is local SSD.

    https://serverfault.com/questions/117745/what-makes-cloud-storage-amazon-aws-microsoft-azure-google-apps-different-fr

    SANs have a place - generally outside the cloud - but they are not a panacea for all.

    Thanked by 1 FlamesRunner
  • drserver Member, Host Rep
    Thanked by 1 FHR
  • raindog308 Administrator, Veteran

    ...which is not implemented on a SAN, so...?

  • Francisco Top Host, Host Rep, Veteran

    raindog308 said: ...which is not implemented on a SAN, so...?

    You can move the volumes around, no? Unless you mean they're mapping the volumes that way too?

    I guess hyper-converging?

    Francisco
