New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
If you could get something like 4x servers (2x compute + 2x storage backend) under $500, you could tolerate one server failure of each type, but at AU prices I doubt you would be able to source dedicated servers like that under $500.
Out of interest, what tech stack would you use to handle that setup? Difficult to manage/maintain?
Also, would it make more sense to integrate the storage into the compute nodes? The main thing I don't know about is how to reliably sync the two disks, how to recover from failure, and what issues/caveats to expect.
The budget is rather flexible, as I'm still exploring options. I do like the notion of dedicated hardware, as I know what performance to expect out of it (cloud providers tend to be a lot more vague on this front).
Use XtreemFS, GlusterFS, or Ceph as the storage backend. Mount the entire volume on the two compute servers. You now know how many nodes can fail before the data craps itself. If you have more than 2 servers, you can set the number of replicas (redundancy, i.e. the number of servers that can die before data goes poof) that the storage system will keep. It's extremely flexible, and you will know the limits. If you have extremely heavy IO, beyond what the network can provide, then you're better off getting fibre channel and doing this yourself, as things get expensive real quick.
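To make that concrete, here's a rough GlusterFS sketch (the hostnames, volume name, and brick paths are all placeholders) for two storage nodes with 2-way replication, mounted on a compute node:

```shell
# On one storage node: join the peers and create a 2-way replicated volume.
# storage1/storage2 and the brick paths below are hypothetical.
gluster peer probe storage2
gluster volume create appdata replica 2 \
    storage1:/data/brick1/appdata \
    storage2:/data/brick1/appdata
gluster volume start appdata

# On each compute node: mount the volume with the FUSE client.
mount -t glusterfs storage1:/appdata /mnt/appdata
```

With `replica 2`, one storage node can die and the data stays available; `replica 3` survives two failures, and so on.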
I generally do not advise merging storage + compute servers at all. They require different HW (storage server memory should not put pressure on what the compute system is using). Keeping them separate leaves a lot of flexibility. Maybe you need to do some maintenance on the storage system and take one node offline. With a merged node, you now have a partial compute/storage outage. Without, you can do whatever you want to the storage node without impacting anything else. In a separate storage/compute config, you can also add more hardware to one side (maybe you need more compute power, or more storage?) without having to spend money on the other. If you're going to have some high IO, you will probably need 15K disks or SSDs, and you really don't want to make those purchases when you only need compute power and not more storage.
Now that you've got storage online, you need to build a scale-out computing system. I recommend using Docker or CoreOS, depending on how you like to contain your app. This will make it easy to add capacity, and makes migrations and backups a snap. Non-containerized is fine too if you are using something like Salt/Chef/Puppet to configure things, but as a security person, SELinux with an MLS policy + Docker is quite nice.
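As a sketch of the container side (the image name, mount path, and ports here are assumptions, not anything specific), running the app backed by the shared storage mount could look like:

```shell
# Run the app container with its state on the shared storage mount, so a
# replacement node can start the same container against the same data.
docker run -d \
    --name app \
    --restart unless-stopped \
    -v /mnt/appdata/app:/var/lib/app \
    -p 80:8080 \
    example/app:latest
```

Because the state lives on the distributed volume rather than the node's local disk, adding or replacing a compute node is just "mount the volume, start the container".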
Sure.
Softlayer > Telstra
Telstra's business service may look good on paper, but after using it for a while, it's a platform stuck in the past, unlike SoftLayer, which continuously introduces new "cloud options" and improves its system.
AWS > Azure
Simply because AWS has higher uptime than Azure.
AWS > Softlayer > Azure
AWS and SoftLayer come pretty close in my opinion, and I couldn't easily put one over the other. In terms of flexibility though, AWS wins out by the sheer amount of features you can have.
According to Telstra's cloud services website (offering private and public), they recommend SoftLayer too and resell their services. I think it may be worth ordering direct from SL if you are interested in SL's services.
https://www.telstra.com.au/business-enterprise/solutions/cloud-services/public-cloud/softlayer
I'll dare say that Telstra's and SoftLayer's public cloud platforms share similar features, except that Telstra is exclusively using VMware. There is nothing outdated about VMware.
Cloud.net is another alternative:
https://jager.cloud.net/search?region=6&cpu=4&mem=7680&disc=100
I'm not looking to run multiple instances of the application, so this doesn't actually sound like a problem. The second server would only serve as a hot spare should the main one fail (and also enables upgrades to the main server with minimal downtime).
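A hot-spare arrangement like that usually needs a small watchdog on the spare to decide when to take over. A rough sketch (the primary's IP, the health endpoint, and the promotion script are all placeholders for site-specific pieces) might be:

```shell
#!/usr/bin/env bash
# Watchdog on the hot spare: promote after repeated failed health checks.

# check_primary is site-specific; here it just curls a (placeholder) endpoint.
check_primary() {
  curl -fsS --max-time 5 "http://203.0.113.10/healthz" >/dev/null
}

fails=0
threshold=3    # require consecutive failures to avoid flapping on one blip
while true; do
  if check_primary; then
    fails=0
  else
    fails=$((fails + 1))
  fi
  if [ "$fails" -ge "$threshold" ]; then
    # Hypothetical promotion script: reassign the floating IP, start services.
    /usr/local/bin/promote-spare.sh
    break
  fi
  sleep 10
done
```

In practice, tools like keepalived or Pacemaker do this (and split-brain handling) for you, which is usually safer than a hand-rolled loop.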
I don't quite follow there. If you're upgrading the disks, you're not touching the CPU/RAM etc., so you're not paying any more or less regardless of whether compute and storage are separate?
Thanks again for the other posts.
Does anyone know where to find CPU/IOPS type info on SoftLayer's cloud?
What you're looking for is a managed infrastructure, which can only reasonably be done with virtualization. Don't reinvent the wheel with redundant / failover setups if you come to a forum like this to ask.
Just buy a High Availability (active failover) SSD VPS from one of the big players like Softlayer, AWS, Azure, etc.
You're really overengineering this to the point it's useless. It's already in your target country, and you seem to want to remain online after a tsunami.
Stop being such an amateur only focusing on the % uptime. Better think of a good data backup strategy instead of this bullshit.
Yes, in AUS - it's fibre, so obviously limited areas of service...
Thanks for the advice provided, and a managed solution is what I've been thinking (but one should always research options so I wanted to know alternatives).
Uptime certainly isn't, and never has been, my only concern, so I really don't know what you're on about here.
I'm pretty sure none of those on your list provides HA or active failover.
I'm not really into Australian HA hosting, sorry.
The point is there are businesses devoting major R&D into these kind of setups, so just pay for those instead of rolling your own and ending up with more costs and less uptime out of the box.
That applies if your servers are full and you need to add more storage via another server. If you're doing RAID/etc., you've got a long road ahead rebuilding your array when you decide to upgrade/replace disks. However, you mentioned that in this case it would be a hot spare, so you shouldn't have any issues with this. In that case, two servers would probably be fine.
They don't offer HA or failover in any location, not just Oz. It's a relatively common misconception that when a provider calls their VPS 'cloud', HA, failover, or VM mobility is implied.
But absolutely agree with your point - let the cloud guys do the 'cloud' and you just worry about what you are actually running on it.
You'd do better to build a redundant setup over >1 VM. Amazon and Google both make this relatively easy with load balancers, distributed data stores, etc. You'll get better uptime, easier scalability, and probably lower costs.
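On AWS, for example, the building block is a load balancer in front of two or more instances. A rough CLI sketch (every ID, name, and the `$TG_ARN`/`$LB_ARN` variables below are placeholders you'd substitute from your own account) looks like:

```shell
# Create a target group and register both application instances.
aws elbv2 create-target-group --name app-tg --protocol HTTP --port 80 \
    --vpc-id vpc-0abc123 --health-check-path /healthz
aws elbv2 register-targets --target-group-arn "$TG_ARN" \
    --targets Id=i-0aaa111 Id=i-0bbb222

# Put an Application Load Balancer across two subnets (two AZs)
# and forward its HTTP listener to the target group.
aws elbv2 create-load-balancer --name app-lb \
    --subnets subnet-0aaa111 subnet-0bbb222
aws elbv2 create-listener --load-balancer-arn "$LB_ARN" \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn="$TG_ARN"
```

The health checks then pull a dead instance out of rotation automatically, which is the managed version of the failover the OP is trying to hand-build.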
We manage setups like this - drop me a PM if you need any help.
Seems like my understanding is flawed then.
I thought they related to the ability to automatically switch over in response to a failure? If so, don't most of these cloud providers do this? (migrate VMs to another host if one goes down) If this isn't what they do, then how do they respond to failure? (I would've thought the purpose of networked storage was to enable the server to quickly be re-spun up on any host)
@xyz
None of the providers listed can migrate VMs between hosts (hypervisors).
All are local storage single servers, running virtualisation.
If you want HA, you have to build it.
Some clouds (like us) can migrate VMs and offer a more flexible option, but the distinction has been lost in the 'cloud' noise.
How 'business critical' is this? Critical enough that it would result in lost revenue?
AWS is not local storage. Azure not either.
AWS and Azure do migrate between hosts when a host fails (booting the EBS-backed instance on another host).
Source: I work with AWS daily by now.
AWS instances are EBS-backed on the small types and local SSD on most of the other types, as are Azure's, but both also offer EBS-type storage as an option. Originally, all instances were local disk only.
Rackspace, Linode, DigitalOcean, Vultr (insert 'cloud' here) are all local storage.
All EC2 instances support EBS. Some instance types provide local storage for when that's useful (e.g. you don't want network overhead), but this doesn't preclude you from using EBS storage.
Azure/GCE are all similar.
Linode/DO/Vultr are at a different pricing level and don't seem to be too different from a standard VPS.
Most providers market cloud as "pay per hour" and "scalability"; it has nothing to do with migrating VMs if one goes down.
So a pimp marketing "pay per hour" fat whores with quite the large scalability, as you can imagine, is a cloud pimp, amirite?