Comments
I'll be surprised if any of the bigger brands such as Vultr go to Ryzen for HF VPS; they'll either stick with Intel or perhaps go to EPYC at a push. But compare the cost of some of the higher-frequency AMD EPYCs against Intel E-series chips, for example, and Intel is the better option there.
You're not going to get the vendor agreements you want as a large cloud with Ryzen systems. At that scale it's mostly enterprise-class gear that comes with things like service contracts.
You can't afford hands spending hours working on a downed node. Call Dell or someone and they'll take a new one off the pallet for you.
Some places even work by the rack, just using available capacity and live migration to eventually replace multiple systems at once.
What's the comparison on that vs. local SSD/NVME though?
I think most people would prefer local RAID for performance reasons.
Here are some tests I did with various providers. Speeds should be more than sufficient for most people, only really beaten by NVMe. I'd take storage reliability any day, esp. for production.
Vultr HF (Local NVMe. No RAID)
BuyVM (Local NVMe)
UpCloud (NAS, MaxIOPs Inhouse)
Heficed (NAS, CEPH Triple Replication)
OVH VPS (Local SSD)
Personal VM (SSD, no RAID)
Local SSDs in RAID 10 would perform better than network storage as well. Vultr local isn't a good example, as we all clearly know there's no RAID since they avoid the question. NAS/Ceph is good for high availability, but the performance of local SSD/NVMe will be better (when done right). There's still a risk of failure even with a NAS, albeit not as high.
OVHs results look surprisingly poor though unless they're limiting?
Yea, OVH is probably limiting. This VM is their Black Friday offer from last year. Of course, local NVMe/RAID will always do better "when done right". But for most people, the compromised performance won't make a difference, depending on their workload. Certainly it isn't terrible performance. What's the expected IO of local non-NVMe RAID?
That's one I did the other week with 4 x SSDs in RAID 10, as an example.
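For anyone wanting to sanity-check their own node, a rough sync-write test can be done with plain dd before reaching for a full benchmark suite (a sketch assuming GNU coreutils dd on Linux; `oflag=dsync` flushes each 4 KiB block, so this stresses write latency rather than raw throughput):

```shell
# Write 1000 x 4 KiB blocks, syncing each one to disk (GNU dd, Linux).
# The MB/s figure dd prints at the end is a rough sync-write proxy.
dd if=/dev/zero of=dd_synctest bs=4k count=1000 oflag=dsync

# Clean up the test file afterwards.
rm -f dd_synctest
```

Proper tools like fio give far more detail (random vs. sequential, queue depths, IOPS), but even this is enough to spot an artificially capped or badly overloaded node.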
I was just considering using Vultr's cloud offerings for a database instance -- specifically the high frequency VPS with NVMe storage.
But now I'm really having doubts, and the more I look for info on their infrastructure, the more concerned I get.
For me, one of the big reasons to use cloud services instead of dedicated hardware is because the provider handles hardware issues behind the scenes, ensuring high levels of reliability and availability without the customer needing to do anything or even know about the details. That should mean that they're using high quality hardware with built-in redundancies. Otherwise, I'd feel better managing my own server, where at least I can set up some basic software RAID-1 and know that a bad SSD/NVMe won't kill my database.
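The basic software RAID-1 mentioned above is only a few commands with mdadm. This is a sketch, not a runbook: /dev/sdb and /dev/sdc are placeholder device names, it needs root and two spare disks, and it will destroy any data on them.

```shell
# Create a two-disk software RAID-1 mirror (mdadm).
# /dev/sdb and /dev/sdc are hypothetical spare disks -- adjust to your hardware.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/data

# Watch sync/rebuild status; a failed member shows up here long before the array dies.
cat /proc/mdstat
```

With a mirror like this, a single dead SSD degrades the array instead of killing the database, and you can swap the disk and rebuild while staying online.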
Yup, likewise. I have a colo with ~20 i7 servers; however, I host prod projects on cloud providers with redundant/NAS storage. Better safe than sorry, esp. if I don't need much compute.
Check out UpCloud, you'll get comparable MySQL, Redis, IO performance with redundant/NAS storage.
https://joshtronic.com/2020/06/01/vps-showdown-digitalocean-lightsail-linode-upcloud-vultr/
For workloads that require very low latency, local SSD/NVMe is the best choice. There's added network latency when you use Ceph.
You also only share local SSDs with other VPSes on the same node, while with Ceph you're competing within a single cluster shared with far more, even thousands of, VPSes. That's why some providers have to limit the IOPS quota for each VPS, unless they want a single small user abusing the whole cluster.
The bright side of Ceph, aside from high reliability, is that it scales very well: if you need additional IOPS, you just add a node. And you get block storage, object storage (S3), and a file system from a single cluster, which simplifies operations.
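On a cephadm-managed cluster, the scale-out step really is that short. A sketch, assuming the orchestrator is in use and "node4" is a placeholder hostname:

```shell
# Register a new host with the Ceph orchestrator (cephadm).
ceph orch host add node4

# Let the orchestrator create OSDs on every unused disk it finds.
ceph orch apply osd --all-available-devices

# Ceph rebalances placement groups onto the new OSDs automatically; watch progress here.
ceph -s
```

The cluster absorbs the new capacity and IOPS on its own; clients keep running through the rebalance.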
Hetzner with Ceph
Hetzner probably limits IOPS to 5K for each VPS.
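On a KVM/libvirt host, that kind of per-VPS cap can be applied live with blkdeviotune (the domain name "guest1" and disk target "vda" are hypothetical; actual providers may throttle at a different layer):

```shell
# Cap a guest's disk at 5000 total IOPS, applied live and persisted in its config.
# "guest1" and "vda" are placeholder names for the domain and its disk target.
virsh blkdeviotune guest1 vda --total-iops-sec 5000 --live --config

# Verify the current throttle settings.
virsh blkdeviotune guest1 vda
```

Throttling at the hypervisor like this keeps one noisy VPS from starving the shared Ceph cluster underneath.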
Oh, they actually might be high-GHz Intel Skylake, my bad. They were hellishly fast so I assumed Ryzen.
The difference can be less drastic than you may think
In our work we've done a lot with RDMA storage clustering; it works worlds better than TCP. The network effect gets compounded by latency, which RDMA cuts by an order of magnitude.
Different solutions have different problems; I know GlusterFS best. The per-file metadata lookups hurt random IOPS the most, but host-level disk caches solve this for VMs.
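For KVM/libvirt guests, that host-level caching is set per disk in the domain XML. An illustrative fragment (edit with `virsh edit <domain>`; note writeback is fast but can lose data on host power failure, so many people use none or writethrough for databases):

```shell
# Relevant fragment of a libvirt domain XML (reached via: virsh edit <domain>).
# cache='writeback' lets the host page cache absorb the guest's random I/O:
#
#   <disk type='file' device='disk'>
#     <driver name='qemu' type='qcow2' cache='writeback'/>
#     ...
#   </disk>
```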