Comments
Been testing a new 'HA' cluster in HK, and if you want to be my guinea pig for a month, and pay $7 for it, I'll take it ;-)
So to be clear, this is not the first HA cluster we've had, just the first one we aren't paying an arm and a leg for in licensing fees. It's a much simpler setup than our current Virtuozzo hyperconverged 'Cloud'.
The setup is as follows:
3x Hypervisor (Xen) Servers (Dual Xeon E5, 128GB RAM, 64GB USB, 10Gb NIC)
2x Synology RS3618
The hypervisors have no internal disks; instead, each boots off a USB thumb drive.
The Synology NASes are configured in HA, directly connected over 10G. The second 10G port on each NAS connects to a 10G switch to which all 3 hypervisors are also connected. Currently, each NAS has 4x 4TB HDDs (2x Seagate IronWolf + 2x Toshiba Enterprise NAS) configured in RAID 10 and 2x Kingston DC400 960GB SSDs configured in RAID 1 for read/write caching.
So out of a total of 32TB of HDD space and roughly 2TB of SSD space, there is just 8TB (actually 7.3TB) of usable space: RAID 10 halves each NAS's capacity, and the HA pair mirrors one NAS to the other.
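For what it's worth, the capacity math above can be sketched like this (drive counts taken from the post; the arithmetic ignores filesystem overhead, hence 8TB nominal vs 7.3TB actual):

```shell
# Capacity sketch for the setup described above (TB, nominal).
drives_per_nas=4; drive_tb=4; nas_count=2
raw_total=$((nas_count * drives_per_nas * drive_tb))  # 32 TB raw across both NAS
raid10_per_nas=$((drives_per_nas * drive_tb / 2))     # RAID 10 halves each NAS: 8 TB
usable=$raid10_per_nas                                # HA pair mirrors, so the 2nd NAS adds no space
echo "raw=${raw_total}TB usable=${usable}TB"
```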
The NAS itself has HA, and so do the hypervisors. There is no switch redundancy per se, so the network itself becomes the single point of failure.
New HK VPS orders at VPSBit.com will automatically be provisioned on this cluster. A few VMs are already running.
If you're interested in trying it, here's the order URL: https://my.vpsbit.com/order/main/packages/HK-VPS-XEN/?group_id=4
Use coupon code guineapigs to get 70% off (valid until May 31st, up to 128 VMs)
@randvegeta Any chance you can post some disk stats from a VPS hosted on one of the nodes?
Looks like some congestion over HE.net to the USA from HK. That's pretty slow... but this is about the disk. Running dd gets slightly different results.
In terms of write speed, performance is comparable to locally attached disks.
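For anyone wanting to reproduce the dd numbers, something along these lines should do it (file path and sizes are my own choices, not the exact command used above):

```shell
# Sequential write test: 256MB of zeros; conv=fdatasync makes dd flush
# to disk before reporting, so the figure isn't just the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=256 conv=fdatasync
# Sequential read test: note this mostly measures RAM unless the file
# is evicted from the page cache first.
dd if=/tmp/ddtest of=/dev/null bs=1M
```

Remember to delete /tmp/ddtest afterwards.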
Very nice! I got one earlier just to have a play
Thanks for paying to be my guinea pig
You getting the same/similar performance?
Will need to have way more data on the NAS to see how many IOPS it can really handle, but I'm hoping the SSD caching will mean it will be plenty for the full 7.3TB of usable space. The NAS has quite a few more drive bays if an upgrade is needed. Apparently adding more drives will improve overall performance, but I don't think it will make much difference for individual VMs. Running dd directly on the NAS gets write speeds of over 400MB/s, but from a VM it almost never goes above 265MB/s, and generally hovers around the 200MB/s mark. Not entirely sure why, but it probably doesn't matter, as I consider anything over 150MB/s (about the speed of a reasonably fast single locally attached SATA HDD) to be acceptable for this kind of VPS.
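Since dd only measures sequential throughput, it won't say much about the IOPS question; a random-I/O test with fio would be more telling. A rough job-file sketch (fio must be installed; the file path, size, and queue depth are my own assumptions, not anything tested on this cluster):

```
[randread]
filename=/tmp/fiotest
size=1G
rw=randread
bs=4k
iodepth=32
ioengine=libaio
direct=1
runtime=30
time_based
```

Save it as randread.fio and run `fio randread.fio`; the reported IOPS figure is the number of 4k random reads per second the backing storage sustains.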
The next test cluster will run a RAID 10 array with more HDDs and no SSD caching, to see if there is a significant difference in performance. In terms of raw write speeds, I expect not: when I removed the SSD cache, performance was pretty much the same. With that in mind, SSDs don't help that much with raw read/write so much as with seek time. But adding more disks to a RAID 10 array may help with that too.
Is this one time or a cycle discount?
I tested Ceph a couple of days ago, but I got shitty I/O performance even though I backed it with 2x 10Gbps. I will do more testing in the future and add more stuff; it's not easy to get this done though.
It really depends on how you configure your ceph environment :-) You can get rather decent IO out of a ceph environment, if you understand how it actually works.
What is "shitty i/o performance" anyway?
We're still testing; we're missing a few drives and have no journal drives yet, so that would be why it's that low. We will do more tests once all the drives are in.