Providers with High Availability VPS

Comments

  • randvegeta Member, Host Rep

    Radi said: @randvegeta As far as I know, you have a VPS provider as well. Go ahead and do it, I promise that I will sign up for at least a month for a $7 HA VPS. :)

    I've been testing a new 'HA' cluster in HK, so if you want to be my guinea pig for a month and pay $7 for it, I'll take it ;-)

    So to be clear, this is not the first HA cluster we've had, just the first one we aren't paying an arm and a leg for in licensing fees. It's a much simpler setup than our current Virtuozzo hyperconverged 'Cloud'.

    The setup is as follows:

    3x Xen hypervisor servers (dual Xeon E5, 128GB RAM, 64GB USB boot drive, 10Gb NIC)
    2x Synology RS3618

    The hypervisors have no internal disks; instead, each boots off a USB thumb drive.

    The Synology NASes are configured in HA and directly connected to each other over 10G. The second 10G port on each NAS connects to a 10G switch to which all 3 hypervisors are also connected. Currently, each NAS has 4x 4TB HDDs (2x Seagate IronWolf + 2x Toshiba Enterprise NAS) configured in RAID 10 and 2x Kingston DC400 960GB SSDs configured in RAID 1 for read/write caching.

    So out of a total of 32TB of raw HDD space and roughly 2TB of SSD space, there is just 8TB (actually 7.3TB) of usable space: RAID 10 halves each NAS's 16TB to 8TB, and the HA pair mirrors one NAS onto the other, so the usable total stays at 8TB.

    The NAS itself has HA, and so do the hypervisors. There is no switch redundancy per se, so the network itself becomes the single point of failure.
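
    For anyone unfamiliar with this kind of setup: a diskless hypervisor generally attaches shared NAS storage over iSCSI or NFS. Purely as an illustration (made-up portal IP and IQN, not necessarily how this particular cluster is wired), the iSCSI side looks roughly like:

    # discover targets behind the NAS pair's shared address
    iscsiadm -m discovery -t sendtargets -p 10.0.0.10
    # log in to the discovered target
    iscsiadm -m node -T iqn.2000-01.com.synology:rs3618.target-1 -p 10.0.0.10 --login
    # make the session come back automatically on boot
    iscsiadm -m node -T iqn.2000-01.com.synology:rs3618.target-1 -p 10.0.0.10 --op update -n node.startup -v automatic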

    New HK VPSes ordered at VPSBit.com will automatically be provisioned on this cluster. A few VMs are already running.

    If you're interested in trying it, here's the order URL: https://my.vpsbit.com/order/main/packages/HK-VPS-XEN/?group_id=4

    Use coupon code guineapigs to get 70% off (valid until May 31st and for up to 128 VMs).

  • trewq Administrator, Patron Provider

    @randvegeta Any chance you can post some disk stats from a VPS hosted on one of the nodes?

  • randvegeta Member, Host Rep

    @trewq said:
    @randvegeta Any chance you can post some disk stats from a VPS hosted on one of the nodes?

    wget -qO- bench.sh | bash
    ----------------------------------------------------------------------
    CPU model            : Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz
    Number of cores      : 1
    CPU frequency        : 2593.776 MHz
    Total size of Disk   : 37.2 GB (0.9 GB Used)
    Total amount of Mem  : 990 MB (359 MB Used)
    Total amount of Swap : 1983 MB (0 MB Used)
    System uptime        : 0 days, 9 hour 0 min
    Load average         : 0.06, 0.04, 0.05
    OS                   : CentOS 6.8
    Arch                 : x86_64 (64 Bit)
    Kernel               : 2.6.32-642.el6.x86_64
    ----------------------------------------------------------------------
    I/O speed(1st run)   : 181 MB/s
    I/O speed(2nd run)   : 195 MB/s
    I/O speed(3rd run)   : 184 MB/s
    Average I/O speed    : 186.7 MB/s
    ----------------------------------------------------------------------
    Node Name                       IPv4 address            Download Speed
    CacheFly                        204.93.150.152          21.7MB/s
    Linode, Tokyo, JP               106.187.96.148          2.22MB/s
    Linode, Singapore, SG           139.162.23.4            11.5MB/s
    Linode, London, UK              176.58.107.39           8.23MB/s
    Linode, Frankfurt, DE           139.162.130.8           4.76MB/s
    Linode, Fremont, CA             50.116.14.9             817KB/s
    Softlayer, Dallas, TX           173.192.68.18           312KB/s
    Softlayer, Seattle, WA          67.228.112.250          491KB/s
    Softlayer, Frankfurt, DE        159.122.69.4            2.03MB/s
    Softlayer, Singapore, SG        119.81.28.170           1.71MB/s
    Softlayer, HongKong, CN         119.81.130.170          10.3MB/s
    ----------------------------------------------------------------------
    

    Looks like some congestion over HE.net from HK to the USA. That's pretty slow... but this is about the disk. Running dd gets slightly different results.

    [root@h103 ~]# dd if=/dev/zero of=/tmp/test1.img bs=100M count=1 oflag=dsync
    1+0 records in
    1+0 records out
    104857600 bytes (105 MB) copied, 0.404234 s, 259 MB/s
    [root@h103 ~]# dd if=/dev/zero of=/tmp/test1.img bs=100M count=10 oflag=dsync
    dd: writing `/tmp/test1.img': No space left on device
    10+0 records in
    9+0 records out
    1005187072 bytes (1.0 GB) copied, 5.03702 s, 200 MB/s  
    

    In terms of write speed, performance is comparable to locally attached disks.
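
    dd only exercises big sequential writes, so here's a random-write sketch with fio for anyone who wants to see what the SSD cache does for small IO. The file path, size and queue depth below are arbitrary choices, not anything this cluster was tuned for:

    yum install -y fio          # available from EPEL on CentOS 6
    fio --name=randwrite --filename=/root/fio.test --size=512M \
        --rw=randwrite --bs=4k --direct=1 --ioengine=libaio \
        --iodepth=32 --runtime=60 --time_based --group_reporting
    rm -f /root/fio.test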

  • trewq Administrator, Patron Provider

    randvegeta said: In terms of write speed, performance is comparable to locally attached disks.

    Very nice! I got one earlier just to have a play :)

  • randvegeta Member, Host Rep
    edited May 2018

    trewq said: Very nice! I got one earlier just to have a play :)

    Thanks for paying to be my guinea pig :D

    You getting the same/similar performance?

    Will need to have way more data on the NAS to see how many IOPS it can really handle, but I'm hoping the SSD caching will mean it's plenty for the full 7.3TB of usable space. The NAS has quite a few more drive bays if an upgrade is needed. Apparently adding more drives will improve the overall performance, but I don't think it will make much difference for the individual VMs. Running dd directly on the NAS gets write speeds of over 400MB/s, but from a VM it almost never goes above 265MB/s and generally hovers around the 200MB/s mark. Not entirely sure why, but it probably doesn't matter, as I consider anything over 150MB/s (about the speed of a reasonably fast single locally attached SATA HDD) acceptable for this kind of VPS.

    The next test cluster will run RAID 10 with more HDDs and no SSD caching, to see if there is a significant difference in performance. In terms of raw write speeds, I expect not, as when I removed the SSD cache the performance was pretty much the same. With that in mind, SSDs don't help that much with raw read/write throughput, but rather with seek time, and adding more disks to a RAID 10 array may help with that too.
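
    One cheap way to narrow down that ~265MB/s ceiling is to take the disks out of the equation and measure the storage network itself, e.g. with iperf3 between a hypervisor and whatever sits on the NAS side of the 10G switch. The hosts and IP below are placeholders:

    # on a box on the storage network
    iperf3 -s
    # on the hypervisor
    iperf3 -c 10.0.0.10 -P 4 -t 30

    If that shows close to 10Gbit/s, the bottleneck is more likely the NAS or the VM's block layer than the link.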

  • @randvegeta said:

    If you're interested in trying it, here's the order URL: https://my.vpsbit.com/order/main/packages/HK-VPS-XEN/?group_id=4

    Use Coupon code is : guineapigs to get 70% off (valid to May 31st and upto 128 VMs)

    Is this a one-time or a recurring discount?

  • letbox Member, Patron Provider

    @randvegeta said:
    Running dd directly on the NAS gets write speeds of over 400MB/s, but from a VM it almost never goes above 265MB/s and generally hovers around the 200MB/s mark.

    I tested Ceph a couple of days ago but got shitty I/O performance, even though I backed it with 2x 10Gbps. I will do more testing in the future and add more stuff; it's not easy to get this done though.

  • ZerpyZerpy Member

    @key900 said:
    I tested Ceph a couple of days ago but got shitty I/O performance, even though I backed it with 2x 10Gbps. I will do more testing in the future and add more stuff; it's not easy to get this done though.

    It really depends on how you configure your Ceph environment :-) You can get rather decent I/O out of a Ceph environment if you understand how it actually works.

    What is "shitty i/o performance" anyway?

  • letbox Member, Patron Provider

    @Zerpy said:
    It really depends on how you configure your Ceph environment :-) You can get rather decent I/O out of a Ceph environment if you understand how it actually works.

    What is "shitty i/o performance" anyway?

    We're still testing; we're missing a few drives and have no journal drives yet, so it would be that low after all. We will do more tests once all the drives are in :)
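
    For reference, once the journal drives arrive, pointing a filestore OSD at an SSD journal partition is roughly a one-liner (device names below are placeholders; older releases use ceph-disk instead of ceph-volume):

    ceph-volume lvm create --filestore --data /dev/sdb --journal /dev/nvme0n1p1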
