
256MB @ $10.50/yr ; 1GB @ $40/yr ; 2GB @ $7/mo ; KVM SSD+RAID10 / SAN+HA ; ALL NEW Chicago Location


Comments

  • GoodHosting Member
    edited August 2014

    @gehaxelt said:
    I think I/O could be better for a SSD node. Hope that helps :)

    @mov3 said:
    WTF, I/O 7 MB/s

    We got to the bottom of this one: gehaxelt specifically asked to be deployed on our Scranton cluster, due to the bandwidth and network availability we have in that location compared to our Chicago location. The Chicago location contains our shiny new HA failover and SAN configuration, with many MX500 SSDs in RAID10, whereas our Scranton location (our previous buildout) only has 8x SSDs per node in RAID10 as local storage. This is what caused the inconsistency in the benchmarks.

    Please see below:

    As run from our Scranton node:

    [toor@fox1 ~]# sudo -u oneadmin dd if=/dev/zero of=test bs=64k count=16k
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 1.28216 s, 837 MB/s
    
    [toor@fox1 ~]# ioping -c10 / -s 64k
    64.0 KiB from / (ext2 /dev/mapper/vg00-lvroot): request=1 time=474 us
    64.0 KiB from / (ext2 /dev/mapper/vg00-lvroot): request=2 time=28.9 ms
    64.0 KiB from / (ext2 /dev/mapper/vg00-lvroot): request=3 time=583 us
    64.0 KiB from / (ext2 /dev/mapper/vg00-lvroot): request=4 time=15.3 ms
    64.0 KiB from / (ext2 /dev/mapper/vg00-lvroot): request=5 time=598 us
    64.0 KiB from / (ext2 /dev/mapper/vg00-lvroot): request=6 time=553 us
    64.0 KiB from / (ext2 /dev/mapper/vg00-lvroot): request=7 time=16.1 ms
    64.0 KiB from / (ext2 /dev/mapper/vg00-lvroot): request=8 time=14.3 ms
    64.0 KiB from / (ext2 /dev/mapper/vg00-lvroot): request=9 time=586 us
    64.0 KiB from / (ext2 /dev/mapper/vg00-lvroot): request=10 time=552 us
    
    --- / (ext2 /dev/mapper/vg00-lvroot) ioping statistics ---
    10 requests completed in 9.1 s, 128 iops, 8.0 MiB/s
    min/avg/max/mdev = 474 us / 7.8 ms / 28.9 ms / 9.6 ms

    As run from our Chicago node:

    [toor@cx12 ~] # sudo -u oneadmin dd if=/dev/zero of=test bs=64k count=16k
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 3.17982 s, 338 MB/s
    
    [toor@cx12 ~] # ioping -Cc10 / -s 64k
    64.0 KiB from / (ext4 /dev/mapper/vg00-lvroot): request=1 time=52 us
    64.0 KiB from / (ext4 /dev/mapper/vg00-lvroot): request=2 time=46 us
    64.0 KiB from / (ext4 /dev/mapper/vg00-lvroot): request=3 time=41 us
    64.0 KiB from / (ext4 /dev/mapper/vg00-lvroot): request=4 time=33 us
    64.0 KiB from / (ext4 /dev/mapper/vg00-lvroot): request=5 time=28 us
    64.0 KiB from / (ext4 /dev/mapper/vg00-lvroot): request=6 time=55 us
    64.0 KiB from / (ext4 /dev/mapper/vg00-lvroot): request=7 time=36 us
    64.0 KiB from / (ext4 /dev/mapper/vg00-lvroot): request=8 time=40 us
    64.0 KiB from / (ext4 /dev/mapper/vg00-lvroot): request=9 time=38 us
    64.0 KiB from / (ext4 /dev/mapper/vg00-lvroot): request=10 time=30 us
    
    --- / (ext4 /dev/mapper/vg00-lvroot) ioping statistics ---
    10 requests completed in 9.0 s, 25.1 k iops, 1.5 GiB/s
    min/avg/max/mdev = 28 us / 39 us / 55 us / 8 us

    As you can tell, the initial deployment was built with throughput in mind, but it absolutely tanks where high IOPS or write latency is concerned, whereas our newest deployment (the Chicago deployment) keeps a sane throughput while really maximizing the IOPS potential of an SSD array in RAID10. These two arrays are each powerful in their own way, but it's clear that the Scranton node struggles when high IOPS are pushed at the underlying RAID cards.
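
    If you want to separate the two effects yourself, fio can measure sequential throughput and random IOPS independently. A minimal sketch (standard fio options; these are not the exact commands from the runs above):

    # sequential write throughput, bypassing the page cache
    fio --name=seqwrite --rw=write --bs=1M --size=1G --direct=1 --ioengine=libaio --iodepth=8

    # 4k random write IOPS, the pattern that hurts the Scranton array
    fio --name=randwrite --rw=randwrite --bs=4k --size=1G --direct=1 --ioengine=libaio --iodepth=32

    Also note that dd from /dev/zero without oflag=direct or conv=fdatasync largely measures the page cache, so the dd figures above are best read as upper bounds.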

  • @GoodHosting said:
    100 CPU Units == 24x7 usage on 1 core, or 24x7 at 50% usage across 2 cores. Since you can design your own Virtual Machines, you can decide how the CPU units get split across more cores if your application works better with more threads.

    Now I'm interested. Could you PM me the datacenter information?
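
    For anyone else puzzling over the quoted CPU-unit scheme, a rough reading (an interpretation, not an official formula from GoodHosting):

    # 100 CPU units ~= one core fully busy, 24x7
    # spread across N cores, the sustained per-core share is 100 / N percent:
    #   1 core @ 100%, 2 cores @ 50% each, 4 cores @ 25% each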

  • @Silvenga said:
    Now I'm interested. Could you PM me the datacenter information?

    Hello @Silvenga, I have PMed you the datacenter name and a test IP.

    Thanked by 1 Silvenga
  • @GoodHosting said:
    As you can tell, the initial deployment was built with throughput in mind, but it absolutely tanks where high IOPS or write latency is concerned, whereas our newest deployment (the Chicago deployment) keeps a sane throughput while really maximizing the IOPS potential of an SSD array in RAID10. These two arrays are each powerful in their own way, but it's clear that the Scranton node struggles when high IOPS are pushed at the underlying RAID cards.

    Okay, another update from me. As @GoodHosting mentioned, I chose to switch to the older node for a few reasons.
    We investigated my issue a bit further, and it turned out to be a mistake on my side (not attaching the vda devices correctly). A quick redeployment of the VPS with the block devices attached correctly finally fixed the issue.
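
    In case anyone makes the same mistake: it is worth checking from inside the guest that the disk really is attached via virtio. For example (generic commands, not the exact ones I ran):

    # a virtio-attached root disk shows up as /dev/vda
    ls -l /dev/vda*
    lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
    # if the disk appears only as /dev/sda or /dev/hda, it was attached as
    # emulated SCSI/IDE instead of virtio, which costs performance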

    I get the following results now:

    root@me:~# dd if=/dev/zero of=/tmp/test bs=64k count=16k
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.29652 s, 468 MB/s
    
    root@me:~# ioping -c10 / -s 64k
    65536 bytes from / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb): request=1 time=0.3 ms
    65536 bytes from / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb): request=2 time=0.4 ms
    65536 bytes from / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb): request=3 time=0.5 ms
    65536 bytes from / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb): request=4 time=0.3 ms
    65536 bytes from / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb): request=5 time=0.3 ms
    65536 bytes from / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb): request=6 time=0.4 ms
    65536 bytes from / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb): request=7 time=0.7 ms
    65536 bytes from / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb): request=8 time=0.4 ms
    65536 bytes from / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb): request=9 time=0.5 ms
    65536 bytes from / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb): request=10 time=0.4 ms
    
    --- / (ext4 /dev/disk/by-uuid/13ed6e17-3e0c-40fe-890c-6302bfeb91eb) ioping statistics ---
    10 requests completed in 9010.1 ms, 2414 iops, 150.9 mb/s
    min/avg/max/mdev = 0.3/0.4/0.7/0.1 ms
    

    The support is really awesome considering this is still an "unmanaged" plan.

    Thanks again,
    gehaxelt

  • @gehaxelt said:
    The support is really awesome considering this is still an "unmanaged" plan.

    I'm glad we were able to help you through these hoops! As you now understand all too well, OpenNebula isn't exactly "new user friendly": the interface does little to make things obvious to newcomers. I'm glad we could help you re-attach your disk with the correct VirtIO options and walk you through the OS installation to optimize your performance on this node.
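
    For reference, in an OpenNebula VM template the disk bus is selected by DEV_PREFIX in the DISK section. A minimal sketch with illustrative values (this is not gehaxelt's actual template):

    DISK = [
      IMAGE      = "debian-7",   # hypothetical image name
      DEV_PREFIX = "vd",         # "vd" selects the virtio bus; "sd"/"hd" give emulated disks
      DRIVER     = "raw" ]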

    Feel free to contact us again at any time if we can help you further.

    Thanked by 1 gehaxelt
  • Could you message me the datacenter info?

  • @mahfuz said:
    Could you message me the datacenter info?

    Yes, I will PM you the information now.

  • aus Member

    @GoodHosting said:
    with many MX500 SSDs in RAID10

    Never heard of this SSD...typo? Otherwise, good offer. Please PM me the dc / test ip.

  • @aus said:
    Never heard of this SSD...typo? Otherwise, good offer. Please PM me the dc / test ip.

    I do apologize; they are Crucial M5xx series SSDs, and the X was a typographical error. I'll PM you the datacenter and test IP information now. Thank you for pointing that out; I was replying quite hastily yesterday to keep up with all the orders and messages, and made a few typos here and there.

  • @GoodHosting, can you do monthly for the 256MB RAM VPS if purchased in bulk?

  • @theweblover007 said:
    @GoodHosting, can you do monthly for the 256MB RAM VPS if purchased in bulk?

    Hello @theweblover007,

    As explained in my other offer threads, the issue is that PayPal (sadly, the payment method many prefer) imposes an extortionately high fee on small transactions, which makes the cheaper monthly plans simply not make sense to accept: PayPal would get more of the money out of a payment than we would.
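
    To put rough numbers on it, using PayPal's standard $0.30 + 2.9% fee as an illustration (the rates we actually pay may differ):

    # fee = 0.30 + 0.029 * amount
    #   $0.88/mo (256MB yearly plan, prorated):  fee ~= $0.33  -> ~37% of the payment
    #   $7.00/mo (2GB plan):                     fee ~= $0.50  ->  ~7% of the payment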

    That being said, if you were to bulk order these plans, I am sure we could work something out.

  • And are they usable with HitLeap via WINE?

  • @theweblover007 said:
    And are they usable with HitLeap via WINE?

    Sounds like an excellent way to murder CPU and annihilate RAM. I'd think mowing lawns would bring you more income and would definitely be healthier. :)

  • @theweblover007 said:
    And are they usable with HitLeap via WINE?

    We do have customers who run HitLeap on Linux. I would assume they are using WINE, and I have not heard any complaints yet, so I would assume it is working. However, since I do not use the software myself, I cannot give any guarantee.


    @sleddog said:
    Sounds like an excellent way to murder CPU and annihilate RAM. I'd think mowing lawns would bring you more income and would definitely be healthier. :)

    We have the CPU and RAM available on the cluster, so it's up to the customer to choose how to use their resources. Although I quite agree with the latter part of your statement, it's not up to me to decide that :).

  • Is payment through Skrill accepted or not?

  • edited September 2014

    @sleddog
    That was just a question, because the OP allows that on some plans. Stupid.

  • @usama742 said:
    Is payment through Skrill accepted or not?

    Hello @usama742,

    We do not accept Skrill, as their account validation requirements are just plain stupid.

  • @GoodHosting are all of these packages at the Chicago location?

  • @catalystium said:
    GoodHosting are all of these packages at the Chicago location?

    Yes!

  • Now all your VPSes are down.

    Not only does the Jingling VPS bluescreen, the 2012 VPS has lost all its data and shows

    NO BOOTABLE DEVICE

    so....

  • @lewissue said:
    Now all your VPSes are down.

    Not only does the Jingling VPS bluescreen, the 2012 VPS has lost all its data and shows

    NO BOOTABLE DEVICE

    so....

    So you run away like this and tell your staff not to refund?

    Let me show some pics:

  • Maybe we can dispute this; I need confirmation from @GoodHosting before going to dispute. All my VPSes are down.

  • Guys, you don't respond to emails and your control panel gives a 500 Internal Server Error.
    What happened?

  • My VM's filesystem went read-only yesterday, and now everything seems toasted. Sort it out @GoodHosting, or it's refund time.

  • GoodHosting Member
    edited September 2014

    Hello @vpnarea / @sleddog,

    We are currently migrating customer data from a hosed RAID array to a new storage server; unfortunately, RAID array corruption forced us to take the SAN offline before the damage could propagate to the other SAN. We are working on promoting the HA Slave to be the new HA Master and syncing data back. This process will take some time, and we are working flat out to get this resolved as soon as possible.
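
    For the curious, in a DRBD-style replicated pair (named here as an analogy; this is an illustration, not a statement about our exact stack) the slave-to-master promotion looks roughly like:

    # on the surviving node, promote the replica to primary
    drbdadm primary r0          # "r0" is a placeholder resource name
    # once the failed node is rebuilt, bring it back as secondary and let it resync
    drbdadm secondary r0        # run on the rebuilt node
    drbdadm connect r0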

    --

    We took the control panel offline so that nobody would rashly delete their VM or perform other management actions that might result in data loss for their instance. We have placed the hypervisors in a locked state to prevent any data degradation while this is sorted out.

  • A notification about this issue over the past 2 days would have been nice. Any estimate of when this will be sorted out?

  • Don't buy from this guy or this site. They aren't going to answer you, and they aren't going to provide you good service. You are going to regret it if you buy.
    Am I right, Mr Albino Geek?
    You had better answer my ticket about why you shut my server down and haven't fixed it after 35 days.

  • FrankZ Veteran
    edited September 2014

    Having a good hosting provider with HA and SAN will help you to:
    A. Reduce downtime.
    B. Fail spectacularly.

  • @hosein4213 said:
    Don't buy from this guy or this site. They aren't going to answer you, and they aren't going to provide you good service. You are going to regret it if you buy.
    Am I right, Mr Albino Geek?
    You had better answer my ticket about why you shut my server down and haven't fixed it after 35 days.

    As per the "Review" guidelines I'd love a ticket # to prove you were ever a client. I don't see any tickets that have gone without reply for more than 24 hours at the moment, let alone the 35 days that you claim in this post.

  • FrankZ Veteran
    edited September 2014

    @GoodHosting said:
    I don't see any tickets that have gone without a reply for more than 24 hours at the moment

    @GoodHosting play fair; "Damon will update you soon." should not count.
