
LeaseWeb USA is Down


Comments

  • I have a Leaseweb DE VPS, and yes, disk IO has been abysmal in the past few hours/days. Hope they can do the migration quickly.

  • At the speed the VMs in Germany are running right now, it will probably take them 4 hours just to shut down the nodes.
    apt-get update && apt-get upgrade took 20 minutes.

    Thanked by: inthecloudblog
  • So Leaseweb finished their move to pure SSD in Germany:

    root@QSZJ001:/var/log# dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 41.1235 s, 26.1 MB/s
    root@QSZJ001:/var/log# dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 40.8227 s, 26.3 MB/s
    root@QSZJ001:/var/log# dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 45.9642 s, 23.4 MB/s
    root@QSZJ001:/var/log# dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
    ^C13829+0 records in
    13829+0 records out
    906297344 bytes (906 MB) copied, 203.143 s, 4.5 MB/s
    
    root@QSZJ001:~# ioping -RD .
    
    --- . (ext4 /dev/dm-0) ioping statistics ---
    3.76 k requests completed in 3.07 s, 1.23 k iops, 4.80 MiB/s
    min/avg/max/mdev = 365 us / 812 us / 217.6 ms / 3.65 ms
    

    Not great for SSD.
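
    For anyone who wants to reproduce these numbers, this is essentially the test above as a minimal sketch (sb-io-test is just a scratch file; run it from a directory on the disk under test). Note the difference between the two dd modes: conv=fdatasync writes the whole file and syncs once at the end, so it measures sustained throughput, while oflag=dsync syncs after every single write, so the 64k run exposes per-write latency - exactly what collapses on slow network storage.

    # throughput: write 1 GiB, sync once at the end
    dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
    # latency-bound: sync after every 64k write
    dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
    rm -f sb-io-test
    # ioping: -R = seek-rate test, -D = direct I/O (bypass the page cache)
    ioping -RD .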

    Thanked by: GStanley
  • 26.1 MB/s ought to be enough for everyone.
    - Bill Gates, 1993.

    Thanked by: inthecloudblog
  • Is this 'cloud' or just expensive VPS?

  • tr1cky Member
    edited October 2015

    @MarkTurner said:
    Is this 'cloud' or just expensive VPS?

    CloudStack KVM, Network Storage

  • Francisco Top Host, Host Rep, Veteran

    Storage is the bane of every cloud out there.

    Doesn't VPS.NET use some fat EMC setup now instead of the 'self-built' SANs? Back when they and OnApp were one and the same, they had tons of issues with LSIs, as well as ZFS-based storage not working as expected when scaled.

    @tr1cky - It's possible they're using 1gbit storage links instead of 10gbit?
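
    For scale, some back-of-the-envelope arithmetic (nothing confirmed about their setup): a storage link's payload ceiling is its bit rate divided by eight, before protocol overhead and before it's shared by every VM on the node.

    echo "$(( 1000 / 8 )) MB/s ceiling on a 1Gbit link"    # ~125 MB/s
    echo "$(( 10000 / 8 )) MB/s ceiling on a 10Gbit link"  # ~1250 MB/s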

    Francisco

  • Francisco said: @tr1cky - It's possible they're using 1gbit storage links instead of 10gbit?

    Probably. I wouldn't complain if I/O were at ~100 MB/s.

  • tr1cky said: CloudStack KVM, Network Storage

    But not redundant storage, by the sound of it, and the disk isn't performant either.

    For comparison, CloudStack/KVM with fully redundant storage:

    dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 2.07028 s, 519 MB/s

    dd if=/dev/zero of=sb-io-test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 2.43524 s, 441 MB/s

    dd if=/dev/zero of=sb-io-test bs=1M count=1k oflag=dsync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 23.9265 s, 44.9 MB/s

    dd if=/dev/zero of=sb-io-test bs=64k count=16k oflag=dsync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 231.678 s, 4.6 MB/s
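
    Worth noting: the 64k oflag=dsync figure is nearly identical on both platforms (~4.5 MB/s), because that test is bound by per-write sync latency rather than bandwidth. Rough conversion (my arithmetic, not output from either box):

    # 4.6 MB/s at 64 KiB per synced write is roughly 74 writes/s, i.e. ~13.6 ms per O_DSYNC write
    awk 'BEGIN { mbps = 4.6; bs = 64 / 1024; iops = mbps / bs; printf "%.0f IOPS, %.1f ms/write\n", iops, 1000 / iops }'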

  • @arpanjot said:
    did they change their NL storage to SSD and remove capping?

    SSD yes. Remove capping, no. I had to pay for overages. Sheet.

  • @sin said:
    My US VPS has been up for a long time, one of the most stable VPSes I have - they do plan on moving the US locations to the same platform they put the NL VPSes on.

    You mean SSDs? Where does that info come from?

  • The more SANs I see, the happier I am about choosing RAID10 + enterprise SATA drives.

    This would be a good example, or the pathetic 5 MB/s max out of CloudatCost (not a true provider, actually).

    But when LeaseWeb cannot make the shit work, it's time to fuck off back to local RAID storage, stop trying to re-invent the wheel, and stop packing TBs onto shit storage clusters. Do not get me started on DreamHost's Ceph. The fact that DreamHost made it and has been running it on their shared shit platform for years should say enough.

    @0xdragon said:
    All hail the Kidechire!

    This cannot be said enough. Your best cloud OpenStack bullshit solution is only as good as your SAN redundancy, or lack thereof.

    Thanked by: 0xdragon
  • GStanley said: Do not get me started on DreamHost's Ceph

    Ceph as a technology is very robust; I don't know about DreamHost's implementation. What problems did you experience?

    We started testing it about 5 years ago, when it was in a very, very experimental state. We have 2,880 OSDs running in a Ceph cluster right now; each OSD host has dual 10GE, and we can easily saturate a client's 10GE interface. You can't skimp on SSDs: we've deployed NVMe drives on each OSD host, which has helped performance no end.

    It works phenomenally well, even under heavy load. Obviously, the larger the cluster, the more robust it becomes (more can fail without it becoming service-affecting). You have to dimension it big, which is the scale Sage was planning for, and be prepared to spend 6 months tweaking it. But once it's running, it's a very nice, robust platform.

    The POSIX interface is not production-ready, but for RBD it's great.
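
    For anyone curious, a quick way to sanity-check a cluster and get a baseline RADOS write figure (a sketch assuming a throwaway pool named testpool - rados bench writes real objects, so don't point it at a production pool):

    ceph status                       # overall cluster health
    ceph osd tree                     # OSD layout per host
    rados bench -p testpool 60 write  # 60 s write benchmark; cleans up its test objects when done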

    SAN/NAS, when done right, works. When you cut corners, under-dimension, and under-serve, you just have a time bomb. The problem is that many companies get a server, stick some storage shelves on the back of a cheap SATA controller, and call it a SAN/NAS.

    This was my gripe with Backblaze's S3 product: it's completely under-dimensioned at all levels, starting with the use of contended SATA interfaces to the disks, under-performing SATA controllers, and insufficient CPU/memory.

  • zeitgeist Member
    edited October 2015

    @tr1cky said:
    So leaseweb finished their move to pure SSD in Germany:

    Did they? They said yesterday they'd start in batches. This is the kind of performance I had (and still have) when they experienced and acknowledged the issues. Before that I had ~80 MB/s. In other Leaseweb VPS locations that migrated to SSD, the performance is constant, capped at ~80 MB/s as well.

    dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 12.5352 s, 85.7 MB/s

    Are you sure they moved your VPS to SSD yet?

  • I remember in a previous thread someone said he asked them about the poor performance, and they replied that until all the migrations are complete, they are throttling raw disk speed.

  • Log:

    http://leasewebnoc.com/en/networkstatus/performance-degradation-on-our-public-virtual-server-platform-in-germany-1

    Currently, 16 Oct. 09:30 CEST, they are about to start the migration in batches. In other words, they are just about to begin.

  • sin Member

    inthecloudblog said: You mean SSDs? Where does that info come from?

    Their homepage, under the news section... Also, I tweeted the LeasewebUSA account and asked when they would be moving the US VPSes, and they said sometime before the end of the fourth quarter.

  • @zeitgeist said:
    Log:

    http://leasewebnoc.com/en/networkstatus/performance-degradation-on-our-public-virtual-server-platform-in-germany-1

    Currently, 16 Oct. 09:30 CEST, they are about to start the migration in batches. In other words, they are just about to begin.

    They fooled me. Yesterday's work was in fact only maintenance that didn't upgrade the storage to SSDs.

  • @tr1cky Leaseweb DE says they finalized the migration to SSD storage. Did you notice any IO improvement? My VPS is still rather slow (<30 MB/s).

  • madtbh Member
    edited October 2015

    I have 2 servers in NL and I was moved to the new platform after a few days of downtime on the 4th of September.

    Server #1:

    dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 13.0597 s, 82.2 MB/s

    Server #2:

    dd if=/dev/zero of=sb-io-test bs=1M count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB) copied, 14.8413 s, 72.3 MB/s

  • MarkTurner Member
    edited October 2015

    madtbh said: 82.2 MB/s

    (sarcasm) That's super fast for a 'NetApp' SSD platform.

  • MarkTurner said: This was my gripe with Backblaze's S3 product: it's completely under-dimensioned at all levels, starting with the use of contended SATA interfaces to the disks, under-performing SATA controllers, and insufficient CPU/memory.

    Much cheaper, however.

    Thanked by: alexvolk
  • William said: Much cheaper however.

    There are plenty of cheap, performant S3 services. Backblaze should stick to their sluggish backup rather than trying to leverage the same infrastructure for services where performance is expected.

  • William Member
    edited October 2015

    Yes, at $5/TB? Link?

    Thanked by: alexvolk
  • Google unlimited storage is still working great; they have no problem with me using 5 TB at the moment for $8/m, and it's very fast.

  • I have a box in Virginia (yeah, what the thread is actually about...)

  • William said: Yes, at $5/TB? Link?

    Doesn't matter if it's $1/TB; it's still unmerchantable. Just because it's cheap doesn't make a complete lack of reasonable performance acceptable.

    I wouldn't expect gigabit-level performance for that price, but 'reasonable' these days seems to equate to 50-100 Mbps. My testing showed a tenth of that at peak.
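
    If you want to repeat that kind of test, a minimal sketch with curl (assuming you have the URL of a reasonably large test object on the service; run it at different times of day to catch peak behaviour):

    # prints the average download speed in bytes/s
    curl -s -o /dev/null -w '%{speed_download} bytes/s\n' 'https://example.com/path/to/large-test-object'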

    @tr1cky: Google storage is OK, but it's not really an object store per se; it's more like a file store with a proprietary interface.

  • MarkTurner Member
    edited October 2015

    inthecloudblog said: I have a box in Virginia (yeah, what the thread is actually about...)

    How's the disk performance on that VM at the moment?

    The point I was making before is that if 82.2 MB/s is the result of the 'upgrade' to NetApp SSD storage, then there is a problem. We're running large NetApp flash and NetApp HDD storage, and performance on the flash products is stellar even under very heavy load.

    I loaded up a XenServer VM using NetApp flash and I am getting >1.5 GB/s. That is being rate-limited by the 10GE interface on the server I am using.

  • @tr1cky said:
    Google unlimited storage is still working great; they have no problem with me using 5 TB at the moment for $8/m, and it's very fast.

    Are you the only user? Wasn't it also $10 per month?

  • Frecyboy Member
    edited October 2015

    Issam2204 said: You are the only user?

    They only made it for tr1cky, no other users!

    No, seriously, I'll try it out soon too; it looks nice.
