fibervolt

fly Member
edited July 2012 in General

they upgraded to SSDs.

their network has always been fast (from Linode Dallas):

2012-07-27 20:14:26 (15.6 MB/s) - `/dev/null' saved [104857600/104857600]

disk is now super fast:

1073741824 bytes (1.1 GB) copied, 5.3991 s, 199 MB/s

+1. too bad it's openvz
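
(The exact commands aren't shown, but output in this form typically comes from something like the sketch below. The test-file URL and dd parameters are my assumption; the network test above actually pulled a 100MB file from Linode Dallas.)

wget -O /dev/null http://cachefly.cachefly.net/100mb.test    # network: fetch a 100 MB test file and discard it
dd if=/dev/zero of=testfile bs=64k count=16k conv=fdatasync  # disk: write 1 GB sequentially, flushed to disk
rm -f testfile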

Comments

  • Looks great - I like their website.

  • @fly said: 1073741824 bytes (1.1 GB) copied, 5.3991 s, 199 MB/s

    Umm, this is super fast?

    Thanked by 1: HalfEatenPie
  • AlexBarakov Patron Provider, Veteran

    This can't be SSD... RAID 10 enterprise SATA disks.

  • Taz Member

    Either it is SATA RAID 10, or if it is SSD, someone is abusing it or it is badly oversold.

  • Damian Member

    @LiquidHost said: This can't be SSD... RAID 10 enterprise SATA disks.

    +1.

    I get 182MB/s on a somewhat heavily loaded server with 7.2k enterprise SATA drives in RAID 5.

  • fly Member

    /me shrugs

    it's fast.

  • jar Patron Provider, Top Host, Veteran

    199MB/s is only slow if you like to stare at the numbers. That's enough to accomplish any task suitable for shared resources.

    Thanked by 1: Amitz
  • TIL, anything below 100MB/s is considered slow and oversold by LET standards

  • AlexBarakov Patron Provider, Veteran

    @cosmicgate said: TIL, anything below 100MB/s is considered slow and oversold by LET standards

    Honestly, if I get 60MB/s+ I am completely satisfied, as long as it is stable. 60MB/s in my eyes is enough for everything, and I have never had a problem working with slower I/O. Actually, my main site is hosted on an environment that does around 60-70MB/s, along with other production sites, and I think it works fairly well.

    As not all of my nodes are in RAID 10, we see around 70-100MB/s on some of them, and none of our users have actually complained, so I cannot agree with those standards at all.

  • Taz Member

    50-60MB/s constant is perfect for average users.

    Thanked by 1: TheHackBox
  • JoeMerit Veteran
    edited July 2012

    Here are some results from the 128Volts plan.

    CPU model : Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
    Number of cores : 1
    CPU frequency : 3200.000 MHz
    Total amount of ram : 256 MB
    Total amount of swap : 0 MB
    Download speed from CacheFly: 55.2MB/s
    Download speed from Linode, Atlanta GA: 9.84MB/s
    Download speed from Linode, Dallas, TX: 10.5MB/s
    Download speed from Linode, Tokyo, JP: 4.47MB/s
    Download speed from Linode, London, UK: 10.5MB/s
    Download speed from LeaseWeb, Haarlem, NL: 19.2MB/s
    Download speed from Softlayer, Singapore: 6.75MB/s
    Download speed from Softlayer, Seattle, WA: 15.3MB/s
    Download speed from Softlayer, San Jose, CA: 33.5MB/s
    Download speed from Softlayer, Washington, DC: 33.8MB/s
    I/O speed : 304 MB/s

    edit: I see that sometimes when I run this it says my CPU frequency is 1600MHz and the I/O speed is down to around 190MB/s, so it looks like there is some CPU throttling or something.
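
    (A quick way to check whether frequency scaling explains the 1600MHz readings; a minimal sketch assuming standard /proc and sysfs paths, which inside an OpenVZ container reflect the host:)

    grep MHz /proc/cpuinfo                                                  # current clock of each core
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null   # active governor, if exposed
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq 2>/dev/null   # current frequency in kHz, if exposed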

  • prometeus Member, Host Rep

    @cosmicgate said: TIL, anything below 100MB/s is considered slow and oversold by LET standards

    Please don't be so extreme ;)

    The sequential speed you get with dd is only one side of good I/O. Experience shows that average use doesn't need more than 30-40MB/s of sequential speed to behave fine, provided latency is constant and random read/write performance is good.

    Also, most sequential speed tests are influenced by various layers of cache, so by asking for ever-higher sequential speeds you are forcing providers to tune systems to respond better to certain tests but worse in real-life usage.

    Let me report a real experience. Some of you know we have some nodes which use one of our old Fibre Channel SANs as storage. The maximum sequential speed I can get from the SAN is 90-95MB/s; by your statement this would be substandard.
    BUT if you put 10-15 heavy torrenters on this node (as it actually has), sequential speed stays in the 70-90MB/s range AND, more importantly, latency stays low and constant, and other users/applications on the node run unaffected.

    The same number of torrenters on a traditional RAID array with a dd of 200MB/s makes latency suffer much more, with spikes of >100ms... So which do you think is better?

    [root@pm12 template]# dd if=/dev/zero of=testfilex bs=64k count=16k conv=fdatasync     
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1,1 GB) copied, 11,842 s, 90,7 MB/s
    
    [root@pm12 template]# ioping -c 20 .
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=1 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=2 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=3 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=4 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=5 time=0.4 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=6 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=7 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=8 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=9 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=10 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=11 time=0.4 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=12 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=13 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=14 time=0.4 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=15 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=16 time=0.4 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=17 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=18 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=19 time=0.3 ms
    4096 bytes from . (ext4 /dev/mapper/vps1-vz): request=20 time=0.3 ms
    
    --- . (ext4 /dev/mapper/vps1-vz) ioping statistics ---
    20 requests completed in 19009.0 ms, 3007 iops, 11.7 mb/s
    min/avg/max/mdev = 0.3/0.3/0.4/0.0 ms
    

    :P
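
    (To complement the point about caches, a minimal sketch of two checks that are harder to inflate: a sequential write with direct I/O and ioping's seek-rate test. These commands are illustrative, not the ones used for the output above.)

    dd if=/dev/zero of=testfilex bs=64k count=16k oflag=direct   # sequential write bypassing the page cache
    rm -f testfilex
    ioping -R .                                                  # random-read seek rate test (~3 seconds)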

  • Victor Member

    It's SATA, not SSD; sorry for the confusion. I am assuming you are on the Chicago node. After the upgrade, our DC stuffed up the network port, so it's now a 100Mbps port instead of the gigabit port it was before. We will get it back to gigabit soon and you should be seeing speeds of ~60-70+MB/s.
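
    (Rough arithmetic for context: 100Mbps is about 12.5MB/s of theoretical throughput, while a gigabit port tops out around 125MB/s, so ~60-70MB/s is only achievable once the port is back on gigabit.)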

  • fly Member

    this is already pretty good... one way you can improve is by getting some Xen in the mix

    Thanked by 1: TheHackBox
  • Everything should be working okay now :)

    We're looking into different types of virtualisation but we need to gauge interest first.
