New CacheCade enabled KVM Node Read Cache Result

FRCorey Member
edited August 2012 in General

Still fooling around with the trial key, but the regular key will arrive tomorrow along with the CPUs I was supposed to get. I ended up with E5-2609s instead of 2620s; the 2620s are going in tomorrow.

Might want to get a towel.

Results of CacheCade read cache with 4x Intel 520 Gen2 6Gb/s SSDs in RAID 0, accelerating a 4x 1TB 7200RPM SAS 6Gb/s array. The test was a 2GB file being read and written out to /dev/null; if you write it back out to a file you're just going to get bottlenecked by the write speeds. Will enable RW caching and redo the cache set as a RAID 10 as well. LSI 9265-8i with CacheVault.

32768+0 records in
32768+0 records out
2147483648 bytes (2.1 GB) copied, 0.290297 s, 7.4 GB/s
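
The dd command itself isn't shown above, but from the record count and total (32768 x 64 KiB = 2 GiB) it was presumably something along these lines; the test file path is a placeholder:

# presumed invocation for the output above (file path is a placeholder)
dd if=/path/to/2gb-testfile of=/dev/null bs=64k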


Comments

  • rds100 Member
    edited August 2012

    @FRCorey said: 32768+0 records in

    32768+0 records out
    2147483648 bytes (2.1 GB) copied, 0.290297 s, 7.4 GB/s

    Meh. Try echo 1 > /proc/sys/vm/drop_caches
    then redo the test.

    Or test with a file that is larger than the node's memory.
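
    In script form that procedure looks roughly like this (a minimal sketch; 'test' is a placeholder file name and the cache drop needs root):

    sync                                  # flush dirty pages first
    echo 1 > /proc/sys/vm/drop_caches     # drop the page cache
    dd if=test of=/dev/null bs=64k        # now the read comes from the array, not RAM

    # or bypass the page cache entirely with direct I/O
    dd if=test of=/dev/null bs=1M iflag=direct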

  • john Member
    edited August 2012

    If we assume 500 MB/s per Intel 520 and that RAID 0 achieves perfect scaling, the max is 2 GB/s. You're likely hitting RAM.

  • Maounique Host Rep, Veteran
    edited August 2012

    I don't see any hard drive of any technology (well, except a ramdrive, but that is not a hard drive) that can achieve that. Even a ramdrive would be limited by the SATA interface; SATA 3 caps at 6 Gb/s, roughly 600 MB/s of usable bandwidth.
    M

  • jh Member

    @john said: If we assume 500 MB/s per Intel 520 and that RAID 0 achieves perfect scaling, the max is 2 GB/s. You're likely hitting RAM.

    As far as I'm aware, most types of RAM don't benchmark at 7.4GB/s, and are probably more like half of that.

  • Maounique Host Rep, Veteran

    @jhadley said: As far as I'm aware, most types of RAM don't benchmark at 7.4GB/s, and are probably more like half of that.

    So he has very good RAM because I will never believe that is a 2 GB CPU cache :P
    M

  • rds100 Member
    edited August 2012

    @Maounique said: So he has very good RAM because I will never believe that is a 2 GB CPU cache :P

    Not really:

    
    [root@monitor4 ~]# dd if=test of=/dev/null bs=64k
    32768+0 records in
    32768+0 records out
    2147483648 bytes (2.1 GB) copied, 0.273884 seconds, 7.8 GB/s

    This is on a Xeon E3-1230V2 with DDR3-1600 ECC dual-channel RAM, so quite a normal result for reading from RAM.

    Edit:
    If you repeat the test several times it gets even better:


    [root@monitor4 ~]# dd if=test of=/dev/null bs=64k
    32768+0 records in
    32768+0 records out
    2147483648 bytes (2.1 GB) copied, 0.241678 seconds, 8.9 GB/s
  • Card cache is 1GB, so I ran with a 2GB file. If you do the normal drive test of the HDDs you get the expected result of around 400MB/s. However, I'm not looking for raw speed; the idea is that the cache speeds up the most requested data on the node without having to spend gobs of money for less capacity per node, as you would by going all SSD. Also, the cache drives will be rebuilt into RAID 10 for redundancy purposes, and I want to play with enabling write cache as well.
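
    (For reference, the kind of normal drive test that produces the ~400MB/s figure would be something like the following; the device name is a placeholder for the backing array.)

    hdparm -t /dev/sda                                          # raw sequential read timing of the array
    dd if=/dev/sda of=/dev/null bs=1M count=2048 iflag=direct   # 2 GiB direct read, bypassing the page cache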

  • AlexBarakov Patron Provider, Veteran

    @FRCorey said: I want to play with enabling write cache as well.

    And then...... The power goes down.

  • @Alex_LiquidHost said: The power goes down.

    I enabled the trial of CacheCade today and write performance was horrible; the regular array is way faster than a pair of RAID 0 SSDs.

    That 7.4GB/s is not right; I can achieve that with the cache disabled, it is RAM. I am glad you made me look for the trial key, and my take from it is that this is not worth $225. flashcache FTW, and this 9266-4i card is bitchin'!

  • AlexBarakov Patron Provider, Veteran

    I haven't yet had the chance to try SSD caching. What SSDs are you using for this?

  • @Alex_LiquidHost said: What SSDs are you using for this?

    @FRCorey said: Results of CacheCade read cache with 4x Intel 520 Gen2 6Gb/s SSDs in RAID 0, accelerating a 4x 1TB 7200RPM SAS 6Gb/s array

    Looks like @FRcorey used Intel 520s; I used Samsung 830s.

  • @miTgiB what cache hit percent are you seeing (dmsetup status)?

  • miTgiB Member
    edited August 2012

    @rds100 said: cache hit percent

    Here is a full node where I gathered up all the abusive VPSes in Los Angeles and moved them. These are not abusive users; it's just that their VPSes would gain the most benefit from the cache.

    [root@e3la17 ~]# dmsetup status cachessd
    0 3803815936 flashcache stats: 
            reads(937631231), writes(3789667384)
            read hits(709236577), read hit percent(75)
            replacement(170902634), write replacement(0)
            invalidates(1)
            pending enqueues(0), pending inval(0)
            no room(0)
            disk reads(228472256), disk writes(3789946613) ssd reads(709235880) ssd writes(228472256)
            uncached reads(1), uncached writes(3789946552), uncached IO requeue(0)
            uncached sequential reads(0), uncached sequential writes(0)
            pid_adds(0), pid_dels(0), pid_drops(0) pid_expiry(0)
    [root@e3la17 ~]# uptime
     14:41:23 up 41 days, 19:24,  1 user,  load average: 0.88, 0.52, 0.41
    
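    The hit percentage flashcache reports is just read hits divided by total reads; a quick one-liner sketch to recompute it from the dmsetup output above (same cachessd device name):

    dmsetup status cachessd | awk -F'[()]' \
        '/reads\(/ && !/disk|ssd|uncached/ {r=$2} /read hits\(/ {h=$2} END {printf "read hit %% = %.1f\n", 100*h/r}'
    # 709236577 / 937631231 = ~75.6%, matching the read hit percent(75) line above
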
  • @miTgiB said: read hit percent(75)

    Wow! I can see why it helps. I guess I'll be rolling it too :)

  • @rds100 said: I guess I'll be rolling it too :)

    It is such a small cost, and by being creative, shoving the SSD internally and mounting with velcro, you can give performance near pure SSD with the large allotment of space of a traditional LEB. Yes, there are some tweaks to get a little more mileage out of this setup, but keeping $225 in my pocket by not buying CacheCade is well worth the small added time spent setting up flashcache. I don't see anything wrong with paying for CacheCade, but I think all the providers around here who are in it for the long haul are smart enough to save that money.

    The only downside is purely marketing right now: those providing pure SSD nodes will win, for now. Give it a few months and people will see the foolishness of paying a premium for small slivers of disk space.

  • Btw what made you choose writearound mode and not writethrough? (for writeback it is obvious why it is excluded - not safe).

  • @rds100 said: what made you choose writearound mode

    While the above node was the first I tried this on, it is also the last 4-disk array I've built, and the larger arrays can write much faster than the SSD. With the load the SSD is taking off the array for reads, it just felt (w)right.
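
    For anyone wanting to reproduce this with flashcache, the mode is fixed when the cache device is created, roughly like this (a sketch; device names are placeholders, 'around' is the writearound mode discussed here, with 'thru' and 'back' being the other two):

    # create a writearound flashcache device from an SSD partition and the backing array
    flashcache_create -p around cachessd /dev/sdb1 /dev/sda
    # -p thru = writethrough, -p back = writeback (risky without battery/flash-backed protection)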

  • prometeus Member, Host Rep

    @miTgiB said: While the above node was the first I tried this on, it is also the last 4-disk array I've built, and the larger arrays can write much faster than the SSD. With the load the SSD is taking off the array for reads, it just felt (w)right.

    What ratio of cache GB per TB of user space did you choose?

  • Maounique Host Rep, Veteran

    @miTgiB said: and mounting with velcro

    Reminds me of someone who is fitting 3 3.5" HDDs inside a 1U case with 2 caddies :P
    M

  • @prometeus said: What ratio of cache GB per TB of user space did you choose?

    I never put that much thought into it. I see great results from a 64GB SSD of which I use 50GB for the cache, and I rarely see 1TB of user data on an OpenVZ node. I expect the E5 nodes will see more space used, but as of now I am not seeing it.
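
    (As a rough sketch of that layout, assuming the cache SSD is /dev/sdb: carve out a single 50GB partition for flashcache and leave the rest of the drive unallocated, which also gives the SSD some extra over-provisioning headroom.)

    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary 1MiB 50GiB
    # /dev/sdb1 then becomes the SSD device handed to flashcache_create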

  • @miTgiB said: I never put that much thought into it. I see great results from a 64GB SSD of which I use 50GB for the cache, and I rarely see 1TB of user data on an OpenVZ node.

    64GB SSDs are hopelessly slow. Seriously. Even 128GB drives have issues. Get a Plextor M5 Pro or an OCZ Vertex 4 if going for 128GB; they're the fastest right now and not too expensive. Yes, although they advertise blazing speeds, this is not true except on larger-size SSDs.

    Otherwise, for most workloads I'd go for the Samsung 830 at 256GB and above. Intel 520 and anything Sandforce related I would not trust on a production server right now.

    Also, if you are running Ivy Bridge and a 7 series motherboard, be sure to check whether Intel has released their latest drivers for Linux; they're out on Windows. Those enable TRIM on RAID 0 for SSDs and can improve performance quite a lot.

  • miTgiB Member
    edited August 2012

    @concerto49 said: Yes, although they advertise blazing speeds, this is not true except on larger-size SSDs.

    Throwing money at something is the easy way, but the Samsung 830s, even the 64GB and 128GB models, have faster reads than a 12-disk RAID 10 array. And OCZ? How did you get a working one? I didn't know they existed.

    We can quote this and that benchmark all day long; at the end of the day, it is real-world use that matters to me, and not much else. Stability is always my #1 goal, always has been, always will be. Things break, I am realistic, but I would rather sacrifice performance for stability any day.

    @concerto49 said: Also, if you are running Ivy Bridge and a 7 series motherboard, be sure to check whether Intel has released their latest drivers for Linux.

    I use this motherboard. It uses the Intel C602 chipset, so no idea if that is 7 series or not, but the day I buy an Intel board is the day you should run fast and far from my service.

  • concerto49 Member
    edited August 2012

    @miTgiB said: And OCZ? How did you get a working one? I didn't know they existed.

    They are one of the biggest SSD makers. They probably have more retail sales than Samsung and Intel; Samsung just gets enough OEM sales from Sony/Apple/etc. OCZ do make enterprise stuff too.

    @miTgiB said: It uses the Intel C602 chipset

    Patsburg is 7 series. Anything Ivy Bridge is. It's just that you can run a Sandy Bridge motherboard with an Ivy Bridge CPU, so I had to be explicit.

    @miTgiB said: Stability is always my #1 goal, always has been, always will be.

    Stay away from anything Sandforce related then. It's a PAIN.

    @miTgiB said: Throwing money at something is the easy way, but the Samsung 830s, even the 64GB and 128GB models, have faster reads than a 12-disk RAID 10 array.

    Check out the Plextor M5 / M5 Pro :) You'll be surprised. Edit: the M3 series will do and will cut cost; you don't need Toggle NAND.

  • Oh, cut the crap about Sandforce, please. Yes, they've had a lot of bugs, but there has been enough time (years) that the bugs in the firmware should have been worked out by now. Besides, Intel uses their own firmware on their Sandforce SSDs.

  • concerto49 Member
    edited August 2012

    @rds100 said: There has been enough time (years) that the bugs in the firmware should have been worked out by now.

    Should have, but after their recent TRIM saga when the SanDisk Extreme came out, and their new 5 series firmware where TRIM was suddenly disabled for everyone, I wouldn't trust them. They had enough fun with the BSODs already.

    @rds100 said: Besides, Intel uses their own firmware on their Sandforce SSDs.

    And they still suffer from most of the issues. There is a base firmware; Intel just makes mods for known issues that Sandforce is too lazy to fix.

    We're talking about production nodes that run lots of clients' VPSes here. I wouldn't want any corruption, slowdowns, or reboots.

  • @concerto49 said: OCZ do make enterprise stuff too.

    Their failure rates on recent SSD models are still astronomical. I will not be looking at OCZ for many years to come, even their enterprise line. I went with the Samsung for their low reported failure rate while being a tad cheaper than Intel.

  • @miTgiB said: I went with the Samsung for their low reported failure rate while being a tad cheaper than Intel.

    +1. My choice for now too. The Samsung 830 is faster than the Intel 520 anyway, in real-world usage that is.

  • No need to use velcro; if you have spare slots, just get a PCIe SSD. They're just as cheap and slightly faster, as they have a better position on the motherboard compared to the RAID card.

  • @FRCorey said: just get a PCIe SSD

    Not a bad idea, but I'd like a working one, and I only see OCZ; the only other option is Intel for $4000.

    I bought an OCZ once and had to RMA it 4 times and still never had a working one, so I tossed it in the trash, as the postage for all those RMA shipments was killing me.
