Samsung 840 EVO-Series - Page 2


Comments

  • jbiloh Administrator, Veteran

    We haven't tried. Some customers have requested them for software raid and they work fine though.

  • Oliver Member, Host Rep

    Good to hear; I think this is valuable information for some smaller providers who have less flexibility to experiment with different hardware setups, for whatever reason...

  • Maounique Host Rep, Veteran

    I am not sure it is much better to have SSDs in HW RAID.
    Especially RAID 5 or other non-1/0/10 levels, where extra writes are needed for parity distribution.
    True, a good card with lots of RAM can improve things, but let's face it: the biggest feature of RAID, besides redundancy, is increasing the number of IOPS.
    SSDs don't really need that; a software RAID will do just fine, especially when no parity needs to be calculated.
    So, RAID 10 in software on an E3 with 4 small drives will not do much worse than a hardware one. Besides, software RAIDs are more resilient and easier to port, and they don't have the SPOF of a RAID card, which can corrupt data in certain cases even if you have redundancy.
    With mechanical drives, RAID cards do much more than just offload parity calculation (where needed): they greatly increase IOPS by grouping writes together, serving small repeated reads from cache, and monitoring the disks. With SSDs, though, IOPS are there galore, reads and writes can happen almost simultaneously, there is no need to group writes, and the OS caches well enough if memory is available.
    In huge servers, yes, RAID controllers are needed even for SSDs. But there is little need in day-to-day computing for huge servers full of SSDs; if you need that, you are likely to build them modular, like in science labs, with each unit having its own CPU, storage and RAM.

  • sman Member
    edited October 2013

    Works great on my WinBloz desktop. That Magician software is pretty slick. Wouldn't consider these drives for anything server-related, though; I only use non-SandForce Intel drives for that.

  • FallenAngel

    I've tested Samsung EVOs with an LSI 2108 in RAID 6; one of the drives got FAIL status on the third day. Changed it for a new one and rebuilt.

    I took the drive home and tested it: Samsung Magician says it's perfect, nothing in SMART, and the drive is fully working.

    There is no firmware update available; the latest comes from the factory.

    Don't know what to think about it. Will continue testing with the server.

    Will keep you updated.

  • I bought a 120GB in mid October and there was a firmware update waiting for me.

    Only one complaint about the Magician software: I wish it would let me enter an in-service date, so that in addition to the Total Bytes Written it could also report the Average Bytes Written per Day. AnandTech estimated the lifespan at 7.91 years at 50GB of writes per day, so I like comparing my average daily usage to that (probably arbitrary) number to see whether I'm higher or lower.
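    Until Magician grows that feature, the arithmetic is easy to do by hand. A minimal sketch in Python (the in-service date and bytes-written figure here are made-up illustrations, not real drive data):

    ```python
    from datetime import date

    def avg_bytes_per_day(total_bytes_written, in_service, today):
        """Average host writes per day since the drive went into service."""
        days = (today - in_service).days
        return total_bytes_written / days

    # hypothetical example: 1.2 TB written since mid-October
    avg = avg_bytes_per_day(1.2e12, date(2013, 10, 15), date(2013, 12, 1))
    print(f"{avg / 1e9:.1f} GB/day")  # compare against AnandTech's 50 GB/day assumption
    ```

    Anything well under 50 GB/day suggests the drive will outlast that 7.91-year estimate.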

  • netomx Moderator, Veteran

    @FallenAngel said:
    There is no firmware update available; the latest comes from the factory.

    There's no firmware update for the 840 Pro, right? I uninstalled the software after testing SATA3.

  • George_Fusioned
    edited November 2013

    @Maounique said:
    Especially RAID 5 or other non-1/0/10 levels, where extra writes are needed for parity distribution.

    @FallenAngel said:
    I've tested Samsung EVOs with an LSI 2108 in RAID 6; one of the drives got FAIL status on the third day.

    Never run RAID 5/6 on SSDs. Each logical write turns into multiple physical writes across the drives (to distribute the parity information), and that wears the drives out.
    http://www.infostor.com/disk-arrays/skyera-raid-5-kills-ssd-arrays.html

    @Maounique said:
    So, raid 10 in software on an E3 with 4 small drives

    Unlike E3v3, older E3 boards only support 2x SATA3 ports, which is another good reason to use a HW-RAID controller.

  • George_Fusioned said: Never run RAID 5/6 on SSDs. It's writing multiple times per write on each drive (to distribute parity information) and it's killing the drives. http://www.infostor.com/disk-arrays/skyera-raid-5-kills-ssd-arrays.html

    Has anyone who quotes that article actually read the paper "Stochastic Analysis on RAID Reliability for Solid-State Drives"? It has no relation to the FUD about running RAID 5 on SSDs.

    The paper models the wear levelling and that's that. They validated it against primitive 32GB RAID setups just to verify that the formulas work at the underlying SSD level. Real-world data gets more complicated given the differences in drive sizes, sourcing from staggered batches, and destaging controllers that group writes.

    It's still a good theoretical paper on the worst-case performance curve of SSD RAIDs, but hardly representative of a production environment.

  • George_Fusioned said: Never run RAID 5/6 on SSDs

    The wear argument certainly has an intuitive explanation, but I just finished speaking with an IBM Power product specialist, and the only supported SSD RAID level for a midrange ($30k) IBM Power 720 server is RAID 5; RAID 10 and RAID 1 are unsupported on this system. IBM usually does extensive testing on this product line, so I believe they found RAID 5 on SSDs reliable enough for a mission-critical database server. I specifically asked to escalate the question to the next technical level, and they confirmed that RAID 5 on SSDs is totally fine.

    The wear argument comes in two forms. The first is the underlying erase-write cycle of the SSD cells. That is always going to be an issue when you use an SSD, regardless of RAID type; GC and over-provisioning are meant to address it anyway. The second is the extra writes induced by RAID 5/6 parity.

    And it's actually a bit flawed. It's only an issue if we're comparing RAID 0 vs RAID 5 for a given number of disks. RAID 5/6 write amplification is bounded by [N/(N-P), P+1], where N is the number of disks and P is the number of parity blocks per stripe: N/(N-P) for full-stripe writes, P+1 for small read-modify-write updates. Comparing full-stripe write amplification for a 4-disk setup:

    • RAID0: 1
    • RAID1: 2
    • RAID5: 1.33
    • RAID6: 2
    • RAID10: 2

    I don't think anyone here actually runs a production server on RAID 0. And basing the choice of RAID 10 over RAID 5 on wear and tear is entirely wrong. Choose RAID 10 for faster rebuilds, for avoiding controller overhead/issues, and for all the other reasons we choose it in non-SSD cases. But if those bases are covered by throwing money at it, then from a wear perspective RAID 5 is actually the best of them all.
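    For a quick sanity check, the full-stripe figures in the list above can be reproduced with a short sketch (this assumes the N/(N-P) full-stripe model from the post; the mirror levels are simply fixed at 2):

    ```python
    def full_stripe_write_amp(n_disks, parity_blocks):
        """Best-case (full-stripe) write amplification for parity RAID:
        N physical blocks are written for every N - P blocks of user data."""
        return n_disks / (n_disks - parity_blocks)

    # 4-disk comparison, matching the list above
    levels = {
        "RAID0": full_stripe_write_amp(4, 0),   # pure striping, no redundancy
        "RAID1": 2.0,                           # every write is mirrored
        "RAID5": full_stripe_write_amp(4, 1),   # one parity block per stripe
        "RAID6": full_stripe_write_amp(4, 2),   # two parity blocks per stripe
        "RAID10": 2.0,                          # striped mirrors
    }
    for level, amp in levels.items():
        print(f"{level}: {amp:.2f}")
    ```

    Note these are best-case numbers; small random writes on RAID 5/6 hit the read-modify-write path and land at the upper bound instead.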

  • jbiloh Administrator, Veteran

    Any updates on first-hand experience with the EVOs in a software or hardware RAID environment? I am curious to hear.

  • Unfortunately, LSI does not want to support Samsung because LSI purchased SandForce.

    LSI is being acquired by Avago Technologies for $6.6 billion in cash, according to the LSI website. This could change things quite a lot. Hopefully Avago Technologies understands that we want LSI to support ALL SSD technologies, not only the SandForce controller.

    I have 32 Samsung EVO 120GB drives, 4 Intel expanders and 2 LSI 9271-8i controllers just waiting to rock. Sadly, I'm waiting for them to release a firmware update for the 9271 and am tempted to just go with a different company such as Areca... Although I love LSI, I feel they are heading in a bad direction if they continue along this path to the dark side.

    Best wishes,

    Deep Wolf

  • jbiloh Administrator, Veteran

    @deepwolf said:
    I have 32 Samsung EVO 120 GB, 4 Intel expanders and 2 LSI 9271-8i just waiting to rock.

    What happens now if you just try to use the EVOs with your 9271?

  • @jbiloh said:
    What happens now if you just try to use the EVOs with your 9271?

    The controller hangs for a long time, about 3 minutes; once it does continue, it shows "0" drives detected.

    After booting into Windows, if they are connected to the expanders, they log a "Bad PHY" error in the MegaRAID logs.

    I called LSI and their response was basically "We don't plan to support this drive, but wait a month and see what happens..." (that was about 2 months ago) and "What if you just purchase different SSDs?"...

    I also tested these EVOs with an Areca 1882ix, and they are not compatible with that card either.

    So I am just going to sell these EVOs and purchase Mushkin 120s with the unthrottled firmware instead. :-/ Oh well...

  • concerto49 Member
    edited January 2014

    @deepwolf said:
    So, I am just going to sell these EVO and purchase Mushkin 120 unthrottled firmware instead. :-/ Oh well....

    Mushkins are horrible, by the way. 120GB EVOs are also horrible; the EVO only works well from 250GB and above. I'll say no more to that one.

  • PremiumHost Member
    edited January 2014

    said: Sequential Read Speed 540 MB/s / Sequential Write Speed 520 MB/s / Random Read Speed 98K IOPS / Random Write Speed 90K IOPS

    Something is really wrong here. SSD speed should be much faster.

  • @concerto49 said:
    Mushkins are horrible by the way. 120GB EVOs are also horrible. EVOs only work from 250GB and above. I'll say no more to that 1.

    Granted, I'm not using it on a busy VPS node, but I just built a new desktop with a 128GB Samsung EVO as the OS drive and I'm extremely pleased. I know the 256s are a little bit faster, but for <$100 I'm very happy and my desktop flies. The SSD is by far the best upgrade I've ever made, so I can't say the 128s are "horrible".

    AMD A10-5800K APU with 8GB of DDR3 1866 and 128GB EVO SSD, great desktop for work for under $350.

  • @nunim said:

    The 120GB EVO is trash; the 250GB is way faster. I just wouldn't get the EVO in the 120GB/128GB class. There are better alternatives at similar prices.
