VPS Providers: How are SSDs treating you?


frango Member
edited December 2012 in Providers

First of all, happy holidays, merry Christmas, holiday or non-holiday to all of you, respectively!

Re question:

I did quite a bit of research on SSDs before upgrading one VPS setup to use SSD, and one of the things I stumbled upon quite a few times is SSD reliability and durability. Stuff like brand new SSDs randomly dying, or SSD write cycles hampering performance after a lot of writes.

I am crazy happy with the performance of it; tracking and analytics reporting from a couple of web apps I run are now super fast. It's server porn to me right now.

My question arises because right now I am telling everyone to go SSD if they're looking for a VPS, and I wanted to see how SSDs actually stack up internally for the few providers out there.

Do they fail more often? Are write cycles at all important to you?

Have a great day,



  • Nick_A Member, Top Host, Host Rep

    I'm not sure what you mean by "are write cycles important." Would you mind explaining further?

    The right SSDs seem to be as reliable as, if not more reliable than, your standard HDD.

  • Wooot! Another Francisco.

  • @Nick_A Nice seeing RamNode around here, have seen your company referred to around a couple of times.

    What I mean is the P/E cycle. Flash storage supposedly has an actual physical limit on the number of times it can be written and rewritten before the memory wears out. I know SSDs are rated for a lot of cycles (SLC is quoted around 100,000, though consumer MLC is closer to 3,000–5,000), but I was wondering if maybe an SSD RAID setup or a backup SSD would actually cause a failure. I figure that's where they will see the most action.

    @Taz, Wooting back! But fill me in. Is there a Francisco namesake menace here?
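    To put those P/E numbers in perspective, here is a back-of-envelope endurance estimate in shell arithmetic. Every figure below is a hypothetical assumption, not a spec for any particular drive: a 64 GB drive, 3,000 P/E cycles (typical consumer MLC of this era), 20 GB of host writes per day, and a write amplification factor of 2.

```shell
# Hypothetical figures: 64 GB drive, 3000 P/E cycles (consumer MLC),
# 20 GB/day of host writes, write amplification factor of 2.
CAPACITY_GB=64
PE_CYCLES=3000
WRITES_GB_DAY=20
WAF=2

# Total host writes the flash can absorb before wearing out,
# then how many days (and years) that lasts at the assumed rate.
TOTAL_TBW_GB=$(( CAPACITY_GB * PE_CYCLES / WAF ))
DAYS=$(( TOTAL_TBW_GB / WRITES_GB_DAY ))
echo "~${TOTAL_TBW_GB} GB of host writes, roughly $(( DAYS / 365 )) years at ${WRITES_GB_DAY} GB/day"
```

    Even with these conservative assumptions the drive outlasts its warranty by years; the catch is that cache duty for a busy array can push GB/day far above a normal VPS workload.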

  • Maounique Host Rep, Veteran

    The most worn ones are those doing caching for a large array.
    The experience so far (touch wood) has been great with the Samsung 830s.
    If you enable it as a read-only cache and it fails, no data will be lost, as the original copy is on the RAID; with a write cache, things are a little more tricky.
    Pure SSD storage has been great also. I can't tell about reliability because none have failed yet while a few SAS2 drives did, though the SAS2 drives far outnumber the SSDs.
    To tell you the truth, when I read about SSD failure modes I get the creeps. I guess it is like medical students who discover signs of every illness in the world in themselves. Better to be oblivious, trust the manufacturers who quote big MTBFs, and do backups just in case.

  • Nick_A Member, Top Host, Host Rep

    I just didn't understand the question. I know what a write cycle is, but I guess I was reading too much into the question. My initial reaction was "of course it's important..." Sorry about that.

  • Just for a quick what-if, as I have never trusted S.M.A.R.T. data, but... here's a read caching node:

    [root@e3la17 ~]# smartctl -d sat -i /dev/sdb
    smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
    Copyright (C) 2002-10 by Bruce Allen,
    Device Model:     SAMSUNG SSD 830 Series
    Serial Number:    S0XXNYAC310938
    Firmware Version: CXM03B1Q
    User Capacity:    64,023,257,088 bytes
    Device is:        Not in smartctl database [for details use: -P showall]
    ATA Version is:   9
    ATA Standard is:  Not recognized. Minor revision code: 0x31
    Local Time is:    Mon Dec 24 20:05:52 2012 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled
    [root@e3la17 ~]# smartctl -l selftest /dev/sdb
    smartctl 5.39.1 2010-01-28 r3054 [x86_64-unknown-linux-gnu] (local build)
    Copyright (C) 2002-10 by Bruce Allen,
    SMART Self-test log structure revision number 1
    Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Short offline       Completed without error       00%      4211         -
    [root@e3la17 ~]# w
     20:06:06 up 165 days,  1:49,  1 user,  load average: 1.32, 1.64, 1.30
    [root@e3la17 ~]# dmsetup status
    cachessd: 0 3803815936 flashcache stats:
            reads(5604167037), writes(22221224515)
            read hits(4239589611), read hit percent(75)
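    As a follow-up to the output above, the wear indicators themselves can be pulled from the same tool with its attribute listing. Attribute names vary by vendor, so treat the grep pattern as an assumption that matches Samsung/Intel naming conventions of this generation:

```shell
# List vendor SMART attributes and filter for wear-related ones.
# Names differ per vendor (e.g. Wear_Leveling_Count on Samsung),
# so the pattern below is a best guess, not a standard.
smartctl -d sat -A /dev/sdb | grep -Ei 'wear|lbas_written|reallocat'
```

    A short self-test log like the one pasted above says the drive responds, but the wear counters are what actually track how far through its P/E budget the flash is.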
  • Yeah SSD drives have a "limited" write life.

    Been quite a while since I sat down and did the math.

    It was something in the ballpark of 7 years writing data in huge amounts non-stop all day. Something like a 128GB drive written collectively to 21TB. The scenario required to hit that would be pure abuse and not a real workload, especially in a VPS environment.

    What is probably more relevant and concerning is that as usable SSD capacity shrinks due to stagnant files lingering on the SSD, the remaining "free" space gets overwritten again and again. So you have more wear on a smaller, shrinking available area. That's more likely where failures can and will occur.

    For this reason, I think on the hosting side of things, SSD cache makes a ton of sense, and storing files on SSD long term makes much less sense.

    Biggest problem for many providers is finding and affording reliable and proven drives. The Samsung 830s are awesome, as are offerings from Intel.
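    That ballpark can be sanity-checked with the same kind of shell arithmetic. Taking the post's own numbers at face value (21 TB of total writes spread over 7 years), the implied sustained write rate is:

```shell
# Sanity check of the "~7 years" figure: 21 TB of lifetime writes
# over 7 years implies this sustained daily rate (integer GB/day).
TOTAL_WRITES_GB=21000
YEARS=7
GB_PER_DAY=$(( TOTAL_WRITES_GB / (YEARS * 365) ))
echo "${GB_PER_DAY} GB/day sustained for ${YEARS} years"
```

    Roughly 8 GB written every single day for seven years straight, which supports the point that a typical VPS workload never gets close; only cache duty for a busy array does.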

  • Taz Member
    edited December 2012

    @mitgib and @Nick_A: does an SSD read+write cache show better results on the typical dd test, or does a read-only cache show better results?
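    For reference, the "typical dd test" quoted on LET threads is usually a sequential write along these lines (file name and sizes here are arbitrary, shrunk for illustration):

```shell
# Classic LET-style dd benchmark: sequential write with a flush at
# the end (conv=fdatasync) so buffered writes are counted in the time.
dd if=/dev/zero of=ddtest.bin bs=64k count=256 conv=fdatasync
rm -f ddtest.bin
```

    Because dd only does sequential writes, a read-only SSD cache cannot change its numbers at all; only a write cache (or the raw array speed) shows up in this test, which is worth remembering when comparing providers' dd output.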

  • Maounique Host Rep, Veteran
    edited December 2012

    So, probably in a worst case scenario, 6 months are to be expected out of an SSD.
    OTOH, a big one only half full will theoretically last more than twice as long as a smaller one that is filled.
    I guess only time will tell; that's probably the reason not many people are jumping on the SSD bandwagon yet.
    SAS2 disks are just as expensive per GB, but that is proven and known technology with all its faults and weaknesses; you know what to expect.

    @Taz you need to do that on an actual loaded node to see how the IOPS saved by the read cache translate into faster writes on the array itself or on the cache.
    After doing some math and a bit of testing together with Uncle, I reached the conclusion that a write cache is not worth it if you have a big array behind it with lots of IOPS, but it begins to make sense in RAID 1 or 10 with only 4 slow drives.
    That is only speed-wise, because a write cache does not usually save IOPS.
    I think it is not worth it except for dd tests on a slow array, and the risks are big.

  • @Maounique said: SAS2 disks are just as expensive per GB

    SAS2 is not a generic term for 15k drives; Toshiba, Western Digital, and Seagate all have 7200rpm SAS2 lines.

  • @Taz said: does ssd read and write cache shows better results for typical dd test or read only cache shows better result ?

    this was done on the node I listed above

  • pubcrawler Banned
    edited December 2012

    A 6-month failure on an SSD? No way. It does happen, but it would only be true if you were, say, using the SSD as a cache store with constant writes and flushes, and filling the SSDs to capacity.

    I know of several large photo hosting sites that are using big dedicated servers (dozens of them) with RAID arrays of SSDs for Varnish storage. It's about as abusive as you can get to SSDs, and while there are drive failures, I wouldn't say anywhere near the rate of spinning drives, and throughput is unequaled.

    Warranties on drives are typically several years, so expect that lifespan as the minimum.

    A 6-month lifespan would mean a defective drive or a bad controller (a very common issue).

    " it wasn't a typical failure.. just visible degraded performance so we got it swapped out asap."

    It depends on the make and model. Degraded performance can happen for all sorts of reasons. Loading millions of small files onto anything, for instance, is asking for degraded performance. Most disk designs, filesystems, etc. are built around fewer, larger files.

    I think SSDs are best used in conjunction with spinning disks. The way I use SSDs in production systems: boot drives are spinning disk, and I carve out a second block device from the SSDs for data where speed is truly needed (usually MySQL or an equivalent database). All backups go to other big, slow spinning drives and are offloaded to other server(s).

    (but the SSD cache is arguably an even better hybrid model in the client hosting world)

  • @mitgib Mother of cache!

  • @pubcrawler said: 6 month lifespan, that would be a defective drive or bad controller (very common issue).

    Or an unoptimized setup. It is very common to partition only 60% of the drive.
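    A sketch of that manual over-provisioning approach, assuming a hypothetical blank drive at /dev/sdX (destructive; illustration only): partitioning only part of the disk leaves the tail unallocated, giving the controller spare area to rotate writes through.

```shell
# Hypothetical /dev/sdX: one partition covering 60% of the disk,
# leaving the remaining 40% unallocated as spare area for wear
# leveling and garbage collection.
parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX mkpart primary 1MiB 60%
```

    Whether 60% is the right number is debatable (as the next post argues), but the mechanism is the same either way: unwritten space is what the controller uses to spread wear.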

  • Maounique Host Rep, Veteran

    @miTgiB said: SAS2 is not a generic term for 15k drives, as Toshiba, Western Digital and Seagate all have a 7200rpm SAS2 line

    ours are 10k

  • @Maounique said: ours are 10k

    I prefer SAS/SAS2 for the full duplex ability, but the performance is all over the map now with 7.2/10/15k drives available.

  • pubcrawler Banned
    edited December 2012


    Partition 60% of the drive :) Then fill it to 80%, and you are doing mass writes and deletes on a small area. Yeah, that will be a problem.

    I partition our SSDs 100% as one big chunk. Haven't had an SSD fail, and we are several years into running them now.

    I am curious about the SSD cache like others here. Haven't used it (other than as an end user, where it's sort of invisible what exactly is hitting the cache and what isn't). Anyone have a reference or recommended reading on SSD cache as implemented by some of the VPS companies here?

  • Maounique Host Rep, Veteran

    SSDs should know how to wear-level writes, though?

  • Yeah, wear leveling is a feature of any drive you should buy. I think all SSDs have claimed to do this for a good while now.

  • @pubcrawler said: I am curious about the SSD cache like others here.

    Interesting @miTgiB. MySQL has been a pissing fit for me lately. InnoDB is arrgh: non-portable, requiring expensive exports and imports into other systems. Tired of that dance. (Yeah, master + slave, if you are into that, would work too.) Experimenting again with MyISAM and eventually Percona's offerings instead. I want a database that can function with disk replication / a distributed file system. Yeah, me, being picky. Really surprised Facebook still uses MySQL so much.

    I take it most VPS providers are using SSD cache as a function/feature of RAID cards, most notably the LSI brand?
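    Worth noting: the dmsetup output earlier in the thread is from flashcache, which is a kernel-module cache rather than a RAID-card feature, so at least some providers here are doing it in software. A minimal sketch of creating one, with hypothetical device names:

```shell
# flashcache sketch (hypothetical devices): "-p around" = write-around,
# i.e. read-only caching, so an SSD failure cannot lose array data.
flashcache_create -p around cachessd /dev/sdb /dev/md0

# tear it down later:
dmsetup remove cachessd
```

    RAID-card equivalents (LSI's caching feature, for instance) do the same job in hardware; the software route just shows up in dmsetup the way the pasted output does.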
