
Thoughts on ZFS versus RAID

Hi

I was going to buy a server and set up RAID on it with four 1 TB SSDs. I know that's dangerous if you don't swap out the drives every few months, replacing some with new ones, or even just pulling a drive out for a few hours every few months, with a different drive pulled each time.

I had an interesting conversation the other day with someone who told me they don't use RAID anymore for SSDs. They use ZFS now for their VPS nodes, and it works even faster than RAID.

What are your thoughts on this? Is ZFS reliable for a VPS node? Do you think it can rival RAID, or even beat it, when setting up SSDs? Thanks

-noob
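
For reference, the kind of setup being weighed here, as a rough sketch only (the /dev/sd[b-e] device names are made up, and this assumes Linux software RAID via mdadm):

    # Four hypothetical 1 TB SSDs at /dev/sdb ... /dev/sde, striped mirrors (RAID 10)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Put a filesystem on the array and watch the initial sync
    mkfs.ext4 /dev/md0
    cat /proc/mdstat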

Comments

  • ZFS, if done right, can be fantastic. But remember that ZFS is going to consume a lot of RAM, needs a decent HBA, and will want a decent SSD for its ZIL.
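
    For illustration, a minimal sketch of that kind of layout, assuming ZFS on Linux, four hypothetical data SSDs (/dev/sdb through /dev/sde) behind a plain HBA, and a separate small SSD (/dev/sdf) for the SLOG:

        # Striped mirrors (ZFS's rough equivalent of RAID 10)
        zpool create tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

        # Dedicated log device so the ZIL's synchronous writes land on their own SSD
        zpool add tank log /dev/sdf

        zpool status tank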

  • raindog308raindog308 Administrator, Veteran

    @N00bish said:
    I know that's dangerous if you don't swap out the drives every few months, replacing some with new ones, or even just pulling a drive out for a few hours every few months, with a different drive pulled each time.

    ...um, what?

  • N00bishN00bish Member
    edited April 2016

    Um, I'll break it down for you. SSDs just die, right? So if you have four 1 TB SSDs, all installed on the same day and set up using RAID 10, then after a few years they'll most likely all die at the exact same time, since an SSD can't take as much reading and writing as a normal old mechanical drive. Not to mention that when an SSD dies, it just... well, dies, and doesn't give you any notice in most cases. Did I break that down enough for you? I can bring it down another notch if you want.

  • NeoonNeoon Community Contributor, Veteran

    @N00bish said:
    Um, I'll break it down for you. SSDs just die, right? So if you have four 1 TB SSDs, all installed on the same day and set up, then after a few years they'll most likely all die at the exact same time, since an SSD can't take as much reading and writing as a normal old mechanical drive. Not to mention that when an SSD dies, it just... well, dies, and doesn't give you any notice in most cases. Did I break that down enough for you? I can bring it down another notch if you want.

    Well, Google tested tons of SSDs, and they blamed age, not the reads/writes, for the failures.

  • raindog308raindog308 Administrator, Veteran

    @N00bish said:
    Um, I'll break it down for you. SSDs just die, right? So if you have four 1 TB SSDs, all installed on the same day and set up using RAID 10, then after a few years they'll most likely all die at the exact same time, since an SSD can't take as much reading and writing as a normal old mechanical drive. Not to mention that when an SSD dies, it just... well, dies, and doesn't give you any notice in most cases. Did I break that down enough for you? I can bring it down another notch if you want.

    No, you've convinced me already you're stone cold crazy. Thanks.

    Thanked by Jacob and rm_
  • emreemre Member, LIR
    edited April 2016

    RAID 10 + daily remote backups. RINB: RAID is not backup. :D

    No need to swap anything. Where did you read such nonsense bullshit?

    On the other hand, most enterprise SSDs live much, much longer than what you've heard.

    I have yet to have an SSD on one of my servers die in the last 5 years or so.

    I've lost at least 2 consumer-grade low-end SSDs in that time frame, btw, but those were both in my own business laptop, which already has remote backups.

  • I haven't said anything about backups. And I haven't said anything about swapping. And you're wrong if you tell me that it's safe to keep four SSDs, all installed on the same exact day, in a RAID 10 system, without ever swapping out any of those drives. You're wrong. Point blank period. Straight up wrong.

    You wanna be a nerd and a know-it-all and the typical 'I know everything' web host idiot. Notice my name is noobish.

    I asked a question about thoughts on ZFS and I get retarded replies from the world's smartest people.

    If anyone feels like sticking to the topic and giving out real, helpful information like Mark did up above, feel free to. Everyone else with their dumbass comments who feel like showing off and growing an inch or two: go back to playing your Nintendo.

  • emreemre Member, LIR

    @N00bish

    You are nothing but a fucking idiot.

    What people and I are writing here is based on years and years of experience with at least 300-400 SSDs, in every kind of RAID and ZFS configuration, on every kind of server.

    You are lucky enough to get answers to your fucking idiotic questions and bullshit, and yet you still seem to know everything while having no clue about anything.

    Go fuck yourself and stick an SSD RAID array up your ass.

    Thanked by theroyalstudent and rm_
  • @N00bish said:
    You're wrong. Point blank period. Straight up wrong.

    You wanna be a nerd and a know-it-all and the typical 'I know everything' web host idiot. Notice my name is noobish.

    I asked a question about thoughts on ZFS and I get retarded replies from the world's smartest people.

    If anyone feels like sticking to the topic and giving out real, helpful information like Mark did up above, feel free to. Everyone else with their dumbass comments who feel like showing off and growing an inch or two: go back to playing your Nintendo.

    Go fuck yourself with a bunch of 1TB SSDs with Debian installed on them, thanks.

    There is no need for you to be so anal about everything that is said; they're probably right about what they said.


    @emre said:
    No need to swap anything. Where did you read such nonsense bullshit?

    BlazingServers.net (thank @BlazingServers)

  • BlazingServersBlazingServers Member, Host Rep

    @theroyalstudent said:
    BlazingServers.net (thank @BlazingServers)

    Mind explaining, please?

  • We have SSDs in place now that read/write their entire capacity every 48 hours, and they're coming up on 24 months.

    We only ever see failures in the legacy spinning-metal drives.
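
    For anyone wanting to sanity-check that kind of write load on their own drives, a rough sketch with smartmontools (SMART attribute names vary a lot by vendor, so treat these as examples only):

        # Vendor wear/usage attributes on a SATA SSD (names differ per drive)
        smartctl -a /dev/sda | grep -iE 'wear|written|percent|reallocat'

        # NVMe drives report "Percentage Used" and "Data Units Written"
        smartctl -a /dev/nvme0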

  • After 3 years of ZFS on Linux experience, this is what we know for sure about ZFS:

    Pros:
    - ZFS snapshots (copy-on-write) are fast and reliable. They are an easy way to replicate your server to a backup location (rough sketch after this list).
    - Zpool (which actually does the RAID job) is rock solid.

    Cons:
    - You'll get serious performance degradation with ZFS on SSD-only storage, even if you have enough RAM. Maybe things have changed recently, but a year ago we were losing 50% of IOPS compared to bare metal with XFS/EXT4.
    - You are not able to prioritise the recovery process of a degraded zpool (see the tunable sketched after this list). For example, if a disk dies, the zpool will give as much I/O as possible to the array resync, and if you run VMs on that array they might get stuck.
    - As mentioned above: RAM usage, especially if you like features like deduplication or compression.

    Taking all that into consideration, we decided not to go with ZFS storage on our SSD-only nodes, mostly for performance's sake.
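
    A minimal sketch of those two points, assuming ZFS on Linux; the dataset name (tank/vm) and the backup host are made up:

        # Snapshot a dataset and replicate it incrementally to a backup box
        zfs snapshot tank/vm@2016-04-20
        zfs send -i tank/vm@2016-04-19 tank/vm@2016-04-20 | ssh backuphost zfs receive backup/vm

        # Resilver behaviour can only be nudged indirectly via module tunables,
        # e.g. letting the resilver spend less time per transaction group
        # (the value below is a guess, not a recommendation)
        echo 1000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms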

  • MunMun Member

    Bare metal will be much faster. I personally only use ZFS where I need RAID with Proxmox and don't have a physical RAID card that Proxmox accepts.

    You can disable some of the functionality to lower the RAM usage.
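
    A rough sketch of the kind of knobs meant here, assuming ZFS on Linux and a pool named tank:

        # Cap the ARC (ZFS's read cache) instead of letting it grow toward half of RAM
        echo $((4 * 1024 * 1024 * 1024)) > /sys/module/zfs/parameters/zfs_arc_max

        # Deduplication is the big RAM eater; leave it off (it is off by default)
        zfs set dedup=off tank

        # lz4 compression costs CPU rather than RAM, so it can usually stay on
        zfs set compression=lz4 tank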

  • drendren Member
    edited April 2016

    A well-supported ZFS mirror on top of a couple of RAID 10 arrays is pretty slick.
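
    A rough sketch of that layering, assuming each hardware RAID 10 array is exposed to the OS as one virtual disk (the /dev/sdb and /dev/sdc names are made up):

        # ZFS mirrors the two hardware RAID 10 volumes against each other
        zpool create tank mirror /dev/sdb /dev/sdc
        zpool status tank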
