
Is there any benchmark result of Hetzner Datacenter SSD?

Hello,

I notice that Hetzner offers both regular SSDs and Datacenter SSDs.
Are there any CrystalDiskMark or other benchmark results for the Datacenter SSDs?

Thanks.

Comments

  • What's the difference?

  • @hawkjohn7 said:
    What's the difference?

    I have no idea yet.

  • I would guess there are different types of datacenter SSDs.
    But if you want I can bench mine.

  • @eol said:
    I would guess there are different types of datacenter SSDs.
    But if you want I can bench mine.

    Yes please

  • root@sv39 ~ # smartctl -i /dev/sda
    smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.15.0-39-generic] (local build)
    Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

    === START OF INFORMATION SECTION ===
    Model Family: Samsung based SSDs
    Device Model: SAMSUNG MZ7GE240HMGR-00003
    Serial Number: SEX69YAF800007
    LU WWN Device Id: 5 002538 8001b137a
    Firmware Version: EXT0303Q
    User Capacity: 240,057,409,536 bytes [240 GB]
    Sector Size: 512 bytes logical/physical
    Rotation Rate: Solid State Device
    Device is: In smartctl database [for details use: -P show]
    ATA Version is: ACS-2, ATA8-ACS T13/1699-D revision 4c
    SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
    Local Time is: Wed Feb 13 13:53:06 2019 CET
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    root@sv39 ~ # hdparm -Tt /dev/sda

    /dev/sda:
    Timing cached reads: 31054 MB in 1.99 seconds = 15581.13 MB/sec
    Timing buffered disk reads: 1610 MB in 3.00 seconds = 536.17 MB/sec

    root@sv39 ~ # dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc status=progress
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.21522 s, 255 MB/s

    root@sv39 ~ # echo 3 > /proc/sys/vm/drop_caches

    root@sv39 ~ # dd if=tempfile of=/dev/null bs=1M count=1024 status=progress
    1020264448 bytes (1.0 GB, 973 MiB) copied, 2.00027 s, 510 MB/s
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 2.10126 s, 511 MB/s
    root@sv39 ~ # dd if=tempfile of=/dev/null bs=1M count=1024 status=progress
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.140916 s, 7.6 GB/s

    root@sv39 ~ # rm tempfile

    root@sv39 ~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randread
    test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.2.10
    Starting 1 process
    Jobs: 1 (f=1): [r(1)] [100.0% done] [355.7MB/0KB/0KB /s] [91.4K/0/0 iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=27552: Wed Feb 13 13:56:13 2019
    read : io=4096.0MB, bw=360119KB/s, iops=90029, runt= 11647msec
    cpu : usr=13.68%, sys=42.90%, ctx=994945, majf=0, minf=73
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
    issued : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=64

    Run status group 0 (all jobs):
    READ: io=4096.0MB, aggrb=360118KB/s, minb=360118KB/s, maxb=360118KB/s, mint=11647msec, maxt=11647msec

    Disk stats (read/write):
    sda: ios=1022857/50, merge=1861/0, ticks=721552/212, in_queue=721572, util=99.16%

    root@sv39 ~ # fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite
    test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
    fio-2.2.10
    Starting 1 process
    Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/182.1MB/0KB /s] [0/46.9K/0 iops] [eta 00m:00s]
    test: (groupid=0, jobs=1): err= 0: pid=27573: Wed Feb 13 13:57:18 2019
    write: io=4096.0MB, bw=187363KB/s, iops=46840, runt= 22386msec
    cpu : usr=7.12%, sys=34.32%, ctx=555831, majf=0, minf=9
    IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
    submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
    complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
    issued : total=r=0/w=1048576/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
    latency : target=0, window=0, percentile=100.00%, depth=64

    Run status group 0 (all jobs):
    WRITE: io=4096.0MB, aggrb=187362KB/s, minb=187362KB/s, maxb=187362KB/s, mint=22386msec, maxt=22386msec

    Disk stats (read/write):
    sda: ios=0/1034812, merge=0/3033, ticks=0/1396004, in_queue=1395768, util=99.59%

    root@sv39 ~ #

  • eol said: root@sv39

    how many servers do you have?

  • @comXyz said:

    eol said: root@sv39

    how many servers do you have?

    7.

  • If it's an hourly-billed product you can test it yourself. If it's a monthly-billed product like a dedi, you can get a full refund if you feel it's not what you need after a week.

  • Zerpy Member
    edited February 2019

    The thing is, throughput won't really differ much between consumer and enterprise-grade drives in a benchmark, because you're working with a rather small dataset.

    However, enterprise-grade drives do differ in overall performance on larger datasets; there you'll see a rather huge difference. Take an ElasticSearch cluster (mostly indexing data) as an example.

    A Hetzner server with datacenter drives: (disk-utilization graph)

    A Hetzner server with consumer drives: (disk-utilization graph)

    The two graphs are from the same cluster; the two servers are replicas of each other. The Logstash ingest machines and the ElasticSearch master run on separate boxes, so the two servers above are both ElasticSearch "data nodes". They do the same number of iops in this particular workload, but the utilization of the drives differs quite a lot.

    If the cluster is under high load, the node with consumer drives reaches its max capacity rather quickly.

    I'm moving away from using consumer drives in this particular setup, because it simply doesn't scale - I get a lot better performance on the enterprise grade drives, making it well worth the cost.

    For some things consumer drives will work just fine :) But if you're actually using your disks, then surely spend the additional few euros to get the servers with better drives.

    The same is true for the consumer vs enterprise grade NVMe drive servers by the way.
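
    Zerpy's point about small benchmark datasets is easy to probe with nothing but coreutils: write several large chunks back to back with dd and compare the throughput dd reports for each. On cache-limited consumer drives, the later chunks often come in much slower. This is only a rough sketch; the chunk size, count, and file names are arbitrary choices, and you need a few GB free on the filesystem under test.

```shell
# Sustained-write probe: four 2 GB chunks written back to back.
# conv=fdatasync makes dd flush to the disk before reporting a speed,
# so each MB/s figure reflects the drive, not the page cache.
for i in 1 2 3 4; do
  dd if=/dev/zero of=chunk$i bs=1M count=2048 conv=fdatasync 2>&1 | tail -n 1
done
rm -f chunk1 chunk2 chunk3 chunk4
```

    If the last chunk comes in much slower than the first, you are likely watching an SLC-style write cache fill up.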

  • @Zerpy are you using software RAID or hardware RAID on the servers?

  • @comXyz said:
    @Zerpy are you using software RAID or hardware RAID on the servers?

    Software raid 1 :)

  • First-Root Member, Host Rep
    edited February 2019

    I can confirm what @Zerpy says: we switched from Samsung Pro to Intel Datacenter drives and, hell, they are in a completely different league. Not so much in simple benchmarks, but you notice the higher performance and endurance once the disks start to see real use.

  • @Zerpy said:

    I'm moving away from using consumer drives in this particular setup, because it simply doesn't scale - I get a lot better performance on the enterprise grade drives, making it well worth the cost.

    My evidence of this isn't anywhere near as conclusive (and pretty!) as yours, but I can say I've had the same experience.

    I was at one host where my dedicated database server was using Micron consumer SSDs in software RAID-1. This system wasn't under heavy load, but it simply couldn't handle concurrent operations without massive slowdowns. For instance, a simple mysqldump to the SSD holding the database would completely hog all IO. It would literally pause any queries in process until the dump file was finished being written out. Not good!

    I migrated over to another system using Intel datacenter SSDs. The software versions and configuration were exactly identical, but performance on the new system was a night and day difference. I can now do all kinds of reads and writes on the SSDs without having any real effect on database operations.

    So, my situation might be anecdotal, but I've certainly seen what you're describing.
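
    For anyone stuck on consumer drives in the meantime, one stopgap (not a fix) is to deprioritize the dump's IO so foreground queries win when they contend for the disk. A sketch using ionice from util-linux; note the idle class only has an effect under IO schedulers that honor priorities (e.g. CFQ/BFQ), and the database name and backup path below are placeholders.

```shell
# Run the dump in the "idle" IO scheduling class (and at low CPU
# priority) so it yields to normal database IO under contention.
# --single-transaction avoids locking InnoDB tables during the dump.
ionice -c3 nice -n 19 \
  mysqldump --single-transaction mydb > /backup/mydb.sql
```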

  • @FR_Michael said:
    I can confirm what @Zerpy says: we switched from Samsung Pro to Intel Datacenter drives and, hell, they are in a completely different league. Not so much in simple benchmarks, but you notice the higher performance and endurance once the disks start to see real use.

    Good to hear that I'm not alone! :-)

    What's interesting to me is that SSD reviews concentrate on raw speed benchmarks and mention the improved write durability of datacenter SSDs, but I've never seen any discussion of why one type handles concurrent operations better than another. That's why I was shocked to see the difference. I thought all SSDs, with their very high IOPS capabilities, would perform more or less similarly; I'd never heard of one type being better for single- vs. multi-threaded operations.

  • Consumer SSDs have a small, fast cache to hide slow flash/controllers.
    Once the cache is full, it's all downhill from there...
    You don't want that in a serious box.
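
    One way to see that cache effect directly is a time-based fio run long enough to blow through any SLC cache, then compare the first seconds against steady state in the bandwidth log. A sketch along the lines of the fio commands earlier in the thread; the file name, size, and runtime are arbitrary choices, and you need fio installed plus enough free space.

```shell
# Long random-write run with a per-second bandwidth log; on consumer
# drives the log often shows a fast start followed by a sharp drop
# once the write cache is exhausted.
fio --name=steady --filename=steadytest --ioengine=libaio --direct=1 \
    --bs=4k --iodepth=64 --readwrite=randwrite \
    --size=16G --time_based --runtime=600 \
    --write_bw_log=steady --log_avg_msec=1000
rm -f steadytest
```

    fio writes the log to a file named steady_bw.1.log; eyeball or plot it to see where the throughput falls off.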

  • malek Member, Host Rep
    edited February 2019

    Get datacenter drives; those are usually Samsung PM863 SSDs, which have WAY higher write endurance than consumer ones.

  • @eol said:

    @comXyz said:

    eol said: root@sv39

    how many servers do you have?

    7.

    Are you a provider, or do you just let them idle?

  • @Janevski said:

    @eol said:

    @comXyz said:

    eol said: root@sv39

    how many servers do you have?

    7.

    Are you a provider

    No.

    @Janevski said:
    or do you just let them idle?

    The Hetzner box is for testing/experiments but idling mostly, yes.

  • t0m Member
    edited February 2019

    @eol said:
    Serial Number: SEX69YAF800007

    That’s one sexy serial number.

  • Do you guys know any other provider that publishes the SSD grade like Hetzner does?
    I tried checking some providers' websites, but they just say "SSD" in general.

  • darkimmortal Member
    edited February 2019

    I had both ~500 GB SSDs. The standard one was some sort of special OEM-only Micron drive with hilariously low endurance, way below the average consumer drive. The Datacenter edition is a solid Samsung enterprise disk, the same model as posted above.

    Overall, steer clear of the non-datacenter option. I was taken aback at just how awful its specs are; I was expecting an off-the-shelf consumer SSD but got something far worse.

    Also, for some unknown reason they reduced the price of the 480 GB datacenter SSD to match the 500 GB standard one, so at today's prices the datacenter drive is an absolute no-brainer.

  • @comXyz said:
    Do you guys know any other provider that publishes the SSD grade like Hetzner does?
    I tried checking some providers' websites, but they just say "SSD" in general.

    OVH brand only uses enterprise grade drives.

    @darkimmortal said:
    I had both ~500 GB SSDs. The standard one was some sort of special OEM-only Micron drive with hilariously low endurance, way below the average consumer drive.

    Hetzner uses a whole bunch of Crucial MX500 and Micron 1100 drives; both are "off the shelf" drives, and both are equally awful.

    I had one Intel consumer drive that performed better than the Micron 1100 and Crucial MX500, but it was still bad.

    The Datacenter edition is a solid Samsung enterprise disk, the same model as posted above.

    Datacenter drives (at least the 480 GB version) can be either an Intel SSDSC2BB480G7 (S3520), an Intel SSDSC2KB480G7R (S4500), or a Samsung PM863. I think there's one more type, but I can't remember which. Of the servers I currently have, one has an S3520 and one has an S4500; both were purchased on the same date.
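
    Since the exact model apparently varies between otherwise identical orders, it's worth checking what you actually got before relying on it. A quick sketch; /dev/sda is an assumption, and smartctl needs root, as in eol's output earlier in the thread.

```shell
# Drive model via smartctl:
smartctl -i /dev/sda | grep -E 'Model Family|Device Model'

# Or, without smartctl, straight from sysfs (SATA devices):
cat /sys/block/sda/device/model
```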

  • eol Member
    edited February 2019

    @Zerpy said:
    OVH brand only uses enterprise grade drives.

    I doubt this.

    @Zerpy said:
    Hetzner uses a whole bunch of Crucial MX500 and Micron 1100, both are "off the shelf" drives, but they're both as awful.

    I got Samsung PM853T SSDs from Hetzner.

    EDIT2:
    Those are enterprise grade.

  • @eol said:

    @Zerpy said:
    OVH brand only uses enterprise grade drives.

    I doubt this.

    It's fine that you doubt it, but the reality is that the actual OVH brand only uses enterprise-grade drives. SoYouStart and Kimsufi differ, but OVH itself uses only enterprise-grade drives from HGST, Intel, or Samsung :)

  • eol said: I got Samsung PM853T SSDs from Hetzner.

    EDIT2:
    Those are enterprise grade.

    so the enterprise grade will start with PM..something?

  • @comXyz said:
    so the enterprise grade will start with PM..something?

    Not necessarily.
    There are also Samsung SM..., etc.

    Any search engine can give you more details.

  • Zerpy Member
    edited February 2019

    @comXyz said:
    so the enterprise grade will start with PM..something?

    There are also PM consumer drives, such as the PM951.

    Edit: or the famous PM961 that Clouvider advertises as an "enterprise drive" even though it's a consumer drive :-D
