tarisu review: Terrible Disk io

lalasirlalasir Member

This is the lowest disk I/O I have ever seen:

# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #

Yet-Another-Bench-Script

v2025-01-01

https://github.com/masonr/yet-another-bench-script

## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ##

Thu Mar 6 04:47:57 AM EST 2025

Basic System Information:

Uptime : 0 days, 1 hours, 25 minutes
Processor : Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
CPU cores : 2 @ 2297.338 MHz
AES-NI : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM : 3.8 GiB
Swap : 1020.0 KiB
Disk : 34.3 GiB
Distro : Debian GNU/Linux 12 (bookworm)
Kernel : 6.1.0-9-amd64
VM Type : KVM
IPv4/IPv6 : ✔ Online / ❌ Offline

IPv4 Network Information:

ISP : Bugra Sevinc
ASN : AS214304 BUGRA SEVINC
Host : RCS Technologies FZE LLC
Location : Frankfurt am Main, Hesse (HE)
Country : Germany

fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda1):

Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 884.00 KB/s    (221) | 34.79 MB/s     (543)
Write      | 922.00 KB/s    (230) | 35.18 MB/s     (549)
Total      | 1.80 MB/s      (451) | 69.97 MB/s    (1.0k)
           |                      |
Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 131.08 MB/s    (256) | 144.15 MB/s    (140)
Write      | 138.05 MB/s    (269) | 153.75 MB/s    (150)
Total      | 269.14 MB/s    (525) | 297.91 MB/s    (290)

iperf3 Network Speed Tests (IPv4):

Provider   | Location (Link)           | Send Speed      | Recv Speed      | Ping
-----      | -----                     | ----            | ----            | ----
Clouvider  | London, UK (10G)          | 2.10 Gbits/sec  | 982 Mbits/sec   | 65.4 ms
Eranium    | Amsterdam, NL (100G)      | 2.20 Gbits/sec  | 3.05 Gbits/sec  | 49.3 ms
Uztelecom  | Tashkent, UZ (10G)        | 878 Mbits/sec   | 90.6 Mbits/sec  | 234 ms
Leaseweb   | Singapore, SG (10G)       | busy            | 404 Kbits/sec   | 221 ms
Clouvider  | Los Angeles, CA, US (10G) | 865 Mbits/sec   | 233 Mbits/sec   | 186 ms
Leaseweb   | NYC, NY, US (10G)         | 1.31 Gbits/sec  | 1.37 Gbits/sec  | 133 ms
Edgoo      | Sao Paulo, BR (1G)        | 781 Mbits/sec   | 276 Mbits/sec   | 254 ms
Thanked by 1cainyxues

Comments

  • cybertechcybertech Member

    looks like HDD

    Thanked by 2cainyxues admax
  • wadhahwadhah Member

    @cybertech said:
    looks like HDD

    a 2.5" 500 GB HDD from 2010?

    Thanked by 2DeusVult cainyxues
  • shaikhmanalshaikhmanal Member

    NVMe disk cached with HIGH SPEED HDD type shit

    Thanked by 1sillycat
  • plumbergplumberg Veteran, Megathread Squad

    Is it impacting your VPS use?
    Or just worried about losing a flexing match with someone?

    Thanked by 1tarisu
  • lalasirlalasir Member

    @plumberg said:
    Is it impacting your VPS use?
    Or just worried about losing a flexing match with someone?

    What are you talking about?
    The lowest disk I/O will not impact VPS use?

    Thanked by 1DeusVult
  • xHostsxHosts Member, Patron Provider

    What package did you sign up for ?

  • nomekonomeko Member

    @lalasir said:

    @plumberg said:
    Is it impacting your VPS use?
    Or just worried about losing a flexing match with someone?

    What are you talking about?
    The lowest disk I/O will not impact VPS use?

    I mean, how much does it affect my yabs scripts?

  • lalasirlalasir Member

    @xHosts said:
    What package did you sign up for ?

    2 vCore E5v4
    6GB RAM
    50GB SSD
    Istanbul/Turkey
    10 GBPS Port/Shared

  • sillycatsillycat Member

    @nomeko said:

    @lalasir said:

    @plumberg said:
    Is it impacting your VPS use?
    Or just worried about losing a flexing match with someone?

    What are you talking about?
    The lowest disk I/O will not impact VPS use?

    I mean, how much does it affect my yabs scripts?

    Fumo fumo

    Thanked by 2barbaros nomeko
  • barbarosbarbaros Member
    Thanked by 1tarisu
  • tarisutarisu Member, Host Rep
    edited March 6

    Hi Everyone!

    We have already mentioned in previous topic messages that there is such a situation on the YABS side. We use SATA SSDs on our servers and there is no problem with disk performance. I wish you had contacted our support team instead of creating a thread directly here. If you run a test using dd instead of YABS, you can see average values for a SATA SSD.

    Write Test:
    dd if=/dev/zero of=/root/testfile bs=1M count=1024 oflag=direct

    Read Test:
    dd if=/root/testfile of=/dev/null bs=1M count=1024 iflag=direct

    Judging by the thread title and the messages, we don't think your intentions are good.

    Edit:

    Since we provide Swap space as 1MB on VPS Servers and all RAM is allocated to the user, an output is obtained in this way about disk speed.

    Regards!

  • nszervernszerver Member

    Does this look good?
    opinion?

    root@tata:~# dd if=/dev/zero of=/root/testfile bs=1M count=1024 oflag=direct
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.486708 s, 2.2 GB/s

    root@tata:~# dd if=/root/testfile of=/dev/null bs=1M count=1024 iflag=direct
    1024+0 records in
    1024+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.258392 s, 4.2 GB/s
    root@tata:~#

  • MetroVPS_NMPMetroVPS_NMP Patron Provider, Veteran

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

  • @tarisu said:
    Hi Everyone!

    We have already mentioned in previous topic messages that there is such a situation on the YABS side. We use SATA SSDs on our servers and there is no problem with disk performance. I wish you had contacted our support team instead of creating a thread directly here. If you run a test using dd instead of YABS, you can see average values for a SATA SSD.

    Write Test:
    dd if=/dev/zero of=/root/testfile bs=1M count=1024 oflag=direct

    Read Test:
    dd if=/root/testfile of=/dev/null bs=1M count=1024 iflag=direct

    Judging by the thread title and the messages, we don't think your intentions are good.

    Use random data instead, for starters. These commands are nearly useless and are not a response in themselves.
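
    A rough dd-based sketch of that idea, assuming the same /root/testfile path and 1 GiB size as the commands quoted above: pre-generate incompressible random data once, then time the write with direct I/O, so the result isn't inflated on storage that compresses or special-cases zero blocks.

    # one-off: build a 1 GiB random source file (this step is not the benchmark)
    dd if=/dev/urandom of=/root/random.src bs=1M count=1024
    # timed write of random data, bypassing the page cache
    dd if=/root/random.src of=/root/testfile bs=1M count=1024 oflag=direct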

    Edit:

    Since we provide Swap space as 1MB on VPS Servers and all RAM is allocated to the user, an output is obtained in this way about disk speed.

    Regards!

    Wut?

    Thanked by 3naphtha M66B DeusVult
  • barbarosbarbaros Member

    Just install fio on the server with "apt-get install fio", then run this command.

    This only tests writes, but it should still be a good indicator.

    fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

    From one of my servers here is the result:

    WRITE: bw=313MiB/s (329MB/s), 313MiB/s-313MiB/s (329MB/s-329MB/s), io=19.1GiB (20.5GB), run=62271-62271msec
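
    For something closer to the 4k mixed read/write figures YABS printed above, a randrw variant of the same command can be used. This is only a sketch; YABS's own fio invocation uses different flags (direct I/O, higher queue depth, among others), so the numbers won't match exactly.

    # 50/50 random read/write at 4k, same engine and settings as the write-only example
    fio --name=random-rw --ioengine=posixaio --rw=randrw --rwmixread=50 --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1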

  • lalasirlalasir Member

    @tarisu said: Judging by the thread title and the messages, we don't think your intentions are good.

    The testing has already spoken

    Thanked by 1DeusVult
  • tarisutarisu Member, Host Rep
    edited March 7

    @lalasir said:

    @tarisu said: Judging by the thread title and the messages, we don't think your intentions are good.

    The testing has already spoken

    Greetings!

    We are aware that this is the case on the fio side; testing with fio alone does not reflect the actual disk performance of the server. You can install Windows and verify with CrystalDiskMark, and you can also verify on Linux using dd. Obviously, if that value were real you wouldn't even be able to update the server :)

    We are happy to offer all our solutions at accessible prices. Also, although we provide VPS, our clusters are not very full, so performance is not a headache.

    Regards.

  • nszervernszerver Member

    @Mahfuz_SS_EHL said:
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.619242 s, 1.7 GB/s

  • barbarosbarbaros Member

    @nszerver said:

    @Mahfuz_SS_EHL said:
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.619242 s, 1.7 GB/s

    Try doing the same test, adding oflag=direct to your original command.

  • @tarisu said:

    @lalasir said:

    @tarisu said: Judging by the thread title and the messages, we don't think your intentions are good.

    The testing has already spoken

    Greetings!

    We are aware that this is the case on the fio side; testing with fio alone does not reflect the actual disk performance of the server. You can install Windows and verify with CrystalDiskMark, and you can also verify on Linux using dd. Obviously, if that value were real you wouldn't even be able to update the server :)

    We are happy to offer all our solutions at accessible prices. Also, although we provide VPS, our clusters are not very full, so performance is not a headache.

    Regards.

    Not a fan of how YABS uses it, but fio itself is the gold standard and absolutely does represent actual disk performance.

  • TangeTange Member

    @shaikhmanal said:
    NVMe disk cached with HIGH SPEED HDD type shit

    I think even that type sh!t is better than what the OP posted.

  • DeusVultDeusVult Member

    I like how Tarisu, instead of just addressing the issue, attacks their customer by saying that lalasir created this thread in bad faith... Not a good advertisement for your company, imo

    Thanked by 1gbzret4d
  • nszervernszerver Member

    @barbaros said:

    @nszerver said:

    @Mahfuz_SS_EHL said:
    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.619242 s, 1.7 GB/s

    Try doing the same test, adding oflag=direct to your original command.

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync oflag=direct

    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.619962 s, 1.7 GB/s

    Thanked by 1barbaros
  • jsgjsg Member, Resident Benchmarker

    @lalasir said:
    This is the lowest disk I/O I have ever seen:

    [yabs]

    I would LOVE to have that kind of disk performance on my @MassiveGRID VPS in their DE, FRA location ...

    Thanked by 1gbzret4d
  • tarisutarisu Member, Host Rep

    @DeusVult said:
    I like how Tarisu, instead of just addressing the issue, attacks their customer by saying that lalasir created this thread in bad faith... Not a good advertisement for your company, imo

    Greetings!

    None of our customers have had any problems with disks, and as stated, this customer is on a newly activated cluster. We use SATA SSDs on our servers, and most clusters use almost the same disks. All disks are provided unused. We had to respond in this way due to various conversations between the customer and our sales team.

    Regards.

  • tarisutarisu Member, Host Rep
    edited March 7

    Hello Everyone!

    The tests we performed are as follows. They were done by installing Windows and Linux on the same plan and cluster. We detected no problem; the values are average for a SATA SSD.

    Regards.


  • barbarosbarbaros Member

    @tarisu said:
    Hello Everyone!

    The tests we performed are as follows. They were done by installing Windows and Linux on the same plan and cluster. We detected no problem; the values are average for a SATA SSD.

    Regards.


    So you're saying that all of the tests are good except fio? And what's the reason for that? What is so particular about your nodes that YABS returns shit fio results?

  • 428428 Member

    This exact debate with @tarisu has already played out in a GitHub issue:

    https://github.com/axboe/fio/issues/711#issuecomment-437681750

    One hint might be that dd is writing zeroes; you could ask fio to do the same with zero_buffers=1. Right now you are writing random data. On top of that, you're using the async interface; you'd want to use ioengine=psync or similar to get something closer to what dd is doing. You're also overwriting the same 1g with 1m buffers 5 times, which is also different. You might want to look into doing a fully comparative test before you assume that anything is broken here.

    .

    That explains it, thanks. Writing zeroes was the difference. Using psync only further reduced the write speeds. The reason for all this is the lack of information on the subject available on the internet; I would've had to guess my way through the lengthy documentation to ever figure this out on my own (especially since none of this is my area of expertise in the first place).
    I just wrongly assumed this to be a bug rather than a feature.

    .

    That’s why fio doesn’t write zeroes by default; it’s a bad write pattern if you really want to know what the storage device can do in terms of performance, as this case also aptly demonstrates.


    @tarisu said:
    ... and you can also verify on Linux using dd. ...

    Write Test:
    dd if=/dev/zero of=/root/testfile bs=1M count=1024 oflag=direct

    This is like asking: "Please test by writing a zero-filled file, so the result will be higher."
    Writing endless zeroes is not a real-world scenario.

    ... We are aware that this is the case on the fio side; testing with fio alone does not reflect the actual disk performance of the server ...

    Instead of addressing the issue, you put the blame on fio, which uses random fill, and run the test with dd using zero fill to get an artificially higher speed and "lie to the customer".
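
    Putting that GitHub hint into a concrete command, here is a sketch of a fio run that mimics the dd zero-fill test: zero_buffers and the synchronous psync engine are the two knobs named in the quoted comment, and the 1M block size and 1 GiB size mirror the dd command posted above. It should land much closer to the dd numbers, which is exactly the point.

    # sequential 1M writes of zeroed buffers through the synchronous engine,
    # i.e. roughly what "dd if=/dev/zero ... bs=1M oflag=direct" measures
    fio --name=dd-like --ioengine=psync --rw=write --bs=1M --size=1g --direct=1 --zero_buffers=1 --end_fsync=1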

  • 428428 Member
    edited March 8

    dd difference between zero-fill (green) and random (red)

  • tarisutarisu Member, Host Rep

    Greetings!

    CrystalDiskMark already does the "random" tests for us, but for you we tested again on Linux. This is the result; there is no problem with our servers. As for YABS, we perform our own checks, and we do not have a single customer with performance problems.

    Regards :)

    @428 said:
    dd difference between zero-fill (green) and random (red)
