tarisu review: Terrible Disk I/O
This is the lowest disk I/O I have ever seen:
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
#              Yet-Another-Bench-Script              #
#                     v2025-01-01                    #
# https://github.com/masonr/yet-another-bench-script #
# ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
Thu Mar 6 04:47:57 AM EST 2025
Basic System Information:
Uptime : 0 days, 1 hours, 25 minutes
Processor : Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
CPU cores : 2 @ 2297.338 MHz
AES-NI : ✔ Enabled
VM-x/AMD-V : ✔ Enabled
RAM : 3.8 GiB
Swap : 1020.0 KiB
Disk : 34.3 GiB
Distro : Debian GNU/Linux 12 (bookworm)
Kernel : 6.1.0-9-amd64
VM Type : KVM
IPv4/IPv6 : ✔ Online / ❌ Offline
IPv4 Network Information:
ISP : Bugra Sevinc
ASN : AS214304 BUGRA SEVINC
Host : RCS Technologies FZE LLC
Location : Frankfurt am Main, Hesse (HE)
Country : Germany
fio Disk Speed Tests (Mixed R/W 50/50) (Partition /dev/vda1):
Block Size | 4k            (IOPS) | 64k           (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 884.00 KB/s    (221) | 34.79 MB/s    (543)
Write      | 922.00 KB/s    (230) | 35.18 MB/s    (549)
Total      | 1.80 MB/s      (451) | 69.97 MB/s    (1.0k)

Block Size | 512k          (IOPS) | 1m            (IOPS)
  ------   | ---            ----  | ----           ----
Read       | 131.08 MB/s    (256) | 144.15 MB/s   (140)
Write      | 138.05 MB/s    (269) | 153.75 MB/s   (150)
Total      | 269.14 MB/s    (525) | 297.91 MB/s   (290)
iperf3 Network Speed Tests (IPv4):
Provider  | Location (Link)           | Send Speed     | Recv Speed     | Ping
-----     | -----                     | ----           | ----           | ----
Clouvider | London, UK (10G)          | 2.10 Gbits/sec | 982 Mbits/sec  | 65.4 ms
Eranium   | Amsterdam, NL (100G)      | 2.20 Gbits/sec | 3.05 Gbits/sec | 49.3 ms
Uztelecom | Tashkent, UZ (10G)        | 878 Mbits/sec  | 90.6 Mbits/sec | 234 ms
Leaseweb  | Singapore, SG (10G)       | busy           | 404 Kbits/sec  | 221 ms
Clouvider | Los Angeles, CA, US (10G) | 865 Mbits/sec  | 233 Mbits/sec  | 186 ms
Leaseweb  | NYC, NY, US (10G)         | 1.31 Gbits/sec | 1.37 Gbits/sec | 133 ms
Edgoo     | Sao Paulo, BR (1G)        | 781 Mbits/sec  | 276 Mbits/sec  | 254 ms
Comments
looks like HDD
a 2.5" 500 GB HDD from 2010?
NVMe disk cached with HIGH SPEED HDD type shit
Is it impacting your VPS use?
Or are you just worried about losing a flexing match with someone?
What are you talking about?
The lowest disk I/O will not impact VPS use?
What package did you sign up for?
I mean, how much does it affect my yabs scripts?
2 vCore E5v4
6GB RAM
50GB SSD
Istanbul/Turkey
10 Gbps Port/Shared
Fumo fumo
@tarisu
Hi Everyone!
As we already mentioned in the replies to a previous thread, this is a known situation on the YABS side. We use SATA SSDs on our servers and there is no problem with disk performance. We wish you had contacted our support team instead of opening a thread here directly. If you run a test using dd instead of YABS, you can see the average values for a SATA SSD.
Write Test:
dd if=/dev/zero of=/root/testfile bs=1M count=1024 oflag=direct
Read Test:
dd if=/root/testfile of=/dev/null bs=1M count=1024 iflag=direct
Judging by the thread title and the messages, we don't think your intentions are good.
Edit:
Since we provide only 1 MB of swap space on VPS servers and allocate all of the RAM to the user, the disk speed output comes out this way.
Regards!
Does this look good?
opinion?
root@tata:~# dd if=/dev/zero of=/root/testfile bs=1M count=1024 oflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.486708 s, 2.2 GB/s
root@tata:~# dd if=/root/testfile of=/dev/null bs=1M count=1024 iflag=direct
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.258392 s, 4.2 GB/s
root@tata:~#
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
Use random data instead, for starters. These commands are nearly useless and are not a response in themselves.
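For example (a rough sketch; the source-file path and size here are only illustrative, not something suggested in this thread), pre-generate a file of random data and write that out with direct I/O, since some storage stacks compress or deduplicate the all-zero blocks that /dev/zero produces:
# build 1 GiB of incompressible source data once (slow, but only done once)
dd if=/dev/urandom of=/root/randsrc bs=1M count=1024
# then measure writing that data to disk, bypassing the page cache
dd if=/root/randsrc of=/root/testfile bs=1M count=1024 oflag=direct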
Wut?
Just install fio on the server with "apt-get install fio", then run this command.
This covers only writes, but it should still be a good indicator.
fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
From one of my servers here is the result:
WRITE: bw=313MiB/s (329MB/s), 313MiB/s-313MiB/s (329MB/s-329MB/s), io=19.1GiB (20.5GB), run=62271-62271msec
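For the read side, a matching random-read job can be sketched with the same knobs (the parameters simply mirror the write command above; adding --direct=1 to both would keep the page cache out of the numbers):
# fio lays out a 4 GiB test file if one does not already exist, then reads it at random 4k offsets
fio --name=random-read --ioengine=posixaio --rw=randread --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based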
The testing has already spoken
Greetings!
We are aware that this happens on the fio side. Testing with fio alone does not reflect the server's actual disk performance; you can install Windows and verify with CrystalDiskMark, and you can also verify on Linux using dd. Obviously, if that value were real, you would not even be able to update the server.
We are happy to offer all our solutions at accessible prices. Also, although we provide VPS, our clusters are not very full, so performance is not a headache.
Regards.
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.619242 s, 1.7 GB/s
Try doing the same test with oflag=direct added to your original command.
Not a fan of how YABS uses it, but fio itself is the gold standard and absolutely does represent actual disk performance.
I think that type sh!t is still better than what the OP posted
I like how Tarisu, instead of just addressing the issue, attacks their customer by saying that lalasir created this thread in bad faith... Not a good advertisement for your company, imo.
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync oflag=direct
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.619962 s, 1.7 GB/s
I would LOVE to have that kind of disk performance on my @MassiveGRID VPS in their DE, FRA location ...
Greetings!
None of our customers have had any problems with their disks, and as stated, this customer is on a newly activated Cluster. We use SATA SSDs on our servers, and we use almost the same disks in most Clusters. All disks are provided unused. We had to respond this way because of various conversations between the customer and our sales team.
Regards.
Hello Everyone!
The tests we performed are as follows: Windows and Linux were installed and tested on the same plan and Cluster. We did not detect any problem; the values measured were average for a SATA SSD.
Regards.
So you're saying that all of the tests are good except fio? And what's the reason for that? What's particular about your nodes that YABS returns shit fio results?
This exact debate with @tarisu is covered in a fio GitHub issue:
https://github.com/axboe/fio/issues/711#issuecomment-437681750
This is like asking: "Please test by writing a zero-filled file, so the result comes out higher."
Writing endless zeros is not a real-world scenario.
Instead of addressing the issue, you blame fio for using random fill, and you run the test with dd and zero fill to get an artificially higher speed to "lie to the customer".
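The same effect is easy to reproduce with fio itself, which fills write buffers with random data by default but can be told to write zeros instead. This is only a sketch (job names and size are arbitrary, and how large the gap comes out depends on how aggressively the underlying storage compresses or deduplicates):
# default behaviour: buffers are filled with random data
fio --name=randfill --ioengine=posixaio --rw=randwrite --bs=4k --size=1g --numjobs=1 --iodepth=1 --end_fsync=1
# zero_buffers writes all-zero blocks, comparable to dd if=/dev/zero
fio --name=zerofill --ioengine=posixaio --rw=randwrite --bs=4k --size=1g --numjobs=1 --iodepth=1 --end_fsync=1 --zero_buffers=1
The chart attached below made the analogous comparison with dd.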
[chart: dd throughput, zero-fill (green) vs random data (red)]
Greetings!
CrystalDiskMark already does the "random" tests for us, but for you we tested again on Linux. This is the result; there is no problem with our servers. As for YABS, we are carrying out our own checks, and we do not have any customers with performance problems.
Regards