Comments
@labze Can I order Hybrid Storage 2 TB as an upgrade to my existing service?
Not sure. Kinda depends what you upgrade from.
Could you take a quick look at 9750870? Thanks!
Invoice 15110
I have the Budget Storage VPS 1 TB and want to upgrade to Hybrid Storage 2 TB.
Unfortunately there's no upgrade path between the two. You'd have to create a new server and manually migrate data over.
Invoice# 15142
Invoice #: 15143
Thank you for 10% cashback, 50% more bandwidth and 3x backup slots!
Invoice #15158 - Hybrid Storage 2 TB
50% bandwidth + x3 backup slots please. Thank you!
You forgot me!
Invoice #15159
I'm sure it must be me, but I'm getting the following speeds for the HDD, which don't seem in line compared to the benches posted earlier!
I think that's quite good if you're running on HDD.
I also ordered a hybrid storage server and get similar IOPS to yours.
https://lowendspirit.com/discussion/6258/hostbrr-premium-storage-promo-7950xd-w-block-storage-only-7-budget-storage-from-2-8-tb
and
https://lowendtalk.com/discussion/comment/3670047/#Comment_3670047
This would be the Finland server, right? The Finland server specifically has high IO usage atm, which is likely causing lower performance. It'll get better when usage slows down.
tl;dr My Hybrid Storage is "OK", especially at BF pricing. This is not a bashing post. This is a perhaps overly investigative review from a BF-signup customer.
This is gonna be a long one. I've had a few days, so time for a full review!
So I jumped on one of the Hybrid Storage 16 TB deals on Black Friday. After an initial OS template issue that took my stubbornness a couple days to give up on and switch OSes, it seems to be what it claims: 16 TB of storage.
Unfortunately, the network characteristics make it unusable for live backups; it's only good for at-rest backups. Using `mtr` from a couple of other hosts (dedis, not VPSes) outside Hetzner with known-good networks themselves, I see a percent or few of packet loss at random at the gateway IP. Pings are fine: it's the general speeds being lower than expected and the random packet loss that slightly concern me. Interactive SSH sessions are generally as responsive as I'd expect, with occasional temporary session hangs: this is consistent with the issue being throughput rather than ping, combined with hiccups of packet loss.

Random (read: not cherry-picked; I ran it when I wrote this part of the post) Ookla Speedtest result. (I have seen the open source `speedtest-cli` be far too optimistic too many times now, so only the official client for me!)

So the network is frustrating. But not a show stopper. My problem is... the actual storage part of the VPS. Since yabs only benchmarks the root filesystem, this is very similar to what yabs runs, but for both the root filesystem (`/dev/vda`) and the storage filesystem (`/dev/vdb`).

As you can see, the speeds for `/` are Just Fine(TM). No complaints there for an OS disk on a storage VPS. But `/storage` (not its real mountpoint; names changed to protect the innocent, etc.)? Whoo boy. Storage speeds of 100 MBps mean that even the absolute best case can't fill the bandwidth pipes, as the volume also appears to be attached via a gigabit link. This is... unfortunate. But also maybe not a concern, since Speedtest can only use half the rated speed anyhow. But those IOPS are... not great. At all. Also, since I tried to match YABS, these numbers are (like YABS) slightly optimistic (read: they "look better") compared to how real-world performance tends to feel.

Since I tend to like `bonnie++` more than `fio` (and thereby YABS), I ran that too. But, in the interest of full disclosure, bonnie++ isn't perfect. Neither is fio, but I don't know anyone who's benchmarked the benchmarks to the same degree as bonnie++ (mostly by virtue of bonnie++ being a much older and more widely-used [outside of LET, or even just Linux] benchmark). All the tools are suspect, and comparing their output is more useful. As the root filesystem numbers above and via bonnie are totally non-problematic for a storage VPS root filesystem, and this is getting way longer than I anticipated, let's just look at the bonnie++ output for the storage filesystem (it's the device anyone really cares about anyhow, let's be honest):

What does this tell us? Well... the storage is slow. Up to 75 seconds to create a file. That is over a whole wall-clock minute. And the reads were literally timing out. This is... not very good for "I need this file", in honesty.
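In case anyone wants to poke at their own box the same way, the gist of what I ran looks roughly like this (`/storage`, the sizes, and the job name are stand-ins, not my exact invocations):

```
# yabs-style fio pass against the storage filesystem (example path and sizes)
fio --name=rand_rw_4k --directory=/storage --rw=randrw --rwmixread=50 \
    --bs=4k --size=2G --ioengine=libaio --direct=1 --iodepth=64 \
    --runtime=30 --time_based --group_reporting

# bonnie++ against the same mountpoint; -s should be ~2x RAM to defeat caching,
# and -u is only needed when running as root
bonnie++ -d /storage -s 16g -u nobody
```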
Now for real world can't-prove-things-very-well results... I normally see ~10-150 mbit (huge range, yes) when trying to back up to my storage volume from elsewhere. I cannot pin down why. Literally nothing I can control seems to correspond to the speed. Filesystem choice seems irrelevant, as I tried a few early on with the same results. Linux distro seems irrelevant, as I didn't start on this distro and originally blamed the speeds on the original distro choice not cooperating with the HostBrr network. It's not remote filesystem choice either as I've tried at least FTP, FTPS, SFTP, CIFS, NFSv4, and 9P, all with the same results and most with various performance tuning after the defaults showed much the same behavior. My only conclusion is that the huge IOPS throttle is killing the performance.
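For anyone wanting to rule the network in or out themselves, a quick local write test with the page cache bypassed is a decent first check (path is a stand-in):

```
# write 1 GiB straight to the storage volume, skipping the page cache
dd if=/dev/zero of=/storage/ddtest bs=1M count=1024 oflag=direct
rm /storage/ddtest
```

If that comes back fast while remote transfers still crawl, blame the network; if it's slow too, it's the storage backend.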
Do I think HostBrr's offering is terrible? No. Do I think it's great? No, especially at Black Friday pricing. If you're planning on cold-storage ship-it-and-forget-it backups, it's probably great. But if you're expecting to be able to send or read reasonably large sized files (I'd suggest the behaviors I've seen would be noticeable at all sizes but would feel less like "lag" at about 100 megs or so), in a "warm" fashion (backing them up as-needed with occasional live retrieval)... I'd suggest to test ASAP to figure out if your results match mine. I might just be completely cursed, after all! 😈
And finally a yabs, since that's the LET standard (but with `-i`, since the network bit is mostly unreliable these days, at best). But in this case, it wasn't telling the full story, and I want to be fair to both @labze and other potential customers. In fact, I want to be so fair that I'm running this yabs after writing the rest of this post!

Thank you for the thorough review. Always appreciate detailed and honest feedback.
To address a few concerns - what might be the biggest factor right now is that there are still large amounts of data being transferred onto the server. Over a 24-hour span the typical average network usage is around 400 Mbps both up and down, with peaks much higher than that. That would correspond well with your benchmark test only getting around half of the port bandwidth. This seems to be the situation every time a new storage server launches, and it should settle down as people finish setting up their systems.
With large amounts of data entering the server, IO performance is also affected. The average write for the last 60 minutes has been 70 MB/s and often reaches over 100 MB/s. Furthermore, the 60-minute average read is currently 350 MB/s. Not sure why; I'll keep an eye out for that. It is likely causing higher IOwait.
I am not sure what is going on with the Bonnie++ benchmark; I haven't used that myself. My personal Nextcloud server is set up on the Finland storage server, and I just completed an upload test with a bunch of small files, which finished as one would expect. Likewise, going through the Nextcloud gallery does not hint at any real-world performance slowdown. At least not in this scenario.
That's not to take away from your results. I am going to investigate these potential issues and see if something more is going on than just the high server load at the moment.
What is more concerning is the packet loss. If you could send some MTR results from the storage server to the VPS and from the VPS to the storage server, that would be much appreciated. I'll also see if I can replicate it. Even during heavy load the network shouldn't drop packets on a frequent basis.
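Something like this produces a report that's easy to paste into a ticket (the IP is a placeholder; run it in both directions):

```
# 100 probes, report mode with wide hostnames
mtr -rwc 100 203.0.113.1
```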
As always, if there is something really problematic going on I urge you and others to open a ticket. Most issues can usually be resolved reasonably fast. I just cannot always be aware of them if they are not reported :-)
By the way, you can perform a YABS test on the block storage, you simply need to run the test while in the folder (cd /storage).
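For example, something along these lines, with `/storage` standing in for wherever your volume is mounted:

```
# run yabs from inside the block storage mount so fio hits that disk;
# -i skips the iperf network tests
cd /storage
curl -sL https://yabs.sh | bash -s -- -i
```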
This is an HDD yabs of my Hybrid Storage 2 TB. It was worse earlier, but it's better now.
Hopefully, I'm still eligible for the bandwidth and backup slot upgrades. Invoice #15158
Germany location offline?
https://status.hostbrr.com/
@labze Finland write speeds are still terrible! Found anything?
Yes, I know. The monitor showed 3 servers in Germany down. It was a brief downtime, about 7 minutes.
I'm glad you didn't take my post as "OMG everything sucks!" I figured it's been a week, so things should have settled down by now.
But bonnie kind of is in line with what I'm experiencing: small operations feel like they take forever. Even a directory listing sometimes takes a literal minute to come back, though it's usually fast. I suspect it has something to do with how the storage system's scheduler works, combined with however many of us are trying to get backups going, but that's pure speculation.
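If I get ambitious I'll put a number on the small-file pain with something like this crude probe (the path is made up):

```
# crude metadata-latency check: create and remove 1000 empty files
mkdir -p /storage/probe
time bash -c 'for i in $(seq 1000); do touch /storage/probe/f$i; done'
rm -r /storage/probe
```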
I'll try to get mtr data together in the next couple of days, if it continues, and put it in a ticket.
I don't think I ever thought about yabs benchmarking the current directory. I just always kind of naively assumed it benchmarked `/`!

I don't foresee yabs differing greatly from my `fio` output (as yabs inspired it anyhow) at the moment. So perhaps I'll run it again later this week and we can see if things are changing for the better.

Speeds are more miserable for me tonight than they have been, so perhaps it really just is a lot of people setting things up to sync. Hopefully the first ones finish soon, so the rest of us get a chance!
Depends on which server you mean. There was a faulty switch at the datacenter tonight, which was promptly replaced, causing around 10 minutes of downtime for 3 servers.
The RAID array has started a re-sync. That'll probably last another 24 hours and is causing a slowdown in performance. Furthermore, as long as people are still consistently running YABS and loading data onto the server, it will not be at peak performance.
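For those curious, on a Linux md array (speaking generally, not confirming our exact stack) re-sync progress is visible like this:

```
# shows active arrays plus re-sync progress, speed, and ETA
cat /proc/mdstat
```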
The ongoing RAID re-sync does not help with performance at the moment. But I do find some of these issues strange. I run a Nextcloud server on the Finland storage server alongside quite a few other Docker applications, and while performance (of course) isn't NVMe-snappy, I do not experience the same slowness you describe.
If the issues persist, I will do an in-depth troubleshooting and optimization pass. However, the Finland node is busier than the Germany one while also undergoing a re-sync, and it does not make sense to troubleshoot while those factors are the likely cause.
@labze hopefully you have not forgotten our extras.
Finland was very slow for me the last few days; that array re-sync was probably the biggest culprit. But today I'd say everything is just fine. Has the re-sync finally finished?
I'm attaching current disk benches for both DE & FI, and I consider them very good, well... keeping in mind it's a shared HDD array. Add the good price and excellent support, and I consider this one of the best BF deals this year.
Have you fulfilled your back orders?
How will you handle DMCA?
The array has indeed finished syncing and performance seems to be back to normal. It will probably improve a bit more over time as usage decreases and the cache builds up.
I think all orders are provisioned.
DMCA notices will be forwarded and should be handled within 24 hours, or your service gets suspended. Repeat offences will lead to service termination.