Comments
I went ahead and opened a ticket about it, and they have been very forthcoming: they offered to reprovision on another node or to refund. They also expect things to get better once most people have migrated and/or they can find and deal with more abusive neighbours.
Because I also like that location and would rather keep it, I am giving them the 'benefit of the doubt', as you call it, and letting them reprovision.
The new box might be on one of the upcoming stacks or whatever, so it's likely not filled yet. In any case, the numbers are much better and consistent, especially from fio but also during transfers.
It looks like they limit a single network connection to a max of ~50 Mbit/s, as I get a constant ~6 MB/s. That is definitely not the filesystem, because a second (or third) parallel transfer easily reaches the same ~6 MB/s each.
I think that is a smart move to calm the situation down and balance performance across users. It's something I can easily live with if it stays this way - my use case is pushing in backups anyway ;-)
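As a sanity check on those numbers: 50 Mbit/s divided by 8 bits per byte is ~6 MB/s, which matches the observed per-connection rate. A throwaway shell one-liner:

```shell
# ~50 Mbit/s per connection works out to ~6 MB/s: divide by 8 bits per byte
cap_mbit=50
cap_mbyte=$(( cap_mbit / 8 ))   # integer division gives 6 (exact value: 6.25)
echo "${cap_mbit} Mbit/s is about ${cap_mbyte} MB/s per connection"
```

So a steady ~6 MB/s per stream is exactly what a ~50 Mbit/s per-connection cap would look like.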
No. This is across the country, topping out at gigabit rates. Oh wait, did you select the 100Mbps unlimited option instead of Gigabit for 4TB then 10Mbps throttled?
Because you were being unreasonable. These benchmarks are like a slap to SSDs and a punch to HDDs on a SAN.
Hmm, I am on the 4TB @ Gbps plan and could easily reach the full Gbps with parallel iperf as well (I checked before writing the post above).
Now that you mention it, I tried some nearby targets with iperf in single-connection mode too, and you are right, it easily goes up for those as well...
Still, for real-world transfers I mounted a storage box via sshfs, and when copying actual data (512 MB borg backup chunks) from it I get the mentioned ~6 MB/s per running thread...
I did not want to overdo it, but I at least tried and could easily scale to 4x 6 MB/s by running transfers in parallel. So there simply has to be some artificial limit per process somewhere. If it's not the network, maybe it's per write thread to the SAN or whatever 🤷♂️
Again, that's NO complaint from my side; I rather think it's exactly the right approach to try and balance usage - it's a shared server after all.
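The "run transfers in parallel" workaround is just backgrounded shell jobs plus `wait`. A minimal sketch of the pattern, with a hypothetical `run_chunk` placeholder standing in for the real rsync/scp/sshfs copy command:

```shell
# Pattern for aggregating throughput past a per-connection cap by running
# several transfers in parallel. run_chunk is a placeholder: in real use it
# would be e.g. `rsync -a "chunk$1/" user@host:/backups/` (path hypothetical).
run_chunk() {
  echo "chunk $1 transferred"
}

for i in 1 2 3 4; do
  run_chunk "$i" &      # launch each transfer as a background job
done
wait                    # block until all four parallel transfers finish
```

With a ~6 MB/s per-connection cap, four such streams should aggregate to roughly 24 MB/s, which matches what was observed.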
Even though I renewed last year's plan,
I wish @servarica_hani had offered the mouse plan this year too.
Many people would have benefited from it while saving IPv4s.
Meanwhile, I tried to install sshfs
That's how it looked for me before as well; the fio numbers had to be considered 'optimistic', but I think that was due to ZFS caching layers.
Maybe open a ticket and ask to be reprovisioned on another node? Possibly it's just that one node and its zpool acting up or whatever.
I guess ideally @servarica_hani puts all the torrenters and streamers/re-encoders on one node and the real storage users on another 🤷♂️😂
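One way to make fio less 'optimistic' is to request unbuffered I/O and a final fsync. A sketch of such a job file - note the names and sizes here are illustrative, and `direct=1` (O_DIRECT) is not supported on older ZFS releases, so it may need to be dropped there:

```
; illustrative fio job meant to reduce cache inflation in the reported MB/s
[seqwrite]
rw=write
bs=1M
size=8G
ioengine=libaio
direct=1       ; may be rejected on older ZFS (no O_DIRECT support)
end_fsync=1    ; flush at the end so the result includes the real write-out
```

Making `size` larger than the host's ARC/RAM also helps keep the number honest.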
My VPS was re-provisioned on another node, and the difference from the previous one is like night and day.
So far, my best purchase this BF. Great support (wasn't expecting it to be better than the other premium provider I bought from), and the VM works fine.
Ahh, why did I miss this
Wow. You guys broke ZFS.
Maybe it would have been best if staff had promised a 7-day deploy window, so that they could roll out VMs once every 30-60 minutes.
A simple solution to a human-vice problem (benchmarking).
Sorry for the delay in answering here
I was working full-time on the new rack
As for the performance, what happened is a storm of users filling and testing their VMs to the max, on a scale we have never seen before
The difference between this time and last time is that
1- We got the same number of users we usually get in 2 to 4 weeks of posting an offer in less than 12 hours
2- All past offers had only a 100 Mbps network, which made filling the VPS slower and helped in the first few days
To fix the issue we are currently moving some users to different nodes, and we are suspending extreme abusers
As I said, performance should be back within a few days, and just in case we are extending the refund window by another week, so you have 2 weeks to ask for a refund if you don't like the performance
To prevent this in the future we have a plan for upcoming offers, but unfortunately it will not work for the restock in less than 2 weeks (too late to implement it)
@servarica_hani thanks for the update
What is an extreme abuser in this case? If a person starts using the service, uses that unlimited 100 Mbps and fills the disk, are they considered an abuser?
no
Currently the only case we consider extreme is people who torrent non-stop, and other P2P file sharing
What if they're downloading really good pr0n? /s
ETA: and are willing to share it lol
Installed DA on mine; I'm using it as a dedicated FTP server for my automated backups, and I hosted a parking page for a single domain name. 40+ MB/s transferring between servers exceeds my requirements by a lot.
Why is @edoarudo5 banned?
I'm not banned.
He is not, yet. Just childish.
I hope I don't get banned just for changing my avatar.
Haha damn you got me
Maybe I was lucky; I faced neither the IO issue nor the network one. IO remained at 130-140 MB/s and network at 850+ Mbps.
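For anyone wanting to reproduce a rough sequential-write figure like that, a common quick check is dd with `conv=fdatasync`, so the reported MB/s includes flushing to disk rather than just filling the page cache (the output filename is just a scratch file):

```shell
# Write 100 MiB and force a flush before dd reports throughput; without
# conv=fdatasync the page cache can badly inflate the MB/s figure.
dd if=/dev/zero of=./ddtest.bin bs=1M count=100 conv=fdatasync
```

Delete `ddtest.bin` afterwards; for anything more controlled (random I/O, queue depths), fio is the better tool.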
Ever thought about your neighbours, who wish to use their portion of it as well?
IO seems much better than yesterday.
Can confirm, it's increased and stable - very happy!
I have not yet resumed my rclone transfer..
Read/write speeds are better since yesterday. Will probably wait 1-2 days more before using it after hopefully everything settles down 😊
Maybe a warning. It will make people think you're a dick, in case you care.
Got in a little late. I put my name in the form for new stock. I assume we will still get it at the BF price?