New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More
This discussion has been closed.
Comments
updating about this issue, I just checked the VMs and it is online now. Thanks @VirMach.
I closed the not-responded ticket.
wow... nevermind... it feels like back home again... I don't even know why we dare to ask you to deliver what we've paid for... I'M SORRY MATE! Wish you a nice weekend! Your humble client!
Network is fine. It looks like if I start rsync or something for a few minutes and push the CPU near 100%, it just goes down... the logs show nothing useful; it just goes off...
SJCZ004 woke up...
IIRC that was like a year ago in the 2018 thread. I am really glad that you decided not to go HW Raid10 with NVMe drives and instead do double the disk space and backups. From what I have seen so far the disk speed has been really good on all the nodes. I also expect it has been much simpler when things have broken, not that it is ever really simple when things break, but you know what I mean.
Sending you some good luck for your time working on the shared hosting node array.
If it were April in Tokyo I would say you are on a bad disk, but since it is July I will say maybe you are bumping the abuse script limits. Try limiting your rsync to 20 or 30 Mbps with --bwlimit=2512 and see if it still reboots on you.
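For anyone trying the suggestion above: rsync's --bwlimit takes KBytes per second by default, which is why a ~20 Mbps cap comes out near 2500. A minimal sketch of the conversion, with a hypothetical source path and destination host:

```shell
#!/bin/sh
# Convert a Mbps target to the KB/s value rsync's --bwlimit expects.
MBPS=20
KBPS=$(( MBPS * 1000 / 8 ))   # 20 Mbps -> 2500 KB/s

# Hypothetical paths/host for illustration only; substitute your own.
echo "rsync -a --bwlimit=${KBPS} /data/ backup@203.0.113.5:/backup/"
```

Newer rsync versions also accept suffixes (e.g. --bwlimit=2.5m), but the plain KB/s number works everywhere.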
One day I hope you find peace. This constant anger is not good for your health.
Sorry, but that disqualifies you! Fiverr you said?
I was just pondering how I would feel if VirMach let a bunch of Fiverr guys wander around the nodes and through the administration system. Then I saw your post.
Bravo sir, I could not have said it better.
We are at page 314 (The Pi page)
ViPi deals. Yay! Dedicated instances.
Me too. My VPS has been down for a month and a half, and obviously they don't want to fix it. If there's ever a worst-provider poll, I'll vote for VirMach.
LOL
Thanks to @Virmach 's sleepless nights, SJCZ004 is back as of this morning, after only about 12 days of downtime.
I'd jump off SJ to somewhere with more responsive DC techs (is there a chart somewhere? LA good? Dallas bad? SJ inept?), but it's all about the petabytes!
My VPS on SJCZ004 is still down right now. I have tried restarting many times with no effect. If your VPS stays down long enough, you'll be disappointed with it too.
Yeah, but it's not only rsync: after 12-20 hours with just ~10% CPU load it shut me down again. I'll try to migrate out of Denver, since that's the only location where I have the issue.
Abuse script shouldn't shut you down without making the ticket first if I remember the coding correctly. If it fails at that point it would not perform the powerdown step, unless that part was recoded.
It also absolutely shouldn't do anything in the timeframe you mentioned.
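The ordering described above (ticket first, powerdown only if the ticket succeeded) can be sketched as follows. This is a hedged illustration, not VirMach's actual code; open_ticket and power_down are hypothetical placeholders:

```shell
#!/bin/sh
# Placeholder stubs for illustration; a real script would call the
# billing API and the hypervisor here.
open_ticket() { echo "ticket opened for $1"; }
power_down()  { echo "powering down $1"; }

handle_abuse() {
    vm="$1"
    # Only proceed to the powerdown step if ticket creation succeeded.
    if open_ticket "$vm"; then
        power_down "$vm"
    else
        echo "ticket creation failed; leaving $vm running" >&2
        return 1
    fi
}

handle_abuse vm-example
```

With that guard, a failure in ticket creation leaves the VM running, which matches the behavior described: if it fails at that point, the powerdown step never runs.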
Message me your service's IP; I'm interested in seeing what's going on with it in terms of logs on our end. If it happens when you rsync, the behavior you're describing is most likely not related to the CPU at all, but could be related to memory or disk. I want to see if you're hitting OOM or if there are any node problems such as I/O errors or ECC memory errors. I'll have a look either way, but having your ID will help the investigation go faster.
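If you want to check from inside the guest while waiting, the failure modes mentioned (OOM kills, disk I/O errors) usually leave traces in the kernel log. A minimal sketch on a standard Linux guest; note that node-level ECC errors are generally only visible to the host, and the EDAC path below may not exist in a VM at all:

```shell
#!/bin/sh
# Look for OOM-killer activity in the kernel log.
dmesg 2>/dev/null | grep -iE 'out of memory|oom-killer' | tail -n 5

# Look for block-device I/O errors.
dmesg 2>/dev/null | grep -iE 'i/o error|blk_update_request' | tail -n 5

# ECC counters, if (and only if) EDAC is exposed to this machine.
grep -H . /sys/devices/system/edac/mc/mc*/ce_count 2>/dev/null
```

No output from the first two checks suggests the shutdowns are happening at the node level rather than inside your VM.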
Tokyo node TYOC027 has been experiencing extremely low I/O, bandwidth limited to less than 100 Mbps, and CPU steal bouncing around 10% for about 3 days. Please investigate! Thank you!
Tried to install Debian 11 from SolusVM template and this happened
I noticed something weird about that node today but couldn't identify it, I was finally able to locate the abuser. The node was technically fine, it just looks like one guy was sending such an extreme mail volume that it was tripping up our logging and causing issues with the anti-spam script, causing high disk wait, and flooding bursts of packets in a weird way where it didn't necessarily look terrible on the graphs but was definitely negatively affecting everything.
Node went from 40-45% user usage, 20% system usage, and 6% system interrupts with bursts of disk wait to 15% user, 5% system, 1% system interrupts just from powering him down.
I'll have our developer take a look and see how it was negatively affecting the anti-abuse script, these spammers are getting ridiculous with what they do to avoid getting caught so when they do get through it really impacts the system negatively and it's hard to identify since it ends up looking like something else.
We haven't necessarily completely forgotten about the <0.6% facing the issue we've been discussing. It's just something where the majority of the work is organizing it and then completing it in bulk and we're very close to having them all fixed at once.
I just haven't been providing an ETA because it could be highly varied. For example we got it organized and then when we re-released the Ryzen migrate button, a lot of people fixed it with the button so we have to go through the list again and mark the ones that were already fixed as we cannot include them in the fix or else data will be overwritten. So if I had provided a previous estimate and then that happened, people would just be further upset. I've been working on it today on the side.
Node?
NYCB030
@VirMach
TYOC025 Please check, it's been almost 12 hours since we lost contact.
Thank you, sir! It looks normal enough for now. I think you should ban such unruly abusers for the good of all. Have a nice day!
It's working now. Thanks @VirMach for all your efforts. Much appreciated.
I've been in contact with them to get this back up, I haven't had a chance to complete it or send out a notice. Looks like it has power but won't cycle properly.
It's been around 9 hours (8 hours when you made the comment); you might as well say almost 24 hours, then round that up to a week. It doesn't make a difference in how I'm working through the servers today.
All my VMs with @VirMach have been unusable for the last two months; I hope all the issues will be settled soon.
This whole thing is like watching a train wreck. My VPS came back up, but I can't trust it or use it for SMTP because the rDNS isn't set. But I keep coming back for the entertainment: the poor communication, the botched migrations, the tone of the written policies if you try to open a ticket, the decision to ignore the ticket system and attempt support via this thread, and now the bus factor.
https://en.wikipedia.org/wiki/Bus_factor
But the funniest are the people who pop in here wanting to know if they can get in on the deal
Honestly, I wish you luck. I've just lost faith.
My migration request has received no response in the 4 days since the invoice was paid. Are you still operating normally? Please have a look at ticket #844405. @VirMach
Can you help me? I haven't been able to connect to my server for a few days.
@VirMach @VirMach
Due to the poor network performance of TYOC027, I paid for a Japan migration request to change a node.
This request was not completed immediately and is still in the queue.
Now that TYOC027 is normal, can I cancel this request?
The associated migration ticket is Ticket #734319.
repeat
As you requested, I submitted ticket #297244, but will it really take several months to get processed?
@VirMach
Operation timed out after 90001 milliseconds with 0 bytes received
Is there any technician who can solve this problem? I haven't had a working VPS for 2 months since renewing.