★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More
This discussion has been closed.
Comments
Amsterdam 6 and 8 ARE up.
AMSKVM8 main IP is offline, switched to extra IP address 192.227.223.x and works
AMSKVM6 main IP is offline, switched to extra IP address 149.57.191.x and works
I did a manual configuration of /etc/network/interfaces, since "Fix Internet / Reconfigure Networking" sets everything to the main IP (which does not have internet connectivity).
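For anyone doing the same workaround, here is a minimal sketch of the kind of /etc/network/interfaces edit described above (Debian-style ifupdown; the interface name, address, and gateway below are illustrative placeholders, not VirMach's actual values, so substitute the extra IP shown in your control panel):

```
# /etc/network/interfaces -- switch the static config to the working extra IP
# NOTE: eth0 and the 192.0.2.x values are placeholders; use the extra IP
# and gateway from your own control panel.
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.0.2.100
    netmask 255.255.255.0
    gateway 192.0.2.1
    dns-nameservers 8.8.8.8 1.1.1.1
```

Since networking is down, the edit has to be made through the VNC console; afterwards `ifdown eth0 && ifup eth0` (or a reboot) applies the change.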
Yes: no backup hardware, no backup plan, no response, no...
BuyVM, Hetzner, and other providers use Ryzen with no problems.
They are all back online. :)
One migration to Phoenix that I was able to do before they stopped offering it seems to have been stable for the last five days, so they are not all terrible! :smiley:
Pretty stable in NY, 950+ days uptime, so..
It looks like all of them are up, but without proper IPs and networking inaccessible. It seems like he swapped IPs from a new location yesterday and then "forgot" to migrate them or something so the only way to get them back online is to manually switch back to the old IPs.
Sometimes I am wondering what he was thinking...
FFME002 is still down this morning. Regarding uptime, my Buffalo server was very reliable for the 3 years I had it; this seems like teething problems following the migration. The lack of comms is a concern.
I want to know: did VirMach change management? The technical side is so poor now. Since the Ryzen rollout there have been few VMs that work normally, with either hardware or network issues. I don't know how such bad hardware passed testing and got delivered to customers.
Thanks! Got my server up by connecting via "VNC / Desktop" button in the Virmach control panel and editing /etc/network/interfaces back to my old IP 172.245.52.___ from 149.57.191.___
@VirMach : So what's next?
No. Previously they were renting from ColoCrossing, so ColoCrossing handled all that stuff. These Ryzens were built by them and are (maybe) their first go at colocation.
3 years without updating the running kernel
Any hint about this year's BF flash sales?
Another fine day until Friday
No primary disk available on FFME005
No way. Not until I get a scaling solution for IPv4. I need VirtFusion to support NAT first.
If somebody on node DALZ003 would do me the favor of checking the bandwidth reported in the billing panel to see if it is accumulating normally: mine is offline but is currently accumulating about 20 GB of transfer a day in the billing panel. I am about to go over my bandwidth limit, so I wanted to see if anyone else has this problem on this node.
@VirMach Please give us an update and an expected date
when the problems are solved. For now, nobody can use this service for business purposes.
FFME002 seems to be completely dead. No VNC or Recovery possible. I wonder why they have not given any status update for days now (last: 06/22/2022).
2 VPS @FFME001, started 11 min ago, 4 days without disk before... B)
Looks like a reboot, mine has been up for about 40 mins
Mine is on Z003 and as of today (6/28, 6:30 EDT) it shows:
197.29 GB of 4 TB Used / 3.81 TB Free
I'll check again tomorrow. It's definitely higher than I expected, because other machines are sitting at single figures (like 3, 5, 7 GB).
We probably have the same proportion of Ryzen and non-Ryzen having problems at this point if we average out the total quantity of VMs. There are more non-Ryzen nodes quantity-wise because old nodes have been emptying.
When the old nodes have problems we get situations like a RAID controller failing and then not being properly replaced, or a power supply failing and, out of 50+ servers in a location, not having a single spare power supply on hand, so it has to be shipped out slowly. Sure, Ryzen has similar problems right now, but only because we're also dealing with setting them up, initial hardware issues, and being short on replacements at first. Once that's worked out, any failures will be quickly resolved. I've already equipped multiple large locations with extra hardware for failures.
All of Frankfurt is being worked on. Most of it should be up, one or two still facing issues and being worked on. Unfortunately with how much work we have right now, the updates are definitely going to be sub-par. We're mostly focusing on actually getting the work done and it takes a good amount of time to coordinate the communication in between so I apologize in advance for that.
Amsterdam was having IP routing issues before the migration, which was one of the reasons we decided to move forward with the migration more quickly.
I didn't personally handle the assignment of new IPs and the reconfiguration, but depending on your specific OS configuration, it's possible the new IP assignment and reconfiguration could break networking temporarily until the migration completes. The migration is currently in progress, with files being moved, so my guess is yours initially had an issue with the IP configuration, and now it may be inaccessible (or soon will be) as the data is migrated.
We used KernelCare for the majority of those days. Then we stopped using it near the end because it was constantly breaking the nodes and causing outages due to the KernelCare team being careless with how they pushed their updates. In the end it was causing more downtimes than just bringing them down to update kernels.
We're going to continue to be more or less quiet in this thread. I've refrained from posting replies to everyone here to focus on resolving everything; there are too many people asking about too many different things right now, and I just think providing vague updates here would not be helpful.
A lot of what we're doing has timelines that are not up to us at this point, such as certain locations waiting on node setups / DC hands.
We have backups in most cases. The problem with some of the Los Angeles nodes that have had RAID failures is that backups also failed. Yes, backups can fail. They're not perfect.
For Ryzen we haven't had time to configure backups on all of them.
In the end, the responsibility for data and backups is not with us, and we've always told our customers to back up their important files. If it's important enough that hardware failure would be devastating, then it's important enough to spend some time ensuring you have proper backups on your end. We still maintain disaster recovery backups, but those are for catastrophic failures and they're not guaranteed. I'd say maybe 80-90% of the time those backups are used after a catastrophic failure, data is successfully recovered; it's not 100%.
The irony is, when he was constantly updating us with issues before, people were like "Heck, why are you posting on the forum and not spending time building servers" (and also "You are lying"). Now he stays quiet and everyone wants updates again 😂
Down again
They are all tested before being sent out. Perhaps during shipping some sustain damage. I know a few servers that arrived in Tokyo had loose cables that had to be put back in place, and one in another location had a RAM stick get loose.
Then for others it's just that our testing doesn't properly represent a bunch of VMs being fired up at the same time and simultaneously running different operations, and it comes down to the quality control of the parts at the factory, which we have no rein over. We see this a lot with the disks: after being filled and used for a few days they run into problems, similar to buying any hardware yourself where on day 1 it's fine but after a week it runs into problems.
We've noticed a lot of the RAM from certain batches runs into ECC errors after a while. These were all purchased brand new and passed the factory's QC, but the rate of failure is high, so I've made sure with new shipments to send excess RAM for more immediate replacements.
As for the issues, the majority of VMs are fine. We have over 10,000 VMs on Ryzen now. So it's easy for it to seem a lot worse than it is when a few clusters of people that specifically purchased this offer come here and report problems, but the majority is still doing just fine.
Yeah, it will be difficult to keep everyone happy on that.
I've honestly not had as much time to update people, but it's mostly that too many different things are happening at once now, to the point where it's difficult to remember to check in after every single point of change to provide updates here. We've been working on a lot in the background: a lot of shipments, more thorough hardware testing, problems with Frankfurt, network issues on several nodes, and a lot of cleanup of abuse. The networks should be a lot cleaner now just from how many hundreds, if not a thousand-plus, abusers we shut down. These were people causing a lot of the ARP packet issues, bruteforcing, outbound attacks, spamming, and a lot of IP stealing.
Then there's Amsterdam, trying to get in and activate more disk space for the migrations, a lot of servers being replaced in San Jose, Seattle, a lot of work for NYC, a lot of work for LAX, preparing and executing migrations, and more. Oh and then the power failures on Los Angeles old E5 nodes and the power supplies that died there, plus a host of other problems we had to look into from some of them going into overloading loops from the reboot.
Then more work was done ensuring we have recent, functional backups before all the aforementioned migrations.
@VirMach
Can you arrange the migration???
Ticket # 571560
The last reply in my ticket was 25 days ago, and the migration still hasn't completed (a paid migration, now >30 days). Now the VPS appears to be offline, which is really terrible.
According to your ticket notes, 25 days have passed since the payment was completed.
It doesn't seem like anything of the sort, and it's not specific to my VPS/OS: it's completely natural that newly assigned IPs/networking (from the new location) won't work in the old location with our VPSes still stuck there.
The problem here is that you or someone replaced the old IPs/networking with the new ones yesterday and then didn't perform the migration yet, so as long as our stuff hasn't appeared at the new location, things won't come online unless we remove the new IPs and replace them with the old ones again (as some in the comments above did).
My guess is that this happened because you or someone from your team swapped IPs and killed connectivity on all the old nodes at once and is now slowly performing the migration.
With all the crap going around VirMach, who gives a shit about a day or two of extra downtime, right? Even if we're talking about old nodes people actually use in production.
The way we add IPs before migration, both IP addresses get used by the VPS, meaning it shouldn't kill connectivity. I didn't handle this so I'll speak with the person who did further to ensure it was done correctly, but I'm not sure how it would have been done incorrectly. Since the old IP address is not removed it just shouldn't in any case kill connectivity in the way that you describe.
I did check a few of them when you earlier reported it and noticed they did have both IP addresses assigned.
The entire intention of the process is to reduce downtime by assigning both addresses and reconfiguring; otherwise customers would have to wait longer after migration to get their functional new IP. They are not done all at once, they're staggered over time to ensure the reconfigures complete correctly, which means if we did it afterward, it would take a few hours longer for some people. In previous migrations we did see an issue with Windows operating systems, but not Linux: Linux requires the new IP to be set as the main IP, while Windows requires that it isn't.
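A rough sketch of what this dual-IP scheme could look like on a Linux guest using ifupdown (this is an illustration only, not VirMach's actual reconfiguration; both addresses are documentation placeholders):

```
# /etc/network/interfaces -- new migration IP as the main address,
# old IP kept as a secondary so connectivity survives the switchover.
# 203.0.113.10 (new) and 198.51.100.10 (old) are placeholder addresses.
auto eth0
iface eth0 inet static
    address 203.0.113.10
    netmask 255.255.255.0
    gateway 203.0.113.1
    # keep the old IP attached until the migration completes
    up ip addr add 198.51.100.10/24 dev eth0
    down ip addr del 198.51.100.10/24 dev eth0
```

With both addresses bound, the VPS stays reachable on the old IP while the new one is being routed, which matches the stated goal of not killing connectivity during the staggered reconfigures.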
@Mumbly can you confirm the issue only appeared after the migration IP was added? It worked previously, as in before around 2 days ago? I did mention Amsterdam was getting reports of weird connectivity issues, which is why we decided to migrate it sooner. I'm wondering if that could be the potential issue for your virtual server.
No, newly added IPs caused this. Our VPS connectivity was killed yesterday.
Here is your answer:
So unless we remove the new IPs and replace them with the old ones, our stuff isn't reachable.