
★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More


Comments

  • msatt Member

    Amsterdam 6 and 8 ARE up.

    AMSKVM8 main IP is offline, switched to extra IP address 192.227.223.x and works
    AMSKVM6 main IP is offline, switched to extra IP address 149.57.191.x and works

    Did manual configuration of /etc/network/interfaces, as "Fix internet / Reconfigure network" sets everything to the main IP (which does not have internet connectivity).
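    For anyone else doing this by hand, here is a minimal /etc/network/interfaces sketch of the change. The interface name, netmask, and gateway below are placeholders; use the values shown for your VPS in the control panel:

    ```
    # Point the primary interface at the extra IP that still routes.
    # eth0, the netmask, and the gateway are placeholders.
    auto eth0
    iface eth0 inet static
        address 192.227.223.x   # extra IP (use the full address from your panel)
        netmask 255.255.255.0
        gateway 192.227.223.1   # placeholder gateway for that subnet
    ```

    Then restart networking (ifdown eth0 && ifup eth0) or reboot from the panel.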

    Thanked by 1Degelta
  • @henix said:
    this must be the worst provider ever in terms of uptime

    Yes: no backup hardware, no backup plan, no response, no .....
    BuyVM, Hetzner, and other providers use Ryzen with no problems.

  • @Lurkrazy said:
    TYOC027 and SJKVM8 are offline. @VirMach

    They are all back online. :)

  • Crab Member

    One migration to Phoenix that I was able to do before they stopped offering it seems to have been stable for the last five days, so they are not all terrible :smiley:

  • netrix Member

    Pretty stable in NY, 950+ days uptime, so..

  • Mumbly Member
    edited June 2022

    @msatt said: Amsterdam 6 and 8 ARE up.

    It looks like all of them are up, but without proper IPs, and networking is inaccessible. It seems like he swapped in IPs from the new location yesterday and then "forgot" to migrate the VMs or something, so the only way to get them back online is to manually switch back to the old IPs.
    Sometimes I wonder what he was thinking...

    Thanked by 1Degelta
  • cambb Member

    FFME002 is still down this morning. Regarding uptime, my Buffalo server was very reliable for the 3 years I had it; this seems like teething problems following the migration. The lack of comms is a concern.

  • gritty Member

    I want to know: did VirMach change management? The technical side is so poor now. Since the Ryzen deliveries there are few VMs that can be used normally; it's either hardware issues or network issues. I don't know how such bad hardware passed testing and was delivered to customers.

    Thanked by 1karjaj
  • @msatt said:
    Amsterdam 6 and 8 ARE up.

    AMSKVM8 main IP is offline, switched to extra IP address 192.227.223.x and works
    AMSKVM6 main IP is offline, switched to extra IP address 149.57.191.x and works

    Did manual configuration of /etc/network/interfaces, as "Fix internet / Reconfigure network" sets everything to the main IP (which does not have internet connectivity).

    Thanks! Got my server up by connecting via the "VNC / Desktop" button in the VirMach control panel and editing /etc/network/interfaces back to my old IP 172.245.52.___ from 149.57.191.___

  • tuc Member

    @VirMach : So what's next?

  • NoComment Member
    edited June 2022

    @gritty said:
    I want to know: did VirMach change management? The technical side is so poor now. Since the Ryzen deliveries there are few VMs that can be used normally; it's either hardware issues or network issues. I don't know how such bad hardware passed testing and was delivered to customers.

    No. Previously they were renting from ColoCrossing, so ColoCrossing handled all that. These Ryzens are built by them, and it's (maybe) their first go at colocation.

  • @netrix said: Pretty stable in NY, 950+ days uptime, so..

    3 years without updating the running kernel

    Thanked by 2FrankZ ariq01
  • edited June 2022

    Any hint about this year's BF flash sales?

    Another fine day until Friday

  • No primary disk available on FFME005

  • ezeth Member, Host Rep

    @drizbo said:
    So yeah... My VM doesn't have working networking, and their "fix networking" of course doesn't work. Classic VirMach.

    As everyone knows they don't have support either, so I can just delete the VM and forget about it.

    @ezeth any refugee offers from Boomer?

    No way. Not until I get a scaling solution for IPv4. I need VirtFusion to support NAT first.

  • FrankZ Veteran

    If somebody on node DALZ003 would do me the favor of checking the bandwidth reported in the billing panel to see if it is accumulating normally: mine is offline and is currently accumulating about 20GB of transfer a day in the billing panel. I am about to go over my bandwidth limit, so I wanted to see if anyone else has this problem on this node.

  • @VirMach Please give us an update and an expected date
    for when the problems will be solved. For now nobody can use this service for business purposes.

  • FFME002 seems to be completely dead. No VNC or Recovery possible. I wonder why they have not given any status update for days now (last: 06/22/2022).

    Thanked by 2cambb sangatsetia
  • karjaj Member

    2 VPS @FFME001, started 11min ago, 4 days without disk before... B)

    Thanked by 1FrankZ
  • Jake4 Member

    @fr33styl3 said:
    No primary disk available on FFME005

    Looks like a reboot, mine has been up for about 40 mins

  • @FrankZ said:
    If somebody on Node: DALZ003 will do me the favor of checking the bandwidth reported in the billing panel to see if it is accumulating normally.

    Mine is on Z003 and as of today (6/28 6:30 EDT) is 197.29 GB of 4 TB Used / 3.81 TB Free.
    I'll check again tomorrow.

    It's definitely higher than I expected, because other machines are sitting at single figures (like 3, 5, or 7 GB).

  • VirMach Member, Patron Provider

    @mrTom said:

    @henix said:
    this must be the worst provider ever in terms of uptime

    In general they are pretty good. The new Ryzen hardware seems to be a nightmare, but on the other hand 3 out of 4 of my Ryzens have idled steadily for weeks now. No. 4 can't boot...

    We probably have the same proportion of Ryzen and non-Ryzen nodes having problems at this point if we average out the total quantity of VMs. There are more non-Ryzen nodes quantity-wise because the old nodes have been emptying out.

    When the old nodes have problems, we get situations like a RAID controller failing and then not being properly replaced, or a power supply failing and, out of 50+ servers in a location, not a single spare power supply on hand, so one has to be shipped out slowly. Sure, Ryzen has similar problems right now, but only because we are also dealing with setting the nodes up, initial hardware issues, and an initial shortage of replacements. Once that's worked out, any failures will be quickly resolved. I've already equipped multiple large locations with extra hardware for failures.

    @anthonyc said:
    FFME002 seems to be completely dead. No VNC or Recovery possible. I wonder why they have not given any status update for days now (last: 06/22/2022).

    All of Frankfurt is being worked on. Most of it should be up; one or two nodes are still facing issues and being worked on. Unfortunately, with how much work we have right now, the updates are definitely going to be sub-par. We're mostly focused on actually getting the work done, and it takes a good amount of time to coordinate communication in between, so I apologize in advance for that.

    @Mumbly said:

    @msatt said: Amsterdam 6 and 8 ARE up.

    It looks like all of them are up, but without proper IPs, and networking is inaccessible. It seems like he swapped in IPs from the new location yesterday and then "forgot" to migrate the VMs or something, so the only way to get them back online is to manually switch back to the old IPs.
    Sometimes I wonder what he was thinking...

    Amsterdam was having IP routing issues before the migration, which is one of the reasons we decided to move forward with the migration more quickly.

    I didn't personally handle the assignment of new IPs and the reconfiguration, but depending on your specific OS configuration it's possible that assigning the new IP and reconfiguring could break networking temporarily until the migration completes. The migration is currently in progress, with files being moved, so I'm guessing yours initially had an issue with the IP configuration, and now it may be inaccessible (or will be soon) as the data is migrated.

    @Murata_Chink_Best said:

    @netrix said: Pretty stable in NY, 950+ days uptime, so..

    3 years without updating the running kernel

    We used KernelCare for the majority of those days. Then we stopped using it near the end, because it was constantly breaking the nodes and causing outages due to the KernelCare team being careless with how they pushed their updates. In the end it was causing more downtime than just bringing the nodes down to update kernels ourselves.

    @tuc said:
    @VirMach : So what's next?

    We're going to continue to be more or less quiet on this thread. I've refrained from posting replies to everyone here to focus on resolving everything; there are too many people asking about too many different things right now, and I just think providing vague updates here would not be helpful.

    A lot of what we're doing has timelines that are not up to us at this point, such as certain locations waiting on node setups or DC hands.

    @eagle__pride said:

    @henix said:
    this must be the worst provider ever in terms of uptime

    Yes: no backup hardware, no backup plan, no response, no .....
    BuyVM, Hetzner, and other providers use Ryzen with no problems.

    We have backups in most cases. The problem with some of the Los Angeles nodes that had RAID failures is that the backups also failed. Yes, backups can fail. They're not perfect.

    For Ryzen we haven't had time to configure backups on all of them.

    In the end, the responsibility for data and backups is not with us, and we've always told our customers to back up their important files. If it's important enough that hardware failure would be devastating, then it's important enough to spend some time ensuring you have proper backups on your end. We still maintain disaster-recovery backups, but those are for catastrophic failures and they're not guaranteed. I'd say that maybe 80-90% of the time those backups are used after a catastrophic failure, data is successfully recovered; it's not 100%.

  • @jongerenchaos said:
    @VirMach Please give us an update and an expected date
    for when the problems will be solved. For now nobody can use this service for business purposes.

    The irony is, when he was constantly updating us with issues before, people were like "Heck, why are you posting on the forum and not spending time building servers?" (and also "You are lying"). Now he stays quiet and everyone wants updates again 😂

  • @Jake4 said:

    @fr33styl3 said:
    No primary disk available on FFME005

    Looks like a reboot, mine has been up for about 40 mins

    Down again

  • VirMach Member, Patron Provider
    edited June 2022

    @gritty said:
    I want to know: did VirMach change management? The technical side is so poor now. Since the Ryzen deliveries there are few VMs that can be used normally; it's either hardware issues or network issues. I don't know how such bad hardware passed testing and was delivered to customers.

    They are all tested before being sent out. Perhaps some sustain damage during shipping. I know a few servers that arrived in Tokyo had loose cables that had to be put back in place, and one in another location had a RAM stick come loose.

    For others, it's just that our testing doesn't properly represent a bunch of VMs being fired up at the same time and simultaneously running different operations, and it comes down to the quality control of the parts at the factory, which we have no control over. We see this a lot with the disks: after being filled and used for a few days they run into problems, similar to buying any hardware yourself where it's fine on day 1 but runs into problems after a week.

    We've noticed a lot of the RAM from certain batches runs into ECC errors after a while. These sticks were all purchased brand new and passed the factory's QC, but the rate of failure is high, so I've made sure with new shipments to send excess RAM for more immediate replacements.

    As for the issues, the majority of VMs are fine. We have over 10,000 VMs on Ryzen now, so it's easy for things to seem a lot worse than they are when a few clusters of people who specifically purchased this offer come here and report problems; the majority are still doing just fine.

    @xpreboun said:

    @jongerenchaos said:
    @VirMach Please give us an update and an expected date
    for when the problems will be solved. For now nobody can use this service for business purposes.

    The irony is, when he was constantly updating us with issues before, people were like "Heck, why are you posting on the forum and not spending time building servers?" (and also "You are lying"). Now he stays quiet and everyone wants updates again 😂

    Yeah, it will be difficult to keep everyone happy on that.

    I honestly haven't had as much time to update people, but it's mostly that too many different things are happening at once, to where it's difficult to remember to check in after every single change to provide updates here. We've been working on a lot in the background: a lot of shipments, more thorough hardware testing, the problems with Frankfurt, network issues on several nodes, and a lot of cleanup of abuse. The networks should be a lot cleaner now just from how many hundreds, if not a thousand-plus, abusers we shut down. These were people causing a lot of the ARP packet issues, bruteforcing, outbound attacks, spamming, and a lot of IP stealing.

    Then there's Amsterdam (trying to get in and activate more disk space for the migrations), a lot of servers being replaced in San Jose and Seattle, a lot of work for NYC and LAX, preparing and executing migrations, and more. Oh, and then the power failures on the old Los Angeles E5 nodes and the power supplies that died there, plus a host of other problems we had to look into, with some nodes going into overload loops after the reboot.

    Then there was more work ensuring we have recent, functional backups before all the aforementioned migrations.

  • Jones Barred
    edited June 2022

    @VirMach

    Can you arrange the migration???

    Ticket # 571560

    The last reply on my ticket was 25 days ago, and the migration still has not completed (paid migration, >30 days). Now the VPS appears to be offline, which is really terrible.

    According to your ticket notes, 25 days have passed since the payment was completed.

  • Mumbly Member
    edited June 2022

    @VirMach said: Amsterdam was having IP routing issues before the migration, which is one of the reasons we decided to move forward with the migration more quickly.

    I didn't personally handle the assignment of new IPs and the reconfiguration, but depending on your specific OS configuration it's possible that assigning the new IP and reconfiguring could break networking temporarily until the migration completes. The migration is currently in progress, with files being moved, so I'm guessing yours initially had an issue with the IP configuration, and now it may be inaccessible (or will be soon) as the data is migrated.

    It doesn't seem like anything of the sort, and it's not specific to my VPS/OS: it's completely natural that newly assigned IPs/networking (from the new location) won't work in the old location with our VPSes still stuck there.

    The problem here is that you or someone replaced the old IP/networking with the new one yesterday and haven't performed the migration yet, so until our stuff appears in the new location, things won't come online unless we remove the new IPs and replace them with the old ones again (as some of the commenters above did).
    My guess is that you or someone from your team swapped IPs, killed connectivity on all the old nodes at once, and is now slowly performing the migration.
    With all the crap going around VirMach, who gives a shit about a day or two of extra downtime, right? Even if we're talking about old nodes people actually use in production.

  • VirMach Member, Patron Provider
    edited June 2022

    @Mumbly said: The problem here is that you or someone replaced the old IP/networking with the new one yesterday and haven't performed the migration yet, so until our stuff appears in the new location, things won't come online unless we remove the new IPs and replace them with the old ones again (as some of the commenters above did).

    My guess is that you or someone from your team swapped IPs, killed connectivity on all the old nodes at once, and is now slowly performing the migration.
    With all the crap going around VirMach, who gives a shit about a day or two of extra downtime, right? Even if we're talking about old nodes people actually use in production.

    The way we add IPs before migration, both IP addresses get used by the VPS, meaning it shouldn't kill connectivity. I didn't handle this, so I'll speak with the person who did to ensure it was done correctly, but I'm not sure how it could have been done incorrectly. Since the old IP address is not removed, it shouldn't in any case kill connectivity in the way you describe.

    I did check a few of them when you reported this earlier and noticed they did have both IP addresses assigned.

    The entire intention of the process is to reduce downtime by assigning both addresses and reconfiguring; otherwise customers would have to wait longer after migration to get their functional new IP. The reconfigures are not done all at once; they're staggered over time to ensure they complete correctly, which means if we did them afterward, it would take a few hours longer for some people. In previous migrations we did see an issue with Windows operating systems, but not Linux: Linux requires the new IP be made the main IP, while Windows requires that it not be.
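    For the Linux case, the dual-IP state roughly corresponds to an interfaces file like the sketch below. This is purely illustrative (not the exact template pushed to VMs); the addresses, netmask, gateway, and interface name are all placeholders:

    ```
    # New IP as the main address, old IP kept as an alias until cutover.
    auto eth0
    iface eth0 inet static
        address 149.57.191.x    # placeholder: new (migration) IP as main
        netmask 255.255.255.0
        gateway 149.57.191.1    # placeholder gateway

    auto eth0:0
    iface eth0:0 inet static
        address 172.245.52.x    # placeholder: old IP as a secondary alias
        netmask 255.255.255.0
    ```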

    @Mumbly can you confirm that the issue only appeared after the migration IP was added? It worked previously, as in before around 2 days ago? I did mention Amsterdam was getting reports of weird connectivity issues, hence why we decided to migrate it sooner. I'm wondering if that could be the cause for your virtual server.

  • Mumbly Member
    edited June 2022

    @VirMach said:

    @Mumbly can you confirm that the issue only appeared after the migration IP was added? It worked previously, as in before around 2 days ago? I did mention Amsterdam was getting reports of weird connectivity issues, hence why we decided to migrate it sooner. I'm wondering if that could be the cause for your virtual server.

    No, newly added IPs caused this. Our VPS connectivity was killed yesterday.

    Here is your answer:

    @drizbo said:
    I never touched the migration button, didn't get any email with a date or anything. Yet my AMS has been down for 2 hours even though it's showing "online" in the panel.

    @padap said: Looks like they added the new IP to all VMs before migrating. The default route is through the new IP and thus isn't working.

    @msatt said:
    Did manual configuration of /etc/network/interfaces, as "Fix internet / Reconfigure network" sets everything to the main IP (which does not have internet connectivity).

    So unless we remove the new IPs and replace them with the old ones, our crap isn't reachable.
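    For anyone still stuck, this is how padap's symptom shows up from the VNC console. These are generic iproute2 commands, and the gateway address is a placeholder, not a VirMach-specific value:

    ```
    # Both the old and new IPs may be assigned to the interface:
    ip -4 addr show

    # If the default route points at the new (not-yet-live) gateway,
    # nothing is reachable:
    ip route show default

    # Temporary fix: point the default route back at the old gateway
    # (placeholder address; use your old subnet's actual gateway):
    ip route replace default via 172.245.52.1
    ```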

This discussion has been closed.