★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More
This discussion has been closed.
Comments
LA10GKVM21 yes, but LAKVM25 no. Both are still on Xeon nodes.
I have no qualifications, but VirMach told you this.
You submitted a migration request, which means you wanted it to get stuck for some time. And it did get stuck. So I don't understand why you are unhappy, when things happened as expected.
They just told me through the ticket that they would migrate me after I paid. Doesn't the screenshot prove that? (If I had known from the start that paying would mean endless waiting, would I have chosen to continue?)
I only described what I saw. Before I paid, nothing in their support ticket indicated that a paid migration would mean silence and endless waiting.
AMSKVM8 was force-migrated to FFME002 - it did not boot and needed a manual fsck, which fixed three problems.
Yabs looks much better
AMSKVM8 Geekbench 5 Benchmark Test:
Single Core | 486
Multi Core | 475
FFME002 Geekbench 5 Benchmark Test:
Single Core | 1292
Multi Core | 1282
(edit) - To avoid confusion, this was a migration special.
Why did they even move services from AMS to FRA? I had already Ryzen-migrated two of my servers from CC AMS to Ryzen AMS but kept the database server for now.
Now I ended up with a database server in FRA and two application servers in AMS, which is far from ideal latency-wise. :/
Ryzen migrate always failed ("try again tomorrow") for both of my servers, so I just left them for VirMach to migrate. To be fair, AMS > FFM was with data, so it saved me quite a bit of work.
At the moment I'm in the same position as you, with another migration special on AMSKVM6. I hope this will also move to Frankfurt!!!!
As for choice - I was not asked if I wanted to be moved to Frankfurt; however, from the UK there is little difference in connectivity to either Amsterdam or Frankfurt.
edit - update
and SSH now fails, so I will have to mess around with IP addresses and see which one is active. Via VNC there is no internet, so it's definitely an IP problem.
My AMSKVM6 shows that it has switched to FFME006; however, if I SSH to it, it still shows
Intel(R) Xeon(R) CPU E5-2687W v2 @ 3.40GHz
After a control-panel reboot, VNC shows Ryzen.
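A quick way to confirm which hardware the VM actually landed on (a generic Linux check from inside the guest, not anything VirMach-specific) is to read the CPU model:

```shell
# Print the CPU model the guest sees. An un-migrated VM will still report
# the old Xeon (e.g. "Intel(R) Xeon(R) CPU E5-2687W v2"), while a migrated
# one should report an AMD Ryzen model instead.
grep -m1 'model name' /proc/cpuinfo
```

Note that after a live or panel-initiated migration, the guest may keep reporting the old model until it is fully rebooted, which matches what happened above.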
My VPS is back online from FFME003 (FFME003.VIRM.AC) node.
I was not able to access to the VPS, but after a "Reconfigure Networking" and a reboot (from the SoulsVM panel) looks like everything is fine.
Sadly the IP address I got is registered in the USA instead of Germany, but traceroute confirms it is hosted in Germany. I think I can live with this,
but I hope they will solve it in the future (an ARIN database update, or whatever it takes).
Edit: I can't request rDNS in WHMCS; when I click the [rDNS] button, the list includes only the old IP address instead of the new one.
FFME002.VIRM.AC still down for me.
Unsurprisingly, another anxious Chinese client. As I understand it, it's a culture gap: China is growing so fast that everybody has millions of deals riding on a VPS, so a ticket delayed for months is a sensational event.
However, VirMach is an American company, and Western people are used to waiting patiently; generally a ticket can be held for years without anyone complaining with a single word. It's a decent way of doing things that third-world countries would do well to learn.
I have an Intel VPS on a SJKVM7 node. Can I migrate to an AMD node?
I can confirm that the physical server is working.
Check if VNC via SolusVM works, if so then likely problem is IP address change, so try the re-configure network option.
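For anyone stuck at that step, a rough sketch of what to check from the VNC console (generic iproute2 commands; the address, gateway, and interface names below are placeholders - use the values shown in your SolusVM panel):

```shell
# Show the IPv4 addresses and routes the VM currently has. If the address
# doesn't match what SolusVM lists after a migration, networking wasn't
# reconfigured properly.
ip -4 addr show
ip route show
# To set it by hand (placeholder address/gateway/interface -- substitute
# the values from your panel):
#   ip addr add 149.57.161.xx/24 dev eth0
#   ip route add default via 149.57.161.1
```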
mine is NY10GKVM78
I know I paid peanuts for my BF special VPS, and I waited more than 12 hours without a single acknowledgement reply. I'm taking a leap of faith that they are so busy they have no time to keep up with the stream of tickets. =)
I already tried all these things:
-Virmach Panel: Operation Timed Out After 90001 Milliseconds With 0 Bytes Received
-Solus Panel: VNC - Failed to connect to server (code: 1006)
-Solus Panel: Network Reconfigure - Networking could not be reconfigured. Unknown operating system
-Solus Panel: Reinstall does not work
-Solus Panel: Rescue does not work
I already have an FFM IP (149.57...) assigned, as I did not migrate but ordered the VPS just 7 days ago.
FFME005 node is not locked anymore, but still no change - "Unknown Error".
If you have tried these, then say what you have tried, instead of us having to guess your level of expertise. Also be polite - a thanks always helps.
Your original statement was incorrect i.e.
The physical server is functioning - so your problem is elsewhere.
FFME002.VIRM.AC is down, you must be confused with another node.
![](https://i.imgur.com/agsybna.png)
My vps was migrated from AMSKVM5 to AMSD027.
I can't connect via ssh or ping
Tried Debian and Alma Linux, reconfigured the network, rebooted and it still doesn't work
@VirMach
Any tips to fix this?
I don't think so -
Status Online
IPv4 Address 2
IP Address 149.57.161.xxx
Virtualization Type (KVM)
Hostname xxx
Node FFME002.VIRM.AC
Strange.
Main IP pings: false
Node Online: false
Service online: offline
Operating System: linux-ubuntu-20.04-x86_64-minimal-latest-v2
Service Status:Active
Registration Date:2022-06-21
FFME002 down for me as well.
Not sure if it's the cause, but under "Node Name" I have FFME002 without the VIRM.AC suffix.
EDIT: It looks to be FFME002.VIRM.AC in SolusVM and FFME002 in the VirMach panel.
149.57.161.xx for me as well by the way
You'll get a chance to move it back.
We resolved a lot of issues with FFM but multiple servers still keep dropping an NVMe once in a while. Luckily they're not dying -- it's just a configuration issue similar to what we faced in Tokyo, but what's different is that the fix for Tokyo isn't working on them for some reason. This one seems related to swap space and I've made some progress in fixing most of them, at least to the point where we achieved higher stability.
With that said it's possible for most of the VMs on the node to function while some don't and we're monitoring the situation closely.
Some of these were also created incorrectly; it's in the queue to fix.
SolusVM spazzes out when a lot of reconfigurations happen at once. I'd recommend trying again now. I'll do another round of checks to see if the issue is widespread.
I tried again but it didn't work
Thanks
Looks like my little 384 MB instance force migrated from AMSKVM3 to FFME003.VIRM.AC today. (I didn't receive any advance email, notification, or ticket, so I wouldn't have known what was going on if not for LET.)
Once I was able to access the control panel I flipped on VNC and tried to boot, but no joy. I fought with the rescue system and the Debian installer a bit, and finally got a fresh install working. (Odd stuff with disks and network not being detected, and I'm sure the installer wasn't happy with only 384 MB.)
Now that I'm back up the first impression is that the system is much faster! I'll have to let it run awhile then look at the munin graphs, but I'd expect the disk latency to be lower.
I also ran into cloudflare rate limits on the control panel, but I'm sure the system is getting hammered with everyone trying to spin up at the same time.
Private message me your IP, I want to check and make sure there's no mass configuration error on our end. I can only assist you if it's an error affecting everyone, otherwise I'll reply and let you know it's isolated so you can create a ticket.
I've seen a few people say this. Wondering, do you have a valid email on file, etc.? Most of us got several.
Yep, because I got two copies of the "Upcoming service migration & upgrade" email explaining that I had services eligible for migration. And no, I didn't click on any migrate buttons.
Right on. Thanks for the clue. Sounds like a legit mistake was made.
FFME005 having IO errors since yesterday. FFME006 is starting to have similar errors.