★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More
This discussion has been closed.
Comments
As I've said so many times, there's a VLAN issue: you should not put all VPS into one VLAN, or packets from the others will cause all the VPS to crash. You must split the VLAN or set up OVS virtual VLANs to fix the issue. New nodes work fine because they only have a few VPS; once that increases, the packets will increase exponentially. SolusVM is shit, but the unreachable issue is not caused by its own bug; the connection gets dropped by the slave node because of how many packets it has to handle. Why are you guys so confident that the big VLAN will work fine? Sorry, I'm drunk, I have to say this again and again...
I had to post the test result again to show that I agree with you
Virmach
.)
So any update about the paid migration?
@VirMach
Is TYOC026 offline?
My VPS suddenly disconnected; ping.pe shows 100% loss and the panel shows it offline.
Reinstalling the system doesn't work either.
Should I create a ticket about this?
yes me too.
Thank you VirMach for finally releasing my server in Tokyo. I am happy, and waiting for you to stabilize the situation.
This node should remain better than the others since it's filled with larger plans (and it's already pretty much full.) Generally lower quantity of people means lower abuse and less likelihood of running into weird problems with the network card trying to keep up with all the packets or dealing with a bunch of 384MB plans going into kernel panic and getting hacked.
I'll try to come up with a plan to spread out these 384MB packages to either [A] make all nodes equally bad or hopefully [B] make them more unaffected by them.
This is definitely not something we're confident in and it was a big concern of ours. I've monitored collisions and that does not seem to be the problem.
Everything points to it being something else.
For example, TYOC026, which is filled more exclusively with larger services most likely not utilized for VPNs, has a much lower packet count right now, and networking is much smoother on it than on the other nodes using the same motherboard, NIC, and switch on the same VLAN.
TYOC033 is the node with the highest quantity of these smaller 384MB packages used for VPNs and has 3x the packet count. The networking on it looks OK as well.
TYOC029 looks like one of the worst right now, and it actually has lower CPU usage than TYOC033, a lower quantity of VMs on it, and lower total network usage in terms of bits in/out, but it spikes up to 6x the packet count of TYOC026 and remains 3x higher at baseline. At this point the NIC struggles to keep up and we need to make modifications to improve it. I know what we can do to fix it; the problem is that SolusVM doesn't support it. What we'd need has probably been in the feature requests for years, so I need to try to find other solutions.

We've got it stable enough for now, and later I want to explore using multiple ports and balancing between them since the node does have another NIC available. I just need to do more research to see whether this would actually help or whether aggregating them would just add more problems.

This node (TYOC029) has no collisions and an extremely low error count (there are a lot of measures for this, this is just an overview of one of them) of 0.0000001778438%. Ideally we'd want this to be perfect, but it's not what's majorly contributing to the packet loss or latency.
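For what it's worth, the "multiple ports and balancing between them" idea is usually done on Linux with bonding/LACP. A minimal sketch follows; the interface names (eth0/eth1/bond0/br0) are assumptions, the switch side must be configured for LACP, and SolusVM would still need to be pointed at the resulting bridge:

```shell
# Sketch: aggregate two NICs into one 802.3ad (LACP) bond.
# Requires root, the kernel bonding module, and iproute2.
modprobe bonding
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
# Attach the bond to the bridge the VMs use (bridge name is hypothetical):
ip link set bond0 master br0
```

With `layer3+4` hashing, different VM flows spread across both physical ports, which is the "balancing" being discussed; whether it helps here depends on whether the bottleneck is per-port packet rate.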
I did have someone with more expertise in networking take a look, and he more or less agreed. Of course I'm open to all more specific suggestions as I'm still extremely limited on time.
I'm not saying the VLAN concern isn't there and we will just forget about it, but for this particular issue right now, that's not what is highly affecting it.
I've been working on this and got it stable. One of the specific kernel parameters I have to set for the Gen4 NVMe not to drop off failed to stick, and I just had to fix that and reboot. Confirmed that it's there now.
Which node?
Well, the VLAN may cause hidden problems, as others have seen. Just like you said about the VPN nodes and small-RAM-plan nodes: they crash because they have more VPS per node. In a flat VLAN, ARP packets received per VPS ≈ packets delivered across the whole VLAN, and the node's CPU has to handle that multiplied by the number of VPS on the node. The more VPS you have per node, and the more VPS in the whole VLAN, the more significant the collapse. Whether you believe it or not, try putting a node and a /24 in an isolated VLAN, create a VPS in it, and try the following commands:
[Debian]
apt install -y sysstat && sar -n DEV 2 5
[CentOS]
yum install -y sysstat && sar -n DEV 2 5
Check this in a VPS in the isolated VLAN and in a VPS on the other Tokyo nodes. You can see how many packets the VPS receives per second. The node's CPU has to handle (packets received per VPS) × (number of VPS on the node), so the more VPS you put on a node, the worse it gets.
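The scaling claim above can be sanity-checked with rough numbers. All figures below are assumptions for illustration, not measurements from these nodes:

```shell
# Assumed figures, for illustration only:
TOTAL_VMS_IN_VLAN=700     # every VM sharing the flat VLAN
BCAST_RATE_PER_VM=3       # broadcast/ARP packets per second each VM emits
VMS_PER_NODE=60           # guests on one host node

# Broadcasts reach every VM in the VLAN, so each VM receives roughly:
PER_VM=$(( TOTAL_VMS_IN_VLAN * BCAST_RATE_PER_VM ))
echo "$PER_VM"                       # 2100 pkts/s -- same order as the ~2000 reported

# And the host's NIC/CPU delivers that stream to every guest it carries:
echo $(( PER_VM * VMS_PER_NODE ))    # 126000 pkts/s of pure broadcast fan-out
```

Even with these made-up inputs, the fan-out term grows with both the VLAN size and the per-node VM count, which matches the pattern described.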
How to solve it? TBH I'm not a professor of this. You should hire somebody with a CCIE certification to help you redesign the network. I have one VPS in your Buffalo location and I've tested it: it works fine, receiving around 4-6 packets per second. (Compared to ~2000 in Tokyo, which is abnormal.)
Splitting the VLAN on the switch hardware is not a good idea; as Frank said, it loses flexibility in IP assignment. However, you can set up OVS instead. I don't know how to do this; find a professional or discuss it with your network engineer. You've done a correct design in Buffalo; do it again in the new locations and things will be fine.
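For reference, the OVS approach being gestured at here is per-port VLAN tagging on a shared bridge, which isolates broadcast domains in software while keeping IP assignment flexible. A rough sketch, assuming Open vSwitch is installed and vnet0/vnet1 are the VM tap devices (all names are made up, and SolusVM would still need to cooperate):

```shell
# Sketch: one OVS bridge, each VM on its own access-port VLAN.
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 eth0           # trunk uplink, carries all tags
ovs-vsctl add-port ovsbr0 vnet0 tag=101  # VM 1 isolated in VLAN 101
ovs-vsctl add-port ovsbr0 vnet1 tag=102  # VM 2 isolated in VLAN 102
# Broadcasts from vnet0 now stay inside VLAN 101 instead of
# flooding every guest attached to the bridge.
```

The trade-off is that the upstream router then has to handle inter-VLAN routing, so this is a design change, not a drop-in fix.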
TYOC030.
edit:
I think you should consider his opinion, because VirMach used to be known for a network more stable than the price would suggest, and the problem in Tokyo now may be a network problem, not a hardware problem.
I am using google translate.
Dear VirMach, any storage offer?
@VirMach
My host was migrated to SEAZ005 by you (the company); after that it lost all connectivity to the world, without any changes like IP or OS. The only way I can access it is VNC, and from VNC I can't even successfully ping your DHCP server IP.
I posted a ticket and have waited almost a week; could you please just take a look?
My ticket ID: #837953
Debug Info:
Main IP pings: false
Node Online: false
Service online: online
Operating System: linux-debian-9-x86_64-minimal-latest
Service Status: Active
(Today I reinstalled the Ryzen-compatible D10 template, but still nothing changed.)
Thanks.
Does this mean the fix for TYOC026 is done?
My VPS on TYOC026 is still offline, and troubleshooting told me "Your server's host node appears to be down", so maybe TYOC026 ran into issues again? Or maybe it was me trying rescue and reinstall before realizing it was the host that ran into trouble.
If I get "The node is currently locked", does that mean I need to repent for something?
That means the node (the actual host node server, not your VPS) is locked for system maintenance, i.e. VirMach is changing something, and you're denied panel access so you won't break things by, say, powering something up in the middle of a migration.
TL;DR: No, you are not "suspended" / "banned"; just wait for VirMach to finish working on it.
About node 039, or 176.119..
For 2, maybe 3 days now
the network has been dead,
even SSH.
I want to know what's actually going on,
how to avoid it, and what I can do about it.
...
no!!
sorry~
......
DM me your IP address. I can't replicate that issue on that node.
I don't expect VirMach will be doing more pre-orders on storage VPS. Some storage VPS locations will be up soon™, and once the previous pre-orders have been provisioned, and if all is stable, I would then expect VirMach to offer storage VPS again. Of course this is just my opinion and not an official VirMach comment.
@StarlightX I wanted to confirm if your IP on the new VPS shows the same IP as the old VPS, or are they different?
wink
if you are wrong
are you willing to do some push-ups
haaaahhhh
My feedback on VirMach's adjustments in Tokyo:
Node TYOC029 looking very good for me.
Node TYOC039 looking perfect for me.
Time on graphs is CDT
As a member of yoursunny's team push-ups I do push-ups every day.
Back-orders of storage plans have always been available, even though VirMach knows he couldn't possibly handle all the orders and subsequent tickets.
https://billing.virmach.com/index.php?rp=/store/cloud-storage
Lemme see how Tokyo CPU and domestic network performance deteriorate.
I stand corrected, as the 1TB, 2TB, and 4TB are available for pre-order.
I am going to stop posting in this thread now, as I see that I have outlived my usefulness.
If anyone needs me I'll be in 2018
Storage was switched a while ago to all the new locations to phase out the old ones, as I didn't feel comfortable selling on the existing CC nodes since migrating large storage VMs is already going to take a long time.
Although at the time of doing that we definitely expected them to be up much sooner.
If it does bother anyone, I have a radical idea: don't purchase it and pretend I placed it as sold out instead.
The same, I mean ABSOLUTELY the same and UNCHANGED.
The thing I can't understand is that SEAZ005 doesn't appear in the issue list, which should mean my server should be running normally. Am I the only one suffering? ORZ
Or maybe @VirMach forgot to give me a new IP?
I double-checked the email and found that they did say the IP would be changed,
but changed to what?
"Your service will boot back up when migration is completed, with a new IP address. You can view your new IP address on your service details page."
The IP on my service details page has NOT changed, neither on the VirMach default page nor in the Solus control panel; it's still the old one.
That is something you are probably not going to be able to fix yourself, as that IP belongs to Colo-Crossings and will not work in the new DC. I expect this migration issue is known to VirMach and is on the to-do list to be fixed. This happened to some other people, and I expect it was one of the reasons VirMach stopped doing migrations for the moment.
@VirMach Quick one: Are Los Angeles and NYC Metro ready for new services? If so, can you help us activate the pre-orders? I know some have already been migrated to NYC Metro...
I wanted to ask about IPv6 availability, but I think that can wait till everything else is sorted out.
So the "Do nothing." advice is NOT true, because doing nothing blackholed my server.
But in a way it's also true: now I can't do anything.
Well, the only thing I can do is chase my homework deadline, which is why I didn't notice that mail.
How are we supposed to limit the I/O speeds?
Most programs including rclone can enforce Mbps limits.
How to limit IOPS and average utilization?
Well, for example, I used this command in the past to limit usage in general; I used it to avoid hitting limits that would cause my IP to be nullrouted in a DC where I had a server.
rsync --rsync-path="ionice -c 3 nice rsync" -ave "ssh -p22" /sourcepath/ root@myip:/destinationpath/
Something like that or similar would most probably work in this case as well.
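To put a number on the earlier rclone mention: rclone has a built-in `--bwlimit` flag, and the same ionice trick works for any local copy tool. A sketch, where the paths, the remote name, and the 10M figure are all placeholders:

```shell
# Cap rclone's transfer bandwidth at ~10 MiB/s (remote/paths are hypothetical):
rclone copy /sourcepath/ remote:destinationpath --bwlimit 10M

# Or run any copy tool in the idle I/O scheduling class, so it only
# consumes disk time no other process wants:
ionice -c 3 nice cp -a /sourcepath/. /destinationpath/
```

`ionice -c 3` limits disk contention rather than throughput, so combining it with a bandwidth cap covers both the IOPS and the Mbps side of the question.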