★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More - Page 214


Comments

  • lowendclient Member
    edited May 2022

    As I've said so many times, there's a VLAN issue: you should not put all VPSes into one VLAN, or packets from the others will cause every VPS to crash. You must split the VLAN or set up OVS virtual VLANs to fix the issue. New nodes work fine because they only have a few VPSes; once that increases, the packets will increase exponentially. SolusVM is shit, but the unreachable issue is not caused by its own bug; the connection is dropped by the slave node because of how many packets it has to handle. Why are you guys so confident that the big VLAN will work fine? Sorry, I'm drunk, I have to say this again and again...

  • windtune Member
    edited May 2022

    @lowendclient said:
    As I've said so many times, there's a VLAN issue: you should not put all VPSes into one VLAN, or packets from the others will cause every VPS to crash. You must split the VLAN or set up OVS virtual VLANs to fix the issue. New nodes work fine because they only have a few VPSes; once that increases, the packets will increase exponentially. SolusVM is shit, but the unreachable issue is not caused by its own bug; the connection is dropped by the slave node because of how many packets it has to handle. Why are you guys so confident that the big VLAN will work fine? Sorry, I'm drunk, I have to say this again and again...

    I had to post the test result again to show that I agree with you
    Virmach

  • pdd Member

    @VirMach said:

    .)
    So any update about the paid migration?

  • sky3967 Member

    @VirMach
    Is TYOC026 offline?
    The VPS suddenly disconnected; ping.pe shows 100% loss and the panel shows it offline.
    Reinstalling the system is not working.

    Should I create a ticket about this?

  • Durs Member

    @sky3967 said:
    @VirMach
    Is TYOC026 offline?
    The VPS suddenly disconnected; ping.pe shows 100% loss and the panel shows it offline.
    Reinstalling the system is not working.

    Should I create a ticket about this?

    yes me too.

  • Durs Member
    edited May 2022

    Thank you VirMach for finally releasing my server in Tokyo. I am happy, and waiting for you to stabilize the situation. :#

  • VirMach Member, Patron Provider

    @kzed said:
    got provisioned on TYO26, so far pretty snappy.
    good job @VirMach!

    This node should remain better than the others since it's filled with larger plans (and it's already pretty much full.) Generally lower quantity of people means lower abuse and less likelihood of running into weird problems with the network card trying to keep up with all the packets or dealing with a bunch of 384MB plans going into kernel panic and getting hacked.

    I'll try to come up with a plan to spread out these 384MB packages to either [A] make all nodes equally bad or hopefully [B] make them more unaffected by them.

    @lowendclient said:
    As I've said so many times, there's a VLAN issue: you should not put all VPSes into one VLAN, or packets from the others will cause every VPS to crash. You must split the VLAN or set up OVS virtual VLANs to fix the issue. New nodes work fine because they only have a few VPSes; once that increases, the packets will increase exponentially. SolusVM is shit, but the unreachable issue is not caused by its own bug; the connection is dropped by the slave node because of how many packets it has to handle. Why are you guys so confident that the big VLAN will work fine? Sorry, I'm drunk, I have to say this again and again...

    This is definitely not something we're confident in and it was a big concern of ours. I've monitored collisions and that does not seem to be the problem.

    Everything points to it being something else.

    For example, TYOC026, which is filled more exclusively with larger services most likely not utilized for VPNs, has a much lower packet count right now, and networking is much smoother on it than on the other nodes using the same motherboard, NIC, and switch on the same VLAN.

    TYOC033 is the node with the highest quantity of these smaller 384MB packages used for VPNs and has 3x the packet count. The networking on it looks OK as well.

    TYOC029 looks like one of the worst right now. It actually has lower CPU usage than TYOC033, a lower quantity of VMs, and lower total network usage in terms of bits in/out, but it spikes up to 6x the packet count of TYOC026 and stays 3x higher at baseline. At that point the NIC struggles to keep up, and we need to make modifications to improve it.

    I know what we can do to fix it; the problem is that SolusVM doesn't support it. What we'd need has probably been in the feature requests for years, so I need to try to find other solutions. We've got it stable enough for now, and later I want to explore using multiple ports and balancing traffic between them, since the node does have another NIC available. I just need to do more research to see whether this would actually help or whether aggregating them would just add more problems.

    This node (TYOC029) has no collisions and an extremely low error count of 0.0000001778438% (there are many measures of this; that is just an overview of one of them). Ideally we'd want this to be perfect, but it's not what is majorly contributing to the packet loss or latency.
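    For what it's worth, balancing traffic across a second port on Linux is usually done with the bonding driver. A rough sketch only, not VirMach's actual setup: the interface names are placeholders, the mode is an assumption, and the switch would need a matching LACP port-channel.

```shell
# Hypothetical sketch: aggregate the node's two NIC ports with 802.3ad
# (LACP) bonding. enp1s0f0 / enp1s0f1 are placeholder interface names.
modprobe bonding
ip link add bond0 type bond mode 802.3ad
ip link set bond0 type bond xmit_hash_policy layer3+4
ip link set enp1s0f0 down; ip link set enp1s0f0 master bond0
ip link set enp1s0f1 down; ip link set enp1s0f1 master bond0
ip link set bond0 up
# The VM bridge would then enslave bond0 instead of a single port.
```

    Whether SolusVM's bridge management tolerates a bond device is exactly the open question above.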

    I did have someone with more expertise in networking take a look, and he more or less agreed. Of course I'm open to all more specific suggestions as I'm still extremely limited on time.

    I'm not saying the VLAN concern isn't there and we will just forget about it, but for this particular issue right now, that's not what is highly affecting it.

    @sky3967 said:
    @VirMach
    Is TYOC026 offline?
    The VPS suddenly disconnected; ping.pe shows 100% loss and the panel shows it offline.
    Reinstalling the system is not working.

    Should I create a ticket about this?

    @lemoncube said:

    @kzed said: Anyone from the same node have the same issue?

    yep, not reachable since around 10:52:00 UTC.

    @kzed said:

    @kzed said:
    got provisioned on TYO26, so far pretty snappy.
    good job @VirMach!

    might have said that too early; it seems down/unreachable now? noVNC is stuck at "Starting VNC handshake". Anyone from the same node have the same issue?

    @louiejordan said:
    is TYOC026 down?

    I've been working on this and got it stable. One of the specific kernel parameters I have to set so the Gen4 NVMe drives don't drop off failed to stick, and I just had to fix that and reboot. Confirmed that it's in place now.

  • VirMach Member, Patron Provider

    @windtune said:

    @lowendclient said:
    As I've said so many times, there's a VLAN issue: you should not put all VPSes into one VLAN, or packets from the others will cause every VPS to crash. You must split the VLAN or set up OVS virtual VLANs to fix the issue. New nodes work fine because they only have a few VPSes; once that increases, the packets will increase exponentially. SolusVM is shit, but the unreachable issue is not caused by its own bug; the connection is dropped by the slave node because of how many packets it has to handle. Why are you guys so confident that the big VLAN will work fine? Sorry, I'm drunk, I have to say this again and again...

    I had to post the test result again to show that I agree with you
    Virmach

    Which node?

  • lowendclient Member
    edited May 2022

    @VirMach said:

    Well, the VLAN may cause hidden problems, as it seems others have seen. Just like you said about the VPN nodes and the small-RAM-plan nodes: they crashed because they have more VPSes per node. In a VLAN, ARP packets received per VPS ≈ VPS count per node × packets delivered across the whole VLAN. The more VPSes you have on a node, and the more VPSes in the whole VLAN, the more significant the collapse becomes. Whether you believe it or not, try putting a node and a /24 of IPs in an isolated VLAN, create a VPS in it, and try the following commands:

    [Debian]
    apt install -y sysstat && sar -n DEV 2 5

    [CentOS]
    yum install -y sysstat && sar -n DEV 2 5

    Check this in a VPS in the isolated VLAN and in a VPS on the other Tokyo nodes. You can see how many packets the VPS receives per second. The node's CPU then has to handle (packets received per VPS × number of VPSes on the node), so the more VPSes you put on a node, the worse it gets.
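    The back-of-the-envelope arithmetic above can be sketched as a toy model. All numbers below are hypothetical illustrations, not measurements:

```python
# Toy model of the claim: on one flat VLAN every VM receives every other
# VM's broadcast (ARP) traffic, so per-VM broadcast load scales with the
# total VM count on the segment. All inputs are made-up numbers.

def broadcast_pps_seen(total_vms_on_vlan: int, arp_pps_per_vm: float) -> float:
    """Broadcast packets/sec each VM sees from the rest of the VLAN."""
    return (total_vms_on_vlan - 1) * arp_pps_per_vm

# An isolated /24 with ~50 VMs vs. a flat VLAN shared by many nodes:
print(broadcast_pps_seen(50, 0.1))     # small segment: a few pps
print(broadcast_pps_seen(20000, 0.1))  # big shared VLAN: ~2000 pps
```

    The point is only the scaling: the per-VM rate in the big case is orders of magnitude higher, in line with the ~4-6 pps vs. ~2000 pps figures quoted below.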

    How to solve it? TBH I'm not an expert at this. You should hire someone with a CCIE certification to help you redesign the network. I have one VPS in your Buffalo location and I've tested it; it works fine, and the packets it receives per second are around 4-6. (Compared to that, ~2000 in Tokyo is abnormal.)

    Splitting VLANs on the switch hardware is not a good idea; as Frank said, it loses the flexibility of IP assignment. However, you can set up OVS instead. I don't know how to do this; find a professional or discuss it with your network engineer. You've done a correct design in Buffalo; do it again in the new locations and things will be fine.
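    For reference, the OVS approach mentioned above would roughly mean giving each VM's port its own access VLAN tag, so broadcast traffic stops being shared across tenants. A hedged sketch only: the bridge and port names are placeholders, and SolusVM would have to be made aware of such tagging.

```shell
# Hypothetical Open vSwitch sketch: per-port access VLAN tags keep one
# VM's broadcast traffic away from the others. br0/vnet* are placeholders.
ovs-vsctl add-br br0
ovs-vsctl add-port br0 vnet0 tag=101   # VM 1 isolated on VLAN 101
ovs-vsctl add-port br0 vnet1 tag=102   # VM 2 isolated on VLAN 102
ovs-vsctl list-ports br0               # verify both ports attached
```

    Routing between those VLANs (and to the uplink) would still need to happen somewhere, which is where the network redesign comes in.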

  • windtune Member
    edited May 2022

    @VirMach said:

    @windtune said:

    @lowendclient said:
    As I've said so many times, there's a VLAN issue: you should not put all VPSes into one VLAN, or packets from the others will cause every VPS to crash. You must split the VLAN or set up OVS virtual VLANs to fix the issue. New nodes work fine because they only have a few VPSes; once that increases, the packets will increase exponentially. SolusVM is shit, but the unreachable issue is not caused by its own bug; the connection is dropped by the slave node because of how many packets it has to handle. Why are you guys so confident that the big VLAN will work fine? Sorry, I'm drunk, I have to say this again and again...

    I had to post the test result again to show that I agree with you
    Virmach

    Which node?

    TYOC030.
    edit:
    I think you should consider his opinion, because VirMach used to be known for a network more stable than the price would suggest, and the current problem in Tokyo may be a network problem, not a hardware problem.
    (I am using Google Translate.)

    Thanked by: lowendclient
  • ariq01 Member

    Dear VirMach, any storage offer?

  • @VirMach
    My host was migrated to SEAZ005 by your company; after that it lost all connectivity to the outside world, without any changes such as IP or OS. The only way I can access it is VNC, and in VNC I cannot even successfully ping your DHCP server IP.

    I posted a ticket and have waited almost a week; could you please take a look?
    My ticket id: #837953

    Debug Info:
    Main IP pings: false
    Node Online: false
    Service online: online
    Operating System: linux-debian-9-x86_64-minimal-latest
    Service Status:Active

    (Today I reinstalled the Ryzen-compatible D10 template, but still nothing changed.)

    Thanks.

  • TYTY Member

    @VirMach said: I've been working on this and got it stable. One of the specific kernel parameters I have to set so the Gen4 NVMe drives don't drop off failed to stick, and I just had to fix that and reboot. Confirmed that it's in place now.

    Does this mean the fix for TYOC026 is done?

    My VPS on TYOC026 is still offline, and troubleshooting told me "Your server's host node appears to be down", so maybe TYOC026 ran into issues again? Or maybe it was just me trying rescue and reinstall before realizing it was the host that ran into trouble.

    Main IP pings: false
    Node Online: false
    Service online: offline
    Operating System: linux-debian-11-x86_64-gen2-v1
    Service Status:Active
    Registration Date:2022-03-13
    
  • If I get "The node is currently locked", does that mean I need to repent for something?

  • JabJab Member
    edited May 2022

    @dirtminer said: If I get "The node is currently locked", does that mean I need to repent for something?

    That means the node (the actual host server, not your VPS) is locked for system maintenance, i.e. VirMach is changing something, and you are denied panel access so you won't break things by, say, powering something up in the middle of a migration.

    TL;DR: No, you are not "suspended" / "banned"; just wait for VirMach to finish working on it.

    Thanked by: dirtminer, FrankZ
  • qwerttaa Member
    edited May 2022

    about TYOC039, or 176.119..
    for 2 or maybe 3 days
    the network has been dead
    even SSH
    I want to know what's actually going on
    and how to avoid it / what I can do about it

    ...
    no!!
    sorry~
    ......

  • VirMach Member, Patron Provider

    @qwerttaa said:
    about TYOC039, or 176.119..
    for 2 or maybe 3 days
    the network has been dead
    even SSH
    I want to know what's actually going on
    and how to avoid it / what I can do about it

    ...
    no!!
    sorry~
    ......

    DM me your IP address. I can't replicate that issue on that node.

  • FrankZ Veteran
    edited May 2022

    @ariq01 said: Dear VirMach, any storage offer?

    I don't expect VirMach will be doing more pre-orders on storage VPS. Some storage VPS locations will be up soon™, and after the previous pre-orders have been provisioned, and if all is stable, I would then expect VirMach to offer storage VPS again. Of course this is just my opinion and not an official VirMach comment :sunglasses:


    @StarlightX I wanted to confirm if your IP on the new VPS shows the same IP as the old VPS, or are they different?

    Thanked by: ariq01
  • @VirMach said:

    @qwerttaa said:
    about 039 or 176.119..
    2 days or maybe 3 days
    network is dead
    even ssh
    i want to know what's going on actually
    how to avoid it and what i can do for that

    ...
    no!!
    sorry~
    ......

    DM me your IP address. I can't replicate that issue on that node.

    wink

  • @FrankZ said:

    @ariq01 said: Dear VirMach, any storage offer?

    I don't expect VirMach will be doing more pre-orders on storage VPS. Some storage VPS locations will be up soon™, and after the previous pre-orders have been provisioned, and if all is stable, I would then expect VirMach to offer storage VPS again. Of course this is just my opinion and not an official VirMach comment :sunglasses:


    @StarlightX I wanted to confirm if your IP on the new VPS shows the same IP as the old VPS, or are they different?

    if you are wrong

    are you willing to do some push-ups
    haaaahhhh

  • FrankZ Veteran
    edited May 2022

    My feedback on VirMach's adjustments in Tokyo:

    Node TYOC029 looking very good for me.


    Node TYOC039 looking perfect for me.


    Time on graphs is CDT


    @qwerttaa said: if you are wrong

    are you willing to do some push-ups
    haaaahhhh

    As a member of yoursunny's team push-ups I do push-ups every day. :sunglasses:

  • foitin Member

    @FrankZ said:

    @ariq01 said: Dear VirMach, any storage offer?

    I don't expect VirMach will be doing more pre-orders on storage VPS.

    Back-orders of storage plans have always been available, even though VirMach knows he couldn't possibly handle all the orders and subsequent tickets.

    https://billing.virmach.com/index.php?rp=/store/cloud-storage

    Lemme see how Tokyo CPU and domestic network performance deteriorate. :(

  • FrankZ Veteran
    edited May 2022

    @foitin said:

    @FrankZ said:

    @ariq01 said: Dear VirMach, any storage offer?

    I don't expect VirMach will be doing more pre-orders on storage VPS.

    Back-orders of storage plans have always been available, even though VirMach knows he couldn't possibly handle all the orders and subsequent tickets.

    https://billing.virmach.com/index.php?rp=/store/cloud-storage

    Lemme see how Tokyo CPU and domestic network performance deteriorate. :(

    I stand corrected, as the 1TB, 2TB, and 4TB are available for pre-order.

    I am going to stop posting in this thread now, as I see that I have outlived my usefulness.
    If anyone needs me I'll be in 2018

  • VirMach Member, Patron Provider

    @foitin said:

    @FrankZ said:

    @ariq01 said: Dear VirMach, any storage offer?

    I don't expect VirMach will be doing more pre-orders on storage VPS.

    Back-orders of storage plans have always been available, even though VirMach knows he couldn't possibly handle all the orders and subsequent tickets.

    https://billing.virmach.com/index.php?rp=/store/cloud-storage

    Lemme see how Tokyo CPU and domestic network performance deteriorate. :(

    Storage was switched a while ago to all the new locations to phase out the old ones, as I didn't feel comfortable selling on the existing CC nodes, since migrating large storage VMs is already going to take a long time.

    Although at the time of doing that we definitely expected them to be up much sooner.

    If it does bother anyone, I have a radical idea: don't purchase it and pretend I placed it as sold out instead.

  • StarlightX Member
    edited May 2022

    @FrankZ said:

    @ariq01 said: Dear VirMach, any storage offer?

    I don't expect VirMach will be doing more pre-orders on storage VPS. Some storage VPS locations will be up soon™, and after the previous pre-orders have been provisioned, and if all is stable, I would then expect VirMach to offer storage VPS again. Of course this is just my opinion and not an official VirMach comment :sunglasses:


    @StarlightX I wanted to confirm if your IP on the new VPS shows the same IP as the old VPS, or are they different?

    the same, I mean ABSOLUTELY the same and UNCHANGED. o:)

    The thing I can't understand is that SEAZ005 doesn't appear in the issue list, which should mean my server is running normally. Or am I the only one suffering? ORZ

    Or did @VirMach perhaps forget to give me a new IP? :'(

    I double-checked my email and found that they did say the IP would be changed,

    but changed to what?

    "Your service will boot back up when migration is completed, with a new IP address. You can view your new IP address on your service details page."

    the IP in my service details has NOT changed; neither the VirMach default page nor the Solus control panel shows a new one, just the old one

  • FrankZ Veteran

    @StarlightX said: the same, I mean ABSOLUTELY the same and UNCHANGED.

    That is something you are probably not going to be able to fix yourself, as that IP belongs to ColoCrossing and will not work in the new DC. I expect that this migration issue is known to VirMach and is on the to-do list to be fixed. This happened to some other people, and I expect it was one of the reasons VirMach stopped doing migrations for the moment.

    Thanked by: StarlightX, fan, skorous
  • @VirMach Quick one: are Los Angeles and NYC Metro ready for new services? If so, can you help us activate the pre-orders? I know some have already been migrated to NYC Metro...

    I wanted to ask about IPv6 availability, but I think that can wait till everything else is sorted out.

  • @FrankZ said:

    @ariq01 said: Dear VirMach, any storage offer?

    I don't expect VirMach will be doing more pre-orders on storage VPS. Some storage VPS locations will be up soon™, and after the previous pre-orders have been provisioned, and if all is stable, I would then expect VirMach to offer storage VPS again. Of course this is just my opinion and not an official VirMach comment :sunglasses:


    @StarlightX I wanted to confirm if your IP on the new VPS shows the same IP as the old VPS, or are they different?

    @FrankZ said:

    @StarlightX said: the same, I mean ABSOLUTELY the same and UNCHANGED.

    That is something you are probably not going to be able to fix yourself, as that IP belongs to ColoCrossing and will not work in the new DC. I expect that this migration issue is known to VirMach and is on the to-do list to be fixed. This happened to some other people, and I expect it was one of the reasons VirMach stopped doing migrations for the moment.

    So the "Do nothing." is NOT true o:) because doing nothing blackholed my server

    But it's also true in a way: now I can't do anything :'(

    Well, the only thing I can do is catch up on my homework deadline, which is the reason I didn't notice that mail

  • yoursunny Member, IPv6 Advocate
    edited May 2022

    @FrankZ said:
    It might be more reasonable for you to appeal your suspension and say you are sorry for not reading the AUP beforehand and in the future you will limit the speed of your backups in accordance with the I/O AUP specs, which are:

    Customer’s Service cannot average more than 80 IOPS within any two (2) hour period, cannot burst above 300MB/s disk write average for more than ten (10) minutes, cannot average more than 300 write operations per second for more than 1 hour, and cannot be above 20% average utilization within any six (6) hour period.

    Complete AUP here

    and then wait patiently as suspension appeals are the lowest priority tickets. Then in the future actually limit the speed of your backups to stay within the AUP limits.

    How are we supposed to limit the I/O speeds?
    Most programs including rclone can enforce Mbps limits.
    How to limit IOPS and average utilization?

  • zafouhar Veteran
    edited May 2022

    @yoursunny said:

    @FrankZ said:
    It might be more reasonable for you to appeal your suspension and say you are sorry for not reading the AUP beforehand and in the future you will limit the speed of your backups in accordance with the I/O AUP specs, which are:

    Customer’s Service cannot average more than 80 IOPS within any two (2) hour period, cannot burst above 300MB/s disk write average for more than ten (10) minutes, cannot average more than 300 write operations per second for more than 1 hour, and cannot be above 20% average utilization within any six (6) hour period.

    Complete AUP here

    and then wait patiently as suspension appeals are the lowest priority tickets. Then in the future actually limit the speed of your backups to stay within the AUP limits.

    How are we supposed to limit the I/O speeds?
    Most programs including rclone can enforce Mbps limits.
    How to limit IOPS and average utilization?

    Well, for example, I used this command in the past to limit usage in general; I used it to avoid hitting limits that would have caused my IP to be nullrouted in a DC where I had a server.

    rsync --rsync-path="ionice -c 3 nice rsync" -ave "ssh -p22" /sourcepath/ root@myip:/destinationpath/

    Something like that, or similar, would most probably work in this case as well.
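    On @yoursunny's IOPS question specifically: besides `ionice`, cgroup v2 can cap read/write IOPS directly on kernels with the io controller enabled. A hedged sketch only, assuming root, a cgroup2 mount at the usual path, and that 8:0 stands in for the backing disk's major:minor (check with `lsblk`); the rclone remote name is a placeholder too:

```shell
# Hypothetical sketch: cap a backup job at 80 read/write IOPS with
# cgroup v2 io.max. "8:0" is a placeholder major:minor (see `lsblk`);
# requires root and the io controller enabled on the parent cgroup.
mkdir -p /sys/fs/cgroup/backup
echo "8:0 riops=80 wiops=80" > /sys/fs/cgroup/backup/io.max
echo $$ > /sys/fs/cgroup/backup/cgroup.procs   # move this shell into the group
rclone sync /data remote:backup --bwlimit 10M  # bandwidth cap on top of IOPS cap
```

    That would address the IOPS and utilization limits in the AUP, while `--bwlimit` (a real rclone flag) handles the MB/s side.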

This discussion has been closed.