
★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More


Comments

  • @sryan said:

    @vhhjkgl said:
    Reply from 176.119.148.1: bytes=32 time=520ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=538ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=375ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=324ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=328ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=234ms TTL=44
    Local problems? Node 40

    The boss is already working on it, so wait patiently. I think it will be solved.

    Yes, I know the boss is already working on it; I was just wondering about the huge difference compared to your latency.
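For anyone eyeballing the numbers above, here is a small sketch (not part of the thread, just a convenience) that pulls the `time=` values out of Windows-style ping output and summarizes them; the sample lines are copied from the quoted post:

```python
import re

# Round-trip times from the ping output quoted above (node 40 gateway).
ping_output = """\
Reply from 176.119.148.1: bytes=32 time=520ms TTL=44
Reply from 176.119.148.1: bytes=32 time=538ms TTL=44
Reply from 176.119.148.1: bytes=32 time=375ms TTL=44
Reply from 176.119.148.1: bytes=32 time=324ms TTL=44
Reply from 176.119.148.1: bytes=32 time=328ms TTL=44
Reply from 176.119.148.1: bytes=32 time=234ms TTL=44
"""

# Extract every time=...ms value from the reply lines.
times = [int(m) for m in re.findall(r"time=(\d+)ms", ping_output)]
avg = sum(times) / len(times)
jitter = max(times) - min(times)  # crude spread, max minus min

print(f"avg={avg:.1f}ms spread={jitter}ms")  # avg=386.5ms spread=304ms
```

The same one-liner works on any pasted ping log in this thread, which makes the "is it really ~40ms or ~400ms" comparisons less of a guessing game.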

  • @sryan said:

    @VirMach said:

    @umzak said:

    @VirMach said:
    TYOC040 needs to be rebooted again, I have an idea of what might be able to fix this more quickly related to the kernel.

    thanks, because the first reboot made my vps slower and ping worse

    TYOC040 update

    The kernel was previously upgraded at some point during beta to help with the disk issues. This was later found to be unnecessary, since we went with the other kernel-parameters fix instead. The newer kernel was causing compatibility issues with certain guest operating systems, related to libvirt/qemu. This was combined with the previous issue of certain really old/incompatible operating systems overloading the system, except in this case it was causing certain VMs to constantly shut off and on and go into various states of semi-usability, hence not just maxing out the CPU and being easily identifiable.

    Interrupts are still high, but now at a manageable level until we clear out the rest. There are still some phantom guests being started and stopped, but in much lower quantity, and I'll still have to look into those.

    Anyway, with the improvements so far, before:

    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed
                    |                           |                 |
    Clouvider       | London, UK (10G)          | busy            | 5.00 Mbits/sec
    Online.net      | Paris, FR (10G)           | busy            | 3.97 Mbits/sec
    WorldStream     | The Netherlands (10G)     | busy            | busy
    WebHorizon      | Singapore (400M)          | busy            | 3.78 Mbits/sec
    Clouvider       | NYC, NY, US (10G)         | busy            | 5.72 Mbits/sec

    After:

    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed
                    |                           |                 |
    Clouvider       | London, UK (10G)          | busy            | 15.1 Mbits/sec
    Online.net      | Paris, FR (10G)           | busy            | 34.4 Mbits/sec
    WorldStream     | The Netherlands (10G)     | busy            | busy
    WebHorizon      | Singapore (400M)          | busy            | 34.8 Mbits/sec
    Clouvider       | NYC, NY, US (10G)         | busy            | 13.7 Mbits/sec

    Still a long way to go, but we have demonstrated that it can be resolved by getting people onto compatible operating systems, and that it was definitely related to that initial issue as I said. The issue was just exacerbated by the kernel version.

    I'll post another update soon.

    @guagua_ya66 said:
    40 nodes. After restarting, its latency has returned to normal, and the network bandwidth is still a bit low.

    Good to hear. I'm looking at all the virtualization logs in bulk right now to see who we need to power down. I'm going to power down the ones that have issues; if they get powered back on without being re-installed and are left at the kernel panic screen again, they will be temporarily suspended and a ticket created with the customer, so we have an understanding that the VM needs to be re-installed.

    These pretty much have no customer data on them in most cases since they're broken, but we have to be 100% sure before we force a re-install, so it's best if customers opt to re-install their broken VMs on their own.

    I do know this will be helped by testing the functionality of each operating system, which is still on my to-do list for tonight.

    The latency has decreased on average, but there is still high latency to the gateway from my instance.

    PING 45.66.128.1 (45.66.128.1) 56(84) bytes of data.
    64 bytes from 45.66.128.1: icmp_seq=2 ttl=64 time=81.5 ms
    64 bytes from 45.66.128.1: icmp_seq=4 ttl=64 time=19.6 ms
    64 bytes from 45.66.128.1: icmp_seq=5 ttl=64 time=11.3 ms
    64 bytes from 45.66.128.1: icmp_seq=6 ttl=64 time=6.11 ms
    64 bytes from 45.66.128.1: icmp_seq=7 ttl=64 time=11.1 ms
    64 bytes from 45.66.128.1: icmp_seq=9 ttl=64 time=5.10 ms
    64 bytes from 45.66.128.1: icmp_seq=10 ttl=64 time=13.9 ms
    64 bytes from 45.66.128.1: icmp_seq=13 ttl=64 time=44.8 ms
    64 bytes from 45.66.128.1: icmp_seq=14 ttl=64 time=54.0 ms
    64 bytes from 45.66.128.1: icmp_seq=16 ttl=64 time=27.9 ms
    64 bytes from 45.66.128.1: icmp_seq=17 ttl=64 time=2.67 ms
    64 bytes from 45.66.128.1: icmp_seq=18 ttl=64 time=0.988 ms
    64 bytes from 45.66.128.1: icmp_seq=19 ttl=64 time=3.02 ms
    64 bytes from 45.66.128.1: icmp_seq=21 ttl=64 time=1.38 ms
    64 bytes from 45.66.128.1: icmp_seq=22 ttl=64 time=7.51 ms
    64 bytes from 45.66.128.1: icmp_seq=23 ttl=64 time=7.31 ms
    64 bytes from 45.66.128.1: icmp_seq=24 ttl=64 time=15.7 ms
    64 bytes from 45.66.128.1: icmp_seq=26 ttl=64 time=1.35 ms
    64 bytes from 45.66.128.1: icmp_seq=27 ttl=64 time=1.18 ms
    64 bytes from 45.66.128.1: icmp_seq=28 ttl=64 time=2.32 ms
    64 bytes from 45.66.128.1: icmp_seq=29 ttl=64 time=1.49 ms
    64 bytes from 45.66.128.1: icmp_seq=30 ttl=64 time=22.0 ms
    64 bytes from 45.66.128.1: icmp_seq=31 ttl=64 time=2.16 ms
    64 bytes from 45.66.128.1: icmp_seq=32 ttl=64 time=113 ms
    64 bytes from 45.66.128.1: icmp_seq=33 ttl=64 time=27.5 ms
    64 bytes from 45.66.128.1: icmp_seq=34 ttl=64 time=36.9 ms
    64 bytes from 45.66.128.1: icmp_seq=35 ttl=64 time=77.6 ms
    64 bytes from 45.66.128.1: icmp_seq=36 ttl=64 time=96.9 ms
    64 bytes from 45.66.128.1: icmp_seq=38 ttl=64 time=182 ms
    64 bytes from 45.66.128.1: icmp_seq=39 ttl=64 time=93.9 ms
    64 bytes from 45.66.128.1: icmp_seq=40 ttl=64 time=70.2 ms
    64 bytes from 45.66.128.1: icmp_seq=41 ttl=64 time=142 ms
    64 bytes from 45.66.128.1: icmp_seq=42 ttl=64 time=105 ms
    64 bytes from 45.66.128.1: icmp_seq=43 ttl=64 time=90.8 ms
    64 bytes from 45.66.128.1: icmp_seq=44 ttl=64 time=107 ms
    64 bytes from 45.66.128.1: icmp_seq=45 ttl=64 time=119 ms
    64 bytes from 45.66.128.1: icmp_seq=46 ttl=64 time=101 ms
    64 bytes from 45.66.128.1: icmp_seq=47 ttl=64 time=162 ms
    64 bytes from 45.66.128.1: icmp_seq=48 ttl=64 time=230 ms
    64 bytes from 45.66.128.1: icmp_seq=49 ttl=64 time=147 ms

    Anyway, thanks for your work on solving the problem on node 40.

    Please stop participating in spreading rumors. We have never discriminated against MJJ or Chinese customers.
    Please think about why you are the only one with a problem when no one else has one.
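Taken at face value, the before/after iperf3 tables quoted above work out to roughly 3x to 9x receive-speed improvements. A quick back-of-envelope check, using only the numbers from those tables ("busy" rows omitted):

```python
# Recv speeds (Mbits/sec) from the iperf3 tables quoted above,
# before and after the TYOC040 fixes.
before = {"London": 5.00, "Paris": 3.97, "Singapore": 3.78, "NYC": 5.72}
after = {"London": 15.1, "Paris": 34.4, "Singapore": 34.8, "NYC": 13.7}

# Per-location improvement factor.
gains = {loc: after[loc] / before[loc] for loc in before}

for loc, g in sorted(gains.items(), key=lambda kv: -kv[1]):
    print(f"{loc}: {g:.1f}x faster")
```

Of course, both runs are single samples on a congested node, so these factors indicate a trend rather than a guaranteed speed.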

  • @sryan said:

    @vhhjkgl said: […] Local problems? Node 40

    The boss is already working on it, so wait patiently. I think it will be solved.

    The ping was good the first few times, but then the latency gradually got worse.

  • edited April 2022

    @VirMach said: […] I do know this will be helped by testing the functionality of each operating system, which is still on my to-do list for tonight.

    I am looking forward to your good news.

  • @kaokao2222 said:
    Hi @VirMach
    Can you tell me when my order will be approved? I placed it on 7th April.

    I have already asked you twice before but haven't received an answer yet :(

    My Invoice ID is #1414080

    Thank you in advance !

    Ours is even from March 12, and it's not yet activated. 😬 You need a lot of patience, my friend. Like, really a lot. Big time.

  • vhhjkgl Member
    edited April 2022

    @sryan said: […] thanks for your work on solving the problem on node 40

    Reply from 176.119.148.1: bytes=32 time=88ms TTL=44
    Request timed out.
    Reply from 176.119.148.1: bytes=32 time=60ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=35ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=38ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=231ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=285ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=683ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=158ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=76ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=157ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=110ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=35ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=36ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=37ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=35ms TTL=44
    Request timed out.
    Reply from 176.119.148.1: bytes=32 time=183ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=399ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=391ms TTL=44
    This is my ping fluctuation.

  • zhuyijun Member
    edited April 2022

    @VirMach said:

    @tangming said:
    @VirMach
    Brother, I really understand your hard work, and I won't press you about when my order will be activated. However, I do notice that more 384MB plans have been activated than 768MB ones. I hope activation follows the principle of first come, first served.

    Sadly, my 768MB order number is earlier than my friends' activated 384MB orders, but it is still pending.

    We see services activated every day with order numbers later than ours, and we feel very sad.

    This is just because 384MB is the most popular plan and 768MB is the least popular, so due to the proportions there will obviously be more 384MB activations.

    I keep forgetting the ratios, but it's something like 50% of all plans are 384MB, 15% are 768MB, 25% are 1.5GB, and 10% are 2.5GB. I'm probably getting something wrong there, maybe mixing up 1.5GB and 2.5GB, or maybe 384MB is only like 35%, but you get the point.

    @VirMach I'm afraid you are wrong. Activation should depend on order time, not on proportions. What you are doing now is unfair, and that is why so many people are complaining and sad.
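VirMach's point about proportions can be made concrete: even under strict first come, first served, a 50%/15% plan mix means you would expect to see roughly three times as many 384MB activations as 768MB ones. A toy sketch using his (self-admittedly rough) percentages:

```python
# VirMach's rough plan mix from the post above; he flags these numbers
# as approximate and possibly mixed up, so treat them as illustrative.
plan_mix = {"384MB": 0.50, "768MB": 0.15, "1.5GB": 0.25, "2.5GB": 0.10}

# If activations are drawn evenly from the queue, per-plan counts simply
# track the mix. E.g. out of a hypothetical 1000 activations:
activations = {plan: round(1000 * share) for plan, share in plan_mix.items()}
print(activations)  # {'384MB': 500, '768MB': 150, '1.5GB': 250, '2.5GB': 100}
```

So seeing many more 384MB activations does not by itself show that 768MB orders are being skipped; only comparing order dates within the same plan would show that.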

  • @ravenchad said:

    […] ours is even from March 12, and it's not yet activated. 😬 You need a lot of patience, my friend.

    Please stop participating in spreading rumors. We have never discriminated against MJJ or Chinese customers.
    Please think about why you are the only one with a problem when no one else has one.

  • sryan Member

    @vhhjkgl said: […] This is my ping fluctuation.

    It's not stable; sometimes it gets very large (just as you), sometimes it's around 40ms.

  • AlwaysSkint Member
    edited April 2022

    @sryan said: some times getting very large (just as you)

    In these politically correct days, it's not very nice, calling out someone for being obese!

    Oh my God: 400ms!! Holy shit the World is gonna collapse around us. Anyone for a dial-up modem? I'll let it go cheap - it's just lying in a box somewhere in the attic.

  • @sryan said:

    @vhhjkgl said: […] Local problems? Node 40

    The boss is already working on it, so wait patiently. I think it will be solved.

    Is it a kernel incompatibility problem? When I upgraded the kernel from 4.9 to 5.3, the ping became very high.

  • VirMach Member, Patron Provider

    @sryan said: It's not stable, some times getting very large (just as you), some times around 40ms

    I'm logging everything as these spikes happen and taking notes. The periods of low latency are when I take action, and the periods of it being bad again are when everything breaks, as in a bunch of VMs crashing. I've controlled it to a good level now, but I need to finish keeping track of these and mass-contact the people with issues.

    TYOC040 Update

    It's still not perfect, it took way too long, and it was super annoying to do, and I don't even want to explain any of it right now, but whenever the VMs are not crashing it looks something like this to Vultr Tokyo. So it's definitely this, and once it's resolved we'll have full port speed for everyone.

    wget https://hnd-jp-ping.vultr.com/vultr.com.1000MB.bin
    --2022-04-10 05:01:54--  https://hnd-jp-ping.vultr.com/vultr.com.1000MB.bin
    Resolving hnd-jp-ping.vultr.com (hnd-jp-ping.vultr.com)... 108.61.201.151
    Connecting to hnd-jp-ping.vultr.com (hnd-jp-ping.vultr.com)|108.61.201.151|:443... connected.
    HTTP request sent, awaiting response... 200 OK
    Length: 1048576000 (1000M) [application/octet-stream]
    Saving to: ‘vultr.com.1000MB.bin.1’
    
    vultr.com.1000MB.bin.1                           100%[=========================================================================================================>]   1000M   104MB/s    in 9.9s
    
    2022-04-10 05:02:04 (101 MB/s) - ‘vultr.com.1000MB.bin.1’ saved [1048576000/1048576000]
    
    

    I'd do a YABS, but it's impossible to time it right. Well, it's possible, but I'd have to sit around for half an hour waiting between VMs crashing and other people running tests.

    Overall the network graph looks much better than before.

    Keep in mind it's still as easy to get a 1MB/s result as it is to get a 100MB/s result.
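The reported speed in the wget log above checks out: wget's "1000M" appears to be mebibytes, so 1,048,576,000 bytes over the 9.9 seconds shown comes to about 101 MB/s. A sketch of the arithmetic, assuming nothing beyond the transfer log:

```python
# Sanity-check the wget result quoted above: 1048576000 bytes in 9.9 s.
size_bytes = 1048576000
size_mib = size_bytes / (1024 * 1024)  # wget's "M" units are mebibytes
seconds = 9.9

speed = size_mib / seconds
print(f"{speed:.0f} MB/s")  # matches wget's reported ~101 MB/s
```

That is close to saturating a 1Gbps port (which tops out around 119 MiB/s), consistent with the "full port speed" claim when no VMs are crashing.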

    Thanked by 1FrankZ
  • VirMachVirMach Member, Patron Provider

    @vhhjkgl said:

    @sryan said:

    @vhhjkgl said:
    Reply from 176.119.148.1: bytes=32 time=520ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=538ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=375ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=324ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=328ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=234ms TTL=44
    Local problems? Node 40

    The boss is already working on it, so wait patiently. I think it will be solved.

    Is it a kernel incompatibility problem? When I upgraded the kernel from 4.9 to 5.3, the ping became very high.

    It's just random at this time.

    Whenever a VM is rebooting and crashing, it's slow. Whenever it's clear, it's fast. I've got the duration of it being slow to go down and the duration of it being fast to go up, but there's still more work to be done.

    Thanked by 1FrankZ
  • VirMachVirMach Member, Patron Provider
    edited April 2022

    And here's a little network graph screenshot for you guys; you can tell the difference by looking at it. All those spikes up to 2Gbps aren't actually 2Gbps; it's just how the monitoring software shows spikes over a long duration (most likely people testing the network).

    Everything before around 3:30 is from before the reboots and fixes.

  • VirMachVirMach Member, Patron Provider
    edited April 2022

    Here's another screenshot, side by side: load versus network. Notice how those large spikes mostly went away after some of the fixes. Those large spikes are when VMs would mass kernel panic and crash. If you overlay the two graphs, the period with these spikes is when networking was worse.

    Edit -- this is what I mean

  • @VirMach said:
    Here's another screenshot, side by side: load versus network. Notice how those large spikes mostly went away after some of the fixes. Those large spikes are when VMs would mass kernel panic and crash. If you overlay the two graphs, the period with these spikes is when networking was worse.

    Edit -- this is what I mean


    Reason for suspension: Banned for 40 login attempts. Suspension expires: 04/10/2022. I want to know why the official website login restricts my IP access, and why the same thing happens even after changing several IPs. What are your intentions? I don't know if others have the same problem.

  • ehabehab Member

    @VirMach , I miss those all-location one-paragraph update summaries.

    Maybe if you created a page on the VirMach website where we could check the status weekly, for example, that would be great.

    Thanks for any consideration.

  • VirMachVirMach Member, Patron Provider

    I'm beginning to suspend people booting into kernel panic on TYOC040. Suspension reason will be: "OS is causing overloading. Contact Priority Support."

    Just make a ticket if you're one of these people, in the priority department, and I'll try to get it sorted with you ASAP.

    Thanked by 2ZA_capetown FrankZ
  • xprebounxpreboun Member
    edited April 2022

    @tangming said:
    Reason for suspension: Banned for 40 login attempts. Suspension expires: 04/10/2022. I want to know why the official website login restricts my IP access, and why the same thing happens even after changing several IPs. What are your intentions? I don't know if others have the same problem.

    It clearly shows you did not even read the first few pages of this thread or some of VirMach's earlier comments before posting this.

    Seriously, this has happened many times during this preorder promo, and I was even banned for several days straight. After VirMach strengthened their anti-brute-force system it no longer happened to me and most other people, though.

    So in your case, either you logged in way too many times, or there's something wrong with your browser that forces you to visit the login page every time you refresh the webpage.

    P.S. Why would you change several IPs to log in? That isn't normal for most people and normally won't happen unless someone wants to do something horrible...

    Edit:
    I went back through the pages and noticed I was the first user to report this (on page 8)... https://lowendtalk.com/discussion/comment/3390364/#Comment_3390364

    As VirMach explained on page 9, this system is in place to prevent bruteforcing. I haven't seen any bans since he said he improved the system, and I'm a rather impatient person, so I think it's now adjusted to a reasonable level.
    https://lowendtalk.com/discussion/comment/3392054/#Comment_3392054

    Thanked by 1FrankZ
  • @VirMach said:
    I'm beginning to suspend people booting into kernel panic on TYOC040. Suspension reason will be: "OS is causing overloading. Contact Priority Support."

    Just make a ticket if you're one of these people, in the priority department, and I'll try to get it sorted with you ASAP.

    Please check ticket #685653, I have made one. Thanks.

    @VirMach When will the next batch be turned on? Will the 2.5GB RAM plans ordered on March 12th be scheduled to be turned on before April 15th?

  • jsfoxjsfox Member

    @VirMach said:
    I'm beginning to suspend people booting into kernel panic on TYOC040. Suspension reason will be: "OS is causing overloading. Contact Priority Support."

    Just make a ticket if you're one of these people, in the priority department, and I'll try to get it sorted with you ASAP.

    May I ask when you will handle the offline VMs on TYOC033 & TYOC035? Thank you.

  • @xpreboun said:

    @tangming said:
    Reason for suspension: Banned for 40 login attempts. Suspension expires: 04/10/2022. I want to know why the official website login restricts my IP access, and why the same thing happens even after changing several IPs. What are your intentions? I don't know if others have the same problem.

    It clearly shows you did not even read the first few pages of this thread or some of VirMach's earlier comments before posting this.

    Seriously, this has happened many times during this preorder promo, and I was even banned for several days straight. After VirMach strengthened their anti-brute-force system it no longer happened to me and most other people, though.

    So in your case, either you logged in way too many times, or there's something wrong with your browser that forces you to visit the login page every time you refresh the webpage.

    P.S. Why would you change several IPs to log in? That isn't normal for most people and normally won't happen unless someone wants to do something horrible...

    Edit:
    I went back through the pages and noticed I was the first user to report this (on page 8)... https://lowendtalk.com/discussion/comment/3390364/#Comment_3390364

    As VirMach explained on page 9, this system is in place to prevent bruteforcing. I haven't seen any bans since he said he improved the system, and I'm a rather impatient person, so I think it's now adjusted to a reasonable level.
    https://lowendtalk.com/discussion/comment/3392054/#Comment_3392054

    Because VirMach does not send emails after the VM is turned on, I can only log in regularly to check.

  • VirMachVirMach Member, Patron Provider
    edited April 2022

    @ehab said:
    @VirMach , I miss those all-location one-paragraph update summaries.

    Maybe if you created a page on the VirMach website where we could check the status weekly, for example, that would be great.

    Thanks for any consideration.

    I'll see what I can do, no promises as usual and be prepared to be disappointed.

    @vhhjkgl said:

    @VirMach said:
    I'm beginning to suspend people booting into kernel panic on TYOC040. Suspension reason will be: "OS is causing overloading. Contact Priority Support."

    Just make a ticket if you're one of these people, in the priority department, and I'll try to get it sorted with you ASAP.

    Please check ticket #685653, I have made one. Thanks.

    Unsuspended, but please re-install your OS if you can, or check the logs inside to see what's causing the problem. I noticed a lot of these are Debian 10 and 11. If possible, try another OS. If not, try installing from ISO. If that doesn't work either, at least try Debian 11 instead of Debian 10; that seems to be less problematic.
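    For checking the logs, a minimal sketch of what to look for (a hypothetical helper, not an official diagnostic): scan a kernel log file for the usual crash markers. The marker list is an assumption; real panics can print other strings, and on systems with persistent journald you would pull the previous boot's kernel log (e.g. `journalctl -k -b -1`) instead of a file.

    ```shell
    # Hypothetical helper: count lines in a kernel log that look like a
    # crash (kernel panic / oops / call trace). Marker list is an assumption.
    scan_for_panic() {
      grep -icE 'kernel panic|oops|call trace' "$1"
    }

    # Demo on a tiny sample log written to /tmp:
    cat > /tmp/sample_kern.log <<'EOF'
    [  12.3] usb 1-1: new high-speed USB device
    [  99.9] Kernel panic - not syncing: VFS: Unable to mount root fs
    EOF
    scan_for_panic /tmp/sample_kern.log   # prints 1
    ```

    On a Debian guest the file to scan would typically be `/var/log/kern.log`; if the VM won't boot at all, mount its disk from a rescue ISO first and scan the log there.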

    Thanked by 2ehab FrankZ
  • VirMachVirMach Member, Patron Provider

    @hhoosstt said:
    @VirMach When will the next batch be turned on? Will the 2.5GB RAM plans ordered on March 12th be scheduled to be turned on before April 15th?

    I need to do the following first:

    • Fix TYOC040
    • Fix VMs on TYOC035 and TYOC033 that were on failed disk
    • Set up TYOC038

    I'm going to be away for 1 or 2 hours right now, preparing some shipments. So I'd say realistically creations will resume again in 4 to 6 hours, maybe more. Sorry for the continued delay.

    @jsfox said:

    @VirMach said:
    I'm beginning to suspend people booting into kernel panic on TYOC040. Suspension reason will be: "OS is causing overloading. Contact Priority Support."

    Just make a ticket if you're one of these people, in the priority department, and I'll try to get it sorted with you ASAP.

    May I ask when you will handle the offline VMs on TYOC033 & TYOC035? Thank you.

    In a few hours I'll look at these next.

  • @passwa said:

    Because VirMach does not send emails after the VM is turned on, I can only log in regularly to check.

    I kept the service page open in my browser and refreshed it periodically (sometimes I do see Cloudflare challenges), but it never redirected me back to the login page in the past 48 hours.

  • VirMachVirMach Member, Patron Provider
    edited April 2022

    @xpreboun said:

    @passwa said:

    Because VirMach does not send emails after the VM is turned on, I can only log in regularly to check.

    I kept the service page open in my browser and refreshed it periodically (sometimes I do see Cloudflare challenges), but it never redirected me back to the login page in the past 48 hours.

    If you have a login page open (in any tab) in Chrome or any other browser that tries to save memory, the browser will constantly refresh that page in the background.

    Thanked by 2AlwaysSkint FrankZ
  • @VirMach said:

    @ehab said:
    @VirMach , I miss those all-location one-paragraph update summaries.

    Maybe if you created a page on the VirMach website where we could check the status weekly, for example, that would be great.

    Thanks for any consideration.

    I'll see what I can do, no promises as usual and be prepared to be disappointed.

    @vhhjkgl said:

    @VirMach said:
    I'm beginning to suspend people booting into kernel panic on TYOC040. Suspension reason will be: "OS is causing overloading. Contact Priority Support."

    Just make a ticket if you're one of these people, in the priority department, and I'll try to get it sorted with you ASAP.

    Please check ticket #685653, I have made one. Thanks.

    Unsuspended, but please re-install your OS if you can, or check the logs inside to see what's causing the problem. I noticed a lot of these are Debian 10 and 11. If possible, try another OS. If not, try installing from ISO. If that doesn't work either, at least try Debian 11 instead of Debian 10; that seems to be less problematic.

    I first installed CentOS 7; when that had problems I switched to Debian 10. I'll go install Debian 11 now.

  • VirMachVirMach Member, Patron Provider

    @vhhjkgl said:

    @VirMach said:

    @ehab said:
    @VirMach , I miss those all-location one-paragraph update summaries.

    Maybe if you created a page on the VirMach website where we could check the status weekly, for example, that would be great.

    Thanks for any consideration.

    I'll see what I can do, no promises as usual and be prepared to be disappointed.

    @vhhjkgl said:

    @VirMach said:
    I'm beginning to suspend people booting into kernel panic on TYOC040. Suspension reason will be: "OS is causing overloading. Contact Priority Support."

    Just make a ticket if you're one of these people, in the priority department, and I'll try to get it sorted with you ASAP.

    Please check ticket #685653, I have made one. Thanks.

    Unsuspended, but please re-install your OS if you can, or check the logs inside to see what's causing the problem. I noticed a lot of these are Debian 10 and 11. If possible, try another OS. If not, try installing from ISO. If that doesn't work either, at least try Debian 11 instead of Debian 10; that seems to be less problematic.

    I first installed CentOS 7; when that had problems I switched to Debian 10. I'll go install Debian 11 now.

    I'll try to test these operating systems as well to see why they're still having problems. We're using the official SolusVM templates, which are supposed to be up to date and working on an AlmaLinux host, so I'm not sure why they're broken. I haven't had time but it's on my priority list.

    Thanked by 1FrankZ
  • @VirMach said:

    @vhhjkgl said:

    @VirMach said:

    @ehab said:
    @VirMach , I miss those all-location one-paragraph update summaries.

    Maybe if you created a page on the VirMach website where we could check the status weekly, for example, that would be great.

    Thanks for any consideration.

    I'll see what I can do, no promises as usual and be prepared to be disappointed.

    @vhhjkgl said:

    @VirMach said:
    I'm beginning to suspend people booting into kernel panic on TYOC040. Suspension reason will be: "OS is causing overloading. Contact Priority Support."

    Just make a ticket if you're one of these people, in the priority department, and I'll try to get it sorted with you ASAP.

    Please check ticket #685653, I have made one. Thanks.

    Unsuspended, but please re-install your OS if you can, or check the logs inside to see what's causing the problem. I noticed a lot of these are Debian 10 and 11. If possible, try another OS. If not, try installing from ISO. If that doesn't work either, at least try Debian 11 instead of Debian 10; that seems to be less problematic.

    I first installed CentOS 7; when that had problems I switched to Debian 10. I'll go install Debian 11 now.

    I'll try to test these operating systems as well to see why they're still having problems. We're using the official SolusVM templates, which are supposed to be up to date and working on an AlmaLinux host, so I'm not sure why they're broken. I haven't had time but it's on my priority list.

    OK

This discussion has been closed.