
★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More


Comments

  • VirMach Member, Patron Provider

    @miau said:
    Is it just me, or is Alma's NetworkManager wasting an unreasonable amount of CPU just sitting there?

    I had to uninstall it and use the good old network-scripts to get my load to idle at the expected level.

    A lot of these operating systems are doing an awful job at idling, but we're living in the modern age, where even Linux is so "advanced" that it's bloated on the minimal version.

    I'm getting old.

    Thanked by 2: AlwaysSkint, karjaj
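
    @miau's workaround above, on AlmaLinux 8, amounts to swapping one service for another. A minimal sketch, assuming the legacy network-scripts package (still shipped for EL8 derivatives, dropped in EL9):

    # install the legacy init scripts, then swap the services
    dnf install -y network-scripts
    systemctl disable --now NetworkManager
    systemctl enable --now network
    # interface configs then live in /etc/sysconfig/network-scripts/ifcfg-*
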
  • @VirMach
    Brother, I really understand your hard work, and I won't push about when my order will be activated. However, I notice that far more 384MB plans have been activated than 768MB ones. I hope activations follow the principle of first come, first served.

    Sadly, my 768MB order number is earlier than my friends' activated 384MB orders, but mine is still pending.

    Every day we see services activated whose order numbers are later than ours, and it makes us very sad.

  • @VirMach
    Can I request to be moved to Tokyo via ticket? I can wait a whole year, I don't mind.
    But I don't want to open a ticket if it means just wasting your time. I'm just asking.

    Thing is, seeing so much Tokyo "want" made me realize I should play there one day.
    If not, that is fine, I love LA since it's 10ms away from me :D

  • @tangming said:
    @VirMach
    Brother, I really understand your hard work, and I won't push about when my order will be activated. However, I notice that far more 384MB plans have been activated than 768MB ones. I hope activations follow the principle of first come, first served.

    Sadly, my 768MB order number is earlier than my friends' activated 384MB orders, but mine is still pending.

    Every day we see services activated whose order numbers are later than ours, and it makes us very sad.

    Brother, if you're urging, just say you're urging.

  • VirMach Member, Patron Provider

    @tangming said:
    @VirMach
    Brother, I really understand your hard work, and I won't push about when my order will be activated. However, I notice that far more 384MB plans have been activated than 768MB ones. I hope activations follow the principle of first come, first served.

    Sadly, my 768MB order number is earlier than my friends' activated 384MB orders, but mine is still pending.

    Every day we see services activated whose order numbers are later than ours, and it makes us very sad.

    This is just because 384MB is the most popular plan and 768MB is the least popular, so due to the proportions there will obviously be more 384MB activations.

    I keep forgetting the ratios, but it's something like: 50% of all plans are 384MB, 15% are 768MB, 25% are 1.5GB, and 10% are 2.5GB. I'm probably screwing something up there, maybe mixing up 1.5GB and 2.5GB, or maybe 384MB is only around 35%, but you get the point.

  • VirMach Member, Patron Provider

    @duckeeyuck said:
    @VirMach
    Can I request to be moved to Tokyo via ticket? I can wait a whole year, I don't mind.
    But I don't want to open a ticket if it means just wasting your time. I'm just asking.

    Thing is, seeing so much Tokyo "want" made me realize I should play there one day.
    If not, that is fine, I love LA since it's 10ms away from me :D

    No tickets for these, please. If it's offered, it will 100% be without tickets and will appear on your service details page.

  • VirMach Member, Patron Provider

    TYOC040 Update

    I performed some standard cleanup related to libvirt and it briefly helped. I'm still seeing some qemu-kvm spikes and slowness, but less than before.

    It looks like a lot of processes are still running for VMs that were never created properly, as I'm seeing many log files without KVM IDs attached that continue outputting errors. This coincides with the spikes: the processes get terminated by libvirtd but seemingly come back over and over.

    I'm most likely going to have to reboot this node. I'm sending out emergency reboot emails now.

    Thanked by 1: FrankZ
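
    Hunting down qemu-kvm processes that libvirt no longer tracks, as described above, might look roughly like this on the host. A sketch, assuming libvirt-launched guests carry a -uuid argument on their command line:

    # UUIDs libvirt knows about (active and inactive)
    virsh list --all --uuid | sed '/^$/d' | sort > /tmp/known-uuids
    # UUIDs embedded in running qemu command lines
    pgrep -af qemu | grep -oP '(?<=-uuid )[0-9a-f-]+' | sort > /tmp/running-uuids
    # anything running but unknown to libvirt is an orphan candidate
    comm -13 /tmp/known-uuids /tmp/running-uuids
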
  • skyey Member

    @VirMach said:
    TYOC035 and TYOC033 update:

    These are being rebooted in ~10 minutes and will have two disks taken out of the disk pool, with the VMs rebooted. The XPG disks are receiving another error. They will be replaced by Samsung Pro, WD, or equivalent. We don't expect anyone to have data loss, as it appears the disks failed relatively quickly. Once again, we're throwing all these disks out of the servers we're sending to the datacenters, so hopefully we won't have further issues with them.

    So to confirm: this is not the same issue as before; it appears to be a new issue related to the disks failing. If anyone does have important data on a disk that they need to retrieve, the disk will be kept and sent back to us so we can likely still recover the data; just open a ticket.

    This means that we just lost some space in Tokyo. We're sending more servers this week, and we should still get close to all deployments, as I'm setting up another server now and waiting on the DC for an IPMI reset on another. I'm checking with the DC whether they have the other disks we sent on hand, so we can put the new disks in these new nodes and avoid further problems, but they may be in storage at an office away from the datacenter, and it's the weekend. We'll see. (Later on, we'll streamline this process.)

    Everyone "offline" on these will be recreated on another node.

    Does it mean that you still need several more days to set up all the pre-orders (not including storage plans) in Tokyo?

  • VirMach Member, Patron Provider

    @skyey said: Does it mean that you still need several more days to set up all the pre-orders (not including storage plans) in Tokyo?

    It just means we lost about 8% of our total capacity for now. I'll have to do the math to see how that will impact the timeline. It also means around 60 people's VMs need to be regenerated manually right now, which will take some time.

    Thanked by 2: hotsnow, FrankZ
  • @duckeeyuck said:
    @VirMach
    Can I request to be moved to Tokyo via ticket? I can wait a whole year, I don't mind.
    But I don't want to open a ticket if it means just wasting your time. I'm just asking.

    Thing is, seeing so much Tokyo "want" made me realize I should play there one day.
    If not, that is fine, I love LA since it's 10ms away from me :D

    Please stop participating in spreading rumors. We have never discriminated against MJJ or Chinese customers.
    Please think about why you are the only one with a problem when no one else has one.

  • VirMach Member, Patron Provider

    TYOC040 is definitely the initial issue described; it's just more complicated, as suspected. There are VMs causing high interrupts without necessarily showing themselves through high CPU usage. I'm digging deeper right now to see if we can flag these down quickly.

    Thanked by 1: FrankZ
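
    A generic way to see which interrupt sources are spiking on a Linux host (a sketch of the idea, not necessarily the method used here; it only assumes /proc/interrupts):

    # sum each interrupt source's per-CPU counters, sample twice, diff
    snap() {
      awk '$1 ~ /:$/ {s=0; for (i=2; i<=NF; i++) if ($i ~ /^[0-9]+$/) s+=$i; print $1, s}' /proc/interrupts
    }
    snap > /tmp/irq.before; sleep 5; snap > /tmp/irq.after
    # sources with the most new interrupts over those 5 seconds float to the top
    awk 'NR==FNR {a[$1]=$2; next} {d=$2-a[$1]; if (d>0) print d, $1}' /tmp/irq.before /tmp/irq.after | sort -rn | head

    Tying a hot source back to a specific guest still means correlating with per-domain stats, e.g. virsh domstats --vcpu.
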
  • 360717125 Member
    edited April 2022

    @VirMach said:
    TYOC040 is definitely the initial issue described; it's just more complicated, as suspected. There are VMs causing high interrupts without necessarily showing themselves through high CPU usage. I'm digging deeper right now to see if we can flag these down quickly.

    My pending ticket is #836141. Please refund; I need to buy another machine.

  • @360717125 said:

    @VirMach said:
    TYOC040 is definitely the initial issue described; it's just more complicated, as suspected. There are VMs causing high interrupts without necessarily showing themselves through high CPU usage. I'm digging deeper right now to see if we can flag these down quickly.

    I need to refund a pending 768MB plan to buy a 2TB machine. Is it okay to open a service ticket for a refund? My ticket is #836141. Please refund.

    Please stop participating in spreading rumors. We have never discriminated against MJJ or Chinese customers.
    Please think about why you are the only one with a problem when no one else has one.

  • @360717125 said:

    @VirMach said:
    TYOC040 is definitely the initial issue described; it's just more complicated, as suspected. There are VMs causing high interrupts without necessarily showing themselves through high CPU usage. I'm digging deeper right now to see if we can flag these down quickly.

    I need to refund a pending 768MB plan to buy a 2TB machine. Is it okay to open a service ticket for a refund? My ticket is #836141. Please refund.

    It is more efficient to transfer it to someone in need than to refund!

  • 360717125 Member
    edited April 2022

    My machine is not activated, so I need a refund to buy a higher-configuration machine.

  • VirMach Member, Patron Provider

    TYOC040 needs to be rebooted again; I have an idea of what might fix this more quickly, related to the kernel.

    Thanked by 1: FrankZ
  • @VirMach said:
    TYOC040 needs to be rebooted again; I have an idea of what might fix this more quickly, related to the kernel.

    How long will it take for my refund request to be approved?

  • umzak Member

    @VirMach said:
    TYOC040 needs to be rebooted again; I have an idea of what might fix this more quickly, related to the kernel.

    Thanks, because the first reboot made my VPS slower and its ping worse.

  • storm Member

    @360717125 said:

    @VirMach said:
    TYOC040 needs to be rebooted again; I have an idea of what might fix this more quickly, related to the kernel.

    How long will it take for my refund request to be approved?

    More than 10 minutes.

    Thanked by 2: AlwaysSkint, FrankZ
  • vhhjkgl Member
    edited April 2022

    @VirMach To be honest, give up on Node 40.

  • @vhhjkgl said:
    @VirMach To be honest, give up on node 40. It should not be a software problem; it is a hardware problem. No matter what you fix, it is no use, just a waste of time. But why did it run quite smoothly some time ago? What modifications did you make?

    Please stop participating in spreading rumors. We have never discriminated against MJJ or Chinese customers.
    Please think about why you are the only one with a problem when no one else has one.

  • Node 40: after restarting, its latency has returned to normal, but the network bandwidth is still a bit low.

  • @VirMach said:
    It also means around 60 people's VMs need to be regenerated manually right now, which will take some time.

    When can this be done? Perhaps they will be recreated on TYOC034? :p

  • VirMach Member, Patron Provider

    @umzak said:

    @VirMach said:
    TYOC040 needs to be rebooted again; I have an idea of what might fix this more quickly, related to the kernel.

    Thanks, because the first reboot made my VPS slower and its ping worse.

    TYOC040 update

    The kernel was previously upgraded to help with the disk issues during beta at some point in time. This was later found to be unnecessary, since we went with the other fix via kernel parameters. The newer kernel was causing compatibility issues with certain guest operating systems, related to libvirt/qemu. This combined with the previous issue of certain really old/incompatible operating systems overloading the system, except in this case it was causing certain VMs to constantly shut off and on and go into various states of semi-usability, hence not just maxing out the CPU and being easily identifiable.

    Interrupts are still high but now at a manageable level until we clear out the rest. There are still some phantom guests being started and stopped, but in much lower quantity, and I'll still have to look into those.
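
    Rolling a host back to an older installed kernel on an EL-family system is typically a grubby one-liner. A sketch only; the version string below is hypothetical, not necessarily the kernel involved here:

    grubby --info=ALL | grep ^kernel                            # list installed kernels
    grubby --set-default=/boot/vmlinuz-4.18.0-348.el8.x86_64    # hypothetical older kernel
    reboot
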

    Anyway, with the improvements so far, before:

    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed
                    |                           |                 |
    Clouvider       | London, UK (10G)          | busy            | 5.00 Mbits/sec
    Online.net      | Paris, FR (10G)           | busy            | 3.97 Mbits/sec
    WorldStream     | The Netherlands (10G)     | busy            | busy
    WebHorizon      | Singapore (400M)          | busy            | 3.78 Mbits/sec
    Clouvider       | NYC, NY, US (10G)         | busy            | 5.72 Mbits/sec
    

    After:

    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed
                    |                           |                 |
    Clouvider       | London, UK (10G)          | busy            | 15.1 Mbits/sec
    Online.net      | Paris, FR (10G)           | busy            | 34.4 Mbits/sec
    WorldStream     | The Netherlands (10G)     | busy            | busy
    WebHorizon      | Singapore (400M)          | busy            | 34.8 Mbits/sec
    Clouvider       | NYC, NY, US (10G)         | busy            | 13.7 Mbits/sec
    

    Still a long way to go, but we have demonstrated that it can be resolved by getting people on compatible operating systems, and that it was definitely related to that initial issue as I described. The issue was just exacerbated by the kernel version.

    I'll post another update soon.
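
    Output in this table format matches what the community yabs.sh benchmark script prints. If that's what was used, the iperf3 section can be reproduced from inside a VM with roughly:

    curl -sL yabs.sh | bash    # runs disk, iperf3, and Geekbench tests by default
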

    @guagua_ya66 said:
    Node 40: after restarting, its latency has returned to normal, but the network bandwidth is still a bit low.

    Good to hear. I'm looking through all the virtualization logs in bulk right now to see who we need to power down. I'm going to power off the ones that have issues; if they get powered back on without being re-installed and are left at the kernel panic screen again, they will be temporarily suspended and a ticket created with the customer so we have an understanding that the VM needs to be re-installed.

    These pretty much have no customer data on them in most cases, since they're broken, but we have to be 100% sure before we just force a re-install, so it's best that customers opt to re-install their broken VMs on their own.

    I do know this will be helped by testing the functionality of each operating system, which is still on my to-do list for tonight.

    Thanked by 3: tototo, FAT32, FrankZ
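
    Scanning the virtualization logs in bulk for crash-looping guests, as described above, could be as simple as counting start banners. A sketch, assuming libvirt's default log path and its usual "starting up" line:

    # guests that keep getting restarted float to the top
    for f in /var/log/libvirt/qemu/*.log; do
      printf '%s %s\n' "$(grep -c 'starting up' "$f")" "$f"
    done | sort -rn | head
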
  • @VirMach
    Hello, I'd like to ask when the 2TB storage machines in Tokyo are expected to come online?


  • There is always inexplicable traffic consuming my bandwidth...

  • sryan Member

    @VirMach said:
    (TYOC040 update quoted in full above: the kernel rollback, the before/after iperf3 results, and the plan to power down broken VMs)

    The latency has decreased on average, but there is still high latency to the gateway from my instance.

    PING 45.66.128.1 (45.66.128.1) 56(84) bytes of data.
    64 bytes from 45.66.128.1: icmp_seq=2 ttl=64 time=81.5 ms
    64 bytes from 45.66.128.1: icmp_seq=4 ttl=64 time=19.6 ms
    64 bytes from 45.66.128.1: icmp_seq=5 ttl=64 time=11.3 ms
    64 bytes from 45.66.128.1: icmp_seq=6 ttl=64 time=6.11 ms
    64 bytes from 45.66.128.1: icmp_seq=7 ttl=64 time=11.1 ms
    64 bytes from 45.66.128.1: icmp_seq=9 ttl=64 time=5.10 ms
    64 bytes from 45.66.128.1: icmp_seq=10 ttl=64 time=13.9 ms
    64 bytes from 45.66.128.1: icmp_seq=13 ttl=64 time=44.8 ms
    64 bytes from 45.66.128.1: icmp_seq=14 ttl=64 time=54.0 ms
    64 bytes from 45.66.128.1: icmp_seq=16 ttl=64 time=27.9 ms
    64 bytes from 45.66.128.1: icmp_seq=17 ttl=64 time=2.67 ms
    64 bytes from 45.66.128.1: icmp_seq=18 ttl=64 time=0.988 ms
    64 bytes from 45.66.128.1: icmp_seq=19 ttl=64 time=3.02 ms
    64 bytes from 45.66.128.1: icmp_seq=21 ttl=64 time=1.38 ms
    64 bytes from 45.66.128.1: icmp_seq=22 ttl=64 time=7.51 ms
    64 bytes from 45.66.128.1: icmp_seq=23 ttl=64 time=7.31 ms
    64 bytes from 45.66.128.1: icmp_seq=24 ttl=64 time=15.7 ms
    64 bytes from 45.66.128.1: icmp_seq=26 ttl=64 time=1.35 ms
    64 bytes from 45.66.128.1: icmp_seq=27 ttl=64 time=1.18 ms
    64 bytes from 45.66.128.1: icmp_seq=28 ttl=64 time=2.32 ms
    64 bytes from 45.66.128.1: icmp_seq=29 ttl=64 time=1.49 ms
    64 bytes from 45.66.128.1: icmp_seq=30 ttl=64 time=22.0 ms
    64 bytes from 45.66.128.1: icmp_seq=31 ttl=64 time=2.16 ms
    64 bytes from 45.66.128.1: icmp_seq=32 ttl=64 time=113 ms
    64 bytes from 45.66.128.1: icmp_seq=33 ttl=64 time=27.5 ms
    64 bytes from 45.66.128.1: icmp_seq=34 ttl=64 time=36.9 ms
    64 bytes from 45.66.128.1: icmp_seq=35 ttl=64 time=77.6 ms
    64 bytes from 45.66.128.1: icmp_seq=36 ttl=64 time=96.9 ms
    64 bytes from 45.66.128.1: icmp_seq=38 ttl=64 time=182 ms
    64 bytes from 45.66.128.1: icmp_seq=39 ttl=64 time=93.9 ms
    64 bytes from 45.66.128.1: icmp_seq=40 ttl=64 time=70.2 ms
    64 bytes from 45.66.128.1: icmp_seq=41 ttl=64 time=142 ms
    64 bytes from 45.66.128.1: icmp_seq=42 ttl=64 time=105 ms
    64 bytes from 45.66.128.1: icmp_seq=43 ttl=64 time=90.8 ms
    64 bytes from 45.66.128.1: icmp_seq=44 ttl=64 time=107 ms
    64 bytes from 45.66.128.1: icmp_seq=45 ttl=64 time=119 ms
    64 bytes from 45.66.128.1: icmp_seq=46 ttl=64 time=101 ms
    64 bytes from 45.66.128.1: icmp_seq=47 ttl=64 time=162 ms
    64 bytes from 45.66.128.1: icmp_seq=48 ttl=64 time=230 ms
    64 bytes from 45.66.128.1: icmp_seq=49 ttl=64 time=147 ms

    Anyway, thanks for your work on solving the problem on node 40.

  • umzak Member

    After 2nd reboot, ping is better.

  • vhhjkgl Member
    edited April 2022

    Ping statistics for 176.119.148.1xx:
        Packets: Sent = 20, Received = 18, Lost = 2 (10% loss),
    Approximate round trip times in milliseconds:
        Minimum = 285ms, Maximum = 656ms, Average = 435ms
    Local problems? Node 40. I'm using a Tencent virtual host (Windows, Shanghai).

  • sryan Member

    @vhhjkgl said:
    Reply from 176.119.148.1: bytes=32 time=520ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=538ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=375ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=324ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=328ms TTL=44
    Reply from 176.119.148.1: bytes=32 time=234ms TTL=44
    Local problems? Node 40

    The boss is already working on it, so wait patiently. I think it will be solved.

This discussion has been closed.