New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More
This discussion has been closed.
Comments
A lot of these operating systems are doing an awful job of idling, but that's the modern age: even Linux is so "advanced" that the minimal version is bloated.
I'm getting old.
@VirMach
Brother, I understand your hard work. I won't push about when my order will be activated. However, I've noticed that more 384MB orders are being activated than 768MB ones, and I hope activation follows the principle of first come, first served.
Sadly, my 768MB order number is earlier than my friends' already-activated 384MB orders, but it is still pending.
Every day we see services activated whose order numbers are later than ours, and it makes us sad.
@VirMach
Can I request to be moved to Tokyo via ticket? I can wait a whole year for you, I don't mind.
But I don't want to open a ticket if it just means wasting your time. I'm only asking.
Thing is, seeing so much Tokyo "want" made me realize I should visit there one day.
If not, that's fine. I love LA, since it's 10ms away from me.
Brother, if you're urging, just say you're urging.
This is just because 384MB is the most popular plan and 768MB is the least popular, so purely by proportion there will be more 384MB activations.
I keep forgetting the ratios, but it's something like 50% of all plans are 384MB, 15% are 768MB, 25% are 1.5GB, and 10% are 2.5GB. I'm probably screwing something up there, maybe mixing up 1.5GB and 2.5GB, or maybe 384MB is only around 35%, but you get the point.
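Taken at face value, those rough proportions alone explain the activation mix. A quick back-of-the-envelope sketch, where the 200-activation batch size is made up purely for illustration:

```python
# Rough plan mix quoted above; the author stresses these are approximate.
mix = {"384MB": 0.50, "768MB": 0.15, "1.5GB": 0.25, "2.5GB": 0.10}

batch = 200  # hypothetical number of activations processed in one pass
expected = {plan: round(batch * share) for plan, share in mix.items()}
print(expected)  # 384MB activations dominate purely by proportion
```

Even if the exact percentages are off, any mix this lopsided means most activations people see will be 384MB plans, with no first-come-first-served violation implied.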
No tickets for these please, if it's offered it will 100% be without tickets and appear on your service details page.
TYOC040 Update
I performed some standard cleanup related to libvirt and it briefly helped. I'm seeing some spikes for qemu-kvm still and slowness but less than before.
It looks like many processes are still running for VMs that were never created properly: I'm seeing many log files without KVM IDs attached that continue outputting errors. This coincides with the spikes, where the processes get terminated by libvirtd but seemingly come back over and over.
I'm most likely going to have to reboot this node. I'm sending out emergency reboot emails now.
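The triage described above (guest logs with no KVM ID attached, for VMs that never created properly) could be sketched like this; the "kvmNNN.log" naming convention and the sample filenames are assumptions for illustration, not taken from the post:

```python
import re

# Flag libvirt guest logs whose filenames carry no KVM ID: likely guests
# that never created properly and keep being restarted under libvirtd.
# The "kvmNNN.log" naming scheme here is a hypothetical convention.
logs = [
    "kvm1042.log",
    "guest-unknown-7f3a.log",
    "kvm1108.log",
    "guest-unknown-9b21.log",
]
orphans = [f for f in logs if not re.fullmatch(r"kvm\d+\.log", f)]
print(orphans)  # candidates to clean up before the emergency reboot
```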
Does it mean you still need several more days to set up all the pre-orders (not including the storage plans) in Tokyo?
It just means we lost about 8% of our total capacity for now. I'll have to do the math to see what that will impact in terms of the timeline. It also means roughly 60 people need to be regenerated manually right now, which will take some time.
Please stop participating in spreading rumors. We have never discriminated against MJJ or Chinese customers.
Please think about why you are the only one with a problem when no one else has one.
TYOC040 is definitely the initial issue described; it's just more complicated, as suspected. There are VMs causing high interrupts without necessarily showing themselves easily through high CPU usage. I'm digging deeper right now to see if we can flag them down quickly.
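One hedged way to flag such guests is by interrupt growth over an interval rather than CPU percentage. The VM names, counters, and the one-million cutoff below are invented sample data, not from the node:

```python
# Two snapshots of per-guest interrupt counters taken some seconds apart.
# Values are invented; on a real host they might come from /proc/interrupts
# or per-VM accounting, which this sketch does not attempt.
before = {"kvm101": 120_000, "kvm102": 9_500, "kvm103": 2_400_000}
after_ = {"kvm101": 121_000, "kvm102": 9_600, "kvm103": 9_800_000}

deltas = {vm: after_[vm] - before[vm] for vm in before}
suspects = [vm for vm, d in deltas.items() if d > 1_000_000]  # arbitrary cutoff
print(suspects)
```

The point is that a guest can sit at modest CPU usage while generating a pathological interrupt rate, so a rate-based check catches what a CPU-sorted process list misses.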
My pending ticket is #836141 Please refund I need to buy another machine
It is more efficient to transfer to someone in need than to refund!
My machine is not activated, so I need a refund to buy a higher-spec machine.
TYOC040 needs to be rebooted again; I have an idea of what might fix this more quickly, related to the kernel.
How long will it take for my refund request to be approved?
Thanks, because the first reboot made my VPS slower and the ping worse.
More than 10 minutes.
@VirMach To be honest, give up on Node 40.
Node 40: after restarting, its latency has returned to normal, but the network bandwidth is still a bit low.
When will this be done? Perhaps it will be recreated on TYOC034?
TYOC040 update
The kernel was previously upgraded at some point during beta to help with the disk issues. This later turned out to be unnecessary, since we went with the kernel-parameters fix instead. The newer kernel was causing compatibility issues with certain guest operating systems, related to libvirt/qemu. This combined with the earlier issue of really old/incompatible guest operating systems overloading the system, except in this case it caused certain VMs to constantly cycle off and on and fall into various states of semi-usability, hence not simply maxing out the CPU and being easily identifiable.
Interrupts are still high but now at a manageable level until we clear out the rest. There are still some phantom guests being started and stopped, but in much lower quantity, and I'll still have to look into those.
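Those phantom start/stop cycles could in principle be spotted by counting lifecycle events per guest in the libvirtd logs. The event lines and the cutoff below are invented for illustration; a real pass would parse /var/log/libvirt/, which this sketch does not do:

```python
from collections import Counter

# Invented libvirtd-style lifecycle events for the sketch.
events = [
    ("kvm207", "started"), ("kvm207", "stopped"),
    ("kvm207", "started"), ("kvm207", "stopped"),
    ("kvm310", "started"),
]

# A guest that racks up many lifecycle events in a short window is
# "flapping" rather than running; the threshold of 4 is arbitrary.
counts = Counter(vm for vm, _ in events)
flapping = [vm for vm, n in counts.items() if n >= 4]
print(flapping)
```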
Anyway, with the improvements so far, before: [graph omitted]
After: [graph omitted]
Still a long way to go, but we have demonstrated that it can be resolved by getting people on compatible operating systems, and that it was definitely related to the initial issue as I described. The problem was just exacerbated by the kernel version.
I'll post another update soon.
Good to hear. I'm looking at all the virtualization logs in bulk right now to see who we need to power down. I'm going to power down the ones that have issues; if they get powered back on without being re-installed and are left at the kernel panic screen again, they will be temporarily suspended and a ticket created with the customer, so we have an understanding that the VM needs to be re-installed.
These mostly have no customer data on them, since they're broken, but we have to be 100% sure before we force a re-install, so it's best if customers opt to re-install their broken VMs on their own.
I do know this will be helped by testing the functionality of each operating system, which is still on my to-do list for tonight.
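A minimal sketch of that bulk triage, assuming each guest's console log tail is available as text; the guest names and log contents here are invented:

```python
# Tail of each guest's serial console log; contents invented for the sketch.
consoles = {
    "kvm310": "EXT4-fs (vda1): mounted filesystem\nlogin: ",
    "kvm311": "Kernel panic - not syncing: Attempted to kill init!",
    "kvm312": "Kernel panic - not syncing: VFS: Unable to mount root fs",
}

# Guests stuck at a panic screen get powered down and ticketed for reinstall.
to_power_down = sorted(
    vm for vm, tail in consoles.items() if "Kernel panic" in tail
)
print(to_power_down)
```

Scanning for the panic signature rather than force-reinstalling keeps the decision with the customer, matching the opt-in reinstall policy described above.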
@VirMach
Hello, I'd like to ask when the 2TB storage machines in Tokyo are expected to open?
There is always inexplicable traffic consuming my bandwidth...
The average latency has decreased, but there is still high latency from my instance to the gateway.
PING 45.66.128.1 (45.66.128.1) 56(84) bytes of data.
64 bytes from 45.66.128.1: icmp_seq=2 ttl=64 time=81.5 ms
64 bytes from 45.66.128.1: icmp_seq=4 ttl=64 time=19.6 ms
64 bytes from 45.66.128.1: icmp_seq=5 ttl=64 time=11.3 ms
64 bytes from 45.66.128.1: icmp_seq=6 ttl=64 time=6.11 ms
64 bytes from 45.66.128.1: icmp_seq=7 ttl=64 time=11.1 ms
64 bytes from 45.66.128.1: icmp_seq=9 ttl=64 time=5.10 ms
64 bytes from 45.66.128.1: icmp_seq=10 ttl=64 time=13.9 ms
64 bytes from 45.66.128.1: icmp_seq=13 ttl=64 time=44.8 ms
64 bytes from 45.66.128.1: icmp_seq=14 ttl=64 time=54.0 ms
64 bytes from 45.66.128.1: icmp_seq=16 ttl=64 time=27.9 ms
64 bytes from 45.66.128.1: icmp_seq=17 ttl=64 time=2.67 ms
64 bytes from 45.66.128.1: icmp_seq=18 ttl=64 time=0.988 ms
64 bytes from 45.66.128.1: icmp_seq=19 ttl=64 time=3.02 ms
64 bytes from 45.66.128.1: icmp_seq=21 ttl=64 time=1.38 ms
64 bytes from 45.66.128.1: icmp_seq=22 ttl=64 time=7.51 ms
64 bytes from 45.66.128.1: icmp_seq=23 ttl=64 time=7.31 ms
64 bytes from 45.66.128.1: icmp_seq=24 ttl=64 time=15.7 ms
64 bytes from 45.66.128.1: icmp_seq=26 ttl=64 time=1.35 ms
64 bytes from 45.66.128.1: icmp_seq=27 ttl=64 time=1.18 ms
64 bytes from 45.66.128.1: icmp_seq=28 ttl=64 time=2.32 ms
64 bytes from 45.66.128.1: icmp_seq=29 ttl=64 time=1.49 ms
64 bytes from 45.66.128.1: icmp_seq=30 ttl=64 time=22.0 ms
64 bytes from 45.66.128.1: icmp_seq=31 ttl=64 time=2.16 ms
64 bytes from 45.66.128.1: icmp_seq=32 ttl=64 time=113 ms
64 bytes from 45.66.128.1: icmp_seq=33 ttl=64 time=27.5 ms
64 bytes from 45.66.128.1: icmp_seq=34 ttl=64 time=36.9 ms
64 bytes from 45.66.128.1: icmp_seq=35 ttl=64 time=77.6 ms
64 bytes from 45.66.128.1: icmp_seq=36 ttl=64 time=96.9 ms
64 bytes from 45.66.128.1: icmp_seq=38 ttl=64 time=182 ms
64 bytes from 45.66.128.1: icmp_seq=39 ttl=64 time=93.9 ms
64 bytes from 45.66.128.1: icmp_seq=40 ttl=64 time=70.2 ms
64 bytes from 45.66.128.1: icmp_seq=41 ttl=64 time=142 ms
64 bytes from 45.66.128.1: icmp_seq=42 ttl=64 time=105 ms
64 bytes from 45.66.128.1: icmp_seq=43 ttl=64 time=90.8 ms
64 bytes from 45.66.128.1: icmp_seq=44 ttl=64 time=107 ms
64 bytes from 45.66.128.1: icmp_seq=45 ttl=64 time=119 ms
64 bytes from 45.66.128.1: icmp_seq=46 ttl=64 time=101 ms
64 bytes from 45.66.128.1: icmp_seq=47 ttl=64 time=162 ms
64 bytes from 45.66.128.1: icmp_seq=48 ttl=64 time=230 ms
64 bytes from 45.66.128.1: icmp_seq=49 ttl=64 time=147 ms
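For what it's worth, a paste like the one above can be summarized quickly. The snippet below parses a small subset of those lines (min/avg/max only; it does not account for the lost sequence numbers):

```python
import re
import statistics

# A subset of the ping lines pasted above.
output = """\
64 bytes from 45.66.128.1: icmp_seq=2 ttl=64 time=81.5 ms
64 bytes from 45.66.128.1: icmp_seq=17 ttl=64 time=2.67 ms
64 bytes from 45.66.128.1: icmp_seq=38 ttl=64 time=182 ms
64 bytes from 45.66.128.1: icmp_seq=48 ttl=64 time=230 ms
"""

# Pull every round-trip time out of the transcript and summarize it.
times = [float(m.group(1)) for m in re.finditer(r"time=([\d.]+) ms", output)]
print(f"min={min(times)} avg={statistics.mean(times):.1f} max={max(times)}")
```

The wide spread between minimum and maximum, more than the average itself, is what points at contention on the node rather than plain path distance.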
Whatever, thanks for your work on solving the problem on node 40.
After the 2nd reboot, ping is better.
Ping statistics for 176.119.148.1xx:
    Packets: Sent = 20, Received = 18, Lost = 2 (10% loss),
Approximate round trip times in milliseconds:
    Minimum = 285ms, Maximum = 656ms, Average = 435ms
Local problems? Node 40. I'm testing from a Tencent virtual host (Windows, Shanghai).
The boss is already working on it, so wait patiently. I think it will be solved.