★ VirMach ★ RYZEN ★ NVMe ★★ $8.88/YR- 384MB ★★ $21.85/YR- 2.5GB ★ Instant ★ Japan Pre-order ★ & More


Comments

  • digitalwicked Member
    edited May 2022

    I've got six servers in San Jose across RYZE.SJC-Z006.VMS & RYZE.SJC-Z008.VMS. Pinging 1.1.1.1 is fairly stable, but beyond that, pings to other servers in LA or further out are a pretty consistent rollercoaster: ~10ms, climbing up to ~3000ms, then back down. It looks like congestion on one of the upstream links. Across both nodes I'm seeing an average steal of 3-10%; the servers are snappy, they just need to fix the network.
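    If anyone wants to log the rollercoaster themselves, here is a rough Python sketch of that kind of test (this is just an illustration, not what I actually ran; it assumes a Linux box with the stock ping binary, and 1.1.1.1 is only the example target, so swap in an LA IP to compare paths):

        #!/usr/bin/env python3
        """Rough latency logger to capture a ~10ms -> ~3000ms rollercoaster."""
        import re
        import subprocess
        import time

        TARGET = "1.1.1.1"  # example target; use an LA test IP to compare routes

        while True:
            # -c 1: one echo request; -W 5: wait up to 5s so ~3000ms spikes still register
            out = subprocess.run(["ping", "-c", "1", "-W", "5", TARGET],
                                 capture_output=True, text=True).stdout
            match = re.search(r"time=([\d.]+) ms", out)
            rtt = match.group(1) + " ms" if match else "timeout/loss"
            print(f"{time.strftime('%H:%M:%S')}  {TARGET}  {rtt}", flush=True)
            time.sleep(1)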

  • fan Veteran

    @pdd said: Yeah. My 029 is the same, at least most of the time!

    Better than before, but not quite there yet; it's still struggling.

    Thanked by: pdd
  • It seems @Virmach doesn't know how to get along with AMD Ryzen. Since the migration in Phoenix began last year, problems have been erupting with the network, overheating, etc. I hope VirMach finds an expert on AMD hardware to make everything compatible.

  • Actually this company just took it easy after he scammed you all for hundreds of thousands and that's how he's now a MILLIONAIRE

    hahahahahahaa you fools

  • @duckeeyuck said:
    Actually this company just took it easy after he scammed you all for hundreds of thousands and that's how he's now a MILLIONAIRE

    hahahahahahaa you fools

    Thanked by: ElonBezos
  • _MS_ Member

    @cybertech said:

    @duckeeyuck said:
    Actually this company just took it easy after he scammed you all for hundreds of thousands and that's how he's now a MILLIONAIRE

    hahahahahahaa you fools

    Drake?

  • Calypso Member

    @duckeeyuck said:
    Actually this company just took it easy after he scammed you all for hundreds of thousands and that's how he's now a MILLIONAIRE

    hahahahahahaa you fools

    Nah, @VirMach is just giving you an opportunity to keep on trolling and bashing. In this way he's doing society a favor, because otherwise you'd be pushing old ladies over in the street, spray-painting tags on other people's property, or bullying farmyard animals.

    Thanked by: FrankZ
  • randomq Member

    If I were paying for gigabits of commit and getting shitty routing like that I'd kill a bitch. #VirMachArmy #TakeTheUpstreamsByForce

  • @duckeeyuck said:
    Actually this company just took it easy after he scammed you all for hundreds of thousands and that's how he's now a MILLIONAIRE

    hahahahahahaa you fools

    Punchline

  • Tokyo 30 has been down for 2 days. When will it be fixed?

  • @randomq said:
    If I were paying for gigabits of commit and getting shitty routing like that I'd kill a bitch. #VirMachArmy #TakeTheUpstreamsByForce

    So weird that the routing is suboptimal; INAP's backbone is absolutely excellent. Might be something on the DediPath end.

  • @LiliLabs said:

    If I were paying for gigabits of commit and getting shitty routing like that I'd kill a bitch. #VirMachArmy #TakeTheUpstreamsByForce

    So weird that the routing is suboptimal; INAP's backbone is absolutely excellent. Might be something on the DediPath end.

    Previously, VirMach said the packet loss in Tokyo was due to their NIC and too many people running VPNs. They probably use the same NICs that come onboard the ASRock motherboards, and San Jose is also a prime location for Asians/MJJs. It's less likely to be the transit, considering the exact same thing happens in Tokyo too. Maybe it's the NIC. Who knows? It could even be a router issue.
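    If the onboard NIC really is drowning in small packets, the RX drop counters should climb while the problem is happening. A minimal sketch to watch them, assuming Linux and an interface named eth0 (check ip link for the real name):

        #!/usr/bin/env python3
        """Watch RX packet/drop counters to see if the NIC is shedding small packets."""
        import time

        IFACE = "eth0"  # assumed interface name; check "ip link" for yours

        def read_counters():
            with open("/proc/net/dev") as f:
                for line in f:
                    if line.strip().startswith(IFACE + ":"):
                        fields = line.split(":", 1)[1].split()
                        # RX columns: bytes packets errs drop fifo frame compressed multicast
                        return int(fields[1]), int(fields[3])
            raise RuntimeError(IFACE + " not found")

        prev_pkts, prev_drops = read_counters()
        while True:
            time.sleep(1)
            pkts, drops = read_counters()
            print(f"rx pps: {pkts - prev_pkts:>8}   rx drops/s: {drops - prev_drops}")
            prev_pkts, prev_drops = pkts, drops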

    Thanked by 1dev077
  • AlwaysSkint Member
    edited May 2022

    @NoComment said: .. prime location for Asians/MJJs.

    Hence me not having VPSes on the US West Coast, if at all possible.

    Thanked by: skorous
  • Kensou Member

    @hostlocmjj said:
    Tokyo 30 has been down for 2 days. When will it be fixed?

    The VPS is online, but the network is terrible.

  • @Kensou said:

    @hostlocmjj said:
    Tokyo 30 has been down for 2 days. When will it be fixed?

    The VPS is online, but the network is terrible.

    Checked via ping.pe: so terrible.

  • @ripeapple said:
    It seems @Virmach doesn't know how to get along with AMD Ryzen. Since the migration in Phoenix began last year, problems have been erupting with the network, overheating, etc. I hope VirMach finds an expert on AMD hardware to make everything compatible.

    It's not a CPU issue but a flawed network design.
    Apparently they put all the VPSes into a single VLAN.
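    This is easy enough to sanity-check from inside a guest: if everything really shares one VLAN, an ARP sniffer should see source MACs from the whole broadcast domain. A minimal sketch, assuming Linux and root (the raw socket needs CAP_NET_RAW):

        #!/usr/bin/env python3
        """Count distinct source MACs in ARP traffic: a rough broadcast-domain census."""
        import socket
        import time

        ETH_P_ARP = 0x0806
        # Raw AF_PACKET socket that receives only ARP frames (requires root)
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ARP))

        macs = set()
        deadline = time.time() + 60  # sniff for one minute
        while time.time() < deadline:
            frame = s.recv(65535)
            macs.add(frame[6:12].hex(":"))  # bytes 6..11 of the Ethernet header are the source MAC

        print(f"distinct MACs ARPing in 60s: {len(macs)}")

    Hundreds of distinct MACs in a minute would back up the single-VLAN theory.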

  • lowendclient Member
    edited May 2022

    @FrankZ said:

    @serverfunlet said:
    BTW, have you ever considered isolating the subnets to minimize ARP and broadcast packets? I think that might help, because those small packets can cause a lot of drops, and the server NIC wouldn't need to forward/handle such a volume of packets. And what about a NIC upgrade or performance optimization to make the networking better?

    Smaller VLANs may be useful.

    I thought about this earlier and you are most likely correct, but then VirMach would need to change the VPS IP every time a VPS was moved to a different node, which, at least in the beginning stages of Tokyo, would have been a major headache to say the least.

    Use PVLANs / Open vSwitch instead; they set up the network correctly in Buffalo.
    The new team didn't use the same structure in Tokyo and the new San Jose.
    They need to hire a CCIE engineer...
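    For what it's worth, here is a hypothetical sketch of the PVLAN-style idea on an OVS bridge: force every guest port to talk only to the uplink, so guests never see each other's frames while keeping the same subnet and IPs. The bridge name and OpenFlow port numbers are made up, and a real deployment would need more care (gateway-side broadcasts, migrations, etc.):

        #!/usr/bin/env python3
        """Hypothetical PVLAN-style isolation on an Open vSwitch bridge."""
        import subprocess

        BRIDGE = "br0"             # assumed OVS bridge carrying the guest taps
        UPLINK_OFPORT = 1          # assumed OpenFlow port number of the physical uplink
        GUEST_OFPORTS = [2, 3, 4]  # assumed OpenFlow port numbers of the guest taps

        def add_flow(flow):
            subprocess.run(["ovs-ofctl", "add-flow", BRIDGE, flow], check=True)

        # Frames entering from the uplink are switched normally (MAC learning).
        add_flow(f"priority=100,in_port={UPLINK_OFPORT},actions=NORMAL")

        # Frames from any guest port go straight out the uplink, so guests
        # never receive each other's unicast, ARP, or broadcast traffic.
        for port in GUEST_OFPORTS:
            add_flow(f"priority=100,in_port={port},actions=output:{UPLINK_OFPORT}")

        # Anything else falls through to an explicit drop.
        add_flow("priority=0,actions=drop")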

    Thanked by: fan, windtune
  • AlwaysSkint Member
    edited May 2022

    @lowendclient said: Seems a network engineer quitted redacted

    Quit or acquitted? >:)

    Thanked by: bdl
  • @lowendclient said:
    It's not a CPU issue but a flawed network design.
    Apparently they put all the VPSes into a single VLAN.

    Maybe that's why I get 150K of traffic on an idle VPS xD (only on the Ryzen servers)... On Dallas it's 1.2M of constant traffic.

  • zafouhar Veteran
    edited May 2022

    @lowendclient said:

    @ripeapple said:
    It seems @Virmach doesn't know how to get along with AMD Ryzen. Since the migration in Phoenix began last year, problems have been erupting with the network, overheating, etc. I hope VirMach finds an expert on AMD hardware to make everything compatible.

    It's not a CPU issue but a flawed network design.
    Apparently they put all the VPSes into a single VLAN.

    It is not necessarily a wrong network structure, as there are benefits to doing it that way as well, particularly in a VPS infrastructure. Having multiple VLANs could cause more issues that would affect the end user.

    Thanked by: FrankZ
  • plumberg Veteran

    8+ hours and dead silence... What has happened?

  • @zafouhar said:
    It is not necessarily a wrong network structure, as there are benefits to doing it that way as well, particularly in a VPS infrastructure. Having multiple VLANs could cause more issues that would affect the end user.

    It doesn't necessarily have to be split VLANs; they can use OVS instead, which is better than nothing. Right now a single VPS in Tokyo can receive 1000+ ARP and similar packets per second, which is bound to cause noise.
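    The 1000+/second figure is easy to verify from inside a guest. A small sketch that counts ARP frames per second, again assuming Linux and root for the raw socket:

        #!/usr/bin/env python3
        """Count ARP frames per second hitting this guest."""
        import socket
        import time

        ETH_P_ARP = 0x0806
        s = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ARP))
        s.settimeout(0.1)  # let the one-second window roll over during quiet spells

        count, window_start = 0, time.time()
        while True:
            try:
                s.recv(65535)
                count += 1
            except socket.timeout:
                pass
            if time.time() - window_start >= 1.0:
                print(f"ARP frames/s: {count}")
                count, window_start = 0, time.time()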

  • 7cloud Member
    edited May 2022

    @VirMach Your TYO 30 has been down for 2 days. When can this be solved? Have you thought about compensating the customers on this node?

  • windtune Member
    edited May 2022

    Totally agree with @lowendclient, this is the result of my test yesterday:
    (screenshot of test results)
    Please try to fix it or optimize it.
    @VirMach

  • jickur Member

    Ticket #266828, titled "Switch from Tokyo to San Jose", can't be replied to. I want my VPS migrated from San Jose to Tokyo. Can you help me deal with it, @VirMach? Thanks.

  • jickur Member

    My VPS IP is 194.33.39.200.

  • Kensou Member

    @VirMach Tokyo node 30 has been down for 2 days...

  • FrankZ Veteran
    edited May 2022

    @Kensou said: Tokyo node 30 has been down for 2 days...
    @7cloud said: Your TYO 30 has been down for 2 days,

    You know there is a status page at https://billing.virmach.com/serverstatus.php after you log in.

    This page states...

    Update 10:20 AM 05/12/2022 - TYOC030 is currently undergoing hardware maintenance and may need hardware replacement

    Thanks,
    VirMach Support

  • [Investigating] Unexpected Outage - TYOC030 (Reported)
    Affecting System - TYOC030
    
     05/11/2022 12:22  Last Updated 05/12/2022 10:21
    Dear VirMach Customers,
    
     We are currently investigating an unexpected outage with TYOC030 and SJCZ007.
    
    Update 10:20 AM 05/12/2022 - TYOC030 is currently undergoing hardware maintenance and may need hardware replacement
    
     Thanks,
    VirMach Support
    

    @Kensou @hostlocmjj @lowendclient @7cloud @jickur
    Thank you for reporting this. It seems that VirMach is already aware of it and working hard to resolve it.

    You can check https://billing.virmach.com/serverstatus.php
    to see that it is being actively worked on and restored to full functionality.

    @7cloud said:
    @VirMach Your TYO 30 has been down for 2 days. When can this be solved? Have you thought about compensating the customers on this node?

    Yes, you will probably be compensated with more days added than the days that you've "lost". You do not have to worry about that.
    VirMach is probably the fairest and most generous provider there is on here.
    (In the odd case that he forgets, just ask and remind him after things are stable again, and I am sure he'll compensate you.)

    Remember that this was a massive new deployment and expansion location,
    so a couple of teething problems are to be expected, and some of the servers may have quirks at the beginning that need fixing.

    But one thing you can be sure about is that VirMach will always be working hard to fix everything that needs to be fixed and to give you the best value for your money possible.

    Just try to be patient and understanding; as soon as everything is sorted out and stable, you should have many happy years of great service with probably very few issues after that. :)

  • Lol official Unofficial @FrankZ support beat me to it again. :smiley:

This discussion has been closed.