★ VirMach ★ Black Friday & Cyber Week 2018 ★ RAID 10 SSD ★ OpenVZ & KVM ★ Check inside for offers!


Comments

  • hk5354hk5354 Member
    edited May 2022

    ======================================
    Which serves are affected?

    San Jose: SJKVM8, SJKVM11
    Atlanta: ATLKVM11, ATLKVM12, ATLKVM13
    Seattle: SEAKVM15
    Dallas: DAL10GKVM2
    Buffalo: NY10GKVM88, NY10GKVM82, NY10GKVM38, NY10GKVM33, NY10GKVM30, NY10GKVM27, NY10GKVM19, NYKVM21L
    Piscataway: NYCKVM16, NYCKVM12
    Los Angeles: LAKVM9, LAKVM16, LAKVM26

    ======================================

    Has LAKVM9 been migrated? It's still showing "Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz".

  • JabJabJabJab Member

    No.

  • MikePTMikePT Moderator, Patron Provider, Veteran

    MS said:

    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    #              Yet-Another-Bench-Script              #
    #                     v2022-05-06                    #
    # https://github.com/masonr/yet-another-bench-script #
    # ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## ## #
    
    Tue 10 May 2022 04:03:32 PM EDT
    
    Basic System Information:
    ---------------------------------
    Uptime     : 0 days, 9 hours, 59 minutes
    Processor  : AMD Ryzen 9 3950X 16-Core Processor
    CPU cores  : 1 @ 3499.998 MHz
    AES-NI     : ✔ Enabled
    VM-x/AMD-V : ✔ Enabled
    RAM        : 4.9 GiB
    Swap       : 256.0 MiB
    Disk       : 14.6 GiB
    Distro     : Debian GNU/Linux 11 (bullseye)
    Kernel     : 5.10.0-8-amd64
    
    fio Disk Speed Tests (Mixed R/W 50/50):
    ---------------------------------
    Block Size | 4k            (IOPS) | 64k           (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 412.19 MB/s (103.0k) | 3.53 GB/s    (55.2k)
    Write      | 413.28 MB/s (103.3k) | 3.55 GB/s    (55.4k)
    Total      | 825.48 MB/s (206.3k) | 7.08 GB/s   (110.7k)
               |                      |
    Block Size | 512k          (IOPS) | 1m            (IOPS)
      ------   | ---            ----  | ----           ----
    Read       | 3.76 GB/s     (7.3k) | 3.89 GB/s     (3.8k)
    Write      | 3.96 GB/s     (7.7k) | 4.15 GB/s     (4.0k)
    Total      | 7.73 GB/s    (15.1k) | 8.05 GB/s     (7.8k)
    
    iperf3 Network Speed Tests (IPv4):
    ---------------------------------
    Provider        | Location (Link)           | Send Speed      | Recv Speed
                    |                           |                 |
    Clouvider       | London, UK (10G)          | 12.0 Mbits/sec  | 250 Mbits/sec
    Online.net      | Paris, FR (10G)           | 813 Mbits/sec   | 445 Mbits/sec
    Hybula          | The Netherlands (40G)     | 786 Mbits/sec   | 799 Mbits/sec
    Clouvider       | NYC, NY, US (10G)         | 842 Mbits/sec   | 899 Mbits/sec
    Velocity Online | Tallahassee, FL, US (10G) | 836 Mbits/sec   | 515 Mbits/sec
    Clouvider       | Los Angeles, CA, US (10G) | 731 Mbits/sec   | 751 Mbits/sec
    
    Geekbench 5 Benchmark Test:
    ---------------------------------
    Test            | Value
                    |
    Single Core     | 1091
    Multi Core      | 1129
    Full Test       | https://browser.geekbench.com/v5/cpu/14830229
    

    VirMach - NYC Metro - $12.48 USD Annually

    Impressive IOPS. That's really awesome.
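For anyone who wants to reproduce a report like the one quoted above, YABS is a single script pulled straight from the repository named in its banner. A minimal sketch; the `yabs.sh` shortcut URL and the skip flags are taken from the project's README, so verify them against your copy, as they may change between versions:

```shell
# Run the full YABS suite (disk, network, Geekbench) on the current VM.
curl -sL https://yabs.sh | bash

# Skip the network and Geekbench stages to get just the fio disk numbers.
# Flag names per the README at the time of writing: -i (no iperf3),
# -g (no Geekbench).
curl -sL https://yabs.sh | bash -s -- -i -g
```

As a sanity check on any fio table: throughput should roughly equal IOPS times block size, e.g. 103.0k IOPS x 4 KB ≈ 412 MB/s, which matches the 4k read row above.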

    Thanked by 1_MS_
  • donlidonli Member

    537

    Thanked by 2randomq FrankZ
  • FrankZFrankZ Veteran
  • imokimok Member

    It's been some days since my last connection.

    Thanked by 1randomq
  • I couldn't connect to my server, LA10GKVM26.

  • randomqrandomq Member

    @imok said:
    It's been some days since my last connection.

    Say 10 Hail Marys and try using a VPN.

    Thanked by 1FrankZ
  • VirMachVirMach Member, Patron Provider
    edited May 2022

    Okay, so what I've been learning from this experience is that ASRock motherboards are awful, and the only real way to avoid them is going with AMD Epyc. I don't know if we just got really unlucky with these batches, but I've been diving deeper into testing them before sending them out, and I replicated issues similar to what San Jose faced. It doesn't really end up being the CPU or memory, even though those are the kinds of errors thrown; it's just the motherboard.

    Anyway, what do you guys think? And if we went with Epyc would you guys want highest clock rate at similar pricing to Ryzen or its own line with lower clock rate and processing power share in general and more RAM at a lower price? There are also some ASUS boards that could work that @LiliLabs mentioned at some point but I haven't actually got into them deep enough to see if it's truly more reliable. None of them have randomly broken in testing but the quantity is low and I'd need to find a reliable KVM switch for them. The cool thing about doing those is due to their lower cost I can probably do smaller nodes and we can skip all the other problems that come with trying to put a higher quantity of VMs on the same NIC, giving it a lot more breathing room for that & the Gen4 NVMe SSDs and we can also be a little more lenient on CPU usage all around.

    Node for that idea would be like:

    64GB RAM / 3900X/5800X/5900X / 2x2TB Gen4 NVMe

    And it'd only be in locations with lower power cost, since overall power usage would be higher for the same amount of RAM/disk. The cool part about these is I could also see us selling them as dedicated servers at around the $99 per month price point.

  • FrankZFrankZ Veteran

    @VirMach said: if we went with Epyc would you guys want highest clock rate at similar pricing to Ryzen or its own line with lower clock rate and processing power share in general and more RAM at a lower price?

    EPYC with a lower clock and more RAM, if the network would not be oversaturated.
    Although the ASUS boards with lower density sound best to me.

  • Fuck ASRock. I did a Ryzen build recently. The first motherboard never booted, although the RGB LEDs were on. I requested a replacement from the seller. The second was OK at first, but once the return period was over the board started randomly shutting down. I got a new one again, ASUS this time, but when I removed the CPU I bent some pins really badly because the thermal paste had hardened, so I had to buy a new Ryzen as well. Now the clueless ASRock support suggests a BIOS update that obviously isn't going to work, and I don't plan to swap the board back. And that isn't even every problem I've had with this build. I'm getting close to achieving a completely broken build. So, fuck ASRock.

    Thanked by 1FrankZ
  • @VirMach said:
    The cool thing about doing those is due to their lower cost I can probably do smaller nodes and we can skip all the other problems that come with trying to put a higher quantity of VMs on the same NIC, giving it a lot more breathing room for that & the Gen4 NVMe SSDs and we can also be a little more lenient on CPU usage all around.

    Are you using the motherboard NIC? The NIC on those ASRock boards can be really flaky, same with the ASUS ones. Best to get a nicer dedicated Intel NIC; it will save you a lot of headache down the line.

    Node for that idea would be like:

    64GB RAM / 3900X/5800X/5900X / 2x2TB Gen4 NVMe

    And it'd only be in locations with lower power cost, since overall power usage would be higher for the same amount of RAM/disk. The cool part about these is I could also see us selling them as dedicated servers at around the $99 per month price point.

    I'd love this, if only because it'd mean I'd get my dedis at long last! :smiley: This would also be a huge win for the VPS customers since they'd be able to burst higher. Which locations were you thinking of here?

  • fanfan Veteran

    @VirMach said: do smaller nodes and we can skip all the other problems that come with trying to put a higher quantity of VMs on the same NIC, giving it a lot more breathing room for that & the Gen4 NVMe SSDs and we can also be a little more lenient on CPU usage all around.

    What about another line of products focusing on higher specs, better reliability, and lower density per node, like the VDSes appearing in the offers section?

    As you're now coloing with xTom or Dedipath for a lot of locations, I'd like to pay more for something production-ready.

  • hk5354hk5354 Member
    edited May 2022

    @hk5354 said:
    @virmach A node, LAKVM9, has been down for 24 hours and hasn't come back yet. Could you please take a look?

    @VirMach LAKVM9 seems to be hanging again, and rebooting the VM from the panel doesn't work.

  • YunenYunen Member

    Migrating my VPS to the San Jose location was a bad choice; the packet loss is so high that nobody can use it smoothly. Is there any plan to fix this?

  • ddantasddantas Member

    One vote for "lower clock rate and processing power share in general and more RAM at a lower price".

  • @VirMach said:
    Okay, so what I've been learning from this experience is that ASRock motherboards are awful, and the only real way to avoid them is going with AMD Epyc. I don't know if we just got really unlucky with these batches, but I've been diving deeper into testing them before sending them out, and I replicated issues similar to what San Jose faced. It doesn't really end up being the CPU or memory, even though those are the kinds of errors thrown; it's just the motherboard.

    Anyway, what do you guys think? And if we went with Epyc would you guys want highest clock rate at similar pricing to Ryzen or its own line with lower clock rate and processing power share in general and more RAM at a lower price? There are also some ASUS boards that could work that @LiliLabs mentioned at some point but I haven't actually got into them deep enough to see if it's truly more reliable. None of them have randomly broken in testing but the quantity is low and I'd need to find a reliable KVM switch for them. The cool thing about doing those is due to their lower cost I can probably do smaller nodes and we can skip all the other problems that come with trying to put a higher quantity of VMs on the same NIC, giving it a lot more breathing room for that & the Gen4 NVMe SSDs and we can also be a little more lenient on CPU usage all around.

    Node for that idea would be like:

    64GB RAM / 3900X/5800X/5900X / 2x2TB Gen4 NVMe

    And it'd only be in locations with lower power cost, since overall power usage would be higher for the same amount of RAM/disk. The cool part about these is I could also see us selling them as dedicated servers at around the $99 per month price point.

    Prepare for the 7950X with AM5.

  • JarryJarry Member

    @JabJab said:

    btw. @VirMach last active on forum 8 minutes ago.

    Maybe it's time VirMach got active on its support instead! It's been two weeks since the botched "migration", my VPS still isn't running, and there's been no reply to my ticket...

  • hk5354hk5354 Member

    @Yunen said:
    Migrating my VPS to the San Jose location was a bad choice; the packet loss is so high that nobody can use it smoothly. Is there any plan to fix this?

    How bad is it? :'( There seems to be no ping loss to the looking glass IP.

  • FrankZFrankZ Veteran
  • yoursunnyyoursunny Member, IPv6 Advocate

    @VirMach said:
    Anyway, what do you guys think? And if we went with Epyc would you guys want highest clock rate at similar pricing to Ryzen or its own line with lower clock rate and processing power share in general and more RAM at a lower price?

    Node spec:

    • 1x 7702P or 7713P 64-core processor 2.0 GHz, hyperthreading enabled
    • 1TB 3200MHz ECC RAM in 8 channels
    • 1x 256GB M.2 disk for hypervisor OS
    • 6x 2TB hot swap NVMe for VPS
    • 10Gbps network interface

    VPS spec:

    • 2 cores, 0.5 persistent usage allowed
    • 4GB RAM
    • 40GB NVMe
    • 1Gbps port
    • $28.88/year regular, $16.66/year promotion, $8.88/year flash deal

    You can fit 250 VPS per node.
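yoursunny's 250-per-node figure checks out as roughly the RAM-bound limit. A quick sketch of the arithmetic, with spec numbers copied from the post above and 1 TB counted as 1024 GB:

```shell
# Back-of-envelope density check for the proposed EPYC node.
ram_gb=1024;  vps_ram_gb=4              # 1TB host RAM, 4GB per VPS
disk_gb=$((6 * 2000)); vps_disk_gb=40   # 6x 2TB NVMe, 40GB per VPS

echo "RAM-bound:  $((ram_gb / vps_ram_gb)) VPS"    # 256
echo "Disk-bound: $((disk_gb / vps_disk_gb)) VPS"  # 300
```

Disk is the looser constraint (300 slots), so RAM caps the node at 256 VPS before the hypervisor takes its own share, which is where a round ~250 sellable slots comes from.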

    Thanked by 2_MS_ FrankZ
  • YunenYunen Member

    @hk5354 said:

    @Yunen said:
    Migrating my VPS to the San Jose location was a bad choice; the packet loss is so high that nobody can use it smoothly. Is there any plan to fix this?

    How bad is it? :'( There seems to be no ping loss to the looking glass IP.

    Happy to see that VirMach has fixed this situation; a few days ago I was seeing maybe 30% ping loss in my testing.

  • hk5354hk5354 Member

    @Yunen said:

    @hk5354 said:

    @Yunen said:
    Migrating my VPS to the San Jose location was a bad choice; the packet loss is so high that nobody can use it smoothly. Is there any plan to fix this?

    How bad is it? :'( There seems to be no ping loss to the looking glass IP.

    Happy to see that VirMach has fixed this situation; a few days ago I was seeing maybe 30% ping loss in my testing.

    Has it returned to normal now?

  • VirMachVirMach Member, Patron Provider

    We got more direct communication ongoing in a chat with xTom and that has improved the situation. AMSD025 is back online, sorry for the extreme downtime. Working on the rest now.

  • JabJabJabJab Member
    edited May 2022

    Can confirm. AMSD025 is online, my system booted!

    Start Time       End Time           Down    
    19 May 21:08    22 May 17:19    2d 20hr 11min
    

    F
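JabJab's downtime figure is easy to double-check with GNU `date`. The year 2022 is assumed from the thread's timestamps, and `-d` is a GNU coreutils extension, so this won't work with BSD/macOS date:

```shell
# Convert both timestamps to epoch seconds and format the difference.
start=$(date -d '2022-05-19 21:08' +%s)
end=$(date -d '2022-05-22 17:19' +%s)
secs=$((end - start))
printf '%dd %dhr %dmin\n' \
  $((secs / 86400)) $((secs % 86400 / 3600)) $((secs % 3600 / 60))
# → 2d 20hr 11min, matching the table above
```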

  • RocksterRockster Member

    Mine too.
    18:25:19 up 7 min, 1 user, load average: 0.00, 0.02, 0.00 :)

    @VirMach out of curiosity:

    • are all XPG disks replaced now with NVMe as it was planned before?
    • are AMS servers still in the temporary cabinet with plan to move them some time later into their own cabinet?
  • VirMachVirMach Member, Patron Provider

    @Rockster said: are all XPG disks replaced now with NVMe as it was planned before?

    Yes, all gone for Amsterdam.

    @Rockster said: are AMS servers still in the temporary cabinet with plan to move them some time later into their own cabinet?

    This already got moved. We didn't get to send out emails because of some miscommunication on the timing, and then again for the XPG drive removals; further miscommunication (this one not on our end) resulted in the wrong disks being removed and caused the outage. That's been resolved now, and the XPG 4TB drive has been replaced with 2x 2TB Samsung drives.

  • YunenYunen Member

    @hk5354 said:

    @Yunen said:

    @hk5354 said:

    @Yunen said:
    Migrating my VPS to the San Jose location was a bad choice; the packet loss is so high that nobody can use it smoothly. Is there any plan to fix this?

    How bad is it? :'( There seems to be no ping loss to the looking glass IP.

    Happy to see that VirMach has fixed this situation; a few days ago I was seeing maybe 30% ping loss in my testing.

    Has it returned to normal now?

    Not back to normal, but better than before; there's less ping packet loss now: 40% before, 10% now. All the packets get lost in the same time range, so I think there's some problem with VirMach's router. But it's smoother to use than before, at least.
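For loss numbers like the 40%/10% quoted above, a plain ping burst plus a little awk over its summary line gives a comparable figure. The target address here is a documentation placeholder (203.0.113.10); substitute your VPS IP:

```shell
# 100 probes, 0.2s apart; awk isolates the "N% packet loss" field
# from ping's comma-separated summary line.
ping -c 100 -i 0.2 203.0.113.10 \
  | awk -F, '/packet loss/ {gsub(/^ +/, "", $3); print $3}'
```

Running this repeatedly (e.g. under `watch` or from cron) is what makes a pattern like "all packets lost in the same time range" visible.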

  • VirMachVirMach Member, Patron Provider

    @Yunen said:

    @hk5354 said:

    @Yunen said:

    @hk5354 said:

    @Yunen said:
    Migrating my VPS to the San Jose location was a bad choice; the packet loss is so high that nobody can use it smoothly. Is there any plan to fix this?

    How bad is it? :'( There seems to be no ping loss to the looking glass IP.

    Happy to see that VirMach has fixed this situation; a few days ago I was seeing maybe 30% ping loss in my testing.

    Has it returned to normal now?

    Not back to normal, but better than before; there's less ping packet loss now: 40% before, 10% now. All the packets get lost in the same time range, so I think there's some problem with VirMach's router. But it's smoother to use than before, at least.

    Sorry, which node again?

  • imokimok Member

    @VirMach said:
    Sorry, which node again?

    Some.

    Thanked by 1randomq