Comments
======================================
Which servers are affected?
San Jose: SJKVM8, SJKVM11
Atlanta: ATLKVM11, ATLKVM12, ATLKVM13
Seattle: SEAKVM15
Dallas: DAL10GKVM2
Buffalo: NY10GKVM88, NY10GKVM82, NY10GKVM38, NY10GKVM33, NY10GKVM30, NY10GKVM27, NY10GKVM19, NYKVM21L
Piscataway: NYCKVM16, NYCKVM12
Los Angeles: LAKVM9, LAKVM16, LAKVM26
======================================
Has LAKVM9 been migrated? It's still showing "Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz".
No.
Impressive IOPS. That's really awesome.
It's been a few days since my last connection, and I couldn't connect to my server, LA10GKVM26.
Say 10 Hail Marys and try using a VPN.
Okay, so what I've been learning from this experience is that ASRock motherboards are awful, and the only real way to avoid them is going with AMD Epyc. I don't know if we just got really unlucky with these batches, but I've been diving deeper into testing them before sending them out, and I replicated issues similar to what San Jose faced. It doesn't really end up being the CPU or memory, even though those are the kinds of errors thrown; it's just the motherboard.
Anyway, what do you guys think? If we went with Epyc, would you want the highest clock rate at similar pricing to Ryzen, or its own line with a lower clock rate and less processing power share in general, plus more RAM at a lower price? There are also some ASUS boards that could work, which @LiliLabs mentioned at some point, but I haven't dug into them deeply enough to see whether they're truly more reliable. None of them have randomly broken in testing, but the quantity is low, and I'd need to find a reliable KVM switch for them. The cool thing about those is that, due to their lower cost, I can probably do smaller nodes, and we can skip all the other problems that come with trying to put a higher quantity of VMs on the same NIC. That gives it a lot more breathing room for the network and the Gen4 NVMe SSDs, and we can also be a little more lenient on CPU usage all around.
Node for that idea would be like:
64GB RAM / 3900X/5800X/5900X / 2x2TB Gen4 NVMe
And it'd only be in locations with lower power costs, since overall power usage would be higher for the same amount of RAM/disk. The cool part about these is I could also see us selling them as dedicated servers at around the $99 per month price point.
Epyc with a lower clock and more RAM, as long as the network wouldn't be oversaturated.
Although the ASUS boards with lower density sound best to me.
Fuck ASRock. I did a Ryzen build recently. The first motherboard never booted, although the RGB LEDs were on. I requested a replacement from the seller. The second one was okay at first, but once the return period was over, the board started shutting down randomly. I got a new one again, ASUS this time, but when I removed the CPU I bent some pins really badly because the thermal paste had hardened, so I had to buy a new Ryzen again. Now clueless ASRock support suggests a BIOS update that obviously isn't going to work, and I don't plan to swap the board back in. And that isn't even all the problems I've had with this build; I'm really close to managing a completely broken build. So, fuck ASRock.
Are you using the onboard NIC? The NICs on those ASRock boards can be really flaky, same with the ASUS ones. Best to get a nicer dedicated Intel NIC; it will save you lots of headaches down the line.
I'd love this, if only because it'd mean I'd get my dedis at long last! This would also be a huge win for the VPS customers since they'd be able to burst higher. Which locations were you thinking of here?
What about another line of products focusing on higher specs, better reliability, and lower density per node, like the VDSes appearing in the offers section?
Since you're now colocating with xTom or DediPath in a lot of locations, I'd be willing to pay more for something production-ready.
@VirMach LAKVM9 seems to hang again; the panel does not work to reboot the VM.
Migrating VPSes to the San Jose location is a bad choice; packet loss there is so high that nobody can use them smoothly. Any plan to fix this problem?
One vote for "lower clock rate and processing power share in general and more RAM at a lower price".
Prepare for the 7950X with AM5.
Maybe it's time VirMach got active on its support instead! It's been two weeks since the botched "migration", my VPS isn't running, and there's been no reply to my ticket...
How bad is it? There seems to be no ping loss to the looking glass IP.
Node spec:
VPS spec:
You can fit 250 VPS per node.
Happy to see that VirMach has fixed this situation; a few days ago I was seeing maybe 30% ping loss in my testing.
Has it returned to normal now?
We got more direct communication ongoing in a chat with xTom and that has improved the situation. AMSD025 is back online, sorry for the extreme downtime. Working on the rest now.
Can confirm. AMSD025 is online, my system booted!
F
Mine too.
18:25:19 up 7 min, 1 user, load average: 0.00, 0.02, 0.00
@VirMach out of curiosity:
Yes, all gone for Amsterdam.
This one already got moved. We didn't get to send out emails because of some miscommunication on the timing, and then again for the XPG drive removals; further miscommunication (this one not on our end) resulted in the wrong disks being removed, which caused the outage. That's been resolved now, and the XPG 4TB drive has been replaced with 2 x 2TB Samsung drives.
Not normal, but better than before; fewer lost ping packets now. 40% before, 10% now. And all the packets are lost within the same time range, so I think there's some problem with VirMach's router. But it's smoother to use than before, at least.
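For anyone else comparing loss rates like this, here's a minimal sketch of how you could pull the loss percentage out of a `ping` run programmatically instead of eyeballing it. It assumes the Linux iputils `ping` summary format ("X% packet loss"); the sample summary string below is just an illustration, not real measurement data from these nodes.

```python
import re

def packet_loss(ping_output: str) -> float:
    """Extract the packet-loss percentage from an iputils `ping` summary."""
    m = re.search(r"([\d.]+)% packet loss", ping_output)
    if m is None:
        raise ValueError("no packet-loss summary found in ping output")
    return float(m.group(1))

# Example summary line, as printed at the end of `ping -c 100 <your-vps-ip>`:
summary = "100 packets transmitted, 60 received, 40% packet loss, time 99187ms"
print(packet_loss(summary))  # 40.0
```

Running `ping -c 100` a few times at different hours and logging the result this way makes it much easier to show support whether loss is constant or clustered in a specific time range.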
Sorry, which node again?
Some.