Comments
Neither of us knew the actual hardware. I thought it was in a RAID and you thought it was a single NVMe on PCIe 4.0 instead of PCIe 3.0. As VirMach said, the additional M.2 slots come from the chipset, and only on certain chipsets. You might want to keep up with the latest chipsets.
Your English is failing you. I never said otherwise.
Wait, so at that point were you under the impression they were running a RAID configuration or not? That wouldn't square with your earlier argument about the 8GB/s limit of a single PCIe 4.0 x4 link.
Actually, forget it, the language barrier is too much.
whoosh
No, you dolt. Instead of one bicycle carrying one person, I'm saying there could be several bicycles each carrying a person. But forget it, this miscommunication is annoying.
That wasn't your argument. Your argument was that the motherboard couldn't have three M.2 slots and an x16 slot, which have been out for a while now.
there are motherboards (e.g. premium X570 ones) with three M.2 slots and PCIe x16 slots, but the issue is that not all of these slots are Gen4, and some of them share bandwidth (if you populate one slot, then other components/slots are not active/available)
so there is no performance benefit if gaining another M.2 slot means you go down from Gen4 to Gen3 (this exists to give some flexibility, not to bypass the physical limits of the CPU)
this is why you can't have more than 24 Gen4 lanes, no matter how many sockets are soldered on
you simply don't understand the meaning of the physical bandwidth limits available from a specific CPU class/platform
no, it is not a matter of miscommunication, but a matter of your fantasies:
-that VirMach could silently put HEDT-class Threadripper CPUs in LET Ryzen VPSes. no, he did not!
-that you could have more bandwidth than 24 Gen4 lanes from some magical AM4 motherboard. no, you can't!
-that he could put in a $1000 RAID card just for fun. no, he did not (he clearly stated that the Gen3 drives are attached directly to the motherboard and the faster ones to a dumb PCIe riser, allocated first-ordered-VPS, first-allocated-drive),
yet you are trying to use these to justify a wrong result.
and if you still do not get all of the above facts by now, then this discussion is pointless, so please just stop, as I don't have more time to educate you
For fuck's sake, it is a communication issue. I gave an example. A motherfucking example. I repeat: an example. I was just showing that RAID wasn't limited to a single slot, because I thought VirMach was running RAID. Again, a fucking example. I never once said VirMach had a Threadripper, and you're being an idiot for thinking I did. If you're still confused, look up the definition of "example".
You mentioned lanes only because you didn't know about chipset lanes. You're changing your argument.
I didn't justify a wrong result, I said I knew it was wrong. I've been very vocal for two years that vpsbench is nonsensical. Again, you're having a miscommunication or just plain making shit up.
You don't, you just changed your argument when shown wrong.
When you can understand English better, you'll realize where you went wrong.
you are proving once again that you have no idea what you are talking about
THERE IS NO BANDWIDTH PRODUCED BY THE CHIPSET ITSELF! ANY BANDWIDTH GOING FROM THE AM4 CHIPSET TO M.2 SOCKETS, PCI-E SLOTS (AND OTHER DEVICES CONNECTED TO THE CHIPSET) IS MADE/PRODUCED/CONSUMED/BORROWED FROM THE AFOREMENTIONED 24 GEN4 LANES FROM THE CPU. IT DOES NOT MATTER HOW MANY OF THESE CPU LANES YOU CONNECT DIRECTLY TO DEVICES AND HOW MANY THROUGH THE CHIPSET. YOU STILL HAVE A TOTAL BANDWIDTH LIMIT OF 24 GEN4 LANES
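For readers following along, the per-lane figures being argued about can be sketched with simple arithmetic. This is a rough one-direction calculation using the 16 GT/s (Gen4) and 8 GT/s (Gen3) transfer rates with 128b/130b encoding; the lane counts are the ones quoted in this thread.

```python
# Rough theoretical one-direction PCIe throughput per lane count.
GT_PER_LANE = {"gen3": 8.0, "gen4": 16.0}  # transfer rate in GT/s per lane
ENCODING = 128 / 130                        # 128b/130b line coding overhead

def bandwidth_gbytes(gen: str, lanes: int) -> float:
    """Approximate one-direction bandwidth in GB/s for `lanes` lanes."""
    gbits = GT_PER_LANE[gen] * ENCODING * lanes
    return gbits / 8  # gigabits -> gigabytes

# The figures argued about in this thread:
print(round(bandwidth_gbytes("gen4", 4), 2))   # one Gen4 x4 NVMe slot, ~7.88 GB/s
print(round(bandwidth_gbytes("gen4", 24), 2))  # all 24 Gen4 CPU lanes, ~47.26 GB/s
print(round(bandwidth_gbytes("gen3", 2), 2))   # a Gen3 x2 link, ~1.97 GB/s
```

So whether a slot hangs off the CPU directly or off the chipset, its traffic ultimately has to fit within those 24 lanes.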
for the future, I suggest you focus more on the quality of your posts than on their sheer quantity, and restrain yourself from posting if you have no idea what the thread is about. then you will not have to pretend you didn't write what you wrote, or didn't really mean it, and that it was only hypothetical, non-binding "examples"
btw, there is a playground for you to exercise your outer-space "examples" with unobtainium PCIe-lane-bandwidth-multiplying drives: the PC Building Simulator game on the Epic Games Store (available tomorrow for free)
https://epicgames.com/store/en-US/p/pc-building-simulator
play it all day and night, and for "fuck's sake" just stop bothering me, man, with your childish "example" explanations and PC hardware fantasies
You're the only one having this conversation. At no time did I say there were more than 24 lanes from the CPU. You're arguing something I never said, so it's unclear wtf you're droning on about. You first talked about a single x4 slot with an 8GB/s bandwidth limit and now you keep whining about 24 lanes, which was never relevant! My whole post could have been replaced with "you're assuming they're using a single NVMe drive in a PCIe 4.0 x4 slot when he could be running RAID with more than one NVMe". That's been my point this whole fucking time! I only found out afterwards that he was running a PCIe 3.0 x2 slot; I thought he was running RAID based on a previous post from a while back. I have no idea why you keep talking about 24 lanes, it's irrelevant.
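The RAID point being made here — several slower links in parallel can match or exceed one fast link — can be sketched numerically. The per-link speeds are rough one-direction PCIe figures, and the drive count is hypothetical:

```python
# Ideal sequential throughput of N drives striped in RAID 0 is, in the
# best case, simply the sum of the per-drive link speeds.
GEN3_X2_GBS = 1.97  # one NVMe on a PCIe 3.0 x2 link, approx GB/s
GEN4_X4_GBS = 7.88  # one NVMe on a PCIe 4.0 x4 link, approx GB/s

def raid0_ceiling(per_drive_gbs: float, drives: int) -> float:
    """Ideal striped-read ceiling: drives * per-drive link speed."""
    return per_drive_gbs * drives

# Four hypothetical Gen3 x2 drives striped together:
print(round(raid0_ceiling(GEN3_X2_GBS, 4), 2))  # ~7.88, matching one Gen4 x4 slot
```

Which is why a benchmark number above the single-slot ceiling suggests multiple drives (or caching), not a faster slot.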
You need to focus on improving your reading comprehension. You need to quote what I said if you're claiming a specific statement is wrong. I've pointed out that you're the one changing what you've said.
Fucking idiot, "example" doesn't require scare quotes when it was correctly used as an example and clearly stated as one from the start. Improve your English if you're going to shit on English forums.
if you still don't understand why this specific benchmark result IS NOT POSSIBLE on this specific VPS, and do not comprehend the level of stupidity of your statements (now called "examples") so far, then I will not explain it a 10th time, as you have elementary gaps in your understanding of PC architecture and are still denying basic details of this specific VPS's configuration
just read VirMach's detailed post, where it is stated exactly why this VPS can't have such speed
if you still do not understand, then repeat the above...
till you finally understand
hope it helps you deal with your frustration and spares us from reading all these pathetic verbal perversions of yours (nobody will want to discuss anything with you in that mental state)
@Andrews @TimboJones
Can I kindly ask you to move to a separate topic? You started nicely [and on topic] and now it's insults and shit having nothing to do with VirMach test results and comments.
Thanks. I'm done with him. Everything has already been said by me and VirMach.
When reinstalling from SolusVM:
CentOS 7 template - works
CentOS 8 template - kernel panic
Fedora 23 x64 Minimal template - works
Debian 9 64bit Minimal template - shows the following error when booting from hard disk: "error: file '/boot/grub/i386-pc/normal.mod' not found" and enters rescue mode
Ubuntu Server 18.04 LTS 64bit Desktop template - kernel panic
Ubuntu Server 20.04 LTS 64bit Desktop template - kernel panic
Install from CD does not work; the VPS does not come online after restart. I tried 6 different CDs.
VNC works with TightVNC.
HTML5 VNC from SolusVM does not work.
Monster benchmark from 2021-09-29 with CentOS 7 3.14 kernel
Monster benchmark from 2021-10-7 with Fedora 23 4.8 kernel.
Well, that escalated quickly.
Debian 8.0 x86_64 Minimal - works
Ubuntu 14.04 x64 - works
Ubuntu Server 16.04 LTS 64bit Desktop - kernel panic
Updated YABS - Seems to be having throughput issues today, starting about 12PM CDT.
https://i.postimg.cc/5y7s95CP/Screenshot-2021-10-19-at-21-10-27-Backup-Stats.png
At least yours is online. Mine is offline: I can't reboot via the control panel ("An error occurred processing your request. The host is currently unavailable. Please try again later"), it's not pinging, there's no SSH, and Hetrix says it's been offline for 2 days already.
I guess we broke it :-)
It's an Alpha test, so these things are to be expected
Yes! I reinstalled my VPS, and I lost it! 😭
Still looking good here...
https://i.postimg.cc/L4LjRYSp/Screenshot-2021-10-25-at-05-16-34-Backup-Stats.png
HTML5 VNC in SolusVM and WHMCS does work now.
No outside network available as of ~10pm PST, Nov 10th.
Outbound, I can ping the gateway from the VPS using VNC, but cannot ping anywhere else. Inbound, there's no ping and no other services available.
Returned to normal operation as of ~2am PST, Nov 11th. All services responding.
Some parts of routing are still dead - the whole of Cloudflare, for example.
To clarify, I am only monitoring from 12 locations: Sydney (AU), San Jose, Los Angeles, Dallas, Houston, Chicago, Atlanta, New York City, Norway, Amsterdam, Madrid, and Milan. And only three protocols: ping, TCP (SSH), and UDP (DNS). So there could be things that I miss.
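A toy sketch of how per-location results like these might be aggregated — a location only counts as "down" if every protocol fails there, which matches the caveat that partial reachability can slip through. The location names come from the list above; the pass/fail data is made up:

```python
from collections import defaultdict

def down_locations(results):
    """Return locations where every protocol check failed.

    `results` is an iterable of (location, protocol, ok) tuples.
    """
    by_loc = defaultdict(list)
    for location, _protocol, ok in results:
        by_loc[location].append(ok)
    return sorted(loc for loc, oks in by_loc.items() if not any(oks))

# Hypothetical sample data using locations from the post:
checks = [
    ("Sydney", "ping", True), ("Sydney", "tcp", True),
    ("Milan", "ping", False), ("Milan", "udp", False),
    ("Dallas", "ping", False), ("Dallas", "tcp", True),  # partial outage
]
print(down_locations(checks))  # ['Milan']
```

Note that Dallas is not reported despite a failed ping, illustrating the monitoring blind spot mentioned above.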
https://www.lowendtalk.com/discussion/comment/3343557/#Comment_3343557
Oops... I think @JabJab broke RYZE.PHX-Z002.VMS - I'm not getting any response from the control panels either
you weren't supposed to break it during peak cyber-month pressure!
Solus gives me:
An error occurred processing your request. The host is currently unavailable. Please try again later
WHMCS gives me:
Operation Timed Out After 90001 Milliseconds With 0 Bytes Received
Oh man and just when this thread was getting good for a minute