New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
A little premature to say ARM vs x86 is already settled. The race within RISC isn't even settled: ARM vs RISC-V is the race there, and open-source RISC-V is gaining traction and performance. The lack of licensing fees means manufacturers may find more profit in RISC-V, and Chinese companies may be (or already are) slowly starting to move to RISC-V based designs.
CISC x86 is very competitive: Intel and AMD have traded blows for decades and there is still no clear winner there yet.
As for RISC vs CISC, the massive gains in IPC enjoyed by ARM specifically in the last decade are slowing down; 40% generational gains are a thing of the past, at least with ARM. RISC-V is closing the gap on the more mature closed-source ARM designs.
The lines between CISC and RISC are also becoming blurred, with RISC (especially ARM) designs becoming more complex and CISC reducing complexity in some cases.
From ChatGPT...
In summary, the distinction between RISC and CISC has become less clear-cut in modern processor designs. Processors incorporate a mix of features and principles from both paradigms to provide better performance across a wide range of workloads. The goal is to achieve high IPC, efficiency, and flexibility while making it easier for compilers to generate optimized code. The specific microarchitecture and design choices vary among different processor manufacturers, resulting in a wide range of architectures with varying characteristics.
I do love a quote from chat GPT
So....
Opinions: is this junk? If it works, it works, right?
https://www.aliexpress.com/item/1005005823145892.html?
100% junk, something similar (Huanan) is used for cheap PCs
UUU, ZSUS, shiny...... I'd rather use an HP G6 with DDR3, even if it drains 160W at the wall.
Litmus test, if that board were any good it wouldn't be on Aliexpress. Period. I mean it. Really.
@Swiftnode maybe this ZSUS board would be a good addition to your servers
No. Yes, both Intel and AMD still have a lot of money to throw away and will do all they can to stem the tide, but it ain't going to work.
Yes, the massive gains ARM was making some time ago are a thing of the past; naturally, it started from such a low baseline. Now it has been noticed and is being directly targeted by very rich companies aiming to eat away at its foundation, but it is too little, too late.
The arms race is, indeed, intensifying, and we as users only stand to benefit: we will get more compute power at less power drain, and that is the whole point. But if I were to bet, unless Intel and AMD go fully behind an ARM-style design, or come up with their own and use microcode to add an x86 compatibility layer, they are out even in the medium term.
The point is that the x86 behemoth, the "wintel" monopoly and money printing machine are a thing of the past. Good riddance.
How do most people start their hosting companies? I assume they buy the proper equipment and space to keep it there. It just sounds very complex.
these are fine for desktop use, they use chipsets ripped off of server boards. Parts quality is pretty terrible but they technically function fine. Not that I'd ever use one though.
Tell that for example to Toyota, BMW or Honda lol.
It just takes the right engineering. Also, Toyota built 230+hp, 13k RPM, 1.6L naturally aspirated engines in the 80s, targeted at 150hr+ run time for a racing series (the 4A-GE for Formula Atlantic; they called it "The Equalizer" lol) — sorry, that's exponentially more demanding than boosted. These days rally guys are running years at the same power level on the same base engine, just lower revs, thousands of hours of lifetime in racing.
Ah, dang it -- i need to get back on the track.
Also, these days the Honda Civic Type-R comes with a 2.0L boosted engine making 330hp from the factory... A factory 2.0L engine producing 330hp — you know how mild a tune that is. It probably makes 400+hp just after a tune, and 500+hp after a tune plus E85 (and maybe a turbo swap, depending on the turbine sizing).
This. Used servers are very very cheap for most purposes.
And that is true for one reason only (well, 2 reasons, but related): Rack space and power drain.
Having the HW is good for hobbies, labs and such, training, learning, but it is NOT for running 24/7.
You can run regular consumer HW (okay, not the cheapest kind, at least average) 24/7 in a home environment if you have UPS, battery (for laptops) and such.
You CAN'T run obsolete server hardware, not because it would break, but because it would cost you an arm and a leg in power drain.
Let's take 2 examples (we assume you can't rent a Kimsufi or some other server at a fraction of the cost of building your own plus the power used):
1. My homelab project needs few resources: RAM, CPU, etc. Say, 32-64 GB RAM, a CPU to handle that (which is pretty beefy, but most current Ryzens would do), and I already have a decent internet connection, enough for my project's needs (1 Gbps should suffice).
2. My homelab project involves massive resources, huge virtualization project, server farm for deep learning and the like.
Would I, in either of the 2 cases, use second-hand server-grade hardware?
Nope. In the first case I wouldn't need it. I could also buy the consumer stuff used (you can find amazing deals if you know how to search), but even if I bought new and calculated my needs correctly, I would still get it cheaper than the power and cooling big blade servers would need. I am not even talking about the noise. If you buy new laptops, you would likely get more computing power than a literal ton of used servers, at a fraction of the socket drain and space taken, let alone the noise, cooling requirements, etc.
As a conclusion, unless you want to learn IPMI and such, have hardware on hand for when you screw up, or test various components you would later take to the DC, there is no case for used server hardware, not at home anyway. And at a colo, the rack space and power drain would soon eat up the difference versus renting a decent server somewhere.
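To put rough numbers on the power-drain argument, here is a back-of-envelope sketch of the yearly electricity cost of 24/7 operation. Every figure below (wattages, the EUR/kWh rate) is an illustrative assumption, not a measured value from this thread:

```python
# Back-of-envelope: yearly electricity cost of running hardware 24/7.
# All wattages and the price per kWh are illustrative assumptions.

def yearly_power_cost(watts: float, eur_per_kwh: float) -> float:
    """Cost in EUR of drawing `watts` continuously for one year."""
    hours_per_year = 24 * 365  # 8760 hours
    return watts / 1000 * hours_per_year * eur_per_kwh

price = 0.30  # EUR/kWh, an assumed European household rate

old_server = yearly_power_cost(300, price)  # assumed wall draw of an old dual-socket server
consumer = yearly_power_cost(60, price)     # assumed draw of a modest consumer box

print(f"used server: ~{old_server:.0f} EUR/year")  # ~788 EUR/year
print(f"consumer box: ~{consumer:.0f} EUR/year")   # ~158 EUR/year
```

At those assumed numbers, the old server's electricity bill alone exceeds the purchase price of a decent used consumer machine every single year.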
Define cheap for me, so I can understand your baseline.
https://www.ebay.com/itm/186115196074?hash=item2b555484aa:g:VvgAAOSwJu9lJYPn&amdata=enc:AQAIAAAA4CaDQAXcpCvE2f3ByYzJPJyH/CVcj5kkuN91PSYZvHOpmxExWCNJxTu6JtxFY+irafo5iKEiSggBPVqdG4d7TIPpx70IY/GnLFFOPHTwfIHRBD/TUQW41e/9jSSAaqeBb20p10rTgHBGkO5tOCxUT6KjbcxfsftnduAeOKpWxCs2ilzo8yXrHWrCbSRRSG5wBDoipnbWLba9h+LCHObKmHy023OtHoTt0fYXL+Sn1Rs7gLN/6wcjjgLJfZVF8/eJX59r5atdUQTVbMvPj9o7jffRX7cYSm1Yv97xj6WSJKwk|tkp:BFBM-K79geVi
That is a 6000 USD server (new, probably 30K) and it still needs some USD to make it usable for hosting.
I would trust this more than 10 desktop Ryzens or i9s, or whatever desktop-series CPU some use today, including "E" series single-CPU servers.
Because cheap is one thing for Joe and another for Gus, for example.
As a baseline for us, nothing below a V4 is used; newer acquisitions are only Xeon Gold. We gave our Intel Xeon 5600 Series away to the scrap yard years ago.
It is all relative.
Did you spend a lot of time purposefully trying to find the relatively most expensive unit? You do realize you can do that in single socket EPYC for much less money?
What consumer-grade system has 1TB of memory? Based on the memory alone, you would have to build 8x consumer-grade systems.
If you need say 24x 2.5" Bays (or U.2/U.3), it is cheaper to buy a used server than trying to do the same with consumer grade stuff. Or if you need 1TB+ of RAM. This has both. 1TB of RAM is not cheap no matter what.
Regardless, that's probably the least efficiently composed server I've seen in a while 🤷🤷
8x 8-core, 128GB RAM consumer systems would outpace that, though they would take much more space and much more power when loaded, therefore costing more in the long run. Hence, even that pile-of-crap server costs less than building, say, 8x 1U AM4 servers instead. Though if you need CPU performance, the consumer systems are cheaper by far, with better resource isolation and potential for higher total storage I/O. But they would cost much more to operate with a silly config like this.
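That "cheaper to buy, dearer to run" trade-off can be sketched as purchase price plus electricity over a few years. All figures here (prices, wattages, the 0.30 EUR/kWh rate, the 5-year horizon) are made-up assumptions for illustration, not quotes for any real hardware:

```python
# Rough total-cost sketch: one used big-RAM server vs a fleet of consumer boxes.
# Every number is an illustrative assumption.

def tco(capex_eur: float, watts: float, years: float, eur_per_kwh: float = 0.30) -> float:
    """Purchase price plus electricity for `years` of 24/7 operation."""
    return capex_eur + watts / 1000 * 24 * 365 * years * eur_per_kwh

# assumed: one used dual-socket 1TB-RAM server vs 8x consumer AM4 builds
server = tco(capex_eur=6000, watts=500, years=5)
consumer_fleet = 8 * tco(capex_eur=1200, watts=150, years=5)

print(f"used server:  ~{server:.0f} EUR over 5 years")        # ~12570 EUR
print(f"8x consumer:  ~{consumer_fleet:.0f} EUR over 5 years") # ~25368 EUR
```

Under these assumptions the single server wins on total cost even though each consumer box is individually cheap; flip the wattage or capex assumptions and the answer flips with them, which is the whole point of running the numbers.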
Funny you have spent the whole thread trying to say consumer is shit, only use server grade stuff, and here you are now arguing against server grade stuff. Go figure.
Not really. Want me to send what we buy for networking?
We run other business than LET, and we need high-performance IO for our customers. So yes, for us, something like that is the baseline for a higher-end customer.
If you get a Dell/HP FCLGA3647 chassis and start adding CPUs, RAM, storage and cards, it will cost you much more than that unit.
Either way, I see no benefit in consumer-grade hardware for hosting if you do not own the location, and a nice DC with everything up to standard is extremely expensive to build on your own.
That is my opinion.
As for home use, it really depends on what you do; a server at home will definitely not bring you peace and quiet.
PS: here is a nice and "cheap" one
https://www.ebay.com/itm/115914762973?_trkparms=amclksrc=ITM&aid=1110013&algo=HOMESPLICE.SIMRXI&ao=1&asc=255896&meid=336defec270e43cab37b76544ca5fcd0&pid=101196&rk=4&rkt=6&sd=386132158952&itm=115914762973&pmt=1&noa=0&pg=4429486&algv=PromotedRVIPbooster&brand=Cisco&_trksid=p4429486.c101196.m2219&amdata=cksum:115914762973336defec270e43cab37b76544ca5fcd0|enc:AQAIAAAA8JsbtOKd5uIU0OgsJNCgXCOhBN3bzud19O5upYxlLfpB3s2FbGg1F%2F06QxTlr8Axxuf0%2BMPnHS96Q%2FD8YOrlxHf3vWzXeDiRn2iRWaa0XAnkHVJud%2B0EIzIbcaYquua5Ugj4HLpUFqoPKlw6u2i33Q9xh7MVTyxB84KYBsuirnyP0jDhXSGqYucxZyyl8x0tj4rzTyBj%2BV%2Ba0YrfN2a7SIn2vCYLh3J860uq5QkBear3glKPQkLTaORXQ3bIaAnu0mkhQwJ84T2AQb6BP%2Fx80PGX9VKcReQGYGySmF00yXlkc35JEjJJ9fPn5ECR7%2BnmuQ%3D%3D|ampid:PL_CLK|clp:4429486
Ah Fuck, typo, would not trust. Damn it.
More typos, you quoted that like i would have said that ... Which i did not.
Please do not change your quotes to read like i would have said something which i have not said.
@emgh comparing a $500 and a $3000 CPU is pointless.

> @davide said:
> I've been using Hetzner dedis since... idk, 2018? Never seen a single random reboot.
The examples were chosen to show typical differences between consumer-grade and server-grade CPUs, which they did.
Price is one such difference.
Choosing a 5-10 year old EPYC to get an "equal" price makes no sense.
I feel we are not getting anywhere with this.
Let's break it down.
User perspective:
I got a Ryzen 5/7/9 or i5/i7/i9 system, it works = I am pleased with it. End of story.
He (the user/client) could not care less about remote access (BMC/iLO/iDRAC), drive failures, the mobo and so on. He is the client, and this is perfectly normal.
Provider perspective:
I have to sell, as I have monthly bills. What can I do with my cash to maximize profit? What do users want? Do a survey; hey, users kinda like these desktop-grade systems, as they have high CPU clocks and are fast.
Now here the provider has to make a decision.
Now, whatever factors are in play here, some sell PC-grade systems and some do not.
I worked in a PC assembly shop when I was in my early 20s, and boy do I hate PC systems. Out of a lot of 20 motherboards, same brand and model, 1 usually has all kinds of problems, and that's after it gets HOT from heavy use, or in a mix of configurations/RAM/CPU/firmwares/VGA cards and so on.
In a server, these types of problems are relatively nonexistent. At HP, when I worked there, other than user misconfiguration of settings in the BIOS/RAID card or faulty drivers, there were no technical issues with the servers (up to G8; after that I left the service department, so I cannot provide reliable info).
So rather than risking a 1-out-of-20 chance on a PC build, or spending countless hours testing compatibility between the parts — and after I find them, always keeping stock or searching the net (such a waste of time) — I will definitely trust a server for reliability, as at the end of the day, uptime will usually win.
I hope I made myself understood as to why some say PC-grade is shit.
I do know that some took this PC-parts assembly to the level of offering good services; good for them. Did you see the work behind this? Continuous testing and so on, which involves a lot of manpower and a lot of hours spent. Some, like OVH and others, have the $$ to do this, with modified cases, cooling solutions and so on. I do respect that.
But for the average provider, I fail to see the benefit in a risky PC build advertised as a server. 1 crash, and instantly those 20 users are going to burn you on LET for what shitty services you have.
Do not get me wrong, getting Xeon 5600 family servers today for selling VPS to clients is even worse; that is 10+ year old hardware (2011). Although the G6/7 generation was the best in terms of reliability in my opinion, those CPUs are as outdated today as a CRT TV compared to an OLED (hope I got this comparison right).
So I will stick to what I said: as a provider, I would stick to the Xeon/EPYC DDR4 family in a brand server, like HP or Dell.
Edit:
Sorry Supermicro fans; until my 30s I was in Europe and never had a chance to play with Supermicro, as in the EU at that time they were not that present on the market.
I still haven't found a way to crash my Supermicro except by pulling the power cord. Thinking of it, it has never crashed in like 100,000 hours uptime.
Edit: and this Supermicro is suffering the pains of hell under my malignancy; 90°C for years
I'm sure one of the cpu fans is 20 years old. Luckily no one lives in the basement.
Doesn't it depend solely on software? I doubt there is any major difference in stability between the Dell/HP/Supermicro platforms.
Some MTBF estimates use formulas in which the failure rate is proportional to component count, so MTBF is inversely proportional to it. Supermicro boards have fewer discrete components than their competitors'.
The dumber the tougher.
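That component-count argument is essentially a series-system reliability model: if the board fails when any one component fails and per-component failure rates add up, MTBF comes out as 1/(n·λ). The failure rate and component counts below are made-up illustrative numbers, not data for any real board:

```python
# Series-system reliability sketch: the system fails when any one component
# fails, so failure rates add and MTBF = 1 / (sum of rates).
# The rate and the component counts are illustrative assumptions.

def series_mtbf(failure_rates_per_hour):
    """MTBF in hours of a series system given per-component failure rates."""
    total_rate = sum(failure_rates_per_hour)
    return 1.0 / total_rate

lam = 1e-7  # assumed failure rate per component (failures/hour)

lean_board = series_mtbf([lam] * 1000)    # board with 1000 discrete components
busy_board = series_mtbf([lam] * 1500)    # board with 50% more parts

print(f"1000 parts: ~{lean_board:.0f} h MTBF")  # ~10000 h
print(f"1500 parts: ~{busy_board:.0f} h MTBF")  # ~6667 h
```

Same component quality, 50% more parts, one third less MTBF: "the dumber the tougher" in one line of arithmetic.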
From a provider point of view, if you sell VPS, get the biggest-ass server in there, put in the max amount of RAM and as many CPU cores as you can find, and get a SAN or other dedicated storage.
If you need to sell dedis, then there are these things:
I don't mean that model exactly, but you get the picture.
But then you see the price and the power density required. If you have a big hall in a relatively cold place (lower cooling necessary), you can order custom cases by the thousands and make your own mini PCs, or you can buy ready-made:
https://www.emag.ro/mini-pc-beelink-ser5-cu-procesor-amd-ryzentm-5-5500u-pana-la-4-0ghz-16gb-ddr4-500gb-ssd-wi-fi-6-radeontm-graphics-windows-11-negru-beelinkser5/pd/DKWS4MYBM/
That is about 400 EUR and I didn't search much.
Best of all, you only buy when you need them, when you have demand; expand as you go, order custom, something...
There is a case for both ways.
Core i9/i7/i5/i3 support ECC as long as you use a workstation or server motherboard. An Intel Core server with a server-ish motherboard and server RAM seems totally fine to me.
My home server has a Core i5 with a workstation W680 motherboard (Asus Pro WS W680M-ACE SE) and 64GB DDR5 ECC RAM. Supports up to 192GB RAM which is more than I'll ever need.
More than 64 GB for that CPU is most likely a waste (barring very few scenarios). The CPU would simply not be strong enough to run that many VMs or whatever workload.
Now, if you need a RAM disk or to sort some insane amount of data in many temporary tables, then you might need more, but the CPU-to-RAM ratio is already low for normal applications, unless that is the latest top-of-the-range i5, in which case it is about right.
By the way, these are two L5410s from ~2008. I don't decommission based on prejudice of age. They were imported from Israel, good multiculturalism too. (fuck israel, boom boom) They are about 15 years old, but I forgot their exact birthday.
Some very bright mind, please run the numbers to quantify how dumb it is to keep this old hardware around. And write it in the form of a comic; it will be funnier.
Mostly depends on your electricity price
Wow @davide , what do you actually run on that?
You could wait another 15 and give them to a museum, so kids by then can see what Intel's first 64-bit CPUs looked like, and how AMD beat the living shit out of them at the time.
Guessing those are in some HP G5/Dell 1950? I had similar ones, yep, like 15 years ago as you say.
About 2€ per FLOP.
Nothing, the only point is to prove the supreme stability of Server Grade Hardware.
Argument won.
Edit: admittedly, I have a fetish for insisting on improving the algorithm instead of throwing a faster chip at the problem. Works for me. Mostly.