Server vs Consumer Hardware.. - Page 2
245 Comments

  • davide Member
    edited October 2023

    @Swiftnode said:
    I mean, there are near endless possibilities, plenty of Supermicro "server" boards support i3/i5/i7/i9 chips. AsRockRack workstation boards have IPMI and support Ryzen and Core series chips, same with Gigabyte, etc.

    Current use case for us is W680D4U or Z690D4U boards with 12900K/13900Ks. (With some X13SAE-F motherboards in the rotation too when we couldn't get ASRR boards)

    In my very humble opinion those aren't server motherboards. I'd call them junk motherboards, but we may convene on a more neutral term, without racial or religious connotations, such as """""server""""" boards if you prefer :)

    Thanked by: host_c
  • Swiftnode Member, Host Rep

    @davide said:
    In my very humble opinion those aren't server motherboards. I'd call them junk motherboards, but we may convene on a more neutral term, without racial or religious connotations, such as """""server""""" boards if you prefer :)

    This forum needs a "troll" tag so I know when I'm responding to someone legitimately asking a question versus someone intentionally acting retarded.

  • emgh Member
    edited October 2023

    @Swiftnode said:

    @davide said:
    In my very humble opinion those aren't server motherboards. I'd call them junk motherboards, but we may convene on a more neutral term, without racial or religious connotations, such as """""server""""" boards if you prefer :)

    This forum needs a "troll" tag so I know when I'm responding to someone legitimately asking a question versus someone intentionally acting retarded.

    .

    @davide said: Unless someone uses Ubuntu or some other unstable piece of trash ...

  • davide Member

    I feel you, there are people who call their 20€ Raspberry a server.

    Hetzner makes bank selling """"server"""" junk that reboots and crashes ten times a year. People buy whatever crap they want. But Asus, Gigabyte, and ASRock are consumer brands; it's not really an opinion, they are meant to serve the consumer market.

    Thanked by: tentor, host_c
  • @emgh said:

    @davide said: Unless someone uses Ubuntu or some other unstable piece of trash ...

    Is that your Jackass badge?

  • MrRadic Patron Provider, Veteran

    @nocloud said:

    @MrRadic said:

    @nocloud said:
    I accidentally deleted the poll when editing EEC to ECC. So if you voted already, please re-vote!

    All Ryzen support ECC.

    Not according to WikiChip or ChatGPT...

    As of my last knowledge update in September 2021, not all AMD Ryzen CPUs officially support ECC (Error-Correcting Code) memory.

    AMD has traditionally differentiated between its consumer-oriented processors and its professional/server processors when it comes to ECC support. CPUs in the Ryzen series, which are aimed at consumers, often lack official support for ECC memory. However, some of these consumer CPUs may still support ECC in practice, but it's not guaranteed or officially validated by AMD.

    https://en.wikichip.org/wiki/amd/ryzen_7/5800

    https://en.wikichip.org/wiki/amd/ryzen_9/5900

    The X series might, but the Cézanne non-Pro APUs in my example don't. Nor do the non-X CPUs.

    They do, all of them do.

    Thanked by: Swiftnode
  • darkimmortal Member
    edited October 2023

    @davide said:
    I feel you, there are people who call their 20€ Raspberry a server.

    Tbf, Pis are pretty stable and have ECC(ish) as standard.

    @davide said:
    Hetzner makes bank selling """"server"""" junk that reboots and crashes ten times a year. People buy whatever crap they want.

    There is some good stuff at Hetzner, especially in the auction; I've seen workstation boards from Fujitsu and Asus that are properly 24/7 rated. But I do agree most of what they sell (at least before the latest few models) is barely fit to be called a server.

    @davide said:
    But Asus Gigabyte and Asrock are consumer brands, it's not really an opinion, they are meant to serve the consumer market.

    They do make some 24/7-rated boards as well, but in fairness I would choose other brands over them.

    @davide said:

    @Swiftnode said:
    I mean, there are near endless possibilities, plenty of Supermicro "server" boards support i3/i5/i7/i9 chips. AsRockRack workstation boards have IPMI and support Ryzen and Core series chips, same with Gigabyte, etc.

    Current use case for us is W680D4U or Z690D4U boards with 12900K/13900Ks. (With some X13SAE-F motherboards in the rotation too when we couldn't get ASRR boards)

    In my very humble opinion those aren't server motherboards. I'd call them junk motherboards, but we may convene on a more neutral term, without racial or religious connotations, such as """""server""""" boards if you prefer :)

    Those boards listed, especially the X13SAE, are quality boards

    Thanked by: Swiftnode
  • emgh Member

    @darkimmortal he got you

  • darkimmortal Member

    @emgh said:
    @darkimmortal he got you

    The lines are always blurred between neurodiversity and trolling on this forum

  • @darkimmortal said:

    @emgh said:
    @darkimmortal he got you

    The lines are always blurred between neurodiversity and trolling on this forum

    :D Definitely

    It’s probably often somewhere in-between

  • @MrRadic said:

    @nocloud said:

    @MrRadic said:

    @nocloud said:
    I accidentally deleted the poll when editing EEC to ECC. So if you voted already, please re-vote!

    All Ryzen support ECC.

    Not according to WikiChip or ChatGPT...

    As of my last knowledge update in September 2021, not all AMD Ryzen CPUs officially support ECC (Error-Correcting Code) memory.

    AMD has traditionally differentiated between its consumer-oriented processors and its professional/server processors when it comes to ECC support. CPUs in the Ryzen series, which are aimed at consumers, often lack official support for ECC memory. However, some of these consumer CPUs may still support ECC in practice, but it's not guaranteed or officially validated by AMD.

    https://en.wikichip.org/wiki/amd/ryzen_7/5800

    https://en.wikichip.org/wiki/amd/ryzen_9/5900

    The X series might, but the Cézanne non-Pro APUs in my example don't. Nor do the non-X CPUs.

    They do, all of them do.

    Fair enough, good to know. I honestly thought working vs. supported were two different things, an "it works but it's not supported" kind of thing, like changing the SSD in my Steam Deck: it works fine, but if it doesn't boot, Valve won't help me, nor will the Proxmox forum help if you're running it on an ARM device.
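
The working-vs-supported distinction above can at least be probed in practice: on Linux, `sudo dmidecode -t 17` reports each memory module's bus widths, and an ECC module carries 8 check bits per 64 data bits, so its Total Width (72) exceeds its Data Width (64). A minimal sketch of that heuristic, assuming dmidecode's usual `Key: Value` layout (the sample text below is illustrative, not from a real machine):

```python
# Heuristic ECC check from SMBIOS "Memory Device" (DMI type 17) records,
# as printed by `sudo dmidecode -t 17`. ECC DIMMs carry 8 extra check bits
# per 64 data bits, so Total Width (72 bits) exceeds Data Width (64 bits).

def ecc_likely(total_width_bits: int, data_width_bits: int) -> bool:
    """True when the extra bus width suggests ECC check bits are present."""
    return total_width_bits > data_width_bits

def parse_widths(dmidecode_text: str) -> list[tuple[int, int]]:
    """Collect (total, data) width pairs from `dmidecode -t 17` output."""
    pairs, total = [], None
    for line in dmidecode_text.splitlines():
        line = line.strip()
        # Skip "Unknown" widths; only parse numeric values like "72 bits".
        if line.startswith("Total Width:") and line.split()[2].isdigit():
            total = int(line.split()[2])
        elif line.startswith("Data Width:") and total is not None:
            pairs.append((total, int(line.split()[2])))
            total = None
    return pairs

# Illustrative sample mimicking dmidecode output for one ECC DIMM:
sample = """Memory Device
\tTotal Width: 72 bits
\tData Width: 64 bits"""

print([ecc_likely(t, d) for t, d in parse_widths(sample)])  # [True]
```

Note this only shows whether ECC-width DIMMs are populated; whether the CPU and board actually use the check bits is a separate question (check `/sys/devices/system/edac/` or the kernel's EDAC log lines).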

  • Maounique Host Rep, Veteran

    @host_c said: PCs are not built to run 24/7; they can, but they are not designed nor engineered that way.

    Maybe or maybe not.

    I am using LAPTOPS for my home lab, 10 running at this moment, all but 2 of them second-hand, running 24/7 with games and RDP for friends not fortunate enough to have great internet and who wish to save on power and noise overnight. Many of them have broken displays, and I usually disconnect those from the motherboard to save even more power, using an IDD virtual display instead for better flexibility with resolutions.
    I tell you, they don't break. Starting/stopping is WAY more wearing than running 24/7, and the battery provides some extra cushion in case a power outage lasts longer than the UPSes.

    After all, the HW life cycle is about 3 years, maybe 5; I don't need a server to last me 20 years. Sure, I have an HP which was with Facebook in 2010 and has a couple of E5520s, and it is still working, but who in their right mind would feed it 400 watts at mild load when ALL of my laptops, some even mining, are burning less than 500 watts overall (I am a maniac undervolter et al.).

    Yes, in a DC, IPMI is a must, but ASRock, Supermicro, and ASUS all have barebones servers for the AM5 platform, for example; it is not that they don't exist, and you can have the best of both worlds, IMO. Unless you need HUGE beasts with SAN storage sporting 512+ GB RAM and 50+ cores to optimize power (in both senses) and host hundreds of VMs, the so-called "consumer-grade" HW would work.

    Of course, if you would use consumer TLC SSDs in it, then you are too dumb for this world, unless you need it for streaming, transcoding, routing, and similar things which don't need storage at all.

  • emgh Member
    edited October 2023

    @jar care to share how many servers you operate, as well as failure stats for server vs. consumer-grade ones?

    Because you seem convinced that consumer grade hardware isn’t (as) stable, but I suspect possibly thousands of servers would be needed to draw such a conclusion?

    I mean, even if you have had 10 consumer grade servers fail, and no server-graded hardware do so, chances are it’s chance

    Also, when such failures have occurred, what part has been failing? Different every time, or always the motherboard, for example?

  • consumer hardware can be used in a fun way, like this one

    I'm not sure how profitable it is, but from a user perspective, renting this kind of dedicated server might be cheaper. It can be used for training stuff; sure, it's slower, but at least it doesn't run out of memory and it's miles cheaper compared to a GPU server. And then you can put mlc-llm on it as the runner.

    I guess my question is, is this trend increasing or is it just a symptom of the lack of silicon due to covid? Or is it done simply to cut cost in an ever more competitive business?

    Well, there's demand from customers, so there will be supply for it no matter how stupid the reason is. They might have a silly reason to use certain hardware, because whatever apps they want to run / develop fit snugly with that hardware. Even if it's silly from the technical side, as long as it makes a profit, why not.

    Thanked by: nocloud
  • davide Member
    edited October 2023

    Today I do all of my computing on an ATtiny85: facebook, reddit, netflix, porn... all the important home lab stuff. But I'll admit sometimes I cheat and just grab hold of the supercomputer that is in my pocket instead. Mom says I'm a scientist too because I'm learning python and I format my blog on two column PDFs. It's a beautiful world the one I live in my own head :)

  • Neoon Community Contributor, Veteran
    edited October 2023

    You pick what is best for you.
    Consumer vs Server is just bullshit; the same goes for Intel vs AMD.

    Thanked by: PulsedMedia
  • nocloud Member

    @davide said:
    Today I do all of my computing on an ATtiny85: facebook, reddit, netflix, porn...

    Overclocked though...right?!

  • nocloud Member
    edited October 2023

    @Neoon said:
    You pick what is best for you.
    Consumer vs Server is just bullshit; the same goes for Intel vs AMD.

    Don't start that one :smiley:

  • tentor Member, Host Rep

    @nocloud said:

    @Neoon said:
    You pick what is best for you.
    Consumer vs Server is just bullshit; the same goes for Intel vs AMD.

    Don't start that one :smiley:

    Personally, I am waiting for ARM to become mature enough for an x86(_64) vs ARM thread

    Thanked by: Maounique
  • I think consumer-grade hardware makes sense in two scenarios:

    1. If you're a service provider and you don't have high human costs (a smaller company where you or a partner are in the data center all the time anyway), running some consumer-grade hardware is a pretty good hack for undercutting the competition and reaching a market segment that cares about computing-power value more than about SLAs with endless 9s.

    2. If you're running a large, horizontally scaled app (an app running on many physical servers) with significant redundancy, you can get a lot more bang for the buck running consumer-grade hardware. (This assumes the savings are more than a rounding error to whoever is running such a large system.)

    Thanked by: mrTom, PulsedMedia
  • jar Patron Provider, Top Host, Veteran
    edited October 2023

    @emgh said:
    @jar care to share how many servers you operate, as well as failure stats for server vs. consumer-grade ones?

    Because you seem convinced that consumer grade hardware isn’t (as) stable, but I suspect possibly thousands of servers would be needed to draw such a conclusion?

    I mean, even if you have had 10 consumer grade servers fail, and no server-graded hardware do so, chances are it’s chance

    Also, when such failures have occurred, what part has been failing? Different every time, or always the motherboard, for example?

    I wouldn’t be that organized, to be honest. But I’ve had multiple NVMe failures at Hetzner, and otherwise the systems I regret the most are the ones without ECC (hard to pin it down exactly, but the number of unforeseen and unusual errors is just higher). But you have to remember I do a lot of read/write operations on small files all day long too.

    It’s not organized information so you’re free to ignore it 💜

  • davide Member
    edited October 2023

    @nocloud said:
    Overclocked though...right?!

    An ATtiny85 can run Binance at 2 MHz if enough crap is deleted from the source code.

    ...not really, but kinda. Not at all actually. Wouldn't recommend anyway. Do the opposite of what I say and you'll be fine.

    Thanked by: nocloud
  • host_c Member, Patron Provider

    @tentor said: Personally I am waiting for ARM to become mature enough for x86(_64) vs ARM thread

    There is no comparing ARM with x86. It is like comparing Diesel to Petrol: they might do the same job, but in a totally different way.

    ARM is efficient because of reduced instruction set.

    ARM is RISC, x86 is CISC.

    Now, it took ARM more than 20 years to be able to do what x86 does, and it got here mainly because of the smartphone industry.

    By the time ARM catches up and software is written for it, we might as well be using Quantum CPUs. (And I am not talking about the Quantum Fireball :smiley:)

    For specific use-case scenarios, ARM is much more efficient. For general use, at the moment, it cannot beat x86.

    Thanked by: PulsedMedia
  • tentor Member, Host Rep

    @host_c said: It is like comparing Diesel to Petrol. Might do the same job, but in a totally different way.

    I don't think your comparison is quite appropriate, but ARM definitely takes a different approach.

    @host_c said: Now it took ARM more than 20 years to be able to do what x86 does, and it got here, mainly because of smart phone industry.

    Embedded electronics made it obvious that CISC is not that good. Yeah, it got lots of attention and developed into what it is currently, but I don't see any future potential in x86. ARM, on the other side, targets efficiency in the first place, and for the hosting industry it is interesting for lower power consumption per core, thus further increasing client density per rack unit.

    @host_c said: By the time ARM will catch up, and software is written for it, we might as well use Quantum CPU's.

    I highly doubt it will take that much time.

    @host_c said: For general use, at the moment, cannot beat x86.

    For general use there are already Apple's M1/M2 systems :blush:

  • host_c Member, Patron Provider
    edited October 2023

    Actually, this is a discussion not of ARM vs x86 but rather CISC vs RISC, and it has been going on for more than 20 years; I doubt we will finish it here.

    CISC prevailed in the '60s because applications were smaller in size; the more instructions the CISC CPU had, the fewer memory accesses were required, and memory at the time was slow as hell, so the outcome was a much faster running application. And we are talking bits as a measurement here.

    As computers became smaller (from room-sized) and started making their way into the offices of companies, then into the homes of people, CISC adoption was so great that this became the new standard.

    RISC started gaining ground in the PC industry over the past ±10 years or so. Until then, RISC was, and is, mostly used in routers, switches, or other devices that do specific tasks. (A router will not run a Minecraft server, but it will do routing at 100Gbps, for example.)

    Apple contributed a lot to this with the M1, as they did both the CPU and the software, so they delivered a finished product that actually works; until then, in the mainstream PC/server segment, you did not even hear about ARM.

    It is not about how much of the market RISC will get, but rather which segments of the market it will dominate.

  • TimboJones Member
    edited October 2023

    @nocloud said:
    The X series might, but the Cézanne non-Pro APUs in my example don't. Nor do the non-X CPUs.

    Had an ASRock Rack X470 server board that didn't have ECC with a 5800X. The MB product page said it needed the PRO variant for ECC. I swear they added that shit after purchasing...

    PRO versions were not available in my area until much later and more expensive, so that was a kick in the teeth.

  • fiberstate Member, Patron Provider

    Most motherboard manufacturers are seeing the benefit of offering server features on motherboards that support desktop CPUs such as the Ryzen line. ASRock Rack boards support ECC memory and full IPMI features. In CPU benchmark tests, some of the latest desktop CPUs are faster than server CPUs from just a generation prior. Intel is a prime example of this: in terms of price/performance, current Xeons aren't doing so well against the latest-gen Ryzens.
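
The price/performance point above comes down to simple arithmetic: benchmark points bought per dollar of CPU price. A toy sketch; the scores and prices below are hypothetical placeholders, not real benchmark results or street prices, so substitute figures from a benchmark you trust:

```python
# Toy benchmark-points-per-dollar comparison. The scores and prices are
# HYPOTHETICAL placeholders for illustration, not real measurements.

def perf_per_dollar(score: float, price_usd: float) -> float:
    """Benchmark points bought per dollar of CPU price."""
    return score / price_usd

cpus = {
    "current-gen desktop Ryzen": {"score": 40_000, "price": 450},  # hypothetical
    "previous-gen Xeon":         {"score": 30_000, "price": 900},  # hypothetical
}

for name, c in cpus.items():
    print(f"{name}: {perf_per_dollar(c['score'], c['price']):.1f} points/$")
```

The same calculation extends naturally to watts per point or rack units per point, which is where server-grade parts claw some value back.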

  • Val Member
    edited October 2023

    @host_c said:

    @tentor said: Personally I am waiting for ARM to become mature enough for x86(_64) vs ARM thread

    There is no comparing ARM with x86. It is like comparing Diesel to Petrol: they might do the same job, but in a totally different way.

    ARM is efficient because of reduced instruction set.

    ARM is RISC, x86 is CISC.

    Now, it took ARM more than 20 years to be able to do what x86 does, and it got here mainly because of the smartphone industry.

    By the time ARM catches up and software is written for it, we might as well be using Quantum CPUs. (And I am not talking about the Quantum Fireball :smiley:)

    For specific use-case scenarios, ARM is much more efficient. For general use, at the moment, it cannot beat x86.

    It's very much up for debate whether ARM is still RISC nowadays. The frontend ISA does not matter much anyway; it all translates down to micro-ops for both x86 and ARM.

  • Maounique Host Rep, Veteran

    @host_c said: Quantum Fireball

    Which wasn't a CPU, but hey, I got the joke :P

    Anyway, ARM vs x86 is already settled. ARM won; the rout just hasn't begun yet.
    The moment ARM won was when they went fabless and somewhat open source. Then the final blow came when DCs grew to behemoth sizes and power constraints broke the camel's back, while smartphones gave incentives for (and paid for) going low-power, high-compute at a fraction of the cost.

    Cross-compiling works in many cases, at least for server stuff; anyway, you can already run an x86 VM on ARM with qemu pretty easily.
    The way ARM is moving, support for CISC translation will be on the market pretty soon. Again, Apple is launching the ARM revolution and they have the means and will to do it. Intel made a really bad move there...
    Soooo, the death knell for x86 has sounded already; now we are looking at the corpse and waiting for it to melt away.

    Thanked by: tentor
  • host_c Member, Patron Provider

    @Maounique said: Again, apple is launching the ARM revolution and they have the means and will to do it. Intel made a really bad move there...

    I agree 100% on this.
