
Intel vs AMD in performance per watt/cost

Drv Member

What is the best choice to go with these days if you want the most performance per watt?
I know that motherboards also differ in power consumption at idle/load.
So, if you need to buy a lot of them, used, to save $, what would you go with?
*ECC support needed.

Comments

  • TrK Member

    Get EPYC or Threadripper; really good performance per cost, IMO.

    Thanked by: maverickp
  • Arkas Moderator

    It really depends on which processors you are talking about. But generally, performance/watt, AMD wins.

  • covent Member

    99% of people base their views on if they like the other logo better, so replies will be opinionated babbling without any numbers

  • Arkas Moderator

    @covent said: 99% of people base their views on if they like the other logo better

    You got any numbers for that?

  • JabJab Member
    edited January 2023

    I like trains.

    // But totally agree - if you are not gonna define "performance", then you are gonna have a bad time buying shit based just on a vague statement that depends on the workload.

  • cybertech Member
    edited January 2023

    @covent said:
    99% of people base their views on if they like the other logo better, so replies will be opinionated babbling without any numbers

    this is opinionated babbling.

    Thanked by: Arkas
  • PulsedMedia Member, Patron Provider
    edited January 2023

    AMD wins perf per watt; it's not even a competition at this point.

    Desktop CPUs? Just check Hardware Unboxed or Gamers Nexus reviews.

    EPYCs: Check ServeTheHome.

    Both have numbers. Regardless, in my experience with EPYC, when the CPU says TDP = X watts, it really is X watts. No more, no less.

    You can also use cTDP to lower wattage on all EPYCs, many (if not all) Zen 1-3 parts, and all Zen 4 products.

    The only place Intel might have a fighting chance is ultra low power, non-laptop embedded - say 5W max - and that's only because AMD might not have a product for that niche.

    AMD low power variants are rarer than Intel's, but I think that has a lot to do with the fact that you can just lower the wattage from the BIOS on many models :) Also, AMD doesn't make as many embedded CPUs.

    Desktop CPUs from AMD are on the ragged edge now as well, so a stock X variant will use A LOT of juice, but if you lower the TDP even just a fraction you get much lower power consumption and keep like 97% of the performance. The last 3% costs like 30% in power consumption. Think of it as OC'd from the factory, with a guaranteed OC lasting for the lifetime of the product.

    I think it was actually so ragged-edge that lowering power consumption by almost 50% removed less than 10% of performance.

    Regardless, AMD has been the king of efficiency for many years now.

    Intel is running into the physical limits of how much wattage they can pump into their CPUs - not in terms of silicon, but in how to cool them and deliver the power. That should tell you something.

    Wasn't the 13900K something like 330W stock if you had the cooler for it? And the biggest AIOs on the market are not able to keep it cool.

    Thanked by: Drv, crunchbits
  • covent Member

    @Arkas said:

    @covent said: 99% of people base their views on if they like the other logo better

    You got any numbers for that?

    Sorry guys, bad day at the office. It was not meant to offend anyone replying, or the OP.

    Thanked by: Arkas
  • jsg Member, Resident Benchmarker
    edited January 2023

    @TrK said:
    Get EPYC or Threadripper; really good performance per cost, IMO.

    ... or plain Ryzen. It depends on what one needs and is going for.

    One point not often discussed is that AMD Zen processors, just like Intel products, are basically heaters which, as an interesting side effect, also do computation. One point, possibly the decisive one, is clock speed. Higher clock ~ higher power consumption - and the increase is significantly worse than linear.
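
    For intuition, here's a minimal toy sketch of that superlinear relationship, assuming the classic dynamic-power model P ~ C*V^2*f with voltage scaling roughly linearly with clock near the top of the frequency range (all numbers are illustrative, not measurements of any real CPU):

    # Toy model: near the top of the range the stable voltage V must also
    # rise roughly with f, so power grows roughly with f^3 there.
    def relative_power(clock_ratio: float) -> float:
        """Dynamic power relative to stock, assuming V scales ~linearly with f."""
        voltage_ratio = clock_ratio              # simplifying assumption: V ~ f
        return voltage_ratio ** 2 * clock_ratio  # C * V^2 * f, with C constant

    for cut in (0.05, 0.15, 0.30):
        f = 1.0 - cut
        print(f"clock -{cut:.0%}: power ~{relative_power(f):.0%} of stock")
    # clock -5%: power ~86% of stock
    # clock -15%: power ~61% of stock
    # clock -30%: power ~34% of stock

    In this toy model a 15% clock cut already saves roughly 40% power - exactly the "worse than linear" effect.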

    Another important factor is what you want. Just look at those processors as black boxes that provide a more or less given performance for a more or less given electrical power envelope. You want many cores (as a hoster you typically do)? Fine, get the Zen product that spreads its performance over many (weaker) cores (EPYC). You want both pretty good performance and relatively low power consumption, but far fewer cores? OK, get Ryzen. You want a few more cores (but way fewer than EPYC) and "extreme" performance? OK, get Threadripper, but be prepared to feed it plenty of electrical power.
    And that also holds true within a family, e.g. EPYC: fewer cores ~ higher performance per core.

    And btw, no, "TDP == the power it actually guzzles" is wrong for AMD too. Yes, unlike Intel, AMD at least tries to provide realistic and (kind of) true TDP numbers, but hell, processors are bloody dynamic beasts and nobody can really tell how much electrical power one really consumes, let alone its maximum.
    On the other hand, you can "tame" even Intel guzzlers by simply lowering the clock speed - the problem there is marketing and, pardon me, mindless customers who promise/demand "maxxxxximum performance!!!"

    I use a Ryzen 4k APU (no bloody graphics card guzzling another couple of hundred watts) and am really happy with it (and I often run intense algorithms).

    As a provider I'd very likely go for 24-core EPYCs for the normal VPS line and 48-core EPYCs for the budget line.

  • ericls Member, Patron Provider

    I want an ARM-based cloud, badly.

    Thanked by: Maounique
  • PulsedMedia Member, Patron Provider

    @jsg said: One point, possibly the decisive one, is clock speed. Higher clock ~ higher power consumption - and the increase is significantly worse than linear.

    Comparable only within that specific generation and/or series of CPUs.
    It is called efficiency.

    @jsg said: And btw, no, "TDP == the power it actually guzzles" is wrong for AMD too. Yes, unlike Intel, AMD at least tries to provide realistic and (kind of) true TDP numbers, but hell, processors are bloody dynamic beasts and nobody can really tell how much electrical power one really consumes, let alone its maximum.

    I speak from experience with a multitude of generations of AMD CPUs in servers.
    Typically, when AMD talks about server CPUs, TDP === max power consumption. 45W truly is 45W. 180W truly is 180W.

    Not like Intel, where 65W ackchyually is 180W.

    Opteron 2nd gen 45W + 4x3.5" = ~93W .... exact.
    Xeon 5600 series 65W + 4x3.5" = ~170-190W or so ...
    Xeon 5600 series 65W + 12x3.5" = ~210W or so ...
    Xeon E5 v1 series 65W + 12x3.5" = ~210W or so ...

    Hell, even desktop CPUs have maximum package power limit settings now. Yes, that means you can tell the CPU specifically what its maximum peak power draw is - not the TDP figure, which has a ton of variables, but the actual upper ceiling.

    EPYC CPUs seem to more or less treat TDP == package power limit, though I have not done the research to confirm it. Call it an experience-based educated hunch (aka a guess).
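
    If someone wants to turn that hunch into numbers: on Linux, the kernel's RAPL powercap interface exposes a cumulative package energy counter you can sample to get the actual average package draw. Rough sketch below - the sysfs path, and whether your kernel exposes AMD packages through it, are assumptions to verify on your own box:

    import time

    # Package 0 cumulative energy counter, in microjoules. Path and driver
    # support vary by kernel and vendor, and reading it usually needs root
    # on recent kernels -- check it exists before trusting this.
    RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

    def package_watts(interval_s: float = 1.0) -> float:
        """Average package power over the interval (ignores counter wraparound)."""
        with open(RAPL) as fh:
            e0 = int(fh.read())
        time.sleep(interval_s)
        with open(RAPL) as fh:
            e1 = int(fh.read())
        return (e1 - e0) / 1e6 / interval_s  # uJ -> J -> W

    print(f"package draw: ~{package_watts():.1f} W")  # compare against the TDP label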

    @jsg said: I use a Ryzen 4k APU (no bloody graphics card guzzling another couple of hundred watts) and am really happy with it (and I often run intense algorithms).

    Modern GPUs in desktop use are like 15-20W ... Perhaps I should measure that myself.

    DISCLAIMER: I have been an AMD stockholder since the moment it was clear Ryzen was coming, so late ~'16 or early '17 -- and my desktop was an FX 8350 + HD 7970 at the time.

  • jsg Member, Resident Benchmarker

    @PulsedMedia said:

    @jsg said: One point, possibly the decisive one, is clock speed. Higher clock ~ higher power consumption - and the increase is significantly worse than linear.

    Comparable only within that specific generation and/or series of CPUs.
    It is called efficiency.

    Nope, it's generally valid. Take any processor, Intel or AMD, and let it run at its max speed (not even overclocking, just max speed) vs. running it at, say, 15% lower speed, and you'll see a significant difference of way more than 15% less power consumption.

    @jsg said: And btw, no, "TDP == the power it actually guzzles" is wrong for AMD too. Yes, unlike Intel, AMD at least tries to provide realistic and (kind of) true TDP numbers, but hell, processors are bloody dynamic beasts and nobody can really tell how much electrical power one really consumes, let alone its maximum.

    I speak from experience with a multitude of generations of AMD CPUs in servers.
    Typically, when AMD talks about server CPUs, TDP === max power consumption. 45W truly is 45W. 180W truly is 180W.

    On one hand, yes, but that's average consumption. On the other hand, no, because it can change within even a microsecond and will occasionally (or even over longer stretches of time) reach far higher consumption than TDP; but of course it will also sometimes (or even often, depending on the usage pattern) be much lower than TDP.
    A major reason for Intel to play the 'P' and 'E' core game.

    Not like Intel, where 65W ackchyually is 180W.

    Yep, one shouldn't even care about their ridiculous fantasy-land TDP numbers.

    Opteron 2nd gen 45W + 4x3.5" = ~93W .... exact.
    Xeon 5600 series 65W + 4x3.5" = ~170-190W or so ...
    Xeon 5600 series 65W + 12x3.5" = ~210W or so ...
    Xeon E5 v1 series 65W + 12x3.5" = ~210W or so ...

    (a) "exact"? There is no "exact" power consumption numbers with processors unless you happen to have a decent lab. What you can measure (with normal people equipment (and knowledge)) is a crude average (which however is fine with normal users).
    (b) Hmmm, a processor isn't about power consumption only. I really like AMD but also must see what kind of performance one gets out of those Opterons ...

    Hell, even desktop CPUs have maximum package power limit settings now. Yes, that means you can tell the CPU specifically what its maximum peak power draw is - not the TDP figure, which has a ton of variables, but the actual upper ceiling.

    I'm btw no less interested in how low one can get consumption, and that's one of the points where Ryzens shine.

    @jsg said: I use a Ryzen 4k APU (no bloody graphics card guzzling another couple of hundred watts) and am really happy with it (and I often run intense algorithms).

    modern GPUs on desktop use are like 15-20W ... Perhaps i should measure that myself.

    The last GPU I had (because decent APUs weren't available yet) was (a) silent (no fan) and (b) roughly in that ballpark (IIRC, ca. 25W when e.g. gaming).
    Now, to be fair, when I'm gaming it's C&C or (my most "modern" game) Homeworld. For C&C and some other old games I use an HP USFF; for "high end" gaming (well, in my universe) I have a Ryzen 3k APU.

    I do remember many, many years when I had quite high-end PCs (for professional use) and always felt "yuck, this damn thing is sooooo boringly slow". Not anymore since Ryzen (and I "only" have an 8-core workstation). Also, since Ryzen I feel no pressure whatsoever to upgrade; I'm perfectly fine with my Ryzen PCs. Finally I/we have reached a point where our PCs aren't slowing us down but are adequate tools.

  • PulsedMedia Member, Patron Provider

    @jsg said: Nope, it's generally valid. Take any processor, Intel or AMD, and let it run at its max speed (not even overclocking, just max speed) vs. running it at, say, 15% lower speed, and you'll see a significant difference of way more than 15% less power consumption.

    Uh, that's what I said.
    But it's only comparable within the same series of CPUs.

    It's not like a 3GHz 10-year-old Xeon consumes exactly the same power as a 3GHz Ryzen 7000 series today.

    @jsg said: On one hand, yes, but that's average consumption. On the other hand, no, because it can change within even a microsecond and will occasionally (or even over longer stretches of time) reach far higher consumption than TDP; but of course it will also sometimes (or even often, depending on the usage pattern) be much lower than TDP.

    OK, if we start looking at the pico-, nano- or microsecond scale, YES, there is obviously noise in the power consumption. What we care about, however, is the energy consumed, which has a time component; it's not instantaneous.

    And that's also not what I said. For AMD server CPUs that really does not hold true; like I explained, AMD server CPU TDP tends to be more like the package power limit, AKA the absolute limit. You can safely assume with reasonable certainty that it will wind up somewhere in that vicinity.

    For Intel it works more like their desktop counterparts, even on Xeons: no basis in reality; only testing shows you. The numbers are almost completely irrelevant and, even in the best case, only show the relative difference between CPUs of the same series. The advertised TDP has absolutely no real-world relevance beyond that relative comparison. Intel 65W can easily be 200W - even a decade ago already.
    With desktop parts that's very obvious now, their "115W" or "125W" parts consuming 250-330W in reality out of the box.

    AMD on desktop, though: all the way up to Zen 3 the TDP actually had some basis in reality, but for Zen 4 desktop parts they too went ridiculous, so they can run an unconstrained power envelope for the last few % of performance, which gains so much hype and focus (despite being irrelevant for the most part, AND very few people pay 500€ extra to get 2% more performance).

    I see this as kind of forced by Intel; it's hard to compete with a 125W peak power limit when your competitor goes to 330-400W to win the single metric everyone is hyping about: gaming single-thread performance.

    I think that was a poor strategy from AMD, but then again, you can just cTDP / PPT it to sensible levels OR get the non-X variant.

    @jsg said: (a) "Exact"? There are no "exact" power consumption numbers with processors unless you happen to have a decent lab. What you can measure (with normal people's equipment and knowledge) is a crude average (which, however, is fine for normal users).

    This was total system consumption, which is ultimately what matters. Regardless, 4x 3.5" alone is already in the 30-40W range depending on the model. The chipset and RAM also take juice. The Opterons ran 6 modules each.

    Measurements were taken across hundreds and hundreds of servers over the span of nearly a decade. 93W measured nearly a decade ago; 93W still measured a few months back. From real-life production servers. There were some tiny changes from system to system, but 93W was typical; the range was something like +/- 4%, and most (median) servers measured 93W within 1%.

    Those Opterons were actually just as performant in real-world use as the 1-gen-newer Xeons too :) Never had a CPU performance bottleneck with those, but they were restricted to 1Gbps only. Not really a difference in real-world applications; I cannot remember what synthetics said. However, they lacked the newer built-in encryption HW acceleration (some subset of the AES-NI extensions).

    I would have preferred to let them run one more cycle as entry-level dedis, but the fscking energy crisis forced our hand. We consolidated 9-18:1 onto new EPYC servers. Ultimate performance increased by about 50%, but some measurements showed less than a 10x performance difference, despite going from a 45W 6-core to a 280W 32-core -- which was rather surprising and unexpected. That case is non-ordinary and never met in real-world production; borderline synthetic test.

    @jsg said: I'm btw no less interested in how low one can get consumption, and that's one of the points where Ryzens shine.

    Ultimate lowest draw and perf/watt are where Ryzens shine.

    @jsg said: The last GPU I had (because decent APUs weren't available yet) was (a) silent (no fan) and (b) roughly in that ballpark (IIRC, ca. 25W when e.g. gaming).

    Now, to be fair, when I'm gaming it's C&C or (my most "modern" game) Homeworld. For C&C and some other old games I use an HP USFF; for "high end" gaming (well, in my universe) I have a Ryzen 3k APU.

    Yeah, that stuff doesn't need any more than that.

    @jsg said: I do remember many, many years when I had quite high-end PCs (for professional use) and always felt "yuck, this damn thing is sooooo boringly slow". Not anymore since Ryzen (and I "only" have an 8-core workstation). Also, since Ryzen I feel no pressure whatsoever to upgrade; I'm perfectly fine with my Ryzen PCs. Finally I/we have reached a point where our PCs aren't slowing us down but are adequate tools.

    Guess why I'm writing this on my Ryzen 1700X workstation, while the parts for my new Threadripper 2970WX workstation have been gathering dust in my garage for a year now, perhaps more ... lol

    I thought I wanted that oomph for local dev VMs, but I've just been running them from the DC instead.

    It's fast RAM + fast M.2 NVMe storage, combined with the cores of a Ryzen, that hits the sweet spot for most desktop usage perfectly. It's all about the I/O for the most part.

    But give it a few years for Microsoft and Canonical to find ways to waste those clock cycles ... performance in certain situations is back to 10 years ago because of snap ... Fun waiting 5-15 seconds for Firefox or Chromium just to launch ... or to open a hyperlink from, say, email ... or 20-30 seconds for Thunderbird to launch ... -- the previous non-snap Ubuntu version had no such delays. Snap is a catastrophe ... combined with the ridiculously unstable nature of KDE today, it has made this Ubuntu + KDE experiment a rather unpleasant experience. (Coming from XFCE.)

  • battle of the essay titans!

  • jsg Member, Resident Benchmarker
    edited January 2023

    @PulsedMedia said:

    @jsg said: Nope, it's generally valid. Take any processor, Intel or AMD, and let it run at its max speed (not even overclocking, just max speed) vs. running it at, say, 15% lower speed, and you'll see a significant difference of way more than 15% less power consumption.

    Uh, that's what I said.
    But it's only comparable within the same series of CPUs.

    My point wasn't comparability. My point was that, no matter the processor, lower clock ~ significantly lower power consumption.

    ... like I explained to you ...

    Hint: I know how to use diverse lab equipment (e.g. an oscilloscope), I have an electronics background, and I wasn't born yesterday (I have quite a bit of experience with servers and DCs too).

    @jsg said: (a) "Exact"? There are no "exact" power consumption numbers with processors unless you happen to have a decent lab. What you can measure (with normal people's equipment and knowledge) is a crude average (which, however, is fine for normal users).

    This was total system consumption, which is ultimately what matters.

    Sure. But as it so happens, we were discussing in the context of processors from AMD and Intel.

    @jsg said: I do remember many, many years when I had quite high-end PCs (for professional use) and always felt "yuck, this damn thing is sooooo boringly slow". Not anymore since Ryzen (and I "only" have an 8-core workstation). Also, since Ryzen I feel no pressure whatsoever to upgrade; I'm perfectly fine with my Ryzen PCs. Finally I/we have reached a point where our PCs aren't slowing us down but are adequate tools.

    Guess why I'm writing this on my Ryzen 1700X workstation, while the parts for my new Threadripper 2970WX workstation have been gathering dust in my garage for a year now, perhaps more ... lol

    To be honest, my upgrade from a Ryzen 1k to a 4k didn't bring me a WHOA moment. Sure, my newer processor is faster, but honestly, I'm not sure I need that extra speed.
    So, I totally agree.

    It's fast RAM + fast M.2 NVMe storage, combined with the cores of a Ryzen, that hits the sweet spot for most desktop usage perfectly. It's all about the I/O for the most part.

    But give it a few years for Microsoft and Canonical to find ways to waste those clock cycles ... performance in certain situations is back to 10 years ago because of snap ... Fun waiting 5-15 seconds for Firefox or Chromium just to launch ... or to open a hyperlink from, say, email ... or 20-30 seconds for Thunderbird to launch ... -- the previous non-snap Ubuntu version had no such delays. Snap is a catastrophe ... combined with the ridiculously unstable nature of KDE today, it has made this Ubuntu + KDE experiment a rather unpleasant experience. (Coming from XFCE.)

    Hehe. I also trust both companies to reduce our advantage by mercilessly bloating the OS. But then, I use neither Windows 10 nor 11 nor Ubuntu ...

  • @ericls said:
    I want an ARM-based cloud, badly.

    AWS and Oracle Cloud (which is so popular around here) both offer ARM.

  • PulsedMedia Member, Patron Provider

    @jsg said: Sure. But as it so happens, we were discussing in the context of processors from AMD and Intel.

    A system which consumes 93W total could not possibly have a CPU which consumes 150W by itself ...
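
    As a tiny sketch of that arithmetic - the 93W wall figure and the 30-40W for 4x 3.5" drives come from the posts above; everything else on the board is ignored here, so this is an upper bound on the CPU, not an estimate:

    def max_cpu_watts(wall_w: float, non_cpu_w: float) -> float:
        """Upper bound on CPU draw: the CPU cannot use more than the whole
        box pulls at the wall, minus everything that isn't the CPU."""
        return wall_w - non_cpu_w

    bound = max_cpu_watts(93, 35)  # 35 W = mid-range of the 30-40 W for 4x 3.5"
    print(f"CPU can draw at most ~{bound:.0f} W")  # ~58 W: a 150 W CPU is impossible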

  • emperor Member
    edited January 2023

    @jsg said: I use a Ryzen 4k APU (no bloody graphics card guzzling another couple of hundred watts) and am really happy with it (and I often run intense algorithms).

    This was the best change I've made so far. Sold my graphics card for 300 euros (bought at 60 euros) and swapped my Ryzen 5 1600AF for a Ryzen 3 4300GE, which is an awesome CPU, although it gave me a hard time finding one :)

    Thanked by: jsg
  • AXYZE Member

    @Drv said:
    What is the best choice to go with these days if you want the most performance per watt?
    I know that motherboards also differ in power consumption at idle/load.
    So, if you need to buy a lot of them, used, to save $, what would you go with?
    *ECC support needed.

    Your question is very broad; you didn't specify whether you are talking about desktop, workstation, or server, or what your budget is.
    If you want to get good performance per watt and save $, then:

    Desktop (you can go used)
    Ryzen 3xxx/5xxx - ECC is not "officially supported" on AM4, but it does work on many motherboards.

    Best $/perf server (used)
    Xeon v4 - the first 14nm Intel server chips. Perf/watt sits somewhere between 1st-gen and 2nd-gen EPYC, but you can get them VERY CHEAP on the used market - GreenCloud sells $15/yr VPSes on v4s.
    EPYC 7xx2 - revolutionary chips that told the world "AMD is back"... they are so good that many companies skipped their usual X-year upgrade cycle, and because of that there aren't many great deals on the used market. Search for one; maybe you will find it, but if not, either save a lot of $$$ with Xeon v4s or consider Threadrippers.

    Highest-performance server
    Threadripper / Threadripper Pro - nothing better on the market currently, very good pricing. If you want single-socket performance, it's the best deal.

    And there is EPYC Genoa / 9xx4. If you have the budget, don't even ask if it's worth it.
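
    If you want to put numbers behind "performance per watt and save $": the comparison boils down to hardware price plus electricity over the holding period, divided by a performance score. A small sketch below - every price, wattage, and perf figure is a hypothetical placeholder, not a benchmark result, so plug in your own numbers and local kWh price:

    def total_cost(hw_price: float, avg_watts: float, years: float,
                   price_per_kwh: float = 0.30) -> float:
        """Hardware price plus electricity over the holding period."""
        kwh = avg_watts / 1000 * 24 * 365 * years
        return hw_price + kwh * price_per_kwh

    # name: (hw_price, avg_watts, perf_score) -- ALL HYPOTHETICAL placeholders
    options = {
        "used Xeon E5 v4":  (300,  180, 1.0),
        "used EPYC 7xx2":   (900,  200, 2.2),
        "Threadripper Pro": (2500, 280, 3.0),
    }
    for name, (price, watts, perf) in options.items():
        cost = total_cost(price, watts, years=3)
        print(f"{name:18s} ~{cost:6.0f} total over 3y -> {cost / perf:6.0f} per perf unit")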

    Thanked by: jsg
  • PulsedMedia Member, Patron Provider

    @AXYZE said: And there is EPYC Genoa / 9xx4. If you have the budget, don't even ask if it's worth it.

    Surprisingly "cheap", lowest cost Genoa chips are like 900€
    Motherboards about the same as previous gen epycs, and well, DDR5 is ~double what DDR4.

    But it's not outrageous. In fact, a decent ~1100€ 16C could probably replace a 1st-gen 32C.

    We've actually considered these, but since DDR5 costs a lot and we still have plenty of EPYC Zen 1-3 parts lying around, we'll use those first before getting our first Genoa parts.

    Definitely on our roadmap already, though.

  • jsg Member, Resident Benchmarker
    edited January 2023

    @AXYZE said:
    Best $/perf server (used)
    Xeon v4 - the first 14nm Intel server chips. Perf/watt sits somewhere between 1st-gen and 2nd-gen EPYC, but you can get them VERY CHEAP on the used market - GreenCloud sells $15/yr VPSes on v4s.
    EPYC 7xx2 - revolutionary chips that told the world "AMD is back"... they are so good that many companies skipped their usual X-year upgrade cycle, and because of that there aren't many great deals on the used market. Search for one; maybe you will find it, but if not, either save a lot of $$$ with Xeon v4s or consider Threadrippers.

    Nice and sensible summary overall.

    Re E5 v4: Yep, in fact I've seen quite a few E5 v4 VPSes whose performance was in the same ballpark as EPYC 7xx2. In my view, those two are what makes the most sense in (not high-end) servers. For providers the E5 v4 is dirt cheap, and for customers both offer decent to good performance. I'd go for a Ryzen-based VPS only if I really needed the extra performance, and I'd in fact avoid Threadripper completely unless I had a very specific use case requiring and justifying it. I'm not at all "green" or woke, but I think one should at least avoid needlessly wasting energy.
