
Ryzen 5000 Series Help

pierrepierre Member

    Any recommendations on coolers for the 5900X/5950X? I've seen that some use the Dynatron L3, but I don't really feel comfortable water-cooling in a server rack alongside other servers worth hundreds to thousands of dollars. Are there any similar or better alternatives to it?

I'm cramming it all into a 1U Supermicro Chassis.

Cheers!

Comments

  • deankdeank Member, Troll

    You will probably want to turn off turbo core (a quick sketch of toggling this from the OS follows this comment). Otherwise, that CPU will gladly hit its thermal limit in a 1U space.

    I really wouldn't want that CPU in 1U. 2U is doable, not 1U... hmm.
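A minimal sketch of what "turning off turbo core" can look like from the OS side, assuming a Linux host whose cpufreq driver (e.g. acpi-cpufreq) exposes the global boost toggle; the equivalent BIOS option, disabling Core Performance Boost, does the same thing and survives reboots:

```python
# Minimal sketch: disable CPU boost on a Linux host whose cpufreq
# driver exposes the global "boost" toggle (acpi-cpufreq and recent
# amd-pstate kernels do). Run as root.
from pathlib import Path

BOOST = Path("/sys/devices/system/cpu/cpufreq/boost")

def set_boost(enabled: bool) -> None:
    if not BOOST.exists():
        raise RuntimeError("cpufreq boost toggle not exposed by this kernel/driver")
    BOOST.write_text("1" if enabled else "0")

if __name__ == "__main__":
    set_boost(False)  # cap cores at base clock to tame heat in a 1U
    print("boost:", BOOST.read_text().strip())
```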

  • pierrepierre Member
    edited April 2021

    @deank said:
    You will probably want to turn off turbo core. Otherwise, that CPU will gladly hit its thermal limit in a 1U space.

    I really wouldn't want that CPU in 1U. 2U is doable, not 1U... hmm.

    Thank you for the response. I've seen Nexril do it, so it is doable; I just need to know whether the L3 is the only thing that will run it somewhat safely.

    Edit: After joining his Discord, it seems like he does a lot of custom 3D-printed brackets/mounting hardware. Not sure if it'll work well without the brackets/mounting hardware.

  • deankdeank Member, Troll

    It's doable. But I'd worry about the life of the components around it.

  • yoursunnyyoursunny Member, IPv6 Advocate

    @pierre said:
    water-cooling in a server rack

    Water cooling is outdated. The cool kids are submerging their servers in mineral oil.
    https://www.theverge.com/2021/4/6/22369609/microsoft-server-cooling-liquid-immersion-cloud-racks-data-centers

  • pierrepierre Member

    @deank said:
    It's doable. But I'd worry about the life of the components around it.

    I've got roughly 8-9 nodes that I'm wanting to rack. I'd rather pay for a quarter rack than a half rack holding the same number of servers. Financially, 1Us are the better choice in my eyes.

  • deankdeank Member, Troll

    A major difference between a desktop CPU and a server CPU is that the latter has a lower default clock overall, which keeps TDP controllable even with a high core count.

    The Ryzen 3900X and 5900X will easily hit 200 W if left uncontrolled (hence turning off turbo). I can see it being doable with a customized heatsink and the fastest 40mm fans you can get, arranged as a wind tunnel. But personally I wouldn't do it.

    I'd go with Epyc instead.

  • pierrepierre Member

    @deank said:
    A major difference between a desktop CPU and a server CPU is that the latter has a lower default clock overall, which keeps TDP controllable even with a high core count.

    The Ryzen 3900X and 5900X will easily hit 200 W if left uncontrolled (hence turning off turbo). I can see it being doable with a customized heatsink and the fastest 40mm fans you can get, arranged as a wind tunnel. But personally I wouldn't do it.

    I'd go with Epyc instead.

    I personally have no knowledge of EPYC. Any recommendations that would be somewhat comparable to, or even better than, Ryzen?

  • deankdeank Member, Troll

    You need to talk to your supplier rather than anyone else here.

    What they have in stock will ultimately decide how you want to lay out your server.

    Epyc is basically Ryzen, but at roughly half the default clock speed (2.5 vs 5.0 GHz), with a shitload of cores and a really mild TDP that can easily be controlled.

  • jsgjsg Member, Resident Benchmarker
    edited April 2021

    "Cooling" 400 W (and even more) is feasible, even in 1 HU; there are plenty racked boxes out there to prove it.

    BUT: is it a smart thing to do? I don't think so and I strongly prefer 2 HU servers.
    Some factors: cooling (or rather avoiding overheating) means airflow; 1 HU means smaller cooler surface which means more airflow needed which means ventilators needing to turn faster which means significantly higher power consumption and shorter life-time.
    Also, 1 HU means smaller coolers (frankly, ridiculously small from an engineers perspective) which means yet higher ventilator speed needed and hence yet shorter life-time and significantly higher power consumption - which btw. is a more significant cost factor than mere rack units.

    IMO 1 HU basically (in most cases) translates to somehow cramming a server into less space - which comes at a price in terms of cost, reliability, and life-time.

    But if @pierre absolutely wants it, it can be done. Pretty much all the major server manufacturers/brands have such servers available. I doubt though that OP will really achieve (not utterly insignificant, if at all) savings.

    As for liquid cooling, the problem isn't the risk of destroying equipment if done properly, because there are non conductive cooling liquids available. The problem is where to put the radiator and reservoir and the rather low probability that a colo allows it.

    (And No, sorry, Epycs are not faster than Ryzens; they offer more cores and arguably better reliability and certain server features (like official and full ECC support) as well as (usually) better performance per Watt . If one is gamer-minded (as I call itt) and goes for highest performance then Ryzen is the best choice; in fact I even know of, frankly idiots, running overclocked 1 HU Ryzen servers in racks.)
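A back-of-the-envelope illustration of jsg's airflow argument, using the standard fan affinity laws (airflow scales with RPM, static pressure with RPM squared, electrical power with RPM cubed); the airflow ratios below are illustrative assumptions, not measurements:

```python
# Fan affinity laws: airflow ~ RPM, pressure ~ RPM^2, power ~ RPM^3.
# So pushing 1.5x the airflow through a smaller 1U cooler costs
# roughly 3.4x the fan power, not 1.5x.
def fan_power_ratio(airflow_ratio: float) -> float:
    """Relative fan power required for a given relative airflow."""
    return airflow_ratio ** 3

for r in (1.0, 1.5, 2.0):
    print(f"{r:.1f}x airflow -> {fan_power_ratio(r):.2f}x fan power")
# 1.0x airflow -> 1.00x fan power
# 1.5x airflow -> 3.38x fan power
# 2.0x airflow -> 8.00x fan power
```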

  • FalzoFalzo Member

    @pierre said: I've got roughly 8-9 nodes that I'm wanting to rack

    Multiplying the problem? Does your colo even allow liquid cooling? Did you consider the maintenance for that?

  • DataWagonDataWagon Member, Patron Provider

    If you don't want to watercool, check out the Dynatron A37; that's probably your best option.

    We built our first 3950X machine using the ASRock Rack barebones chassis, which includes a passive cooler (very similar to the Dynatron A37), and it actually performs almost as well as the Dynatron L3. Temps are high when all cores are pinned at 100%, but they rarely reach throttling temps. The most important thing is having good fan shrouds that push airflow directly into the cooler (a small sketch for logging those temps follows below).

    These days we just watercool them all with L3s, but it's definitely doable without watercooling.
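For comparing coolers and shroud setups like this, a minimal sketch for reading Ryzen package temps on Linux, assuming the standard k10temp driver is loaded (sysfs reports millidegrees Celsius):

```python
# Minimal sketch: read Ryzen temps via the k10temp hwmon driver.
from pathlib import Path

def ryzen_temps() -> dict[str, float]:
    """Return {sensor label: degrees C} from any k10temp hwmon node."""
    temps = {}
    for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
        if (hwmon / "name").read_text().strip() != "k10temp":
            continue
        for t in hwmon.glob("temp*_input"):
            label_file = hwmon / t.name.replace("_input", "_label")
            label = label_file.read_text().strip() if label_file.exists() else t.name
            temps[label] = int(t.read_text()) / 1000.0  # millidegrees -> C
    return temps

if __name__ == "__main__":
    for label, celsius in ryzen_temps().items():
        print(f"{label}: {celsius:.1f} C")
```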

  • pierrepierre Member
    edited April 2021

    @deank said:
    You need to talk to your supplier rather than anyone else here.

    What they have in stock will ultimately decide how you want to lay out your server.

    Epyc is basically Ryzen, but at roughly half the default clock speed (2.5 vs 5.0 GHz), with a shitload of cores and a really mild TDP that can easily be controlled.

    I mean, I've already got my hands on four 5900Xs and two 5950Xs. I'd rather use them than just sell them, imo.

    @jsg said:
    (And no, sorry, Epycs are not faster than Ryzens; they offer more cores, arguably better reliability, certain server features (like official and full ECC support), and usually better performance per watt. If one is gamer-minded (as I call it) and goes for the highest performance, then Ryzen is the best choice; in fact I even know of people - frankly, idiots - running overclocked 1 HU Ryzen servers in racks.)

    Yep, I thought I was going crazy for a moment. But don't all Ryzen CPUs accept ECC by default? (A quick way to check whether ECC is actually active is sketched after this comment.) I would probably run them at stock; I personally don't know how to properly overclock (yes, I run a 3900X in my personal rig at base clock speeds), plus I'd want to run these nodes for a longer period of time. Maybe until something comes out that is entirely better (which will probably be the 6000 series), but we'll see.

    @Falzo said:

    @pierre said: I've got roughly 8-9 nodes that I'm wanting to rack

    multiplying the problem? does your colo even allow for liquid cooling? did you consider maintenance for that?

    Yes, they do allow water cooling! I spoke with them about it, and they originally recommended the L3. I've also been thinking about the longevity of the cooler. Need to do some more research.

    @DataWagon said:
    If you don't want to watercool, check out the Dynatron A37, that's probably your best option.

    We've built our first 3950X machine using the AsRock Rack barebones chassis which includes a passive cooler (very similar to the Dynatron A37), and it actually performs almost as well as the Dynatron L3. Temps are high when all cores are pinned at 100%, but rarely reached throttling temps. The most important thing is having good fan shrouds that push airflow directly into the cooler.

    These days we just watercool them all with the L3s, but it's definitely doable without watercooling.

    Noted! Thanks for the help. What temps are you getting with your L3, and what are the specs?
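On the ECC side question above: desktop Ryzen generally supports unbuffered ECC UDIMMs, but whether ECC is actually active depends on the board and firmware. A minimal sketch of one way to check on Linux, assuming the amd64_edac driver, which only registers memory controllers under /sys/devices/system/edac/mc when ECC is enabled:

```python
# Minimal sketch: check whether ECC is actually active via EDAC sysfs.
from pathlib import Path

EDAC_MC = Path("/sys/devices/system/edac/mc")

def ecc_status() -> None:
    controllers = sorted(EDAC_MC.glob("mc*")) if EDAC_MC.exists() else []
    if not controllers:
        print("No EDAC memory controllers found: ECC is likely not active.")
        return
    for mc in controllers:
        ce = (mc / "ce_count").read_text().strip()  # corrected errors
        ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
        print(f"{mc.name}: corrected={ce} uncorrected={ue}")

if __name__ == "__main__":
    ecc_status()
```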

  • @pierre said: I'm cramming it all into a 1U Supermicro Chassis.

    1U, hmmm. There's the Dynatron A38, but that test was in a 2U rackmount: https://www.phoronix.com/vr.php?view=30106

  • DataWagonDataWagon Member, Patron Provider

    @pierre said:
    Noted! Thanks for the help. What temps are you getting with your L3, and what are the specs?

    Even with the L3, the 3950X still runs hot. We get like 85-87 °C with all 16 cores (32 threads) maxed out at 100% continuously. It doesn't throttle until 94 degrees though, so it does its job. I think with the passive air-only cooler, we were getting like 91-92 degrees under the same circumstances.

    I've heard of the L3 getting even lower temps than that, though; it's all about finding the right chassis and having the right fan setup to cool the radiator properly. The 3 fans attached to the radiator aren't really enough. Also make sure you'll be able to fit the L3 in whatever chassis you get. We've fit them in half-depth and full-depth Supermicro chassis, but it was very cramped in the 512 half-depth.

  • pierrepierre Member
    edited April 2021

    @DataWagon said:

    Even with the L3, the 3950X still runs hot. We get like 85-87 °C with all 16 cores (32 threads) maxed out at 100% continuously. It doesn't throttle until 94 degrees though, so it does its job. I think with the passive air-only cooler, we were getting like 91-92 degrees under the same circumstances.

    I've heard of the L3 getting even lower temps than that, though; it's all about finding the right chassis and having the right fan setup to cool the radiator properly. The 3 fans attached to the radiator aren't really enough. Also make sure you'll be able to fit the L3 in whatever chassis you get. We've fit them in half-depth and full-depth Supermicro chassis, but it was very cramped in the 512 half-depth.

    I'd probably go with the CSE-815, as the pizza box is terrible for airflow. Also, you said you run all chips at 100% all the time? That's kinda crazy. I doubt mine will go past 85-90%, as I just don't feel comfortable running CPUs at constant high load. My personal preference, but I also don't want customers to feel any hiccups in their services.

  • DataWagonDataWagon Member, Patron Provider

    @pierre said:
    Also, you said you run all chips at 100% all the time? That's kinda crazy. I doubt mine will go past 85-90%, as I just don't feel comfortable running CPUs at constant high load.

    We don't run them at 100% all the time, but that's how we test them. If they can run fully maxed out at 100% for 12+ hours with no throttling, then you know they're being cooled effectively :)
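A minimal sketch of that kind of burn-in on Linux: pin every core and periodically log the k10temp Tctl reading, flagging anything near the throttle point. The 94 °C figure matches DataWagon's number above; the duration and polling interval are illustrative assumptions:

```python
# Burn-in sketch: saturate all cores, then watch Tctl for throttling.
import multiprocessing as mp
import os
import time
from pathlib import Path

THROTTLE_C = 94.0  # DataWagon's observed throttle point for the 3950X

def busy() -> None:
    while True:  # spin at 100%; the parent terminates us when done
        pass

def tctl() -> float:
    """Read the k10temp Tctl sensor (millidegrees C in sysfs)."""
    for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
        if (hwmon / "name").read_text().strip() == "k10temp":
            return int((hwmon / "temp1_input").read_text()) / 1000.0
    raise RuntimeError("k10temp sensor not found")

def burn_in(hours: float = 12.0, interval_s: float = 30.0) -> None:
    workers = [mp.Process(target=busy, daemon=True)
               for _ in range(os.cpu_count() or 1)]
    for w in workers:
        w.start()
    deadline = time.time() + hours * 3600
    try:
        while time.time() < deadline:
            t = tctl()
            flag = "  <-- near throttle!" if t >= THROTTLE_C - 2 else ""
            print(f"Tctl: {t:.1f} C{flag}")
            time.sleep(interval_s)
    finally:
        for w in workers:
            w.terminate()

if __name__ == "__main__":
    burn_in()
```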
