DC building progress sneak peek
Development update post: https://pulsedmedia.com/clients/index.php/announcements/637/MD-Platform-Development-Update-04or2024-Dual-Drive-Active-cooling-Package-Power-Consumption.html
What's up with all the T-variant offerings? Is it just that they're lower power, or does some place sell them for very cheap, or what? Just curious is all.
They have the best power vs performance ratio and therefore can be packed tight.
Regular versions use 2-8x more power, meaning we could only pack them half as tight on a per-power basis -- and due to power-density cooling requirements even less tightly -- so we would have to charge 2-3x as much.
Even the newer T variants are stupidly obnoxious with power consumption, drawing ~3x the claimed figure in reality. Take the 12500T: with current hardware we could only pack 3 where 8 go currently, and even after upgrading power delivery only 5 where 8 go.
Now if we go for something like a 13900K ... yeah, it's less than 1 where 8 of these go without a power delivery upgrade. Same with, say, a Ryzen 7950X.
Since configs like that have approximately 32401284 providers, all competing for the very same market share, and some of them have an infinite war chest (budget) ... yeah, we cannot compete with, say, Hetzner offering the same configs, when they can buy at volumes large enough to get custom versions of even the motherboards, and can take 4-5 years for ROI ...
That being said, Hetzner limits CPUs to 65 W these days too, afaik; several sources claim as much. With such restrictions we could cram quite a few into the space of 8 of these units.
Not absolutely certain on this, just that several claims have been made.
With our MD series our unique advantage is the capability to cram these very tight -- without giving up flexibility.
The main difference is 35 W versus 65 W TDP.
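To make the packing argument above concrete, here's a back-of-envelope sketch. The per-node power budget and the "actual draw" figures are illustrative assumptions, not measured values; they're just picked so the baseline matches 8 nodes per segment.

```python
# Back-of-envelope: how many nodes fit in a fixed power-delivery budget.
# Budget is an assumption, sized so that 8 current T-variant nodes fit.
RACK_SEGMENT_BUDGET_W = 8 * 45  # 360 W of deliverable power

def nodes_that_fit(actual_draw_w, budget_w=RACK_SEGMENT_BUDGET_W):
    """Nodes a fixed power budget can feed at a given real-world draw."""
    return int(budget_w // actual_draw_w)

print(nodes_that_fit(45))   # current baseline: 8 nodes
print(nodes_that_fit(105))  # a 35 W-rated part drawing ~3x its TDP: 3 nodes
```

The same ratio is why a desktop flagship drawing well over 360 W under load would fit "less than 1" in the same envelope.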
The performance of the 8500 is a bit better than the 8500T's.
Both are mostly used as desktop processors.
Geekbench 6 scores:
Intel Core i5-8500T -- Socket LGA 1151, 35 W
Intel Core i5-8500 -- Socket LGA 1151, 65 W
PS -- one more point
The 8500T can be configured with a TDP-down of 25 W.
Actual i5-8500t GB6 Results: 1368 / 5205
That's what our nodes do anyway. It's taken as the average/median result from multiple nodes, and users in fact often report higher numbers.
And at least these units cannot be configured for lower power consumption; there's no such option in the BIOS.
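The efficiency argument, as a quick points-per-watt sketch: the 8500T numbers (5205 multi-core at a 35 W TDP) are from above, while the 8500 score here is a hypothetical placeholder (the thread only says it's "a bit better"), used just to illustrate the ratio.

```python
# Perf-per-watt comparison; the i5-8500 score is a hypothetical placeholder.
def points_per_watt(gb6_multi, tdp_w):
    return gb6_multi / tdp_w

print(round(points_per_watt(5205, 35), 1))  # i5-8500T: 148.7 pts/W
print(round(points_per_watt(5600, 65), 1))  # i5-8500 (assumed score): 86.2 pts/W
```

Even granting the 65 W part a meaningfully higher score, the T variant comes out well ahead on efficiency, which is what matters at rack power density.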
Some Units Back In Stock!
MD2, MD13 and MD14 have been out of stock for a while -- now available again!
The GB6 score tends to be a bit higher on Linux-based OSes compared to Windows (I assume due to background tasks/services).
Here is one example for the i5-8500T on Linux:
https://browser.geekbench.com/v6/cpu/5713812
GB6 score for the i5-8500 on Linux:
https://browser.geekbench.com/v6/cpu/5675299
Why do you always try to confound everything and try to make a mess?
You are clearly doing this intentionally, or in other words trolling.
Servers are 98% Linux.
As if this was not convoluted enough, you should probably have provided the GB6 benchmark score of a Lemon, or perhaps an Arm64 unit. Then again, it's still GB6, so perhaps use Wet Socks on a Sunny Day at the Beach as the metric?
True.
But only a few servers use a “desktop grade motherboard and cpu”
https://ark.intel.com/content/www/us/en/ark/products/129941/intel-core-i5-8500t-processor-9m-cache-up-to-3-50-ghz.html
Also, regarding how to configure TDP-down
Like I said, the models we use do not have this option in the BIOS. It needs to be supported by the motherboard vendor as well.
Systems shipped with T models are optimized from head to toe. Simply configuring TDP-down on some other kind of system still requires toggles for everything else, like the chipset; the performance difference is marginal, yet the non-T parts have higher initial costs. Most of these toggles can be set at runtime in the OS with cool commands, which you would probably want to prevent. DIY comes with all kinds of weirdness.
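For reference, those runtime toggles typically go through the Linux kernel's powercap (intel_rapl) sysfs interface on Intel systems. A minimal sketch, assuming the default `intel-rapl:0` package domain exists (the exact node name varies by machine, and writing requires root):

```python
# Sketch: capping CPU package power via Linux powercap (intel_rapl) sysfs.
# RAPL limit files take integer microwatts; writing needs root privileges.
RAPL_LIMIT = "/sys/class/powercap/intel-rapl:0/constraint_0_power_limit_uw"

def watts_to_microwatts(watts):
    return int(watts * 1_000_000)

def set_package_limit(watts, path=RAPL_LIMIT):
    with open(path, "w") as f:
        f.write(str(watts_to_microwatts(watts)))

# Example (commented out -- hardware- and privilege-dependent):
# set_package_limit(25)  # emulate the 25 W TDP-down configuration
print(watts_to_microwatts(25))  # 25000000
```

This is also exactly why a provider would want such knobs locked down: anyone with root could raise the cap right back up.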
MD Stock Update
That MD14 at 22.68€ is looking quite sweet, but the particularly awesome value is that MD24 with 2x1TB NVMe, 32 GB RAM and a 6-core CPU. Dang, that's cheap.
New Week Flash Promo
33% OFF for signups longer than 3 months -- only 2 available, and valid only until tomorrow. You can also compound this with our amazing annual, biennial and triennial discounts of up to 15%.
Now go get one while the getting is hot!
Current stock & prices before 33% Discount:
Use coupon code: 20240422LET-MONDAY-MD-SPECIAL33
Price updates on the checkout page. Orders have to be paid within the hour or they get cancelled, so no reservations.
SUPER Highlights at discounted rates:
Check all units out at: https://pulsedmedia.com/minidedi-dedicated-servers-finland.php
**SUPER BONUS: Test installation of a distro other than Debian**
If you sign up for at least 1 year on a 1x NVMe model, you can test out other Linux distros too -- Ubuntu, CentOS, Arch etc. ought to be working now. But this is in a highly alpha/beta stage and will take a ticket and maybe a business day or two.
This offer is valid for pre-existing users as well, with 1+ year remaining on their period; only a few will be accepted.
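For anyone stacking the numbers: a quick sketch of how the flash coupon compounds with the term discounts. The multiplicative stacking is my assumption; the checkout page is authoritative.

```python
# Compounding the 33% flash coupon with the up-to-15% term discount.
# Assumption: discounts stack multiplicatively (order doesn't matter).
def discounted(base_eur, *discounts):
    for d in discounts:
        base_eur *= (1 - d)
    return round(base_eur, 2)

print(discounted(22.68, 0.33, 0.15))  # the 22.68 EUR MD14 mentioned above: 12.92
```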
Weekly Stock Update
We should write up thoroughly, sooner or later, what the MD platform actually is, how it came to be, and how it has been more than a decade in the making.
By sheer accident and business decisions, what brings the absolute maximum value to our end users also happened to bring those feel-good, green, virtue-signaling things people are so crazy about.
This is not because that was the goal; it just makes a hell of a lot of sense financially to do it this way -- maximizing the value you as the end user get.
RRR - Reduce, Recycle, Reuse
All of this happened by sheer accident; maximal value for end users was our goal.
Recycle and Reuse
All the units currently on offer are recycled -- zero new units.
The only new things bought are the drives and additional RAM, but even with RAM we always check for used units first.
We refurbish (recycle) the units fully: they arrive in all kinds of random conditions, get disassembled, washed and cleaned, and fully tested.
Used drives and too-small RAM modules are all resold or donated locally; some go into school laptops, for example, or to the local hacklab. The rest are brought to a local e-waste recycler, which resells them or recycles the raw materials.
The new datacenter's racks are reused from another DC as well, as are the electrical cabinets in both the new DC and our current one. Nothing goes to waste; even some of the electrical cabling is recycled from another DC deconstruction job, which we carried out with care so the materials could be reused. From small to big: from power outlets to large 3x400 A electrical cabinets to basic MMJ power cabling.
Metals, batteries, cardboard and other things that are easy to recycle we bring to recycling -- metals because you get paid for them, and cardboard because of easier waste management.
We use 3D printing extensively, and we primarily use biodegradable plastic, typically made from corn starch, which will eventually decompose if it ends up in a landfill. It also happens to be the strongest, stiffest, cheapest and easiest-to-work-with common material. (Thanks to their colorants, many brands are actually UL94 V0 grade, just not certified for it ... do not test at home! But a simple burn test of some common black or white materials confirms this; the colorants used for these colors often also function as flame retardants.)
Reduce
By going with the best value proposition, we have managed to reduce power consumption drastically and directly -- and beyond that, the new DC is going to be ~98% air-cooled, with AC units only as backup. Not only that: a portion of our 'waste heat' will go toward heating part of the huge industrial building we are in. Eventually it might even be possible to sell some of the waste heat into the district heating circuit as we grow into capacity.
We like to max out performance for a given budget and power constraints. That means all nodes use all memory channels present for increased performance, and we'd rather pick NVMe drives with higher endurance and/or performance over the cheapest ones that might just barely work. All of this means you need fewer nodes, and nodes last longer.
This goes as far as intake air filtering: instead of (rather expensive) commercial premade solutions, which are constrictive and small (2-3 filter changes per year), we custom-make mounts for the largest filters we can fit, and the prefiltering stage is 100% reusable (just clean it). We just might get away with swapping filters only once a year, or even less often! Less wasted material on the framing, less effort to manufacture one large filter instead of many small ones, etc.
Prefiltering actually solves two issues with one solution. Industrially/commercially available intake vents are abysmally bad and extraordinarily expensive: they tend to block in the range of 60% of the surface area while providing essentially no filtering at all. They are only made to be easy to manufacture and look decent enough, with a lot of validations and metrics to create the feel-good vibe of 'this is a good product', but if you look closely, they are typically closed off by 60% or more, leaving only 40% of flow area.
So we'll make our own, which will not only have a much higher flow-area-to-surface-area ratio (we estimate we'll be in the 90-95% range!), but will simultaneously act as the prefiltering stage AND look cool as hell. It's still to be manufactured; we are waiting for delivery of a 3D printer large enough that we don't have to make it out of hundreds of pieces, but instead just a few.
3D printing not only means we can do things we otherwise couldn't; it's also a less energy-intensive process and less wasteful than, say, milling out of aluminium.
We also reduce the most expensive input (and, some would argue, the most environmentally destructive) as much as possible: human labor. For example, failed nodes get a replacement unit first, and failed units are then repaired in batches to minimize the human effort spent. The time saved goes into design, development and validation work -- making the final product that much better.
Our plan is to run these nodes as long as people find value in them -- no more replacing them every X years just because nodes 'should' be replaced, or because they don't meet certain financial requirements or other constraints. Let 'em run as long as they at least break even. Even though the current bulk is ~5 years old to begin with, we expect a majority of them to be in operation in 10-15 years. In fact, we still have quite a number of 14-year-old servers in production; why replace them when their users are perfectly happy with those servers, and they are quite power-efficient? (Albeit we just found a path where we can reduce power consumption while increasing performance and capacity, reducing opex -- testing to follow in Q3/Q4, with replacements potentially starting in Q1/25.)
We found a potential pathway to achieve UPS benefits without any of the overhead, unreliability and inefficiency of normal UPS setups -- for very minimal money (a few euros per node, one time, instead of a constant 20-30% power overhead + expensive maintenance + expensive installation + fire hazards + added downtime from said maintenance). Testing to follow; it needs 3rd-party help to validate the configuration.
We've found other pathways for power-delivery efficiency upgrades too, which as a byproduct increase fault tolerance as well. All of that needs extensive testing and design though, and will be a couple of years out at least. We should be able to squeeze another 5-10% of efficiency out of power delivery while reducing costs significantly.
The best part is no part, and we've definitely managed to eliminate a lot of unnecessary parts -- especially moving parts. More on this later, but let's just say ... whole-rack cooling.
Nice, but ... you know what's missing: rDNS
Been using them for a while and I have to say the network is top-notch. My ping from home is around ~100 ms and my connection is 200 Mbit, yet I get maximum speeds even in busy hours. My other EU nodes (HEL1, AMS1, FSN1) aren't even close.
You know this is coming; the question is just when.
Thank You
Some progress on the new DC ... building this, plus the new server platform hardware and software ... This is why stuff takes so much time: so many things to do at the same time.
We're building a bigger 3D printer farm too, since we are always backlogged on prints, especially now as we are building the DC. There are 9 new machines on order right now.
I just spent part of the day doing the steel fabrication/welding.
https://imgur.com/a/HiZkMUT
hey Alex ... any crazy May day offers?
Gib to me
look in the calendar smh
what are the models of the NVMes? are they QLC?
They change and are all over the place, but we do try to avoid QLC.
See this from above;
QLC has lower endurance so they get less often picked.
Some very common models are Kingston NV2, Kioxia Exceria G2, Lexar NM790, MSI something ... the last two are top of the line with stellar performance.
That's great. Is it possible to remove the setup fee to celebrate Labour Day?
Free labour on Labour Day? Oh the irony
ROFLMAO!
Yea, we all enjoy working for free ;D
@zorax the setup fee is there because it literally is work to set one up right now; there's no end-to-end automation as of yet.
Once we have end-to-end automation AND consistently sufficient free stock, we'll remove the fees permanently. Until then, it's only during a random flash sale that they might be off for a few days.
You can keep checking this thread in case we do that, but I wouldn't wait too long if there's a model you really like and the price is right for you.
Building the new DC just for these ... It's a bit more work than anticipated.
Was doing a BOM-style list of just the prints we still need to do ... not even all of them ... and it's already at 450 kg / 10,000+ hours :O
Fortunately, something like 70% of that can be printed as we expand the number of nodes; it's not needed for the planned Q4 launch.
I think we might wind up paying 6000€ per rack in total when fully built with all features, and that's despite all of the 'free' stuff we've been using (i.e. racks that would be 1500€ a pop, which we got for essentially freight + metal-recycle cost).
Even at 6000€ per rack, we are still very low budget for datacenter builds.
The next room will be cheaper: less design work, more just executing. But it will also cost more unless we make great finds like these again for 'free' stuff.
It's cool to see how much work goes into this. I do hope more hosts do something like this and give new life to old hardware.