New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
I think it's subjective to a degree. It also depends on what the target audience prefers. People here aren't going to want to rent a dedicated server from you that you yourself rent from someone else, but they'll buy shared hosting from you on a rented server, for example.
A good bit. For example, several LET providers colocate with @Clouvider. Security and privacy are pretty much always a trust thing unless you rent out a whole cage and keep it locked down, which you usually only do if you have on-site staff.
Take context into account: you're asking LET, not hyperscalers.
I do know how overwhelming running a DC is. I'm not thinking of doing that even if I had the funds. I'm just curious how LET providers work, because I see some big players here.
Really? You know how overwhelming it is? Did you even know what rented racks were?
I rent dedicated servers mostly from Hetzner and OVH. It's getting pretty near time for me to stop doing that and start colocating with another hosting provider.
Since I'm not selling servers but instead selling services I run on the servers, most people aren't interested in whether or not I own the hardware.
I see, would you mind sharing which DC you're with? It's okay not to tell, I'm just curious. Is it a well-known provider? How about security? Aren't you afraid they'll mess with your stuff?
Also, why not run your hardware from your house? Isn't that possible? I've seen some providers running their stuff from a basement! I might be wrong.
I'm in Evocative DAL4 (formerly Carrier-1). It's a really nice datacenter, clean and cozy. I've never had any issues, and have nothing but great things to say.
But as happens with small companies that get bought out by larger ones, when Evocative came in they immediately raised power rates and made some changes.
Luckily I'm in a contract on all of my rates so I'm not affected.
But how could they know whether you're just a reseller or a DC owner/colo customer?
Oh yeah, I got your point. But how about running your stuff from your house, assuming you're in a good location latency-wise?
Sure, that is why it's called LowEnd.
I don't need to run something to know about it. I'm just asking, man.
I stated above that I'm still new to this (not to server management), but yeah, I know that running a hosting business isn't easy, let alone running a DC.
Ideally they'll know because you're honest about it. If you're not, you'll need to be crafty. Better to pick the first.
Big red flag, never do it. You'll never have all the things people expect, like dual power, proper and reliable cooling, and quality bandwidth. Save your home rack for hobby stuff.
Oh I see, so you're selling services such as an email server, yeah? Is just renting out servers not that profitable, since you're only selling services?
Oh, that is nice to hear. Do they have on-site staff?
Haha, sure. But I think they might know that from the IP, no?
Good point.
Staff is on site 24/7/365. The site is also monitored remotely by other security companies.
Providers should run their own DCs, own their power plants, manufacture their own switches, and lay their own submarine cables. Anything less is considered an incomplete setup.
Most providers here do not operate a DC. In fact, even Linode/DO/Vultr/AWS/etc. do not operate their own DCs. Hetzner and OVH do. IIRC @terrahost has their own DC? Those who own a DC often advertise it on their website, so it might be worth checking.
I believe after this thread, we also have GreenWoodDC.
Currently doing both. Have built/owned our own facilities and also colocate (or rent cage space) elsewhere. Generally speaking, the renting of multiple racks/cage space is a better solution for us. We sold off 2 of our own facilities during 2021/2022 and have shifted towards larger-scale colocation negotiations. At scale you can get some pretty good deals, and usually with a larger existing facility you have better leverage over connectivity (mainly, speed of bringing new bandwidth/fiber online).
Running our own was a great experience, and was necessary given the power density (per rack and facility-wide) that was sort of a niche for us at the time. Most of the industry has caught up now and getting 17.2kW usable per 42U and properly cooling it isn't as ridiculous of an ask now as it was in 2018 and prior. It's all the extras that eat away at your time which are necessary but not directly related to your core business and growth (routine backup system testing, redundancy buildouts and testing, HVAC pm's, misc property owner headaches, physical building security, etc).
The main issue is that we didn't do any colocation so all the fiber and bandwidth were things we had to negotiate directly and all had big lead times due to build-out requirements and the fact that we're a tiny company and the other side of our property line was Microsoft's behemoth DC and a few blocks over is their new build so competing for resources with them was not really possible--just sit and wait.
At this point I'd prefer to pay a quality provider to take those tasks off our plate and bundle us with existing customers to get better economies of scale. This just lets us focus on other direct aspects of the business.
And maintenance.
It's A LOT of work, and I mean A LOT of work. Every single little detail needs to be taken care of, and contractors usually don't work on this type of stuff, so as a business owner I've found I need to be personally involved in every single step to ensure the right stuff is used.
We don't own the building, only the datacenter. There was a miscommunication between the new building manager and the contracted maintainer, so they swapped our air filters without our approval and WITHOUT notification. They used the wrong type, and all of a sudden all our servers were stuffed with dust. There were literally piles of sand and dust on top of some of the spare-part bins.
Even employees sometimes miss the whole point. I asked one junior tech to vacuum the DC. FIRST he wanted to use the broken old consumer vacuum, which blows a lot of dust into the air every time you start it. Then he would not use the full power of the industrial vacuum we had purchased (which he had been whining to get for a long time, but then refused to use!), and ultimately he was way too quick about it.
Turns out he only vacuumed the open floor space, so the dust piles were still sitting on top of the parts bins -.-
Yeah, we had to let him go (he also pulled drives on the fly from a production server without approval ...)
Contractors who do not even work in IT, have never worked with IT, and may not even own a laptop are even worse in this regard.
So every step of the way in a small business, I as the owner have to be involved: ventilation systems, AC systems, power distribution, metering systems, electrical safety systems, and so on (even racking the bloody servers can be surprisingly difficult for some people). I need to understand all of them well enough to form a coherent big picture. Without that, I cannot make the right decisions either.
Even those who understand the business miss things quite often. For example, the new power distribution cabinet we installed is missing fuses on our side, which caused severe downtime this June. A fuse blew, but it was on the building side, at around 02:00, and of course the building did not have spares. It was just one phase, but it shut down the AC units in the middle of summer, so a high rate of HDD failures is to be expected this fall (knock on wood). That was just this June, so sometime next summer we will have to shut down a large portion of our production to add the fuses and new, more reliable metering systems.
The list just goes on and on and on, down to the tiniest details, like HAVING THE RIGHT FRIGGIN' SCREWS! We once had a server deployment delayed by weeks because the rails used imperial sizes, 10-32 I think? Well, where the fsck are you going to find 10-32 screws/bolts in a metric country? It took a few trips to find the correct ones, and even then they tried to offer me M5 with the sample in my hand (years ago; I did not know the sizing!). Same with the #6-32 extra-flat-head HDD screws: we've run out of those a few times, and I just spent something like $600 to make sure we have enough for a few years.
It's the tiniest detail which might ruin you.
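The 10-32 vs M5 mix-up above is easy to sanity-check numerically. The helper below uses the standard imperial machine-screw gauge formula (major diameter = 0.060 in + 0.013 in per gauge number); it's just an illustration of why the sizes look deceptively similar:

```python
# Quick sanity check on why M5 is not a substitute for 10-32 rack screws.
# Imperial machine-screw major diameter: 0.060" + 0.013" per gauge number.
def imperial_major_diameter_mm(gauge: int) -> float:
    return (0.060 + 0.013 * gauge) * 25.4

d_10_32 = imperial_major_diameter_mm(10)   # ~4.83 mm
d_6_32 = imperial_major_diameter_mm(6)     # ~3.51 mm (the HDD screws)
m5 = 5.0                                   # M5 major diameter in mm

# 10-32 has 32 threads per inch; M5 coarse pitch of 0.8 mm works out to
# ~31.75 TPI equivalent -- close enough to look right, wrong enough to bind.
tpi_equiv_m5 = 25.4 / 0.8

print(f"10-32 major diameter: {d_10_32:.2f} mm vs M5: {m5:.2f} mm")
print(f"#6-32 major diameter: {d_6_32:.2f} mm")
```

The two threads differ by under 0.2 mm in diameter and under half a thread per inch in pitch, which is exactly why someone can offer you M5 with a 10-32 sample in your hand.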
And the big stuff, like UPS units. A well-known, respected brand has a nasty tendency to blow its circuits and/or transformers right when they're actually needed, sometimes even with flames. They also waste vast amounts of energy; ours has been measured consuming about 20% excess at roughly half load. Which brings up another detail: if you use dual PSUs, never go beyond half load on the UPS, because half of the power comes straight from the grid. Similar logic applies to phase selection and reserves when running fully on grid: if you use dual PSUs and one phase fails while you're hooked up to 2 different phases, well, now you are going to blow the fuse on the next phase too. If you distributed across all 3 phases, you will likely lose the whole damn cabinet unless you were careful.
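The "never go beyond half load on the UPS with dual-PSU gear" rule above can be illustrated with a quick calculation. All numbers here are hypothetical, purely to show the failover math:

```python
# With dual-PSU servers, each feed normally carries half the load, so the
# UPS side only sees 50% of the total draw. The moment the grid feed fails,
# the full load shifts onto the UPS -- its utilization doubles.
def ups_load_after_grid_failure(server_load_kw: float, ups_capacity_kw: float):
    normal_ups_draw = server_load_kw / 2   # shared with the straight-grid feed
    failover_draw = server_load_kw         # UPS now carries everything
    return normal_ups_draw / ups_capacity_kw, failover_draw / ups_capacity_kw

# Hypothetical: 8 kW of dual-PSU load behind a 10 kW UPS.
normal, failover = ups_load_after_grid_failure(8.0, 10.0)
print(f"Normal UPS load: {normal:.0%}, after grid feed failure: {failover:.0%}")
```

An 8 kW load looks like a comfortable 40% on the UPS in normal operation, but jumps to 80% the instant the grid feed drops. Size the UPS for the failover number, not the everyday one.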
When you colocate, use remote hands, etc., the colocation facility takes care of ALL OF THAT for you, sometimes even building the servers.
And that's just the physical stuff ... then there is routing, contracts, metering and monitoring, etc. to take care of. To name just a few areas.
How about work areas, tooling, parts storage, having enough but not too many spares, cleaning and organizing, social spaces, restrooms, calculating densities, making allotments for power and density, laying fiber and planning its routes, access to network operators, CWDM, DWDM, optic colors, SC, ST, CFP2, QSFP+, DAC, twinax, Abloy, iLOQ, RFID, gates, R401a or R32 flammability, insulation types, diamond cutting, microscopes and end polishing, raised floors and tile surface pressure ratings, water piping, drains, epoxy coatings, light brick, heavy brick, cement wall carrying capacity with XYZ method, fiber jackets and their types, electrical insulators, AC wiring safety regulations and code, battery chemistries and safety factors, automated window control systems, embedded electronics, wire ratings, connector types, fluid dynamics, physical leverage, hoist types, vacuum pumps, hydraulic fittings, 3D printing, CAD design, electrical engineering, engineering in general, insurance, law, ventilation motor types, ventilation fan types, ventilation duct types, duct construction methods, distributors for industrial gear, industrial automation, RS485, 1-Wire, CAN bus, 433 MHz, etc. etc. etc.?
Then rinse with a little bit of energy crisis and a dash of inflation on top of it ...
It's a never ending list of things to take care of.
So next time you curse that we are a bit slow to deliver a server, or ask why we cannot slap your RPi somewhere for 1€ a month: well, now you know, it's not as easy as it looks. If it looks easy and simple to you, you are either working with experienced professionals OR the Dunning-Kruger effect is strong with you, and you just don't know enough yet to realize the challenges ...
We all start somewhere; I had no idea how difficult it would be when we got our first own site in late 2012 or early 2013, I think.
-Aleksi
Now you guys must be thinking I am joking, right? Not even in the slightest ... it takes this and much more to manage a DC, let alone build one from scratch ...
Small DCs are also a thing, though dying out slowly; fewer and fewer of us small DC operators are left.
We are what you would call "little big." We might have a tiny DC, but the data throughput and data volumes we run in that tiny space are quite something. We pretty much min/max everything; there's no in-between. We tried in-between this year, and it totally blew up in our faces. Worst financial decision in a while: that was something like 100k+ in hardware alone going to e-waste ...
But once you get all that overhead sorted out, your pricing per rack, per server, or per kW of end use (compute, storage, whatever) can be a fraction of colocation cost.
Big upfront financial investment, huge returns on the back end. You just need the right contacts, knowledge of your local area operators to get started, and quite a bit of cash in your pocket.
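The "fraction of colocation cost" claim above can be sketched as a back-of-envelope amortization. Every number below is hypothetical, purely to show the shape of the comparison:

```python
# Hypothetical comparison: amortized per-rack cost of an owned facility
# vs renting colo space. All figures are made up for illustration only.
def owned_cost_per_rack_month(capex: float, amortize_years: int,
                              monthly_opex: float, racks: int) -> float:
    monthly_capex = capex / (amortize_years * 12)  # straight-line amortization
    return (monthly_capex + monthly_opex) / racks

owned = owned_cost_per_rack_month(capex=500_000, amortize_years=10,
                                  monthly_opex=6_000, racks=40)
colo_per_rack = 600  # hypothetical colo list price per rack per month

print(f"Owned: ~{owned:.0f}/rack/month vs colo: {colo_per_rack}/rack/month")
```

With these made-up figures the owned facility works out to roughly 254 per rack per month against 600 for colo, which is the "huge returns on the back end" once the big upfront investment is absorbed. Real numbers vary wildly with power pricing, utilization, and staffing.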
Our first DC was a 5-rack closet with only a single 25A feed, plus the AC on its own circuit ... It could never run profitably, but it got our foot in the door, built the contacts, etc., before we bought this DC from a big operator for whom it was too small. It needed some serious TLC (we immediately about doubled the rack density just by moving stuff around lol).
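As a rough illustration of why a closet like that could never run profitably: a single 25 A feed caps the whole room's power budget. Assuming a 230 V single-phase supply (typical in Finland; this is only a sketch):

```python
# Power budget of "5 racks on a single 25 A feed", assuming 230 V
# single-phase. Illustration only; the real closet's wiring may differ.
volts, amps, racks = 230, 25, 5

total_kw = volts * amps / 1000    # ~5.75 kW for the entire closet
per_rack_kw = total_kw / racks    # ~1.15 kW per rack

print(f"Total budget: {total_kw:.2f} kW, per rack: {per_rack_kw:.2f} kW")
```

Roughly 1 kW per rack is a handful of low-power servers at best, before you even subtract overhead, which is why that first site was about building contacts rather than making money.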
You say you know how hard running a DC is ... then you suggest this.
You know ... I just had to.
Interesting.
With this energy-crisis headache we've been considering colocating some of ours too, but then again, we just started the second DC build-out. Almost 8 meters of ceiling height, with 3 sides facing the outside, so we're expecting some really nice PUE numbers there. We can also sell at least some of our heat output, and we have solar panels, etc. there.
Being in Finland has its perks -> the fiber layout to the building was only 2k €, which is pocket money in this business. The real cost is in the DWDM to connect to our current site, but we might get the same-cost IP transit deal at the new site too, from one of the providers.
There's a transformer building on site as well, so grid blackouts are going to be very rare.
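For readers unfamiliar with the PUE numbers mentioned above: PUE (Power Usage Effectiveness) is simply total facility power divided by IT load, so 1.0 is the theoretical floor and lower is better. A minimal sketch with hypothetical numbers:

```python
# PUE = total facility power / IT equipment power. A PUE of 1.2 means
# 20% of the power goes to cooling, conversion losses, lighting, etc.
# Numbers below are hypothetical, just to show the metric.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

print(pue(120.0, 100.0))  # facility drawing 120 kW to run a 100 kW IT load
```

A tall space with three exterior walls in a cold climate mostly cuts the cooling term of the numerator, which is why the poster expects good numbers out of the new Finnish site.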
I think most LET folks are missing the point of this thread!
I'm in no way asking LET providers to run their own DCs, nor do I doubt their services.
I never said that in my first post.
The only thing I'm wondering is how LET providers run their stuff; of course, every provider runs things differently, in whatever way works for them.
The term "data center" is used loosely around this neck of the woods. Outside of some dude's basement, I doubt anyone here owns a building. I know IOFlood was working on owning their own building, but I think COVID slowed that down; plus, IOFlood has been priced out of the LE market.
That's literally crazy to deal with!!! I think anyone would prefer colocation over getting into these crazy things. Even if colocation is less profitable than owning a DC, it's worth it after all.
I really had no idea before this thread. But yeah, now I have a much better picture of what providers are dealing with.
Thank you for sharing your story/experience with us.
What do you mean by "5 rack closet with only 1x25A"? Sorry to ask. Does it mean it was only meant to hold 5 servers?
Hahah, I'm just saying, you are the expert
Now my confidence is gone.