Do most of LET providers run their own DC? - Page 2

Comments

  • jar Patron Provider, Top Host, Veteran

    @GreenWood said:

    @jar said:

    @GreenWood said:

    @jar said:
    Everyone's a reseller of something. Even if you have to bend a little bit to draw that conclusion, it isn't hard to do. But I'd say average expectation is owned hardware in rented racks, for LET hosts.

    I'd bet on a barely even split between renting racks from a DC customer and renting directly from the DC.

    What do you mean by "rented racks"? You mean, they rent a building or colocation?

    I'd say "colocation customer" is a good alternate way to say what I mean. Some colocating with other hosting providers, some in direct contract with the facility itself.

    Oh I see. Do you think that is much better than just being a reseller of other providers? Meaning you own no hardware/rack.

    I think it's subjective to a degree. Also dependent on what the target audience prefers. People here aren't going to want to rent from you a dedicated server that you rent from someone else, but they'll buy shared hosting from you on a server that you rent, for example.

    @GreenWood said:

    @jar said:
    Also worth noting: I feel like companies that rent racks directly from the data center itself tend to more closely refer to said data center as "theirs", and probably rightfully so, since they have far more ability to negotiate power, upstream, etc. Even though it's not that they literally built the building.

    Ohh, so that is how the big LET providers are running their stuff? How safe is renting a rack/colocation when it comes to security?

    A good bit. For example, several LET providers colocate with @Clouvider. Security and privacy are pretty much always a trust thing unless you rent out a whole cage and keep it locked down, which you usually only do if you have on-site staff.

  • @GreenWood said:
    Not really, I do know that it takes more than money, experience...

    Take context into account; you're asking LET, not hyperscalers.

  • GreenWood Member
    edited December 2022

    @drizbo said:
    You're throwing "DC" around like it's nothing. It's a big deal to have your own building and to deal with multiple network providers and multiple power providers, since you can't have just one and then be offline when there are issues. Having the building fireproof and ready in the event of anything. It's not a small feat, and unless you are a really big provider it's not worth it. It's a really big investment.

    There are some providers here that have their own datacenter of course, but most lease space / colocate their hardware in another datacenter, and some lease the servers themselves and resell those.

    I really do know how overwhelming it is to run a DC. I'm not thinking of doing that even if there were funds. But I am just curious how LET providers work, because I see some big players here.

  • @GreenWood said:

    @drizbo said:
    You're throwing "DC" around like it's nothing. It's a big deal to have your own building and to deal with multiple network providers and multiple power providers, since you can't have just one and then be offline when there are issues. Having the building fireproof and ready in the event of anything. It's not a small feat, and unless you are a really big provider it's not worth it. It's a really big investment.

    There are some providers here that have their own datacenter of course, but most lease space / colocate their hardware in another datacenter, and some lease the servers themselves and resell those.

    I really do know how overwhelming it is to run a DC. I'm not thinking of doing that even if there were funds. But I am just curious how LET providers work, because I see some big players here.

    Really? You know how overwhelming it is? Did you already know what rented racks are?

  • jar Patron Provider, Top Host, Veteran

    @GreenWood said:

    @jar said:

    Most affordable road in is to rent a dedicated server. Most affordable spread over time is probably to colocate with an established hosting provider who has empty rack space. Building a system is more money up front than renting one for a month, obviously.

    Got you. So interesting.

    Oh yeah! You are a provider too. Do you mind sharing whether you are a reseller (if yes, which provider are you getting your servers from) or whether you do colo/rent racks?

    I rent dedicated servers mostly from Hetzner and OVH. It's getting pretty near time for me to stop doing that and start colocating with another hosting provider.

    Since I'm not selling servers but instead selling services I run on the servers, most people aren't interested in whether or not I own the hardware.

  • @aqua said:
    I do not run my own DC. I run out of a DC with my own hardware.

    Running DCs isn't as easy as before, because of all the new technology that wholesale datacenters have. In the long run, it's easier to lease from them.

    Having a ton of transits, cooling, redundancy, and power available is a perk that takes a major hassle off the company's hands.

    Edit: Also, the price of real estate, lumber, and all other supplies has skyrocketed in recent years. Getting a well-built datacenter with all of those things can cost millions, in some cases approaching billions. Getting space in a major city like Dallas, Chicago, New York City, Los Angeles, or Miami is nearly impossible.

    I see, do you mind sharing which DC you are with? It's okay not to tell, just curious. Is it a well-known provider? How about when it comes to security? Aren't you afraid they might mess with your stuff?

    Also, why not run your hardware from your house? Isn't that possible? I saw some providers running their stuff from the basement! I might be wrong.

  • aqua Member, Patron Provider
    edited December 2022

    @GreenWood said:

    @aqua said:
    I do not run my own DC. I run out of a DC with my own hardware.

    Running DCs isn't as easy as before, because of all the new technology that wholesale datacenters have. In the long run, it's easier to lease from them.

    Having a ton of transits, cooling, redundancy, and power available is a perk that takes a major hassle off the company's hands.

    Edit: Also, the price of real estate, lumber, and all other supplies has skyrocketed in recent years. Getting a well-built datacenter with all of those things can cost millions, in some cases approaching billions. Getting space in a major city like Dallas, Chicago, New York City, Los Angeles, or Miami is nearly impossible.

    I see, do you mind sharing which DC you are with? It's okay not to tell, just curious. Is it a well-known provider? How about when it comes to security? Aren't you afraid they might mess with your stuff?

    Also, why not run your hardware from your house? Isn't that possible? I saw some providers running their stuff from the basement! I might be wrong.

    I'm in Evocative DAL4 (formerly Carrier-1). It's a really nice datacenter, clean & cozy. I've never had any issues; nothing but great things.

    But as happens when small companies get bought out by larger ones, when Evocative came in they immediately raised power rates and made some changes.

    Luckily I'm in a contract on all of my rates, so I'm not affected.

  • @jar said:

    I think it's subjective to a degree. Also dependent on what the target audience prefers. People here aren't going to want to rent from you a dedicated server that you rent from someone else, but they'll buy shared hosting from you on a server that you rent, for example.

    But how could they know if you are just a reseller or a DC owner/colo?

    @jar said:

    A good bit. For example, several LET providers colocate with @Clouvider. Security and privacy are pretty much always a trust thing unless you rent out a whole cage and keep it locked down, which you usually only do if you have on-site staff.

    Oh yeah, I got your point. But how about running your stuff from your house, considering you are in a good location for latency?

  • @jmgcaguicla said:

    @GreenWood said:
    Not really, I do know that it takes more than money, experience...

    Take context into account; you're asking LET, not hyperscalers.

    Sure, that is why it's called LowEnd.

  • @paijrut said:

    @GreenWood said:

    @drizbo said:
    You're throwing "DC" around like it's nothing. It's a big deal to have your own building and to deal with multiple network providers and multiple power providers, since you can't have just one and then be offline when there are issues. Having the building fireproof and ready in the event of anything. It's not a small feat, and unless you are a really big provider it's not worth it. It's a really big investment.

    There are some providers here that have their own datacenter of course, but most lease space / colocate their hardware in another datacenter, and some lease the servers themselves and resell those.

    I really do know how overwhelming it is to run a DC. I'm not thinking of doing that even if there were funds. But I am just curious how LET providers work, because I see some big players here.

    Really? You know how overwhelming it is? Did you already know what rented racks are?

    I don't need to run something to know about it. I'm just asking, man.

    I stated above that I'm still new to this (not to server management), but yeah, I know that running a hosting business isn't easy, let alone running a DC.

  • jar Patron Provider, Top Host, Veteran

    @GreenWood said:

    @jar said:

    I think it's subjective to a degree. Also dependent on what the target audience prefers. People here aren't going to want to rent from you a dedicated server that you rent from someone else, but they'll buy shared hosting from you on a server that you rent, for example.

    But how could they know if you are just a reseller or a DC owner/colo?

    Ideally they'll know because you're honest about it. If you're not, you'll need to be crafty. Better to pick the first.

    @jar said:

    A good bit. For example, several LET providers colocate with @Clouvider. Security and privacy are pretty much always a trust thing unless you rent out a whole cage and keep it locked down, which you usually only do if you have on-site staff.

    Oh yeah, I got your point. But how about running your stuff from your house, considering you are in a good location for latency?

    Big red flag, never do it. You'll never have all of the things that people expect like dual power, proper and reliable cooling, quality bandwidth. Save your home rack for hobby stuff.

    Thanked by 1: GreenWood
  • @jar said:

    I rent dedicated servers mostly from Hetzner and OVH. It's getting pretty near time for me to stop doing that and start colocating with another hosting provider.

    Since I'm not selling servers but instead selling services I run on the servers, most people aren't interested in whether or not I own the hardware.

    Oh I see, so you are selling services such as an email server, yeah? Is it not profitable enough to just rent servers, since you are only selling services?

  • @aqua said:

    @GreenWood said:

    @aqua said:
    I do not run my own DC. I run out of a DC with my own hardware.

    Running DCs isn't as easy as before, because of all the new technology that wholesale datacenters have. In the long run, it's easier to lease from them.

    Having a ton of transits, cooling, redundancy, and power available is a perk that takes a major hassle off the company's hands.

    Edit: Also, the price of real estate, lumber, and all other supplies has skyrocketed in recent years. Getting a well-built datacenter with all of those things can cost millions, in some cases approaching billions. Getting space in a major city like Dallas, Chicago, New York City, Los Angeles, or Miami is nearly impossible.

    I see, do you mind sharing which DC you are with? It's okay not to tell, just curious. Is it a well-known provider? How about when it comes to security? Aren't you afraid they might mess with your stuff?

    Also, why not run your hardware from your house? Isn't that possible? I saw some providers running their stuff from the basement! I might be wrong.

    I'm in Evocative DAL4 (formerly Carrier-1). It's a really nice datacenter, clean & cozy. I've never had any issues; nothing but great things.

    But as happens when small companies get bought out by larger ones, when Evocative came in they immediately raised power rates and made some changes.

    Luckily I'm in a contract on all of my rates, so I'm not affected.

    Oh, that is nice to hear. Do you have on-site staff?

  • @jar said:

    Ideally they'll know because you're honest about it. If you're not, you'll need to be crafty. Better to pick the first.

    Haha, sure. But I think they might know that from the IP, nah?

    @jar said:

    Big red flag, never do it. You'll never have all of the things that people expect like dual power, proper and reliable cooling, quality bandwidth. Save your home rack for hobby stuff.

    Good point.

  • aqua Member, Patron Provider

    @GreenWood said:

    @aqua said:

    @GreenWood said:

    @aqua said:
    I do not run my own DC. I run out of a DC with my own hardware.

    Running DCs isn't as easy as before, because of all the new technology that wholesale datacenters have. In the long run, it's easier to lease from them.

    Having a ton of transits, cooling, redundancy, and power available is a perk that takes a major hassle off the company's hands.

    Edit: Also, the price of real estate, lumber, and all other supplies has skyrocketed in recent years. Getting a well-built datacenter with all of those things can cost millions, in some cases approaching billions. Getting space in a major city like Dallas, Chicago, New York City, Los Angeles, or Miami is nearly impossible.

    I see, do you mind sharing which DC you are with? It's okay not to tell, just curious. Is it a well-known provider? How about when it comes to security? Aren't you afraid they might mess with your stuff?

    Also, why not run your hardware from your house? Isn't that possible? I saw some providers running their stuff from the basement! I might be wrong.

    I'm in Evocative DAL4 (formerly Carrier-1). It's a really nice datacenter, clean & cozy. I've never had any issues; nothing but great things.

    But as happens when small companies get bought out by larger ones, when Evocative came in they immediately raised power rates and made some changes.

    Luckily I'm in a contract on all of my rates, so I'm not affected.

    Oh, that is nice to hear. Do you have on-site staff?

    Staff is on site 24/7/365. The site is also monitored remotely by other security companies.

    Thanked by 1: GreenWood
  • cybertech Member
    edited December 2022

    Providers should run their own DCs, own their power plants, manufacture their own switches, and lay their own submarine cables. Anything less is considered an incomplete setup.

  • Kousaka Member
    edited December 2022

    Most providers here do not operate a DC. In fact, even Linode/DO/Vultr/AWS/etc. do not operate their own DCs. Hetzner and OVH do. IIRC @terrahost has their own DC? Those who own a DC often advertise it on their website; it might be worth checking.

  • ninjatk Signature Restricted

    @Kousaka said:
    Most providers here do not operate a DC. In fact, even Linode/DO/Vultr/AWS/etc. do not operate their own DCs. Hetzner and OVH do. IIRC @terrahost has their own DC? Those who own a DC often advertise it on their website; it might be worth checking.

    I believe after this thread, we do have GreenWoodDC.

  • crunchbits Member, Patron Provider, Top Host

    Currently doing both. Have built/owned our own facilities and also colocate (or rent cage space) elsewhere. Generally speaking, the renting of multiple racks/cage space is a better solution for us. We sold off 2 of our own facilities during 2021/2022 and have shifted towards larger-scale colocation negotiations. At scale you can get some pretty good deals, and usually with a larger existing facility you have better leverage over connectivity (mainly, speed of bringing new bandwidth/fiber online).

    Running our own was a great experience, and it was necessary given the power density (per rack and facility-wide) that was sort of a niche for us at the time. Most of the industry has caught up now, and getting 17.2kW usable per 42U and properly cooling it isn't as ridiculous an ask as it was in 2018 and prior. It's all the extras that eat away at your time, necessary but not directly related to your core business and growth (routine backup system testing, redundancy buildouts and testing, HVAC PMs, misc property-owner headaches, physical building security, etc).

    The main issue is that we didn't do any colocation, so all the fiber and bandwidth were things we had to negotiate directly, and everything had big lead times due to build-out requirements and the fact that we're a tiny company. The other side of our property line was Microsoft's behemoth DC, and a few blocks over is their new build, so competing with them for resources was not really possible -- just sit and wait.

    At this point I'd prefer to pay a quality provider to take those tasks off our plate and bundle us with existing customers to get better economies of scale. This just lets us focus on other direct aspects of the business.
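To put that 17.2kW-per-42U figure in perspective, here is a back-of-envelope sketch. The 208V three-phase feed is an assumption for illustration (a common North American supply), not something crunchbits stated:

```python
import math

def rack_power_summary(kw_per_rack=17.2, rack_units=42,
                       volts=208, phases=3):
    """Back-of-envelope numbers for one high-density rack.

    17.2 kW / 42U comes from the post above; the 208V
    three-phase feed is an assumed supply, not stated there.
    """
    watts = kw_per_rack * 1000
    watts_per_u = watts / rack_units
    # Line current for a balanced three-phase load: P = sqrt(3) * V * I
    amps = watts / (math.sqrt(phases) * volts)
    # Every watt of IT load becomes heat: 1 W = 3.412 BTU/h,
    # and one "ton" of cooling removes 12,000 BTU/h.
    btu_per_hour = watts * 3.412
    cooling_tons = btu_per_hour / 12_000
    return {
        "watts_per_u": round(watts_per_u),
        "line_amps": round(amps, 1),
        "cooling_tons": round(cooling_tons, 1),
    }

print(rack_power_summary())
# {'watts_per_u': 410, 'line_amps': 47.7, 'cooling_tons': 4.9}
```

Roughly 410W in every rack unit and about five tons of cooling per cabinet, which is why "properly cooling it" was the hard part.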

  • PulsedMedia Member, Patron Provider
    edited December 2022

    @jmgcaguicla said:

    @GreenWood said:
    But you don't need to run it in a foreign country, nah? I mean, some providers have only a single location.

    The answer is still valid even if you remove that part. I think you're underestimating what physically building a DC entails.

    And maintenance.

    It's A LOT of work, and I mean A LOT of work. Every single little detail needs to be taken care of, and contractors usually don't work on this type of stuff, so as a business owner I've found I need to be personally involved in every single step and ensure the right stuff is used.

    We don't own the building, only the datacenter. There was a miscommunication between the new building manager and the contracted maintainer, so they swapped our air filters without our approval and WITHOUT notification. They used the wrong type, and all of a sudden all our servers were stuffed with dust. There were literally piles of sand & dust on top of some of the spare-part bins.

    Even employees sometimes miss the whole point. I asked one junior tech to vacuum the DC. FIRST he wanted to use the broken old consumer vacuum, which blows a lot of dust into the air every time you start it; then he would not use the full power of the industrial vacuum we had purchased (which he had been whining to get for a long time -- and then refused to use!); and ultimately he was way too quick about it.
    It turns out he only vacuumed the open floor space, so the dust piles were still sitting on top of the parts bins -.-
    Yeah, we had to let him go (he also pulled drives on the fly from a production server without approval ...)

    Contractors who do not even work in IT, have never worked with IT, and may not even own a laptop are even worse in this regard.

    So every step of the way in a small business, I as the owner have to be involved: ventilation systems, AC systems, power distribution, metering systems, electrical safety systems -- even racking the bloody servers can be surprisingly difficult for some people. I need to understand all of them well enough to form a coherent big picture. Without that, I cannot make the right decisions either.

    Even those who understand the business miss things quite often. For example, the new power distribution cabinet we installed was missing fuses on our side, which caused severe downtime this June. A fuse blew, but it was on the building side, at around 02:00, and of course the building did not have spares. It was just one phase, but it shut down the AC units in the middle of summer, so a high rate of HDD failures is to be expected this fall (knock on wood). Sometime during next summer or so we will have to shut down a large portion of our production to add the fuses and new, more reliable metering systems.

    The list just goes on and on and on, down to the tiniest details of HAVING THE RIGHT FRIGGIN' SCREWS! We once had a server deployment delayed by weeks because the rails used imperial sizing, I think 10-32 perhaps? Well, where the fsck are you going to find 10-32 screws/bolts in a metric country? It took a few trips to find the correct ones, and even then they tried to offer me M5 with a sample in my hand (years ago, I did not know the sizing!), or HDD screws, #6-32 with an extra-flat head; we've run out of those a few times, and I just spent something like $600 to make sure we have enough for a few years.

    It's the tiniest detail which might ruin you.

    And the big stuff, like UPS units. A well-known, respected brand has a nasty tendency to blow its circuits and/or transformers when it's actually needed, sometimes even with flames. UPSes also consume vast amounts of waste energy; ours has been measured consuming about 20% excess at about half load. Which is another detail: if you use dual PSUs, never go beyond half load on the UPS, because half of the power comes straight from the grid. Similar things go for phase selection and reserves when fully on the grid: if you use dual PSUs and one phase fails, but you are hooked up to 2 different phases, well, now you are going to blow the fuse on the next phase too. If you distributed across all 3 phases, you will likely lose the whole damn cabinet unless you were careful.
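That half-load rule for dual-PSU gear can be sketched numerically; the figures below are hypothetical, the point is only the failover arithmetic. With two feeds each normally carrying half the load, the surviving feed must absorb the full load the instant the other one drops:

```python
def feed_loads(total_kw, feeds=2):
    """Dual-feed (A/B) power with dual-PSU servers: each feed
    carries total/feeds normally, but the survivor must carry
    everything the moment the other feed fails."""
    normal_per_feed = total_kw / feeds
    worst_case_per_feed = total_kw  # failover: one feed takes it all
    return normal_per_feed, worst_case_per_feed

def ups_is_safe(ups_capacity_kw, total_kw):
    """The UPS-side feed is only safe if it can carry the FULL
    load on failover -- hence 'never go beyond half load'."""
    _, worst = feed_loads(total_kw)
    return worst <= ups_capacity_kw

# A 10 kW UPS feeding one PSU of an 8 kW cabinet: 4 kW normally,
# 8 kW on grid failure -- still within capacity.
print(feed_loads(8.0), ups_is_safe(10.0, 8.0))   # (4.0, 8.0) True
# The same UPS with a 12 kW cabinet would be overloaded on failover.
print(ups_is_safe(10.0, 12.0))                   # False
```

A UPS that looks half-idle in normal operation is, in this setup, exactly at its sizing limit.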

    When you colocate, use remote hands etc. the colocation facility takes care of ALL OF THAT for you, sometimes even building the servers.

    And that's just the physical stuff ... then there are routing, contracts, metering and monitoring, etc. to take care of, to name just a few areas.

    How about work areas, tooling, parts storage, having enough but not too many spares, cleaning and organizing, social, rest rooms, calculating densities, making allotments for power and density, laying fiber and their routes, access to network operators, cwdm, dwdm, optic colors, sc, st, cfp2, qsfp+, dac, twinax, abloy, iloq, rfid, gates, r401a or r32 flammability, insulation types, diamond cutting, microscopes and end polishing, raised floors and tiling surface pressure ratings, water piping, drains, epoxy coatings, light brick, heavy brick, cement wall carrying capacity with XYZ method, fiber jackets and their types, electrical insulators, AC wiring safety regulations code, battery chemistries and safety factors, automated window control systems, embedded electronics, wire ratings, connector types, fluid dynamics, physical leverage, hoist types, vacuum pumps, hydraulic fittings, 3d printing, CAD design, electrical engineering, engineering in general, insurances, law, ventilation motor types, ventilation fan types, ventilation duct types, duct construction methods, distributors for industrial gear, industrial automation, rs485, 1wire, canbus, 433Mhz etc etc etc etc etc.?

    Then rinse with a little bit of energy crisis and a dash of inflation on top of it ...

    It's a never ending list of things to take care of.

    So next time you curse that we are a bit slow to deliver a server, or wonder why we cannot slap your RPi somewhere for 1€ a month -- well, now you know. It's not as easy as it looks. If it looks easy and simple to you, you are either working with experienced professionals OR Dunning-Kruger is strong with you, and you just don't know enough yet to realize the challenges ... :)

    We all start somewhere; I had nfi how difficult it would be when we got our first own site in late 2012, I think, or early 2013.

    -Aleksi

    Thanked by 2: GreenWood, yoursunny
  • PulsedMedia Member, Patron Provider

    Now you guys must be thinking I am joking, right? Not even in the slightest ... it takes this and much more to manage a DC, let alone build one from scratch ...

  • PulsedMedia Member, Patron Provider

    @jmgcaguicla said:

    @GreenWood said:
    As far as I know, a hosting business can be a reseller of other big providers; for example, they rent a server and then split it into VPS instances.

    Some big players just run their own DC, like run their own hardware and network in their own physical building.

    It doesn't have to be go big or go home; colocation is a thing, a middle ground between reselling hardware and owning the DC.

    Small DCs are also a thing, though they're slowly going away; there are fewer and fewer of us small DC operators left.

    We are what you would call "little big". We might have a tiny DC, but the data throughput, data volumes, etc. that we run in that tiny space are quite something. We min/max pretty much everything. There's no in-between. We tried in-between this year, and it totally blew up in our faces. Worst financial decision in a while; that was like $100k+ in hardware alone going to e-waste ...

  • PulsedMedia Member, Patron Provider

    @jar said:

    @GreenWood said:
    Also, I would like to know: what is the more affordable option for a dedicated server hosting business, running your own DC/hardware or just renting a few servers from, let's say, OVH?

    Most affordable road in is to rent a dedicated server. Most affordable spread over time is probably to colocate with an established hosting provider who has empty rack space. Building a system is more money up front than renting one for a month, obviously.

    But once you've got all that overhead sorted out, your pricing per rack, per server, or per kW of end use (compute, storage, whatevs) can be a fraction of the colocation cost.

    Big financial investment up front, huge returns on the back end. You just need the right contacts and knowledge of your local area operators to get started, and quite a bit of cash in pocket.

    Our first DC was a 5-rack closet with only 1x25A, plus AC on its own circuit ... It could never run profitably, but it got our foot in the door, built the contacts, etc. before we bought this DC from a big operator for whom it was too small; it needed some great TLC (we immediately like doubled the rack density just by moving stuff around lol).

    Thanked by 2: jar, GreenWood
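The upfront-cost-vs-cost-over-time trade-off jar and Aleksi describe is easy to put in numbers; a minimal sketch, with all figures hypothetical:

```python
def breakeven_months(hardware_cost, colo_monthly, rent_monthly):
    """Months until buying hardware and colocating it becomes
    cheaper than renting an equivalent dedicated server.
    Returns None if renting is never beaten on these inputs."""
    if rent_monthly <= colo_monthly:
        return None  # colo + ownership never catches up
    return hardware_cost / (rent_monthly - colo_monthly)

# A $1,500 server and a $25/mo colo share vs a $100/mo rental:
print(breakeven_months(1500, 25, 100))  # 20.0 (months)
```

This ignores hardware failures, resale value, and remote-hands fees, which is exactly why "quite a bit of cash in pocket" matters before the back-end returns show up.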
  • PulsedMedia Member, Patron Provider

    @GreenWood said: Also, why not run your hardware from your house? Isn't that possible? I saw some providers running their stuff from the basement! I might be wrong.

    You say you know how hard running a DC is ... then you suggest this.

    You know ... I just had to ;)

  • PulsedMedia Member, Patron Provider

    @crunchbits said:
    Currently doing both. Have built/owned our own facilities and also colocate (or rent cage space) elsewhere. Generally speaking, the renting of multiple racks/cage space is a better solution for us. We sold off 2 of our own facilities during 2021/2022 and have shifted towards larger-scale colocation negotiations. At scale you can get some pretty good deals, and usually with a larger existing facility you have better leverage over connectivity (mainly, speed of bringing new bandwidth/fiber online).

    Running our own was a great experience, and it was necessary given the power density (per rack and facility-wide) that was sort of a niche for us at the time. Most of the industry has caught up now, and getting 17.2kW usable per 42U and properly cooling it isn't as ridiculous an ask as it was in 2018 and prior. It's all the extras that eat away at your time, necessary but not directly related to your core business and growth (routine backup system testing, redundancy buildouts and testing, HVAC PMs, misc property-owner headaches, physical building security, etc).

    The main issue is that we didn't do any colocation, so all the fiber and bandwidth were things we had to negotiate directly, and everything had big lead times due to build-out requirements and the fact that we're a tiny company. The other side of our property line was Microsoft's behemoth DC, and a few blocks over is their new build, so competing with them for resources was not really possible -- just sit and wait.

    At this point I'd prefer to pay a quality provider to take those tasks off our plate and bundle us with existing customers to get better economies of scale. This just lets us focus on other direct aspects of the business.

    Interesting.

    With this energy-crisis headache we've been considering colocating some of ours too, but then again, we just started the second DC build-out. Almost 8 meters of ceiling height, 3 sides facing outside; we're expecting some really nice PUE numbers out there. We can also sell at least some of our heat output, and have solar panels etc. there.

    Being in Finland has its perks -> fiber layout to the building is only 2k€ :) so that's pocket money in this business. The real cost is in the DWDM to connect to our current site, but we might get the same-cost IP transit deal at the new site too from one of the providers.

    It's a transformer building as well, so grid blackouts are going to be very rare.
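For reference, the PUE being chased here is just the ratio of total facility power to IT power; a quick sketch with hypothetical numbers:

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: everything the site draws
    (IT + cooling + lighting + conversion losses) divided by
    what the IT gear itself uses. 1.0 is the unreachable
    ideal; cold-climate free-air cooling is how operators
    approach the low 1.1x range."""
    return total_facility_kw / it_kw

# Hypothetical: 100 kW of IT load with 12 kW of cooling/overhead.
print(pue(112, 100))  # 1.12
```

This is why a tall space with three exterior walls in Finland matters: less mechanical cooling means the numerator shrinks toward the IT load itself.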

  • I think most LET folks are missing the point of this thread!

    I'm in no way asking LET providers to run their own DC, nor do I doubt their services.

    I never said that in my first post.

    The only thing I am wondering is how LET providers run their stuff; of course, every provider runs their stuff differently, in whatever way works for them.

  • The term "data center" is used loosely around this neck of the woods. Outside of some dude's basement, I doubt anyone here owns a building. I know IOFlood was working on owning their own building, but I think COVID slowed that down; plus, IOFlood has been priced out of the LE market.

  • @PulsedMedia said:

    @jmgcaguicla said:

    @GreenWood said:
    But it doesn't need to run it in a foreign country, nah? I mean, some providers only have single location.

    The answer is still valid even if you remove that part. I think you're underestimating what physically building a DC entails.

    And maintenance.

    It's A LOT of work, and i mean A LOT of work. Every single little detail needs to be taken care of, and usually contractors don't work on this type of stuff, so as a business owner i've found out i need to be personally involved in every single step and ensure right stuff is used.

    We don't own the building, only the datacenter, there was miscommunication from new building manager and contracted maintainer, so they swapped our air filters without our approval WITHOUT notification; They used the wrong type, and all of sudden all our servers are stuffed with dust. There was literally piles of sand & dust on top of some of the spare part bins.

    Even employees sometimes miss the whole point, i asked one junior tech to vacuum the DC. FIRST he wanted to use the broken consumer old vacuum which blows a lot of dust in the air everytime you start it, and then he would not use the full power of the industrial vacuum we had purchased (as he was whining to get one for a long time -- but then refused to use it!), and ultimately was way too quick about it.
    Turns out he only vacuumed the open floor space, so the dust piles were still sitting on top of the parts bins -.-
    Yeah we had to let him go (he also pulled drives on the fly from a production server without approval ...)

    Contractors who do not even work in IT, has never worked with IT, may not even own a laptop are even worse in this regard.

    So every step of the way on a small business, i as the owner has to be involved. Ventilation systems, AC systems, power distribution, metering systems, electrical safety systems, racking the bloody servers can be surprisingly difficult for some people too etc etc etc. I need to understand all of them well enough to make a coherent big picture. Without this, i cannot make the right decisions neither.

    Even those who understand the business miss things quite often. For example, the new power distribution cabinet we installed is missing fuses on our side, which caused severe downtime this June. A fuse blew, but it was on the building side, at like 02:00, and of course the building did not have spares. It was just one phase, but that shut down the AC units in the middle of summer, so a high rate of HDD failures is to be expected this fall (knock on wood). That was just this June, so sometime next summer we will have to shut down a large portion of our production to add the fuses and new, more reliable metering systems.

    The list just goes on and on and on, down to the tiniest details of HAVING THE RIGHT FRIGGIN' SCREWS! We once had a server deployment delayed by weeks because the rails used imperial sizing, I think 10-32 perhaps? Well, where the fsck are you going to find 10-32 screws/bolts in a metric country? It took a few trips to find the correct ones, and even then they tried to offer me M5 with a sample in my hand (years ago, I did not know the sizing!), or #6-32 HDD screws with extra flat heads. We've run out of those a few times; I just spent something like $600 to make sure we have enough for a few years.

    It's the tiniest detail which might ruin you.

    And the big stuff, like UPS units. A well-known, respected brand has a nasty tendency to blow its circuits and/or transformers when it's actually needed, sometimes even with flames. They also waste vast amounts of energy; ours has been measured consuming roughly 20% excess at about half load -- which is another detail: if you use dual PSUs, never go beyond half load on the UPS, because half of the power comes straight from the grid. Similar things apply to phase selection and reserves on a direct grid feed: if you use dual PSUs and one phase fails, but you hooked up to 2 different phases? Well, now you are going to blow the fuse on the next phase too. If you distributed across all 3 phases, you will likely lose the whole damn cabinet unless you were careful.
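    The dual-PSU warning above boils down to simple failover arithmetic. Here's a minimal sketch of it in Python (the wattage figures are made-up examples and the 80% derating is a common rule of thumb, not PulsedMedia's actual numbers):

    ```python
    # Dual-PSU failover arithmetic: with one PSU on the UPS and one on raw
    # grid power, each feed normally carries ~half the server's draw. If the
    # grid feed fails, the UPS feed must suddenly carry 100% of the load --
    # so the UPS must be sized for the FULL load, i.e. kept under ~50% in
    # normal operation (plus a safety margin).

    def ups_survives_failover(total_load_w: float,
                              ups_capacity_w: float,
                              safety_margin: float = 0.8) -> bool:
        """True if the UPS can absorb the whole load when the grid feed drops.

        safety_margin derates the UPS rating (e.g. 0.8 = use at most 80%
        of nameplate capacity even after failover).
        """
        return total_load_w <= ups_capacity_w * safety_margin

    # Example: 10 kW of servers split across two feeds.
    # Normal state: the UPS carries ~5 kW (looks comfortably underloaded).
    # Grid fails: the UPS must instantly carry the full 10 kW.
    print(ups_survives_failover(10_000, ups_capacity_w=10_000))  # False
    print(ups_survives_failover(10_000, ups_capacity_w=15_000))  # True
    ```

    The trap is that the UPS looks half-idle in normal operation, which tempts you to add load -- right up until the grid feed drops and the UPS is suddenly asked for double.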

    When you colocate, use remote hands etc. the colocation facility takes care of ALL OF THAT for you, sometimes even building the servers.

    And that's just the physical stuff ... then there is routing, contracts, metering and monitoring etc. to take care of, to name just a few areas.

    How about work areas, tooling, parts storage, having enough but not too many spares, cleaning and organizing, social, rest rooms, calculating densities, making allotments for power and density, laying fiber and their routes, access to network operators, cwdm, dwdm, optic colors, sc, st, cfp2, qsfp+, dac, twinax, abloy, iloq, rfid, gates, r401a or r32 flammability, insulation types, diamond cutting, microscopes and end polishing, raised floors and tiling surface pressure ratings, water piping, drains, epoxy coatings, light brick, heavy brick, cement wall carrying capacity with XYZ method, fiber jackets and their types, electrical insulators, AC wiring safety regulations code, battery chemistries and safety factors, automated window control systems, embedded electronics, wire ratings, connector types, fluid dynamics, physical leverage, hoist types, vacuum pumps, hydraulic fittings, 3d printing, CAD design, electrical engineering, engineering in general, insurances, law, ventilation motor types, ventilation fan types, ventilation duct types, duct construction methods, distributors for industrial gear, industrial automation, rs485, 1wire, canbus, 433Mhz etc etc etc etc etc.?

    Then sprinkle in a little energy crisis with a dash of inflation on top of it ...

    It's a never ending list of things to take care of.

    So next time you curse that we are a bit slow to deliver a server, or wonder why we cannot slap your RPi somewhere for 1€ a month -- well, now you know: it's not as easy as it looks. If it looks easy and simple to you, you are either working with experienced professionals OR Dunning-Kruger is strong with you, and you just don't know enough yet to realize the challenges ... :)

    We all start somewhere. I had nfi how difficult it would be when we got our first own site in late 2012, I think, or early 2013.

    -Aleksi

    That's literally crazy to deal with!!! I think anyone would prefer colocation to getting into these crazy things; even if colocation is less profitable than owning a DC, yeah, it's worth it after all.

    I really had no idea before this thread. But yeah, now I have a good sense of what providers are dealing with.

    Thank you for sharing your story/experience with us.

  • @PulsedMedia said:

    But once you've got all that overhead sorted out, your pricing per rack, per server, or per kW of end use (compute, storage, whatevs) can be a fraction of the colocation cost.

    Big upfront financial investment, huge returns on the back end. You just need the right contacts, knowledge of your local area operators to get started, and quite a bit of cash in pocket.

    Our first DC was a 5-rack closet with only 1x25A, plus AC on its own circuit ... It could never run profitably, but it got our foot in the door, built the contacts etc. before we bought this DC from a big operator for whom it was too small, and it needed some great TLC (we immediately like doubled the rack density just by moving stuff around lol)
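    To put "5 racks on a single 25 A feed" in perspective, here's a rough capacity sketch. It assumes a 230 V single-phase supply (typical in Europe) and the common rule of thumb of loading a breaker to at most 80% continuously -- both assumptions, not confirmed details of their setup:

    ```python
    # Rough power budget for a "5 rack closet with only 1x25A".
    # Assumptions (not confirmed details of the actual site):
    #   - 230 V single-phase feed (typical European supply)
    #   - continuous load kept <= 80% of the breaker rating

    VOLTAGE_V = 230.0
    BREAKER_A = 25.0
    DERATE = 0.8
    RACKS = 5

    total_w = VOLTAGE_V * BREAKER_A    # nameplate: 5750 W
    usable_w = total_w * DERATE        # continuous: 4600 W
    per_rack_w = usable_w / RACKS      # 920 W per rack

    print(f"{total_w:.0f} W nameplate, {usable_w:.0f} W usable, "
          f"{per_rack_w:.0f} W per rack")
    ```

    Full racks are often budgeted at several kW each, so under 1 kW per rack across the whole closet goes a long way toward explaining why it could never run profitably.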

    What do you mean by "5 rack closet with only 1x25A"? Sorry to ask. Is it meant to host 5 servers?

  • @PulsedMedia said:

    @GreenWood said: Also, why not running your hardware from your house? Isn't possible? I saw some providers running their stuff from the basement! I might be wrong.

    You say you know how hard running a DC is ... then you suggest this.

    you know ... i just had to ;)

    Hahah, I'm just saying, you are the expert :)

    Now my confidence is gone :#
