Do most of LET providers run their own DC?

Comments

  • PulsedMedia Member, Patron Provider

    @GreenWood said:

    @PulsedMedia said:

    But once you've got all that overhead sorted out, your per-rack, per-server, or per-kW pricing for end use (compute, storage, whatevs) can be a fraction of colocation cost.

    Big financial upfront investment, huge returns on the back end. You just need the right contacts, knowledge of your local area operators to get started, and quite a bit of cash in your pocket.

    Our first DC was a 5-rack closet with only 1x25A, plus AC on its own circuit ... It could never run profitably, but it got our foot in the door, built the contacts etc. before we bought this DC from a big operator for whom it was too small; it needed some serious TLC (we immediately about doubled the rack density just by moving stuff around lol)

    What do you mean by "5 rack closet with only 1x25A"? Sorry to ask. Is it meant to host 5 servers?

    5 server racks of 40-42U, and 1x25A = 230V × 25A = 5750W; but since you can only run at 80% for safety, the practical max load is 4600W. These days just one of our AC units swings by more than that lol, not enough to run even one full rack. But we had to start somewhere.
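
    To sanity-check that arithmetic, here's a minimal Python sketch (the 80% figure is the usual continuous-load derating rule of thumb; only the 230V / 25A numbers come from the post):

    ```python
    # Usable power on a single-phase branch circuit.
    volts = 230          # single-phase supply voltage
    breaker_amps = 25    # the 1x25A circuit
    derating = 0.80      # only load a breaker to ~80% for continuous loads

    nameplate_w = volts * breaker_amps     # 5750 W
    usable_w = nameplate_w * derating      # 4600 W

    print(f"nameplate: {nameplate_w} W, usable: {usable_w:.0f} W")
    print(f"across 5 racks: {usable_w / 5:.0f} W per rack")  # ~920 W/rack
    ```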

    @GreenWood said: Now, my confidence has gone :#

    No, now you know how much there is to learn. You are now at the peak before the valley.

    Thanked by GreenWood
  • Calin Member, Patron Provider

    Very lucky @PulsedMedia, my first rack ran on 15A maximum :) now I have 30A per rack

  • terrahost Member, Patron Provider

    @Kousaka said:
    Most providers here do not operate a DC. In fact even Linode/DO/Vultr/AWS/etc do not operate a DC either. Hetzner and OVH do. IIRC @terrahost has their own DC? Those who own a DC often advertise on their website, might be worth checking.

    Yes, correct. We operate two, soon three, datacenters in Norway.

  • 1gservers Member, Patron Provider

    @deadpool said:
    The term data center is used loosely around this neck of the woods. Outside of some dude's basement I doubt anyone owns a building here. I know IOFlood was working on owning their own building but I think COVID slowed that down plus IOFlood has been priced out of the LE market.

    Their facility buildout is complete and they've begun migrations. Yes, their buildout took longer than they planned, and COVID did play a factor in that. I don't want to speak for them, but I agree they've been absent from the LE market for a while. Great team at IOFlood.

    It’s generally difficult to justify a facility buildout on LE margins.

  • crunchbits Member, Patron Provider, Top Host

    @PulsedMedia said:

    Interesting.

    With this energy-crisis headache we've been considering colocating some of ours too, but then again, we just started the second DC build-out as well. Almost 8-meter-tall space, 3 sides facing outside, so we're expecting some really nice PUE numbers out there; we can also sell at least some of our heat capacity, and we have solar panels etc. there.

    Being in Finland has its perks -> fiber layout to the building, only 2k € :) so that's pocket money in this business. The real cost is in the DWDM to connect to our current site, but we might get the same-cost IP transit deal at the new site too from one of the providers.

    Transformer building as well, so grid blackouts are going to be very rare.

    Your write-up was excellent. Really a lot of experience coming through there. Giving me a little bit of PTSD reading some of it :D

    We had similar issues with HVAC contractors; as the business owner, I basically had to just learn all of that myself (in addition to being an electrical engineer, network engineer, etc). Then they don't understand why I'm specifically providing certain MERV-rated filters (no, you can't use whatever is on the truck), or why you can't kill all the evaps at once to work on them, and why we're going to be standing there making sure you kill one at a time and bring it back online before touching the next. Oh, it's a new guy and our normal guy didn't leave it in the notes? Sorry, we hit the kill switch on all 10 evaps at once in July.

    Or when a certain major name-brand UPS system we had decided to grenade itself for no reason (for the second time) and take out a few branch circuits, one of which was the primary fiber upstream. Legitimately trying to add redundancy and backup protection for core systems, which then just fail spectacularly and cause more issues than if we had done nothing.

    Trying to teach 'experienced' new employees how 3-phase power/PDUs work and how to balance the PSUs across all machines in the rack on all the phases, only to find everything plugged into a single bank and them telling me the PDU is malfunctioning and keeps popping the safety breaker.
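
    The balancing rule in question can be sketched as a tiny greedy assignment; this is a hypothetical illustration (node names and wattages are made up), not any vendor's tooling:

    ```python
    # Greedy 3-phase balancing: place the heaviest PSU loads first, each
    # onto the currently least-loaded phase, so no single bank ends up
    # carrying the whole rack and popping its breaker.
    import heapq

    def balance_psus(psu_loads_w: dict[str, float]) -> dict[str, str]:
        heap = [(0.0, "L1"), (0.0, "L2"), (0.0, "L3")]
        heapq.heapify(heap)
        assignment = {}
        for psu, watts in sorted(psu_loads_w.items(), key=lambda kv: -kv[1]):
            load, phase = heapq.heappop(heap)
            assignment[psu] = phase
            heapq.heappush(heap, (load + watts, phase))
        return assignment

    # All on one bank, this rack would put ~2.8 kW on a single phase;
    # balanced, the worst phase carries ~1.15 kW.
    rack = {"node1": 450, "node2": 450, "node3": 700,
            "node4": 700, "node5": 500}
    print(balance_psus(rack))
    ```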

    Yeah, the actual DIA/transit fiber (from any single carrier) to the building wasn't bad, but the issue was exactly the same: 100G wave circuits between facilities/DWDM so we didn't have pools of stranded IPs and other inefficiencies. >1 year's worth of missed deadlines and no services enabled. Eh, I just don't miss a lot of that, and at a certain point I decided I was okay taking less margin in exchange for not having to wear all those other hats. I think we'll always keep at least 1 self-owned/operated facility at all times, though. Just for flexibility and freedom.

    @deadpool said:
    The term data center is used loosely around this neck of the woods. Outside of some dude's basement I doubt anyone owns a building here. I know IOFlood was working on owning their own building but I think COVID slowed that down plus IOFlood has been priced out of the LE market.

    I personally know at least 2 other LET members IRL who are owner/operators of their own entire facilities. They aren't 50MW Microsoft Columbia facilities, but they aren't shacks in mom's basement, either.

  • PulsedMedia Member, Patron Provider

    @crunchbits said: Your write-up was excellent. Really a lot of experience coming through there. Giving me a little bit of PTSD reading some of it :D

    X) Yea, there can be some really stressful situations. That's why we like the niche we work in: for ecommerce, even 1 minute of downtime can be crucial. For seedboxes? Not so much; if we need to reboot a certain system, it's most likely not a problem.

    @crunchbits said: We had similar issues with HVAC contractors; as the business owner, I basically had to just learn all of that myself (in addition to being an electrical engineer, network engineer, etc). Then they don't understand why I'm specifically providing certain MERV-rated filters (no, you can't use whatever is on the truck), or why you can't kill all the evaps at once to work on them, and why we're going to be standing there making sure you kill one at a time and bring it back online before touching the next. Oh, it's a new guy and our normal guy didn't leave it in the notes? Sorry, we hit the kill switch on all 10 evaps at once in July.

    Yea, we started just adding notices on our gear to keep their dirty sticky fingers off of it! ;)

    @crunchbits said: Or when a certain major name-brand UPS system we had decided to grenade itself for no reason (for the second time) and take out a few branch circuits one of which was primary fiber upstream. Legitimately trying to add redundancy and backup protection for core systems which then just fail spectacularly and cause more issues than if we had done nothing.

    Certain major name-brand UPS systems actually add to downtime; they don't remove it, they add more of it. For example, run out of battery during a blackout? You can forget about it recovering on its own: someone has to go press a button on the friggin' machine manually to start the outputs again -.-

    And yeah, that grenading is very common, and grenading with flames is infinitely more common than it should be.

    Not sure, but I've gotten the itching feeling that all legacy old-school UPS systems are more or less like this: expensive to buy, expensive to maintain, SO unreliable you are better off without them, and ultimately useless when there is a real blackout, since the batteries cannot last more than a few minutes.

    Generators are even more expensive to keep up.

    In Helsinki? All that for what, a sub-0.2s brownout once per 1½-2 years, and one actual blackout every 5-10 years which could be up to a couple of hours (i.e. transformer maintenance, but you should get advance notice of this ... unless the building manager fucks up due to "green values", so no notifications because that would've used some paper!!! ffs).

    At this point you have tripled your energy costs to eliminate a single downtime event, if you are lucky and the UPS units don't fuck up ... The maths are not pretty. One of the advantages the big boys have.
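
    A back-of-envelope version of that math, with the outage frequencies from above and made-up placeholder costs (assuming a small site; adjust to taste):

    ```python
    # Expected value of UPS + generator protection in a grid as stable
    # as Helsinki's. All EUR figures are illustrative assumptions.
    brownouts_per_year = 1 / 1.75   # one sub-0.2s brownout per ~1.5-2 years
    blackouts_per_year = 1 / 7.5    # one real blackout per ~5-10 years

    cost_per_outage_eur = 2_000          # assumed: refunds, labor, cleanup
    base_energy_eur_per_year = 50_000    # assumed annual energy bill

    avoided = (brownouts_per_year + blackouts_per_year) * cost_per_outage_eur
    overhead = 2 * base_energy_eur_per_year  # "tripled" energy costs = +200%

    print(f"outage cost avoided: ~{avoided:,.0f} EUR/year")      # ~1,410
    print(f"UPS/generator overhead: ~{overhead:,.0f} EUR/year")  # 100,000
    ```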

    It's the exact same experience every other local DC operator has had with that brand of UPS units. Now it's obvious why the telecom side uses 48V DC PSUs in routers etc.: powering straight from the batteries eliminates the failure-prone UPS, leaving only the chargers / rectifiers to worry about.

    contractors, suppliers etc ..... yikes

    Maybe the hardest part is exactly that you have to deal with morons in your supply line, and a lot of them, because there are so many suppliers and parties of all sorts, some of which gather the absolute lowest common denominator of human beings ... Hell, I've seen a building maintenance guy who thought that adding a power strip after a power strip would double the power available to him, so he put 5 power strips in a string with a 2kW load on each -- then proceeded to wonder why the fuse blew. And they've let him do electrical work on the buildings he maintains oO;

    One contractor was so broke that he sent his invoices straight to a debt collector to get paid instantly, or called on a Saturday night to have his invoices paid before the due date. Yet he was also so absent-minded that he left his tools everywhere. Once he insisted on driving an AC unit over to my garage: we had gotten a ~brand-new 12kW unit for the grand total cost of disassembling it, and I decided to take it for my garage. It had been a backup at that DC for a few years, test-driven a couple of times, maybe 100 run hours. Despite me telling him I wasn't going to pay him for it and that I'd just move it myself when I got my van back, he kept insisting on bringing it (50km each way), and we went for coffee afterwards. The invoice comes: he billed not just the travel BUT hours for the travel, and the end-of-day coffee too. I think he even tried to charge for the travel back + travel time on top.

    Probably needless to say, we never used his services again, nor did I pay that bullshit charge. So, to return the DC keys and badges, he left them on a car windshield in a public space outside, where anyone could have picked them up ...

    He also left 2 of our DC AC units untested and unfilled with refrigerant, so we had a mid-summer failure on one of them because it was leaking, AND then we noticed he had never filled them in the first place ... One 700€ bottle of refrigerant + another contractor finishing the job later, we got them running reliably :)

    Another AC contractor broke one of our units' water-collection tubs during maintenance, resulting in rain in front of the AC unit during summer. They refused to fix it for any money; we even sourced the spare parts ourselves, but they would not come and fix it even if we paid, even though it should've been a warranty repair.

    Not a good sign when you've got rain going on inside your DC lol

    @crunchbits said: Trying to teach 'experienced' new employees how 3-phase power/PDUs work and how to balance the PSUs across all machines in the rack on all the phases, only to find everything plugged into a single bank and them telling me the PDU is malfunctioning and keeps popping the safety breaker.

    People not following instructions is a termination-level offense imho. We've been too lenient even just recently; I keep getting angry at myself for being too lenient these days lol.

    Even larger operators can get it seriously wrong -> One Finnish DC company had their whole electrical cabinet burn because no one was watching the phases and way too much got plugged into a single phase ... And I guess the source fuses were too big, and the input cable was too big for the cabinet itself, which by the sounds of it lacked input fuses ...

    Everything was preventable.

    Electrical cabinets can get quite hot even running completely within spec; it's scary when you need to start thinking about cooling an electrical cabinet. One cabinet we have was running quite hot, but someone kept closing its door, so I simply removed the door to make sure it's always in fresh air :) On that same cabinet an electrical contractor used slightly different-sized automatic fuses / cable protectors, resulting in the bus bar being janky and never even tightened properly. It was a scary moment when we realized that, because crackling noises were coming out of the cabinet.

    That contractor never had any business in our DC again either :)

    @crunchbits said: Yeah, the actual DIA/transit fiber (from any single carrier) to the building wasn't bad, but the issue was exactly the same: 100G wave circuits between facilities/DWDM so we didn't have pools of stranded IPs and other inefficiencies. >1 year's worth of missed deadlines and no services enabled. Eh, I just don't miss a lot of that, and at a certain point I decided I was okay taking less margin in exchange for not having to wear all those other hats. I think we'll always keep at least 1 self-owned/operated facility at all times, though. Just for flexibility and freedom.

    The flexibility and potentially higher margins are indeed the siren call of owning a datacenter ... Years later you figure out how friggin' much work it is.

    On the other side of the fence, customers have NFI how much work it takes just to get a server online. It's anything but simple and easy. Even something like the customer interface for remote reboots & reinstalls: someone has to build it, test it, do continuous QC on it, and continuously add distros and keep them updated.
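
    For a flavor of the glue behind even just the reboot button, a hypothetical sketch (host and credentials are placeholders; a real panel additionally needs auth, job queueing, auditing, and per-customer authorization):

    ```python
    # Power-cycle one server via its BMC using ipmitool.
    import subprocess

    def power_cycle(bmc_host: str, user: str, password: str) -> None:
        subprocess.run(
            ["ipmitool", "-I", "lanplus",
             "-H", bmc_host, "-U", user, "-P", password,
             "chassis", "power", "cycle"],
            check=True,  # raise if the BMC is unreachable or refuses
        )

    # power_cycle("10.0.0.42", "admin", "secret")  # placeholder values
    ```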

    For a small operator like us, most of that stuff is a lot of overhead, whereas for the big boys it's a rounding error.

    Economies of scale, you are always chasing the next step :)

    In another business I used to run, I got threats of physical violence because we were selling at above the factory's cost to produce and were therefore "scammers"... Uhm, ok. Also competitors filing false tax-fraud claims with the tax agency, and all that kind of fun stuff. Needless to say, I closed the doors on that business, and later on I've seen people complaining that no one serves that niche anymore and they have to go abroad at way higher cost. I wonder why ...

    @crunchbits said: I personally know at least 2 other LET members IRL who are owner/operators of their own entire facilities. They aren't 50MW Microsoft Columbia facilities, but they aren't shacks in mom's basement, either.

    Big boys get ~tax-free electricity here once you reach 5MW :/ They did start to support "small" operators, but 500kW was the minimum, with ridiculous PUE requirements for the small providers too. I could not figure out how to meet that PUE requirement without installing heat pumps to sell the waste heat back to district heating.
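
    For reference, PUE is simply total facility power divided by IT power; a minimal sketch with illustrative figures (nothing below beyond the quoted 500kW / 5MW tiers comes from the post):

    ```python
    def pue(total_facility_kw: float, it_kw: float) -> float:
        """Power Usage Effectiveness: 1.0 would be a perfect facility."""
        return total_facility_kw / it_kw

    # Small site where cooling and distribution losses add 35% overhead:
    print(round(pue(total_facility_kw=675, it_kw=500), 2))  # 1.35
    # Same IT load with far less cooling overhead (e.g. heat reclaim):
    print(round(pue(total_facility_kw=560, it_kw=500), 2))  # 1.12
    ```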

    -Aleksi

    Thanked by jegh
  • Equinix, of course

  • bacloud Member, Patron Provider

    If a company has many locations, it probably runs its business from rented space/racks or on rented hardware. Most providers use rented space and colocate their own hardware.

    For example, we have our own DC in Lithuania and we have rented racks with our own hardware in other locations.

    Thanked by GreenWood
  • @bacloud said:
    If a company has many locations, it probably runs its business from rented space/racks or on rented hardware. Most providers use rented space and colocate their own hardware.

    For example, we have our own DC in Lithuania and we have rented racks with our own hardware in other locations.

    Sounds true. Another example is Vultr: they have their own DC in NJ (owned by their parent company, to be more precise) and colocate in other locations.

  • Arkas Moderator

    Also, I believe @Hybula have their own DCs.

  • Hybula Member, Patron Provider

    @Arkas said:
    Also, I believe @Hybula have their own DCs.

    No, we do not have our own DCs. However, we are working on a new project that could be classified as "own DCs" :smile:

    Thanked by Arkas
  • Neoon Community Contributor, Veteran

    Doesn't mean much.

    You could sell dog and cat food on top and sell servers in your basement.
    The term DC is used for everything these days, like Cloud.

  • @Hybula said: However, we are working on a new project that could be classified as "own DCs"

    Rearranging the garage?

  • @Kousaka said:
    Most providers here do not operate a DC. In fact even Linode/DO/Vultr/AWS/etc do not operate a DC either. Hetzner and OVH do.

    Hetzner owns/operates its DCs in Germany and Finland.

    On the US West/East coast they probably rent multiple racks or a suite.

  • WebProject Host Rep, Veteran

    @JabJab said:
    WoodenRackDC

    Used by OVH and Oracle; in summer it was too hot in the UK, so the UK Oracle DC had issues with its cooling systems:
    https://www.bloomberg.com/news/articles/2022-07-19/google-oracle-data-centers-knocked-offline-by-london-heat

  • rustelekom Member, Patron Provider

    Any business should be managed properly; it doesn't matter how big it is. The size just determines how many customers the outcome affects.
    I know several stories about companies that claimed to be a data center, and so it was. But they rented the building, and when the owner of the building changed his mind, they failed.
    Another used leased equipment; one day they were unable to keep up the lease payments, and so their lease agreement was terminated.
    So, the moral again comes down to proper management.
    If you ask me, I can say that we combine both - leasing and colocation of our own equipment. At the current level of business, this method suits us.

  • nessa Member, Patron Provider
    edited December 2022

    I don't think it would be economical for any provider in this space to own/run their own facility. Like most, we colo with various DC providers and rent racks/cross-connects, but own all our hardware.

  • LeasewebUK Member, Host Rep

    We own our own racks with a few data center providers. In NL we have our own DCs. Right now is not the time to be responsible for your own racks, due to high electricity pricing. Uncertain times.

  • jsg Member, Resident Benchmarker
    edited January 2023

    @GreenWood said:
    Hey,

    Just curious, do most LET providers have their own DC, or are they resellers of big providers such as OVH, Hetzner...?

    Do you think running your own DC is something you could consider? Or would you prefer to be just a reseller?

    This, as well as quite a few others of your posts, strongly suggests that you do not yet know a lot about this field. And no, sadly, a bunch of rather subjective statements and some "war stories" don't help much.

    So let me offer a quick 101.

    "Provider" doesn't mean a whole lot; that could be some smart (or "smart") guy who simply resells at the lowest end that is, e.g. some other smart guy's hosting based on one or a couple dedis ("dedi" ~ physical server hardware); or that could be a really smart guy who over years and years built a serious player and thinks in racks ("rack" ~ a metal somewhat standard frame into which physical equipment is put) or even in cages ("cage" ~ a (usually physically separated) group of racks often with "private" el. power and fiber feeds). Finally there's also a product called "colocation" which means space in standardized sizes ("RU" or "HU" ~ rack units ~ height units of which usually between 42 and 46 are available in a typical rack) plus el. power and a network connection, both of which usually come in (very) basic, small quantities but can, for a price of course, be extended.

    At the bottom, or if you prefer, nearest to the end user/customer, are those who provide some kind of hosting service; maybe plain web hosting, maybe virtual servers of some type, maybe dedis. At the end of the day all of those services are based on some server hardware. A dedi is server hardware, and a virtual server ("VM" ~ virtual machine, a.k.a. "VPS", "VDS", "root server", etc., some kind of virtual system on which an OS can run) is based on a hardware server.

    To run hardware servers some infrastructure is needed, in particular and most importantly network connectivity, electrical power, and cooling (because as a smart man put it, a server is basically a heater that provides computing as a side effect) - and all of those preferably reliably. "Reliable" typically meaning "redundant" and preferably "smart".

    While that infrastructure may, and sometimes does, run in a garage, typically it's provided by a data center, preferably one that is well designed, built, and operated. Which sounds deceptively simple but actually is a very complex and very capital-intensive operation requiring loads and loads of know-how and experience, and hence tends to be (and to become ever more) large; while 10 or 15 years ago one might have been "somebody" with, say, 200 m² (ca. 2000 ft²) of net colo space, such an operation would highly likely be considered amateur today (when >= 1000 m² is normal and even 10000 m² isn't really huge).

    Finally, let's look at another term that often comes up, "reseller", and at some typical scenarios.
    A reseller comes in two rather different variants: (a) the basic reseller, that is, someone who basically just sells someone else's products or services, and (b) someone who doesn't have their own basic products but bases their own products or services, typically with some kind of value added, on basic products or services from someone else. Renting servers from a (usually) large dedi provider and putting VMs on them is an example - or not; one deciding factor often is whether that company has their own network (which may mean quite diverse things too) or not (i.e. uses their provider's network).

    And now some scenarios which are probably typical for LET providers. (Besides a few like e.g. Hetzner who run their own DCs,) a good provider has racks in a good-quality DC ("colo") or even a cage, possibly (and not rarely) in multiple DCs in different regions/countries/continents, and uses their own hardware, network equipment, possibly UPS, and likely their own network too.
    Another, similar variant is basically the same but with rented hardware, at least for a major part (like e.g. the servers).

    Another scenario would be smaller operations with, say, just half a rented rack as well as network(s) operated by the colo, but with an AS ("Autonomous System" ~ basically meaning that the provider not only has their own IP range(s) but a BGP-routable network), and the whole thing possibly in different locations/DCs.

    Finally, at the "bottom end" (which can be quite profitable if done well), a provider might simply rent a couple of dedis at a couple of locations, maybe with their own IPs, maybe not, have a logo and some branding and marketing done, and look like a big serious provider (at least to most customers). Again, this is not a derisive statement; some providers of that kind have become quite large.

    At the end of the day one perspective, often ignored or at least not valued by many, is "how deep and how far do we want to go and invest". Example: yes, of course from a technical perspective it's very desirable to have as much control as possible, which means having one's own DC, and multiple ones all over the planet. On the other hand, an excellent provider might be a mediocre DC operator (many are misled to think that "hosting provider" and "DC" are very similar, almost the same except for scale; that is, however, wrong). Plus, everything requires investment, and not only in terms of money. So, say a young company has $5 million; one could build something one might call a "DC" with that (albeit a questionable and/or really small one) -or- one could rent some cages in NA, Europe, and Asia from quality DCs and focus on becoming an excellent hosting provider with good professionals and expertise -or- one might choose a blunt reseller route and go all-in on marketing and sales.
