Datacenter Bandwidth Estimation

Hi all.
This question is geared toward those who operate their own datacenters and networks, not toward those running virtual machines or single nodes.
What do you use to estimate the amount of bandwidth you need for your datacenter/new location? Is it mostly a matter of buying a 10 Gbit uplink and then seeing whether you need more or less after a few weeks or months of operation? What about bigger deployments? What information factors into this decision, beyond finances and network costs (or are those the primary decision variables)?
Comments
Not a DC operator, but I've seen a plan like this:
"Buy 10Gbit/s of transit at first; if actual usage exceeds 7Gbit/s, buy more. Just keep 30% or more of the bandwidth unused at all times."
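That 30% headroom rule is simple enough to express as a one-line check. A minimal sketch (the function name and the sample numbers are illustrative, not from any real tooling):

```python
def needs_upgrade(peak_gbps: float, capacity_gbps: float, headroom: float = 0.30) -> bool:
    """Return True if peak usage leaves less than the desired unused headroom."""
    return peak_gbps > capacity_gbps * (1 - headroom)

# 10G transit with a 30% headroom target means upgrading past 7G of peak usage
print(needs_upgrade(7.1, 10.0))  # True  -> time to buy more
print(needs_upgrade(6.5, 10.0))  # False -> still within headroom
```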
I have been involved in a datacenter turn-up for a company I used to work for. I think Kousaka gave good advice: try to estimate what you need to start with and add 30% on top for growth and bursts. Bandwidth is cheap enough to carry the extra overhead, but make sure whatever gear you deploy can be upgraded later, along with the carrier you choose. I once ran into a problem where I needed to go beyond 10G and the carrier took 10+ MONTHS to get it installed, because their infrastructure in that location couldn't handle it.
Whenever we are getting close to 50% during peak hours, we start looking at upgrades, since going beyond that will eat into your redundancy. For instance, if you have 2 x 10G and both of them are at 50%, what do you do if one of them drops?
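The redundancy concern above can be made concrete with a quick calculation: when one of N identical links fails, all the traffic lands on the survivors. A small sketch using the numbers from the post:

```python
def failover_utilization(link_gbps: float, links: int, util: float) -> float:
    """Per-link utilization on the surviving links if one of `links` identical links fails."""
    total_traffic_gbps = link_gbps * links * util
    surviving_capacity_gbps = link_gbps * (links - 1)
    return total_traffic_gbps / surviving_capacity_gbps

# 2 x 10G, both at 50%: lose one link and the survivor runs at 100% (fully saturated)
print(failover_utilization(10.0, 2, 0.50))  # 1.0
```

This is why the 50% trigger makes sense for a two-link setup: anything above it means a single failure saturates the remaining link.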
In our early years as Gigahost back in 2006, we were rocking a 50Mbps business fiber connection. It was later upgraded to a blazing fast 100Mbps, and eventually 1Gbps. All of this was single-homed and used the ISP's IP space, but the links were protected.
That 1Gbps was later joined by another 1Gbps from a different ISP, and we went BGP with our own IP space. From there: 10G, multi-10G, and on to multi-100G++ where we are today.
This is a really valid point: make sure to plan in time, since it can take months. We had some growing pains because of this. At one point we needed another 10G circuit: not a problem, delivered in 2 days. When we wanted another one after that, it was months out. So plan ahead.
Thanks all. This back-of-the-napkin understanding of the logic is appreciated. If there are further comments or other methodologies please let me know. This is validating my own thought process.
I was re-reading this and I had a thought: how much redundancy is actually valuable? I'm also tagging @terrahost in this. For example, let's scale this up from 10G to about 10T. What percentage of that do you want to be redundant, i.e. unused capacity that's available in case of disaster events? A single 10G commit usually means a single point of failure (e.g. one network port), but 10T usually means multiple pieces of hardware building that redundancy in. So once you're at that scale, how do you make those capacity decisions?
Our data center at yoursunny summer host is not oversubscribed.
We have enough bandwidth to allow every server to transmit at the full speed of their port at all times.
Suppose there are 3000 servers with 10Gbps ports; the total bandwidth would be 30Tbps.
Moreover, it needs to be redundant, so there need to be two transit providers with 30Tbps each.
This is very common practice.
Also make sure upfront that you can upgrade, i.e. enough fiber or whatever else is needed is available.
This is rather common, actually, for various reasons. Last time we upgraded one of our transits, it took something like 8 months from contract agreement to actual install, and that was despite all the fiber, CWDM, etc. already being installed. Just because people were busy, etc.
Commit and links are two different things.
Depends on your niche to be honest.
That 30% rule is quite good, but it is also more nuanced than that. If you peak once a month at say 90% for 30-45 minutes, but are otherwise at 50%, is it really worth upgrading then?
It all also depends on your niche.
We start looking at things when we hit 60% and evaluate from there. About 80% is where we want to add some capacity.
This again depends on your scale, niche, etc. Are you looking at average, 95th percentile, 90th percentile, peak, etc.?
Right now, for example, we are at about 76% when looking at max burst, but the 95th percentile is more like 64%. And that's just against our commit; we have bigger links than that. So in our case the actual trigger is when we start paying for burst usage: then we increase the commit. It's not about actually hitting our link max.
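For readers unfamiliar with the "95/5" metric mentioned in this thread: with 95th-percentile billing, utilization is sampled (typically every 5 minutes), the top 5% of samples are discarded, and you are billed on the highest remaining sample, so brief bursts don't count. A minimal sketch, with made-up sample values:

```python
def percentile_95(samples_mbps):
    """Drop the top 5% of samples and return the highest remaining one (95/5 billing)."""
    ordered = sorted(samples_mbps)
    idx = int(len(ordered) * 0.95) - 1  # index of the 95th-percentile sample
    return ordered[max(idx, 0)]

# 95 samples at 500 Mbps plus one brief burst period at 900 Mbps
samples = [500] * 95 + [900] * 5
print(max(samples))            # 900 (max burst / peak)
print(percentile_95(samples))  # 500 (what 95/5 billing would use)
```

This is why "max burst" and "95th" in the post above can differ so much: the percentile metric ignores short spikes entirely.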
One thing is also funny: how out of touch "home gamers" are about how freakin' much bandwidth even 10G is. They just see their ISP offering 10/10G fiber for 150€/mo and think it surely can't cost more than 50€ at a DC.
We always have double the capacity of our 95/5 usage, in case of failure, as said by posters before.
I definitely live in the wrong country!
In Finland that's already on the expensive side. 100€/month, and that's not even in the capital area, well outside of Helsinki: https://lounea.fi/10g
Wow, gigabit 10 years ago? I'm currently on 72 Mbit and 940Mbit was just rolled out in our street last month. I'm signed up to get that (actually 450Mbit) in a couple of weeks, but for the privilege, I have to forego IPv6, because apparently that ISP still hasn't rolled it out despite promising it'll be "very soon" since 2019.
shaking my head
Fastest plan available where I am is
1500/100 €130/month (converted)
But yes, home connections are oversubscribed, data center connections are usually dedicated bandwidth, even if transferred/month is limited.
Very accurate answer. That's why it takes so long to start colo on a full rack. Even worse if it's in Portugal; it's super expensive here. We've been planning it for a year.
Basically this, but we do redundant links with different upstream ISPs. If shit hits the fan, we can switch over without oversaturating the link.
It's best to have two uplinks, for redundancy purposes. We use Observium to monitor the bandwidth; each network device is added to the Observium instance. That way we can see who is using the bandwidth, and also monitor other things that could hinder the network.
Meanwhile in Australia, residential connections over the NBN (National Broadband Network) only reached 100Mbps a few years ago. IIRC they just added 250Mbps and 1000Mbps plans last year, not everyone can get it, and it's expensive.
In the USA I've got 1Gbps for $70/month but the upload speed is only 40Mbps <_<
I can get a 25 Gbit/s residential connection here from init7.net for 70$/month.