Comments
United States only; the platform is in English only.
Are you saying the content/files stored on each server would be 20 TB (terabytes)? Or would each 10 Gbps server need a small but high-speed disk?
For each server, you will probably need about 3-7 TB.
As long as the total storage across all the servers of this "CDN" has a capacity close to 20 TB, it will be enough, although we don't really need 20 TB; that is the maximum we keep in S3, and we rarely need all of it in cache. Since users simply watch the videos live, we could delete files (.ts segments) older than 6 hours.
@milla I'm not really seeing a need for a CDN even.
If users are just streaming, you just need a good NVMe server with a 10G pipe.
when there is a lot of traffic, I simply want to add another server and when there is none, remove it.
If I only have a single 10 Gbps server, I will not be able to scale it.
or am I wrong?
Well, right, a single 10G server wouldn't scale. But I'm saying have multiple (at least 2) so there is some load balancing.
But truly, I think a few good, centrally located servers would fit you well.
Granted there is a LOT of details that are kinda missing here as I understand you don't want to give out all your trade secrets on a public forum.
I would say at this point I would look more into a dedicated option or even colo.
There are questions like what your CPU loads are; memory usage comes into play as well.
Psychz has 10 Gbps fully Unmetered dedicated servers in various locations starting at $199, I would contact them. Their bandwidth is some of the cheapest:
https://www.psychz.net/dashboard/client/web/order/dedicated-server?filterId=wKc8uqE7GpzNC6dyWqF8/yzr6kJW6AmNZ0+4GsSEvMw=
UKServers/FractionServers has something similar for 225 GBP per month:
https://www.fractionservers.com/specials/
@Clouvider might have something too
I would contact these providers in private, and they can probably give you a quote
Our CPU and memory load is almost always near zero; sometimes it goes up to 2% or 5%. The servers use dual E5-2630v3 CPUs (shared, since we use a lot of ZetServers).
I am looking for an easy and scalable option. Currently we use the above-mentioned provider with Kubernetes for "automatic" scaling; in reality it isn't automatic, as the provider does not offer an API. We have about 50 servers, sometimes we add 50 more, and then we remove them manually.
We use Traefik to balance our traffic, but this solution fails quite a lot, because since the servers are shared, their bandwidth is sometimes not enough (less than 50 Mbit/s).
We are looking for something better than that; I don't care whether they are dedicated servers or 100 small VPSes like the ones from ZetServers.
Our infrastructure is basically Node.js + GraphQL, a simple Next.js frontend, and a PostgreSQL database managed by Scaleway, and we average about 150 million requests per week.
I am a programmer and don't really know much about servers, so I would also be grateful for any advice or tips.
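For a rough sense of scale, 150 million requests per week is a fairly modest average rate (peaks during a live event will of course be much higher):

```javascript
// Back-of-the-envelope: 150 million requests/week as an average rate.
const perWeek = 150_000_000;
const perDay = perWeek / 7;
const perSecond = perWeek / (7 * 24 * 60 * 60);

console.log(Math.round(perDay));    // ~21.4 million requests/day
console.log(Math.round(perSecond)); // ~248 requests/second on average
```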
Well, their website shows all their current locations. Anything across the pond is not ideal for serving content here in the States; latency alone would do a number on you.
I would say, to start off, you'd need to get a good total of all the resources being used and then go from there.
We can help you at least get an idea of where to go. If you want, reach out on our Discord.
Whether you get service from us or someone else, we can at least help you get a better idea of what you truly need.
Of course, I will go to the Discord and ask in general to see if someone can give me some guidance or point me to suppliers, prices, or something like that.
@miia
Why do you need 10Gbps servers?
I can see you are streaming m3u or m3u8 if you handle HLS/.ts
You can use existing solutions like Xtream UI 22F (free) or FastoCloud (paid license) and combine that with Xtream UI load-balancer setups to spread clients across any number of cheap 1 Gbps servers based on configurable criteria; you can also spread the load based on GeoIP databases.
That program accepts OBS as source material to be streamed across different load balancers, so it works for twitch too. It's generally used for illegal IPTV or VOD, but it has legitimate use cases too.
Buying ten 1 Gbps servers is much cheaper than one 10 Gbps VPS that allows such heavy usage.
If you want basic info, and suggestions, PM me, though I do not sell anything.
@miia
You need to use a cloud provider to have Kubernetes and scaling on demand. It looks like you are using AWS or GCP; these are top tier, so very expensive. You can try Digital Ocean, Vultr, etc. for cheaper pricing.
However, it will not be as cheap as purchasing a dedicated server from @1gservers. The performance/price ratio of the dedicated approach is much better too; the downside is no scaling at all.
Back to your current solution: you mention Traefik, so I assume you are using a load balancer from the cloud provider. It will distribute the traffic to multiple servers, but everything is also funneled through that shared load balancer. In the ideal scenario, your video goes from one of the servers to the user directly.
You mention that you are using Node.js, so I think you are flexible enough for a hybrid solution.
You can keep a main infrastructure at the cloud provider, but it should be minimal to save cost. Let's say 3 pods in a Kubernetes cluster: you host your Node.js API servers there so you can scale easily, and these API servers sit behind a load balancer.
A user makes a request to the API server, and the API server returns a URL to a separate server that distributes the video. This should be a direct URL without any load balancer in between; the idea is that your API server routes traffic to a pool of video servers.
You purchase a dedicated server from a provider on this forum and set it up as a video server. In a low-traffic scenario, your API routes all traffic to this dedicated server; it is a monthly subscription, so the cost is fixed.
When you see high traffic (based on user count), you create a new server at the cloud provider, and the API does weighted round robin between the dedicated server and the cloud VPS. You can scale up and down based on traffic.
In conclusion, you need an API server that returns the URL of a video server, and that API will also scale your system however you want. If this works, you can purchase multiple cheap 1 Gbps servers instead of an unmetered 10 Gbps one.
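The weighted-round-robin idea above can be sketched in a few lines of Node.js. The server list, weights, and hostnames here are made up for illustration; the API server would return the picked URL to the client, which then fetches the stream directly from that host with no load balancer in the video path:

```javascript
// Hypothetical pool: a dedicated box carries most traffic (weight 3),
// a cloud VPS added during spikes carries the rest (weight 1).
const servers = [
  { url: "https://video1.example.com", weight: 3 }, // dedicated, fixed cost
  { url: "https://video2.example.com", weight: 1 }, // cloud VPS, on demand
];

let counter = 0;

// Weighted round robin: each server is picked in proportion to its weight.
function pickServer() {
  const totalWeight = servers.reduce((sum, s) => sum + s.weight, 0);
  let slot = counter++ % totalWeight;
  for (const s of servers) {
    if (slot < s.weight) return s;
    slot -= s.weight;
  }
}
```

Scaling down is then just removing an entry from `servers`; new viewers stop being routed there, and the VPS can be destroyed once its current viewers drain.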
Your budget is enough for a 10 Gbps dedicated server.
So... based on my own personal interests in video serving and reading over this thread...
It sounds to me that you want a few good, reliable, well-connected dedicated servers. Use anycast DNS to do your load balancing.
How I would do it, based solely on the limited data you've provided (such as the vast majority of your users being in NA) plus what I've done with professional and personal projects in the past: find a provider with excellent connectivity on either coast, as well as a good central-continent location with reasonably good connectivity to the coasts.
That provider would ideally also have a massive internal network available with the ability to shuffle your data files between machines without using the public interface.
I used to have a solid recommendation for that part, but they have plummeted in quality in the past year or two and thus will remain nameless. I'm sure recommendations will come in to this thread, however.
I would ingest the source via the central server and disseminate it from there using a lightweight method over the private network (even an NFS mount isn't all that heavy, honestly, and it's fairly trivial to set up). Of course, if that central server is down, the coastal servers would want to be able to fall back to doing their own heavy lifting at the risk of somehow falling out of sync.
Once the infrastructure is in place, I'd use someone like DNSMadeEasy or Route53 or something that has globally-fast resolution times and easy anycast management.
Assuming that your audience falls in line with population centers, I'd put the coastal visitors on their local servers to start, the midwest on the central server, and then international visitors on the central server. Mix and adjust, depending upon what actual traffic looks like. Use the excess central capacity to cover spikes from e.g. SF or NYC. 20-30 ms latency is far better than hiccups and stutters.
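The routing policy described above boils down to a region-to-server map with the central server as the fallback for the midwest, international visitors, and overflow. A sketch; hostnames and region labels are hypothetical:

```javascript
// Map a visitor's coarse region (e.g. from a GeoIP lookup) to a serving
// location. Anything unmapped falls back to the central server, which
// also absorbs international traffic and coastal spikes.
const ROUTES = {
  "us-east": "nyc.example.com",
  "us-west": "sf.example.com",
  "us-central": "chi.example.com",
};

function serverFor(region) {
  return ROUTES[region] || ROUTES["us-central"]; // midwest + intl -> central
}
```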
As you know, there is no way to do this cheaply and "correctly". This is just the balance that I would go for myself. A full on CDN for something like this which is only a couple of hosts seems like extra steps that can fail and/or impact performance. And once it is time for that CDN, the prices that Cloudflare and Akamai ask for start looking really attractive as you'll need full-time staff to manage things otherwise.
And I too have left out some dots to connect, both to protect past and future projects as well as to try to keep from leading you astray from not knowing all your details.
But hopefully there are a few things in how I've done things that might help you out. I have found much more luck using APIs to balance anycast DNS based on realtime information (user geolocation + bandwidth consumption) than trying to do a full-on DIY CDN. I have come to the conclusion that a CDN is just hard to justify until you're working out of something like 5+ locations, especially if you are getting unmetered bandwidth that exceeds your average need by 5-10 times per location. And then it's just a shift in what's easier to manage, so if you have good luck using APIs to shift your anycast traffic around, it may be a much larger distribution before you get to the point where you go "Akamai sounds appealing".
Hello! I can offer VPSes with 10 Gbit downlink. We can handle your traffic load as well, with affordable pricing and a strategic plan to ensure all your VPSes maintain high uptime, plus 24x7 support in London & the USA to ensure you always have someone to reach out to.
I can process over 20 Gbps on a high end KVM server.
High end specs, but possible.
The traffic is going over 100 Gbps fibers to other data centers, not public Internet.
These fibers are only shared among users of the same host node.
Software is custom made with Data Plane Development Kit (DPDK) where packets do not go through kernel.
The main unlocker for high performance in KVM is really the SR-IOV.
Without SR-IOV, the host node side kernel is going to be bottleneck.
Relatedly, I always wonder about all these lowend providers who seem to have purposely chosen hardware without SR-IOV support just to save a literal few bucks. With SR-IOV, your CPU utilization goes down while network performance goes up, which should in theory allow you to eke a couple more customers out of each node, which would pay for the cost difference.
But yeah, in the end, one isn't likely to notice the difference between physical and virtual resources if the virtual resources are well-specified, -implemented, and -managed. But the reality is that, at the lower end of the pool, you'll find better dedi offerings.
Tempest Hosting, if they've got stock. They have lots of locations, so if it's going to be for a CDN, then you're probably looking at more than one location.
It makes no sense; once you use that many resources, the provider will suggest you upgrade to a dedicated server instead, on which you can create your own VPSes.
Other customers on the same node don't like a 20Gbps neighbor.
What is your budget for a single VPS with 2 PB of traffic?
I hope it is not under $1000
If it's a serious project, don't trust anything to a VPS; start with dedicated servers. You are definitely talking about figures above $1,000-$2,000 per month for a single server.
For what you've described, I doubt there is anything cheap, and no one is offering it; and if anyone ever offers cheap prices, there is some deception involved, ahah.
It sounds like you just need a couple 10gbps dedicated unmetered servers with NVMe SSD storage. We help people with that all day long, and can get that done within your budget. Please PM me if you’d like to explore a no obligation quote. Please note however dedicated servers don’t do auto scaling so you couldn’t just add a server for a few hours and then cancel. The services are provided on a monthly basis.
Hey there bud! I know there's a ton of information here and I don't want to pile on but this might be useful.
A CDN is a useful/helpful avenue to service content to your end users/clients in a distributed manner. However, if it's live-streaming video content (which is my interpretation of the problem here), a CDN would be useful but at the current level you're hitting you can simplify this a bit further by just getting a good group of centralized servers. The important metric here is actually being able to deliver the content to the end user without breaking data continuity, which is a problem a CDN could help with, but realistically some of the other solutions proposed here can satisfy this requirement easier and better.
You mention HLS (live streaming) - I have had a lot of experience in this world, as I've built auto scaling WebRTC (real-time, < 0.3sec latency) for as high as 15,000 concurrent users consuming petabytes of bandwidth.
Depending on your live-streaming schedule, I would highly suggest using a provider with hourly VM billing (such as Vultr, DO, Linode, or Hetzner), as they all have consumable REST APIs that allow you to scale up and down as demand requires, without having a lot of compute/resources sitting around idle 80% of the time.
I would often scale up to 50+ VMs (200 vCores, 400GB RAM, 50Gbit Uplink) for short bursts when demand was required for 4-8 hours at a time, and it cost $$ vs paying $$,$$$ for having the compute on a month to month basis.
You need to do a bit of upfront development to build the autoscaler and interacting with your chosen cloud provider(s), however it's pretty minimal.
I would suggest using something like nixstats to monitor individual VM resource consumption, then have a central master program that gets all the stats from the nixstats API, aggregates the average resources being utilised, and then scales up/down using predetermined min/max values.
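A sketch of that central scaler's decision logic, assuming the monitoring API hands back a load percentage per VM; the thresholds and VM limits are invented for illustration, and the actual scale-up/down calls would go to your cloud provider's REST API:

```javascript
// Threshold-based autoscaling decision: aggregate per-VM load, then
// compare the fleet average against predetermined min/max bounds.
const SCALE_UP_AT = 75;   // avg % utilisation that triggers a new VM
const SCALE_DOWN_AT = 25; // avg % below which a VM is released
const MIN_VMS = 2;        // never shrink below this
const MAX_VMS = 50;       // hard cost ceiling

function decide(vmLoads) {
  const avg = vmLoads.reduce((a, b) => a + b, 0) / vmLoads.length;
  if (avg > SCALE_UP_AT && vmLoads.length < MAX_VMS) return "scale-up";
  if (avg < SCALE_DOWN_AT && vmLoads.length > MIN_VMS) return "scale-down";
  return "hold";
}
```

In practice you would also add hysteresis (e.g. require several consecutive over-threshold polls) so a single traffic blip doesn't churn VMs.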
The above really only makes sense if you're not live streaming 24/7. If you are live streaming 24/7, then a CDN layer does make sense. You state your budget is $1,500-2,500; you should easily be able to get quite a few large-specification VMs with this, and then configure a DNS round robin / GeoDNS type setup to distribute the load, so you don't need such a high per-node bandwidth since it will be spread out.
I'm happy to elaborate on any of the above, if you would like to go down any of those mentioned paths.
He mentioned SR-IOV, so presumably he does IOMMU passthrough of those VF adapters to the VM; these are probably not the average VPS that the typical provider sells.
I can't wrap my head around how a CDN helps with live streaming. You are constantly and continually generating new segments in the origin, so requests will bypass CDN cache anyway unless you have a way to push to your CDN node in Sydney from a Miami ingest server in a very very short time, especially if you are offering low latency mode. The point is to just absorb user traffic in the first mile rather than let it transit through the public Internet? Because, otherwise, there is going to be a lot of contention over a resource that CDN has barely seen and they are all going to origin. I just can't get how it'd work unless the user pauses the live stream for a couple of seconds so that CDN fills the cache from requests from different users that preceded our delayed guy.
Host node spec:
See FABRIC Site Hardware SlowNet worker.
If I reserve 12 dedicated cores, that is 9% of the available cores, so it isn't unreasonable to use 20 Gbps of network bandwidth, which is 10% of the total bandwidth across the three adapters.
The host node uses OpenStack hypervisor that supports SR-IOV.
Traffic between VMs is forwarded via Cisco 5700 switch under a hairpin configuration.
I went to sleep, sorry; now I'll read and answer.