
VPS 10 Gbps (unlimited bandwidth) *per node*.


Comments

  • @lewellyn said:
    So... based on my own personal interests in video serving and reading over this thread...

    It sounds to me that you want a few good, reliable, well-connected dedicated servers. Use anycast DNS to do your load balancing.

    How I would do it, based solely upon the limited data you've provided (such as the vast majority of your users being NA) plus what I've done with professional and personal projects in the past, is to find a provider with excellent connectivity on either coast as well as a good central-continent location with reasonably good connectivity to the coasts.

    That provider would ideally also have a massive internal network available with the ability to shuffle your data files between machines without using the public interface.

    I used to have a solid recommendation for that part, but they have plummeted in quality in the past year or two and thus will remain nameless. I'm sure recommendations will come in to this thread, however.

    I would ingest the source via the central server and disseminate it from there using a lightweight method over the private network (even an NFS mount isn't all that heavy, honestly, and it's fairly trivial to set up). Of course, if that central server is down, the coastal servers would want to be able to fall back to doing their own heavy lifting at the risk of somehow falling out of sync.

    Once the infrastructure is in place, I'd use someone like DNSMadeEasy or Route53 or something that has globally-fast resolution times and easy anycast management.

    Assuming that your audience falls in line with population centers, I'd put the coastal visitors on their local servers to start, the midwest on the central server, and then international visitors on the central server. Mix and adjust, depending upon what actual traffic looks like. Use the excess central capacity to cover spikes from e.g. SF or NYC. 20-30 ms latency is far better than hiccups and stutters.

    As you know, there is no way to do this cheaply and "correctly". This is just the balance that I would go for myself. A full-on CDN for something like this, which is only a couple of hosts, seems like extra steps that can fail and/or impact performance. And once it is time for that CDN, the prices that Cloudflare and Akamai ask start looking really attractive, as you'll need full-time staff to manage things otherwise.

    And I too have left out some dots to connect, both to protect past and future projects as well as to try to keep from leading you astray from not knowing all your details.

    But hopefully there are a few things in how I've done things that might help you out. I have had much more luck using APIs to balance anycast DNS based on realtime information (user geolocation + bandwidth consumption) than trying to do a full-on DIY CDN. I have come to the conclusion that a CDN is just hard to justify until you're working out of 5+ locations, especially if you are getting unmetered bandwidth that is 5-10 times your average need per location. At that point it's just a shift in what's easier to manage, so if you have good luck using APIs to shift your anycast traffic around, it may be a much larger deployment before you get to the point where you go "Akamai sounds appealing". (A rough sketch of that API-driven DNS balancing follows this comment.)

    I have tried anycast from BuyVM, but it was a bad experience; maybe I configured it wrong.
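
    The API-driven DNS balancing @lewellyn describes above might look roughly like the sketch below. This is a minimal illustration, assuming Route 53 weighted records managed via boto3; the zone ID, hostname, IPs, and per-node bandwidth numbers are made-up placeholders, and a real deployment would also want geolocation records and health checks rather than weights alone.

    ```python
    # Hedged sketch: steer viewers toward nodes with spare capacity by rewriting
    # weighted DNS records through the provider's API (Route 53 shown here;
    # DNSMadeEasy exposes a comparable REST API). All identifiers are placeholders.
    import boto3

    ZONE_ID = "ZEXAMPLE123456"            # hypothetical hosted zone ID
    RECORD_NAME = "stream.example.com."   # hypothetical service hostname
    CAPACITY_GBIT = 10                    # unmetered 10G per node, per this thread
    SERVERS = {                           # per-node public IP and current egress (Gbit/s)
        "us-east":    {"ip": "203.0.113.10", "egress_gbit": 4.2},
        "us-central": {"ip": "203.0.113.20", "egress_gbit": 1.1},
        "us-west":    {"ip": "203.0.113.30", "egress_gbit": 6.8},
    }


    def weights_from_headroom(servers: dict) -> dict:
        """Give each node a weight proportional to its spare bandwidth,
        so resolvers send fewer new viewers to already-loaded nodes."""
        weights = {}
        for name, node in servers.items():
            headroom = max(CAPACITY_GBIT - node["egress_gbit"], 0.0)
            # Route 53 weights range from 0 to 255; keep at least 1 so no node
            # is dropped entirely by this naive calculation.
            weights[name] = max(int(headroom / CAPACITY_GBIT * 255), 1)
        return weights


    def push_weights(weights: dict) -> None:
        """UPSERT one weighted A record per node; traffic then splits roughly
        in proportion to the weights as resolvers refresh."""
        route53 = boto3.client("route53")
        changes = [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "SetIdentifier": name,
                    "Weight": weight,
                    "TTL": 60,  # short TTL so shifts take effect within a minute or so
                    "ResourceRecords": [{"Value": SERVERS[name]["ip"]}],
                },
            }
            for name, weight in weights.items()
        ]
        route53.change_resource_record_sets(
            HostedZoneId=ZONE_ID, ChangeBatch={"Changes": changes}
        )


    if __name__ == "__main__":
        # In practice this would run on a schedule, fed by real per-node
        # bandwidth figures pulled from monitoring.
        push_weights(weights_from_headroom(SERVERS))
    ```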

  • @dbContext said:
    You mention HLS (live streaming) - I have had a lot of experience in this world, as I've built auto-scaling WebRTC (real-time, < 0.3 sec latency) for as many as 15,000 concurrent users consuming petabytes of bandwidth.

    Depending on your live streaming schedule, I would highly suggest using providers with hourly VM billing (such as Vultr, DO, Linode, or Hetzner), as they all have consumable REST APIs which allow you to scale up and down as demand requires, without having a lot of compute/resources sitting around idling 80% of the time.

    I would often scale up to 50+ VMs (200 vCores, 400GB RAM, 50Gbit uplink) for short bursts when demand required it, 4-8 hours at a time, and it cost $$ vs paying $$,$$$ for having the compute on a month-to-month basis.

    You need to do a bit of upfront development to build the autoscaler and interact with your chosen cloud provider(s); however, it's pretty minimal.

    I would suggest using something like nixstats to monitor individual VM resource consumption, then have a central master program that gets all the stats from the nixstats API, aggregates the average resources being utilised, and scales up/down using predetermined min/max values (a rough sketch of this loop follows this comment).

    The above really only makes sense if you're not live streaming 24/7. If you are live streaming 24/7, then a CDN layer does make sense. You state your budget is $1,500-2,500; you should be able to easily get quite a few large-specification VMs with this, then configure a DNS round robin / GeoDNS type deal to distribute the load, so you don't need such a high per-node bandwidth amount as it will be distributed.

    I'm happy to elaborate on any of the above, if you would like to go down any of those mentioned paths.

    This is the most likely option.
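
    The scale-up/scale-down loop @dbContext describes might look roughly like this. The thresholds, VM counts, and the stubbed monitoring/provider calls are illustrative assumptions, not details from the thread; in a real setup the stubs would be replaced with calls to the nixstats API and your chosen provider's instance-management REST API.

    ```python
    # Minimal autoscaler sketch: poll per-VM load, average it, and add or remove
    # hourly-billed VMs between a fixed floor and ceiling. Monitoring and provider
    # calls are stubs, since the exact endpoints aren't given in the thread.
    import time

    MIN_VMS = 2           # keep a small baseline online at all times (assumed)
    MAX_VMS = 50          # hard ceiling, echoing the "50+ VMs" bursts mentioned above
    SCALE_UP_AT = 70.0    # average CPU % that triggers adding a VM (assumed)
    SCALE_DOWN_AT = 30.0  # average CPU % that triggers removing one (assumed)
    POLL_SECONDS = 60


    def fetch_vm_cpu_percentages() -> list[float]:
        """Stub: return current CPU usage for every streaming VM.
        In practice, query your monitoring API (nixstats or similar) here."""
        return [42.0, 55.0]


    def add_vm() -> None:
        """Stub: create one more VM via the provider's REST API
        (Vultr / DigitalOcean / Linode / Hetzner all offer one)."""
        print("scaling up: +1 VM")


    def remove_vm() -> None:
        """Stub: drain and destroy the least-loaded VM via the provider's API."""
        print("scaling down: -1 VM")


    def autoscale_forever() -> None:
        while True:
            cpus = fetch_vm_cpu_percentages()
            if cpus:
                average = sum(cpus) / len(cpus)
                if average > SCALE_UP_AT and len(cpus) < MAX_VMS:
                    add_vm()
                elif average < SCALE_DOWN_AT and len(cpus) > MIN_VMS:
                    remove_vm()
            time.sleep(POLL_SECONDS)


    if __name__ == "__main__":
        autoscale_forever()
    ```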

  • @0xbkt said:
    I can't wrap my head around how a CDN helps with live streaming. You are constantly and continually generating new segments at the origin, so requests will bypass the CDN cache anyway, unless you have a way to push to your CDN node in Sydney from a Miami ingest server in a very, very short time, especially if you are offering a low-latency mode. Is the point just to absorb user traffic in the first mile rather than let it transit the public Internet? Because otherwise there is going to be a lot of contention over a resource the CDN has barely seen, and everyone is going to the origin. I just can't see how it would work unless the user pauses the live stream for a couple of seconds so that the CDN fills its cache from the requests of other users who preceded our delayed guy.

    The CDN is mostly to give access to the files; the latency of the live stream is not important for now. Right now we are just looking for a more accessible option than Google Cloud, AWS, or similar providers.

  • @miia said:

    I have tried anycast from BuyVM, but it was a bad experience; maybe I configured it wrong.

    You want to manage the anycast DNS yourself, using a provider that is known by everyone as doing it well without requiring their own services (which kinda rules out CloudFlare since their anycast has a lot of caveats if that's all you're using from them, for instance).

    One place I worked, we would have users on a new node within 10 seconds of bringing it online in their region, without disrupting anyone or affecting other regions. That is basically what most people are trying to achieve with a CDN, minus the many layers that can get gnarly.

    Note that I'm specifically referring to anycast DNS to unicast hosts. Point the users where you want them and you're a good way to a solid low-latency solution.

  • kdh Member
    edited December 2022

    AFAIK BuyVM by @Francisco provides a 10Gbps uplink for premier accounts - people who have used his service for more than 6 months.
    But I'm pretty sure he won't be happy if you use petabytes of transfer every month.

  • Francisco Top Host, Host Rep, Veteran

    @kdh said: AFAIK BuyVM by @Francisco provides 10Gbps uplink for premier accounts - people who spent $50+ on their services.

    Where is $50 coming from?

    Francisco

  • @Francisco said:

    Where is $50 coming from?

    Oh, it actually requires 6 months of active service.
    Sorry, kinda messed up my memory :smile:

  • Francisco Top Host, Host Rep, Veteran

    @kdh said: Oh it actually requires 6 months of active service

    Sorry, kinda messed up my memory

    True, but it can be any service.

    There are $5/year shared hosting customers with premier :)

    Francisco

  • You can check out Tempest.net. Their dedicated servers are the cheapest 10G dedis I know of.

  • @w_ho_ami said:
    You can check out Tempest.net. Their dedicated servers are the cheapest 10G dedis I know of.

    Nah, I know of even cheaper 10Gbit dedis.
    OVH actually has some very cheap ones with 10Gbit.

  • @hollowvi1 said:

    Nah, I know of even cheaper 10Gbit dedis.
    OVH actually has some very cheap ones with 10Gbit.

    On the other hand, I'd be highly hesitant to run a project like the one described on OVH's network. I've legitimately had to move off of OVH a few times in the past due to impossible-to-diagnose network issues reported by users.

  • @lewellyn said:

    On the other hand, I'd be highly hesitant to run a project like the one described on OVH's network. I've legitimately had to move off of OVH a few times in the past due to impossible-to-diagnose network issues reported by users.

    I've never had any of these issues.
    But then again, I do pay for upgraded business support lol

  • @hollowvi1 said:

    I've never had any of these issues.
    But then again, I do pay for upgraded business support lol

    So did we. And they about crapped their pants when we cancelled just over 100 boxes at once, one time.

  • @lewellyn said:

    So did we. And they about crapped their pants when we cancelled just over 100 boxes at once, one time.

    Oh well, must be my luck then.

  • @hollowvi1 said:

    Oh well, must be my luck then.

    Yeah. But we were doing realtime-type things, so random 200-1000 ms latency spikes, for minutes on end, to various parts of the USA just were not something we could have.

    We had a couple dozen open issues tracking individual cases when we cancelled that swath of boxes. Their support would respond every week or so with useful feedback like "have you had the user run Windows Update?", which was hilarious when one of those cases involved a fixed-location iOS device. That was probably the instance that made us decide to cut our losses: it was obvious we weren't getting any support people who were empowered to actually find the cause of the ongoing, heavy jitter.

  • @lewellyn said:

    Yeah. But we were doing realtime-type things, so random 200-1000 ms latency spikes, for minutes on end, to various parts of the USA just were not something we could have.

    We had a couple dozen open issues tracking individual cases when we cancelled that swath of boxes. Their support would respond every week or so with useful feedback like "have you had the user run Windows Update?", which was hilarious when one of those cases involved a fixed-location iOS device. That was probably the instance that made us decide to cut our losses: it was obvious we weren't getting any support people who were empowered to actually find the cause of the ongoing, heavy jitter.

    I mean, I remember when their USA datacenter was brand new: I had deployed a dedi, and the switch it was connected to wasn't even set up for production use, so I had a dedi that couldn't connect to the internet for like two days lol

  • Solved with less than $300/mo, thanks in part to Cloudflare and its R2 storage, plus various other methods (a rough sketch of the R2 upload path follows).

    Thanks for your help.
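
    For context on the R2 piece of that solution: R2 exposes an S3-compatible API, so pushing video files into a bucket that Cloudflare then serves can be as simple as the sketch below. The account ID, credentials, bucket, key, and file name are placeholders, assumed for illustration; the thread doesn't describe the actual pipeline.

    ```python
    # Hedged sketch: upload a video file/segment to Cloudflare R2 over its
    # S3-compatible API using boto3. All identifiers below are placeholders.
    import boto3

    ACCOUNT_ID = "your-cloudflare-account-id"   # placeholder

    r2 = boto3.client(
        "s3",
        endpoint_url=f"https://{ACCOUNT_ID}.r2.cloudflarestorage.com",
        aws_access_key_id="R2_ACCESS_KEY_ID",          # placeholder R2 API credentials
        aws_secret_access_key="R2_SECRET_ACCESS_KEY",  # placeholder
    )

    # Upload one HLS segment (or any video file). R2 charges for storage and
    # operations but not for egress, which is a big part of why it undercuts
    # S3-style pricing for a bandwidth-heavy video workload like this one.
    r2.upload_file(
        Filename="segment_00042.ts",                   # placeholder local file
        Bucket="vod-bucket",                           # placeholder bucket name
        Key="live/stream1/segment_00042.ts",
        ExtraArgs={"ContentType": "video/mp2t"},
    )
    ```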
