CDN is expensive - Page 2

Comments

  • AXYZE Member

    @OhJohn said:

    @AXYZE said: But your CDN for $200 won't be as reliable as CloudFront is. With CloudFront your data is served everywhere, fast, always online, no packet loss, no fiber cuts... Your service will be down only if half of the internet is down.

    Hm, my own self-run CDN (which is more expensive than $200) has much better uptime than e.g. AWS or Cloudflare.
    I don't believe in centralization, which is prone to SPOFs caused by software/configuration (and most of the time BGP-related), as shown again and again by CF and AWS.

    OP typed $200, I said about $200. Like you said, your solution costs more :)

    It's a scale thing - if you don't need big scale then you're good with providers doing the work for you (VPS, shared hosting, CDN).

    If you are medium-sized you buy dedis and custom solutions from providers.

    If you are big you most likely rent a whole rack cabinet.

    If you are extra big you can build your own DC or make custom agreements with the big boys so your pricing is nice and they pay you if they fail to provide the service :)

  • OhJohn Member
    edited June 2022

    @stevewatson301

    What you should always do: "intelligent" health checks. That is: don't just check for e.g. a 200 response or a ping, but have scripts that report the health status and have those checked. E.g. my servers indicate an upcoming maintenance to the DNS health checks while they are still able to serve requests, so the DNS has time to remove the server (it needs 60 sec for that) while the server is still capable of serving requests.

    This won't help with e.g. a server hardware failure or a fiber cut, but those are actually rare in comparison to e.g. human error or stupidity. E.g. all commands like reboot/shutdown/service stop etc. are aliased to a script that first indicates an upcoming maintenance to DNS, then sleeps for five minutes before doing anything that was requested by the command (e.g. "reboot"). So the node is declared "under maintenance" for health checks and removed from DNS while still serving requests for 5 more minutes.
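    The aliasing trick described above can be sketched as a small wrapper. This is a minimal illustration, assuming a hypothetical flag file that the DNS health checks poll; the path and the drain window are placeholders, not details from the post:

```shell
#!/bin/sh
# Sketch of the "maintenance-aware reboot" idea: drain DNS first,
# keep serving, then run the real command.
# ASSUMPTIONS: the health check watches this flag file; the path and
# the 5-minute drain window are illustrative placeholders.

HEALTH_FLAG="${HEALTH_FLAG:-/var/run/maintenance.flag}"
DRAIN_SECONDS="${DRAIN_SECONDS:-300}"   # ~5 minutes, as in the post

graceful() {
    # 1) flip the health check to "maintenance" so DNS failover
    #    (which needs ~60 s) drops this node from rotation
    touch "$HEALTH_FLAG"
    # 2) keep serving live traffic while resolvers catch up
    sleep "$DRAIN_SECONDS"
    # 3) only now run the command that was actually requested
    "$@"
}

# the post aliases the real commands to the wrapper, e.g.:
# alias reboot='graceful /sbin/reboot'
```

    The point of the wrapper is ordering: the node is taken out of DNS before, not after, it stops serving.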

  • OhJohn Member
    edited June 2022

    @AXYZE

    That's true, but by running my own CDN I gain an advantage over my smaller competitors that run their services out of just one DC and go down if their provider goes down, and also over my competitors (small or big) that use services like e.g. Cloudflare and go down if Cloudflare goes down. The problem with CF and AWS is that they are so big that they centralize again through automation, as otherwise they wouldn't be able to run their networks. So with your own CDN you get latency optimization plus resilience and redundancy all woven in.

    And also, if you e.g. use BunnyCDN you cannot easily do canary releases the way I can by just testing new backends with e.g. one small country only (you could still do this if you control DNS, e.g. through geodns, and route like 99% of traffic to Bunny and 1% elsewhere for a canary release test).

    Thanked by 1letlover
  • WebProject Host Rep, Veteran

    Another option that is cheaper compared to Amazon is keycdn.com

  • letlover Member
    edited June 2022

    @chrisp said:

    @FatGrizzly said:
    I don't get the point of this thread, or of one of your other recent threads.

    Are you trying to inform us that CloudFront is expensive, or trying to get alternatives?
    And as for your other thread, it would cost way more than Amazon CloudFront to set up an "in house CDN"

    This. Totally unclear what the expectation is. @letlover, why don't you comment on BunnyCDN or give us some insights how you would build your own CDN for cheaper?

    I am thinking about using several cheap bare metal dedis or low end VPSes with unlimited bandwidth or large BW quotas in different locations for the CDN network. I may set up nginx+varnish. I have hands-on experience with this setup, and am using it for one of my sites. Right now, my question is: if I have several of these nginx+varnish boxes or VPSes, how can I set up my single domain so that people in Asia access my Singapore box, and US east or west audiences access their own local ones? My server will be in the EU, because it is the cheapest place to have a powerful bare metal dedi.
    I don't know how to set up this geodns round robin with a single domain. This is my current situation. If I knew how to make this happen, I think I could set up my in-house CDN much cheaper than AWS CF. I have already identified wsi, dacentec and reliablesite as my US dedi providers, terrahost for VPS, php-friends, hetzner and netcup as my EU providers; for Asia, maybe leaseweb, contabo, lightsail or terrahost. I think I can get it under $100 for my in-house small-scale CDN. The US and EU can be unmetered BW; unmetered BW in Asia seems not possible.

  • bruh21 Member, Host Rep

    @letlover said:

    @chrisp said:

    @FatGrizzly said:
    I don't get the point of this thread, or of one of your other recent threads.

    Are you trying to inform us that CloudFront is expensive, or trying to get alternatives?
    And as for your other thread, it would cost way more than Amazon CloudFront to set up an "in house CDN"

    This. Totally unclear what the expectation is. @letlover, why don't you comment on BunnyCDN or give us some insights how you would build your own CDN for cheaper?

    I am thinking about using several cheap bare metal dedis or low end VPSes with unlimited bandwidth or large BW quotas in different locations for the CDN network. I may set up nginx+varnish. I have hands-on experience with this setup, and am using it for one of my sites. Right now, my question is: if I have several of these nginx+varnish boxes or VPSes, how can I set up my single domain so that people in Asia access my Singapore box, and US east or west audiences access their own local ones? My server will be in the EU, because it is the cheapest place to have a powerful bare metal dedi.
    I don't know how to set up this geodns round robin with a single domain. This is my current situation. If I knew how to make this happen, I think I could set up my in-house CDN much cheaper than AWS CF. I have already identified wsi, dacentec and reliablesite as my US dedi providers, terrahost for VPS, php-friends, hetzner and netcup as my EU providers; for Asia, maybe leaseweb, contabo, lightsail or terrahost. I think I can get it under $100 for my in-house small-scale CDN. The US and EU can be unmetered BW; unmetered BW in Asia seems not possible.

    I think for Asia your best bet would be either Contabo or cheapwindowsvps, but neither has great local latencies.

    Thanked by 1letlover
  • Arkas Moderator

    I'd stay away from cheapwindowsvps. Contabo is a good choice for their locations, BW and prices.

    Thanked by 1letlover
  • Otus9051 Member
    edited June 2022

    @letlover said:

    @Otus9051 said:

    @dosai said:
    12 threads in 2 weeks.

    understandable as he is a letlover

    Is this a sin?

    Not at all, no.
    But this many threads in that many weeks can be considered spammy. Be careful, kid.

  • @letlover said:
    I just calculated, if I use AWS CloudFront:
    2 TB out
    2 TB in

    200 dollars per month

    The first 1 TB is free anyway, so you are only paying for 1 TB.
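    As a sanity check on that free-tier math (this assumes CloudFront's list price of roughly $0.085/GB for the first 10 TB tier in US regions and free inbound transfer; verify against current AWS pricing):

```shell
# rough CloudFront estimate for the OP's 2 TB out / 2 TB in scenario
total_out_gb=2048     # 2 TB out
free_tier_gb=1024     # first 1 TB/month is free
billable=$((total_out_gb - free_tier_gb))   # inbound is free anyway
awk -v gb="$billable" 'BEGIN { printf "~$%.2f/month\n", gb * 0.085 }'
```

    i.e. around $87 at that rate, before request charges - noticeably under the $200 the OP calculated.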

  • @OhJohn said:

    @LiliLabs said: Curious, how are you running your CDN?

    LBs with ACLs and SSL termination, and reverse proxies/caches in different regions of the world with different providers (looking for different transit/networks as well), on geodns (so no, I'm not using anycast <- too expensive in traffic) with minutely health checks and DNS failover (and DNS round robin thrown in as well). Internal lines with per-second health checks and automatic failover, with diverse backends on different providers. No central automation; Cassandra is used as a DB for e.g. the ACLs. Automatic maintenance of a node, with prior automatic removal of the node from the mesh.

    So the (regional) downtime mainly happens with an edge node hardware failure/fiber cut etc., resulting in at most 60 seconds of regional downtime with good DNS resolvers - and it is felt even less by consumers, as round-robin is in there as well and (at least) browsers will retry a different node within that max 60 sec timeframe if they can't reach the failed IP.

    Overall, e.g. HetrixTools is showing me uptimes of 100%, >99.9998% and >99.998% for my CDN globally over the last two years (the CDN is used for three different services), and the 99.998% was the result of stupidly blocking the Hetrix bot on that service for some minutes. So in reality I have two services with 100% uptime for two years and one with 2 minutes of downtime in two years.

    What is LB?

  • @letlover said:
    What is LB?

    Load balancer.

  • @stevewatson301 said:

    @letlover said:
    What is LB?

    Load balancer.

    OK. I thought it was something new. LOL.

  • letlover Member
    edited June 2022

    @OhJohn said:

    @LiliLabs said: Curious, how are you running your CDN?

    LBs with ACLs and SSL termination, and reverse proxies/caches in different regions of the world with different providers (looking for different transit/networks as well), on geodns (so no, I'm not using anycast <- too expensive in traffic) with minutely health checks and DNS failover (and DNS round robin thrown in as well). Internal lines with per-second health checks and automatic failover, with diverse backends on different providers. No central automation; Cassandra is used as a DB for e.g. the ACLs. Automatic maintenance of a node, with prior automatic removal of the node from the mesh.

    So the (regional) downtime mainly happens with an edge node hardware failure/fiber cut etc., resulting in at most 60 seconds of regional downtime with good DNS resolvers - and it is felt even less by consumers, as round-robin is in there as well and (at least) browsers will retry a different node within that max 60 sec timeframe if they can't reach the failed IP.

    Overall, e.g. HetrixTools is showing me uptimes of 100%, >99.9998% and >99.998% for my CDN globally over the last two years (the CDN is used for three different services), and the 99.998% was the result of stupidly blocking the Hetrix bot on that service for some minutes. So in reality I have two services with 100% uptime for two years and one with 2 minutes of downtime in two years.

    Actually, if I don't care about the geoip smart thing and just want a dumb CDN, I think a round robin LB can do the job. Probably this workflow will work: the domain name points to the IP address of the LB VPS, then the LB VPS round-robins to several nginx+varnish cache servers.
    In that case, I can put an LB behind the cache servers and before the application server, which is what I have now. Maybe a second LB is good, or maybe it's not so important. LOL.
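    The "dumb CDN" flow above can be sketched as an nginx config on the LB VPS; the domain and node IPs below are placeholder documentation addresses, an illustrative sketch rather than a tested production config:

```nginx
# LB VPS: plain round-robin over the nginx+varnish cache nodes
# (round-robin is nginx's default upstream balancing method)
upstream cache_nodes {
    server 192.0.2.10:80;      # EU cache box (placeholder IP)
    server 198.51.100.10:80;   # US cache box (placeholder IP)
    server 203.0.113.10:80;    # SG cache box (placeholder IP)
}

server {
    listen 80;
    server_name example.com;   # the single domain points here

    location / {
        proxy_pass http://cache_nodes;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

    Note this gives redundancy but not proximity: every request still traverses the LB VPS, wherever it sits.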

  • If I want free geoip, Debian has a geoip package freely available. I just don't know how to integrate it with the LB.
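    For the integration question: Debian's `geoip-bin` package ships a `geoiplookup` CLI, and the LB can use the country code it returns to pick a backend. A minimal sketch, where the node IPs and the country grouping are placeholders:

```shell
#!/bin/sh
# Map a two-letter country code to the nearest cache node.
# IPs are documentation addresses; adjust the grouping to your POPs.
pick_backend() {
    case "$1" in
        SG|JP|KR|IN|HK) echo "203.0.113.10"  ;;  # Singapore box
        US|CA|MX)       echo "198.51.100.10" ;;  # US box
        *)              echo "192.0.2.10"    ;;  # EU default
    esac
}

# gluing it to geoip-bin (not run here; geoiplookup prints e.g.
# "GeoIP Country Edition: US, United States"):
#   cc=$(geoiplookup "$client_ip" | sed -n 's/.*: \([A-Z][A-Z]\),.*/\1/p')
#   backend=$(pick_backend "$cc")
```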

  • @letlover said:
    Actually, if I don't care about the geoip smart thing and just want a dumb CDN, I think a round robin LB can do the job. Probably this workflow will work: the domain name points to the IP address of the LB VPS, then the LB VPS round-robins to several nginx+varnish cache servers.
    In that case, I can put an LB behind the cache servers and before the application server, which is what I have now. Maybe a second LB is good, or maybe it's not so important. LOL.

    If you use round robin DNS, users will not automatically be served by the closest cache server. In my mind, this gives you redundancy, but not one of the biggest advantages of a CDN: speed.

    If all of your users and cache servers are in the same general geographic area, then perhaps this would be good enough.

    Thanked by 1letlover
  • @OhJohn said:

    @LiliLabs said: Curious, how are you running your CDN?

    LBs with ACLs and SSL termination, and reverse proxies/caches in different regions of the world with different providers (looking for different transit/networks as well), on geodns (so no, I'm not using anycast <- too expensive in traffic) with minutely health checks and DNS failover (and DNS round robin thrown in as well). Internal lines with per-second health checks and automatic failover, with diverse backends on different providers. No central automation; Cassandra is used as a DB for e.g. the ACLs. Automatic maintenance of a node, with prior automatic removal of the node from the mesh.

    So the (regional) downtime mainly happens with an edge node hardware failure/fiber cut etc., resulting in at most 60 seconds of regional downtime with good DNS resolvers - and it is felt even less by consumers, as round-robin is in there as well and (at least) browsers will retry a different node within that max 60 sec timeframe if they can't reach the failed IP.

    Overall, e.g. HetrixTools is showing me uptimes of 100%, >99.9998% and >99.998% for my CDN globally over the last two years (the CDN is used for three different services), and the 99.998% was the result of stupidly blocking the Hetrix bot on that service for some minutes. So in reality I have two services with 100% uptime for two years and one with 2 minutes of downtime in two years.

    In your in-house CDN, do you fail over your LB?

  • OhJohn Member

    Yes, LB (as edge node) failover is done via DNS by external health checks; everything further inside the mesh is handled by internal health checks.

    Thanked by 1letlover
  • @OhJohn said:
    Yes, LB (as edge node) failover is done via DNS by external health checks; everything further inside the mesh is handled by internal health checks.

    So there is another server or third-party service specifically monitoring these LBs without doing the LB job itself? Very smart idea.

    Seems an LB in front of the varnish servers can protect from DDoS attacks. Interesting. In-house CDNs, even dumb ones, are beneficial.

  • quanhua92 Member
    edited June 2022

    @letlover said:

    @OhJohn said:
    Yes, LB (as edge node) failover is done via DNS by external health checks; everything further inside the mesh is handled by internal health checks.

    So there is another server or third-party service specifically monitoring these LBs without doing the LB job itself? Very smart idea.

    Use ClouDNS here for the geodns feature and DNS failover: https://www.cloudns.net/geodns/
    It costs $10 monthly.
    I personally use Google Cloud DNS, which is the pay-as-you-go type ($0.70 per million queries per month). However, Google Cloud DNS doesn't have DNS failover. With Google Cloud DNS, you can configure routing rules at the Google data center level instead of per IP address.
    In each region, you can create a DNS record with multiple regional load balancer IP addresses.
    If the app server is down, the load balancer can handle the routing easily.
    If the LB server is down, the browser can retry the other IP addresses. Otherwise, you can build a service that monitors and changes the DNS records yourself.
    I use Google Cloud DNS because I don't have much traffic to my website, so a few million queries at a cost of $1-2 is a better choice for me than GeoDNS.

    Thanked by 1letlover
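  • The "build a service to monitor and change DNS records yourself" option mentioned above can be sketched as a tiny watchdog. The DNS provider API URL here is a hypothetical placeholder (every provider's real API differs), so the actual record removal is left as a commented stub:

```shell
#!/bin/sh
# Poll an edge node's health URL; if it fails, (would) remove its
# A record via the DNS provider's API. The API call is a placeholder
# because it is provider-specific.
check_and_failover() {
    health_url="$1"
    remove_api="$2"   # hypothetical, e.g. https://dns.example/api/records/edge1

    if curl -fsS --max-time 5 "$health_url" >/dev/null 2>&1; then
        echo "healthy"
    else
        # a real version would authenticate and delete the record:
        # curl -X DELETE -H "Authorization: Bearer $TOKEN" "$remove_api"
        echo "unhealthy: would remove record via $remove_api"
    fi
}

# run from cron roughly once per minute per edge node, e.g.:
# * * * * * check_and_failover https://edge1.example/health https://dns.example/api/records/edge1
```

    Keep the check interval shorter than your record TTL, or the removal buys you nothing.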
  • JustHost Member, Patron Provider

    OVH offers a CDN too

    Thanked by 1letlover
  • @quanhua92 said:

    @letlover said:

    @OhJohn said:
    Yes, LB (as edge node) failover is done via DNS by external health checks; everything further inside the mesh is handled by internal health checks.

    So there is another server or third-party service specifically monitoring these LBs without doing the LB job itself? Very smart idea.

    Use ClouDNS here for the geodns feature and DNS failover: https://www.cloudns.net/geodns/
    It costs $10 monthly.
    I personally use Google Cloud DNS, which is the pay-as-you-go type ($0.70 per million queries per month). However, Google Cloud DNS doesn't have DNS failover. With Google Cloud DNS, you can configure routing rules at the Google data center level instead of per IP address.
    In each region, you can create a DNS record with multiple regional load balancer IP addresses.
    If the app server is down, the load balancer can handle the routing easily.
    If the LB server is down, the browser can retry the other IP addresses. Otherwise, you can build a service that monitors and changes the DNS records yourself.
    I use Google Cloud DNS because I don't have much traffic to my website, so a few million queries at a cost of $1-2 is a better choice for me than GeoDNS.

    Thank you very much for the good info.

  • WebProject Host Rep, Veteran

    @SWS said:
    OVH offers a CDN too

    Performance much better on BunnyCDN or Keycdn

    Thanked by 1letlover
  • JustHost Member, Patron Provider

    @WebProject said:

    @SWS said:
    OVH offers a CDN too

    Performance much better on BunnyCDN or Keycdn

    I suggested this as a budget option, as the OP's main query seems to be about cost, not performance.

    Thanked by 1letlover