New on LowEndTalk? Please Register and read our Community Rules.
All new Registrations are manually reviewed and approved, so a short delay after registration may occur before your account becomes active.
Comments
Try BunnyCDN
12 threads in 2 weeks.
Try Cloudflare
Correction. 13 threads
I don't get the point of this thread, or of one of your other recent threads.
Are you trying to inform us that CloudFront is expensive, or trying to get alternatives?
And with your other thread: it would cost way more than Amazon CloudFront to set up an "in-house CDN".
AWS CloudFront's first 1 TB is free; each 1 TB after that is about $200/month. For $200/month, I can rent several powerful dedis in different locations and set up my own CDN.
Just trying to save money here. Follow the LET's spirit. Hahaha.
By the way, if the website does video streaming, 1 TB is easily used up. 10 TB or even 100 TB per month is normal; if all the streaming video goes through the CDN, that is more than $1,000 per month, which is prohibitively expensive.
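A quick back-of-envelope check of the numbers above, using the ~$200/TB figure quoted in this thread (not official AWS pricing) and assuming the first 1 TB each month is free:

```python
# Rough CDN egress cost at the rate quoted above.
# Assumptions (figures from this thread, not official AWS pricing):
#   - first 1 TB/month free
#   - roughly $200 per additional TB/month

FREE_TB = 1
RATE_PER_TB = 200  # USD, figure quoted in this thread

def monthly_cost(tb_transferred: float) -> float:
    """Cost in USD for a month of egress at the quoted rate."""
    billable = max(0.0, tb_transferred - FREE_TB)
    return billable * RATE_PER_TB

for tb in (1, 10, 100):
    print(f"{tb:>3} TB/month -> ${monthly_cost(tb):,.0f}")
```

At the quoted rate, 10 TB/month already lands well above $1,000, which is the point being made.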
You must be a math genius
Just find a lower-priced CDN. BunnyCDN has already been suggested in this thread, or if you don't want to use it for some reason, there's also StackPath or 5centscdn.
understandable as he is a letlover
aws has a calculator
Is this a sin?
try Fastly? or Stackpath?
But your $200 CDN won't be as reliable as CloudFront is. With CloudFront your data is served everywhere, fast, always online, no packet loss, no fiber cuts... Your service will be down only if half of the internet is down.
This is why you pay for CDN.
If you don't need that, you can serve the whole world with one server, like people used to. Host in London/Amsterdam/NY/Ashburn and the site will run well enough everywhere in the world.
It will be a little slower in Latin America and Oceania, but people there are used to it (hard truth).
I did not realize that a CDN is for redundancy too; I just thought it reduced latency. I like CDNs, they're just too expensive.
Hm, my own self-run CDN (which costs more than $200) has much better uptime than e.g. AWS or Cloudflare.
I don't believe in centralization, which is prone to SPOFs caused by software/configuration (and most of the time BGP-related), as shown again and again by CF or AWS.
@leftover Kindly try to post similar topics in one thread with multiple questions, instead of creating multiple threads. It doesn't look good.
Bunny is great!
Curious, how are you running your CDN?
LBs with ACLs and SSL termination, and reverse proxies/caches in different regions of the world with different providers (looking for different transit/networks as well), on GeoDNS (so no, I'm not using anycast: too expensive in traffic) with minutely health checks and DNS failover (with DNS round-robin thrown in as well). Internal lines with secondly health checks and automatic failover across diverse backends on different providers. No central automation; Cassandra is used as a DB for e.g. the ACLs. Automatic maintenance of a node, with prior automatic removal of that node from the mesh.
So (regional) downtime mainly happens on an edge-node hardware failure, fiber cut, etc., resulting in at most 60 seconds of regional downtime with good DNS resolvers. Consumers feel even less of it, since round-robin is in there as well and (at least) browsers will retry a different node within that max-60-second window if they can't reach the failed IP.
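The minutely health-check and DNS-failover loop described above could be sketched roughly like this (the node IPs, the `/health` endpoint, and `update_dns()` are hypothetical placeholders, not the poster's actual setup; a real version would call your DNS provider's API and run from cron every minute):

```python
# Sketch: probe each edge node, keep only healthy IPs in the
# round-robin set, and push the result to DNS. All names/IPs here
# are hypothetical placeholders.
import urllib.request

EDGE_NODES = {                      # hypothetical edge IPs per region
    "eu": ["198.51.100.10", "198.51.100.11"],
    "us": ["203.0.113.20", "203.0.113.21"],
}

def is_healthy(ip: str, timeout: float = 2.0) -> bool:
    """Hit the node's health endpoint; any error counts as unhealthy."""
    try:
        with urllib.request.urlopen(f"http://{ip}/health", timeout=timeout) as r:
            return r.status == 200
    except OSError:
        return False

def update_dns(region: str, ips: list) -> None:
    """Placeholder: push the surviving round-robin set via your DNS API."""
    print(f"{region}: A records -> {ips or 'FAILOVER (region empty)'}")

def check_all() -> None:
    for region, ips in EDGE_NODES.items():
        alive = [ip for ip in ips if is_healthy(ip)]
        update_dns(region, alive)
```

With low DNS TTLs, a dead node drops out of the answer set within one check interval, which matches the "max 60 seconds regional downtime" figure above.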
Overall, e.g. HetrixTools shows my CDN's global uptime as 100%, >99.9998% and >99.998% over the last two years (the CDN is used for three different services), and the 99.998% was the result of stupidly blocking the Hetrix bot on that service for some minutes. So in reality I have two services with 100% uptime over two years, and one with 2 minutes of downtime in two years.
Could you tell us roughly how much your setup costs per year?
@FatGrizzly: that depends highly on the number of nodes/regions/locations you want to use and on the traffic/compute power etc. your CDN would need. Also on what kind of latency degradation you would accept (e.g. can you fail over from Asia to the US, can you fail over from US East to US West, or do you have to fail over from US East to US East?). Can you use VPSes, or do you need bare-metal machines? And then: are you available 24/7 to intervene?
E.g. my CDN is not a hyperscaler (while AWS or Cloudflare pretty much are), so new nodes are added manually (which takes like one or two hours when I hit a provider with instant delivery).
I mainly did this because a) the CDN providers I tested produced too many errors for me, and b) I like to have control in-house, and sometimes there are special requirements (GDPR or government customers) that allow only a subset of the CDN nodes to be used. Overall I would expect Bunny or Cloudflare to be cheaper than my setup, but if you start small, or mix your own CDN with a CDN provider (like doing it yourself in e.g. Europe but using e.g. Bunny in South America), you may save.
I have been running my own CDN for over a year now. It costs me upwards of $200, but I have many PoPs.
Now, to be fair, your service is down most of the time. Look at all the complaint threads from various LET users.
This. Totally unclear what the expectation is. @letlover, why don't you comment on BunnyCDN or give us some insight into how you would build your own CDN for cheaper?
How do you drain existing connections on a node (like, do you do this at all)?
Bunny is prem
On the LBs (that is, the edge nodes)? Only through DNS failover. If I have to restart a server for e.g. kernel updates or other work, it first gets removed from DNS in advance, and I check whether it is still used: once the traffic drains to zero, the maintenance work starts, and the server is brought back into the mesh after the maintenance finishes. So a consumer with an existing connection would keep accessing the server for something like 5 minutes (as browsers cache DNS longer) while it is already removed via DNS (and new consumers would already use the failover IP); then a DNS re-fetch brings up a new IP. And with DNS round-robin this is not really felt, as the browser would try another round-robin IP if the one first fetched isn't reachable anymore.
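The drain-before-maintenance step described above can be sketched like this (the `current_connections()` and `remove_from_dns()` helpers and the node name are hypothetical and stubbed so the sketch runs; on a real node the connection count would come from something like `ss -t state established`):

```python
# Sketch: pull the node from DNS first, then wait until its traffic
# falls to zero before starting maintenance. Helpers are stubs.
import time

def current_connections(node: str) -> int:
    """Placeholder: count established connections on the node."""
    return 0  # stub so the sketch runs

def remove_from_dns(node: str) -> None:
    """Placeholder: pull the node's IP so new clients get failover IPs."""
    print(f"{node}: removed from DNS")

def drain(node: str, poll_secs: int = 30, grace_secs: int = 600) -> bool:
    """Return True once the node is idle, or False after grace_secs."""
    remove_from_dns(node)
    deadline = time.monotonic() + grace_secs
    while time.monotonic() < deadline:
        if current_connections(node) == 0:
            return True          # safe to start maintenance
        time.sleep(poll_secs)    # browsers cache DNS ~5 min, keep polling
    return False
```

The ~10-minute grace window matches the observation above that browsers may hold cached DNS (and open connections) for several minutes after the record is pulled.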
It went down only in a few locations, and I have fixed all of them. So it won't go down anymore. You can verify now.
Thanks for confirming, useful info for some projects I had. Seems like draining connections isn't the challenge I imagined it to be!