Edge caching as a service

SplitIceSplitIce Member, Host Rep

For the past 3 months we have been working on an additional mode for the caching capability provided by our system, a mode we are calling "Edge Cache". This feature serves edge-deployed content directly from inside an existing service, on your existing domain, tightly integrated with your existing URL structure (check for the file locally, fall back to proxying).
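The "check for file, fall back to proxying" flow could be sketched roughly like this (Python; `edge_root` is a hypothetical local path, not an actual detail of the system):

```python
import os

def handle_request(path: str, edge_root: str = "/var/edge-cache") -> str:
    """Serve a request from the edge's local file store if the file was
    deployed there, otherwise fall back to proxying to the origin.

    `edge_root` is a hypothetical local directory for illustration only.
    """
    local = os.path.join(edge_root, path.lstrip("/"))
    if os.path.isfile(local):
        return f"serve {local}"   # file was deployed to this edge: serve directly
    return f"proxy {path}"        # not deployed here: proxy upstream as usual
```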

We hope to be able to offer it for free with all Anycast services. We aren't designing this as a file storage service or even a replacement for a full CDN (for those with the need to distribute content). Limits will not be that high. The goal is to offer something of use to customers using excess storage capacity at our edge.

For that reason the limits will be tight enough that we can offer this to all Anycast customers without the need to rapidly upgrade servers (at least for now). It should be more than enough to serve important content like CSS, JS, fonts and images. We have found that in low-traffic PoPs this can be great for producing consistently low load times (something that regular caching cannot do, as content gets aged out / replaced).

Proposed features:

  • Upload files to one or more specific edges
  • Browse, delete and download content using the panel
  • API- & dashboard-based upload from URL
  • HTTP Zones (e.g. /css/ or *.css) can operate in either "Edge" mode or any of the existing modes that cache on request
  • Content stored on the filesystem of the edge server, with a single copy stored remotely (off the edge) and used for backup & distribution
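The two zone pattern styles mentioned above (a path prefix like /css/ or a glob like *.css) could be matched along these lines (illustrative sketch only, not the actual implementation):

```python
from fnmatch import fnmatch

def zone_matches(zone: str, path: str) -> bool:
    """Return True if a request path falls inside an HTTP zone pattern.

    Supports the two pattern styles from the feature list:
    a path prefix such as "/css/" or a filename glob such as "*.css".
    """
    if zone.startswith("/"):
        return path.startswith(zone)          # prefix zone, e.g. /css/
    filename = path.rsplit("/", 1)[-1]
    return fnmatch(filename, zone)            # glob zone, e.g. *.css
```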

Stretch goals:

  • Rules that can automatically pull from patterned paths (e.g. ^/css/.+) on the first observed request, asynchronously to the activating request
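That stretch goal amounts to: proxy the activating request immediately, and kick off the pull to the edge in the background. A rough sketch (names are hypothetical; the pattern is the example from the bullet above):

```python
import re
import threading

PULL_RULE = re.compile(r"^/css/.+")        # example patterned path from the post

def fetch_to_edge(path: str) -> None:
    """Stand-in for the background job that pulls the file onto the edge."""
    pass  # would download from the origin and store it locally

def on_request(path: str) -> str:
    """Proxy the activating request without waiting on the pull."""
    if PULL_RULE.match(path):
        # Start the pull asynchronously; the activating request never blocks on it.
        threading.Thread(target=fetch_to_edge, args=(path,), daemon=True).start()
    return f"proxy {path}"
```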

Proposed Limits:

  • 500MB of storage included with every service, with no more than 1/4 used in any single region.
  • No intention at this time to offer additional storage; our edge servers aren't deployed with terabytes of SSD in their current configuration.
  • No more than 10,000 files per service's HTTP port (or perhaps a limit of 200k files per service), no more than 100 files per directory, and no directory depth greater than 16.

    • We may place limits on deployment rate (TBD), or at least charge for it via bandwidth (a negligible cost for those using the system for what it's designed for). The file synchronisation / deployment is fast & efficient, but it's not free in CPU or bandwidth (which may matter in South America).
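Enforcing limits like these boils down to a few checks at upload time. A sketch using the proposed numbers (which, per the post, are theory at best and subject to change):

```python
def check_upload(path: str, size: int, used_bytes: int, file_count: int) -> list:
    """Validate one upload against the proposed per-service limits.

    All thresholds below are the provisional numbers from this post,
    not finalised product limits.
    """
    errors = []
    parts = [p for p in path.strip("/").split("/") if p]
    if used_bytes + size > 500 * 1024 * 1024:          # 500MB per service
        errors.append("exceeds 500MB service quota")
    if file_count + 1 > 10_000:                        # per HTTP port
        errors.append("exceeds 10,000 files per HTTP port")
    if len(parts) - 1 > 16:                            # directories above the file
        errors.append("directory depth greater than 16")
    return errors
```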

It's still very early days for development on this. We have just finished building and verifying the scalability of the deployment system to be used for this, and development on the UI started yesterday.

What does the brains trust of LET think? Useful?

Are there any killer features you would like to see that would make a capability like this more useful for you?

IMPORTANT: All details here are subject to change. This is not a publicly released feature (yet), and details and limits are theory at best. If you are looking for documentation on this as a released feature, please consult the official documentation instead of this post.


Comments

  • Congratulations. This is a nice one for edge computing.

  • Congratulations for the launch!
    Can you explain the difference with Cloudflare?

  • BunnyCDN already has something similar: Perma-Cache and Edge Storage.

  • SplitIceSplitIce Member, Host Rep

    Not a launch. Not a CDN, explicitly not even trying to be one. An idea being community tested for traction though.

  • @O0ooo said:
    BunnyCDN already has something similar: Perma-Cache and Edge Storage.

    I mean that's great and all, but what does that have to do with OP? These folks spent 3 months developing something, they're offering up the fruits of their labor, free of charge in exchange for some feedback. They said right off the bat they weren't looking to be a full on CDN replacement, so I'm not sure why you're looking to rain on these guys' parade? If anything, it'll be another potential alternative and competition breeds innovation.

  • @Don_Keedic said:
    I mean that's great and all, but what does that have to do with OP? These folks spent 3 months developing something, they're offering up the fruits of their labor, free of charge in exchange for some feedback. They said right off the bat they weren't looking to be a full on CDN replacement, so I'm not sure why you're looking to rain on these guys' parade? If anything, it'll be another potential alternative and competition breeds innovation.

    Because to be successful you need to have differentiating features from a well known existing market player. Perhaps they were hoping for the OP to point out these factors.

  • @CyberneticTitan said:

    @Don_Keedic said:
    I mean that's great and all, but what does that have to do with OP? These folks spent 3 months developing something, they're offering up the fruits of their labor, free of charge in exchange for some feedback. They said right off the bat they weren't looking to be a full on CDN replacement, so I'm not sure why you're looking to rain on these guys' parade? If anything, it'll be another potential alternative and competition breeds innovation.

    Because to be successful you need to have differentiating features from a well known existing market player. Perhaps they were hoping for the OP to point out these factors.

    Are people not reading the first post at all?

    For the past 3 months we have been working on an additional mode for the caching capability provided by our system

    We aren't designing this as a file storage service or even a replacement for a full CDN (for those with the need to distribute content)

    It's still very early days for development on this. We have just finished building and verifying the scalability of the deployment system to be used for this, and development on the UI started yesterday.

    Then asking for feedback..

    What does the brains trust of LET think? Useful?

    Are there any killer features you would like to see that would make a capability like this more useful for you?

    When you reply with "Well, this company already has X", you're contributing nothing. When you post something like that and their initial post openly says "We're not even planning on doing 50% of what you posted about", it's like he didn't even read the post at all.

    Now if he had commented on the one feature they may support in the future, noted what pros/cons (if any) there were with BunnyCDN, or pointed out a feature they provide that isn't great or could be made better, that'd be constructive. That's what these folks are looking for: actual, constructive feedback.

    Because to be successful you need to have differentiating features from a well known existing market player. Perhaps they were hoping for the OP to point out these factors.

    Again..

    We aren't designing this as a file storage service or even a replacement for a full CDN (for those with the need to distribute content)

    BunnyCDN also provides an on-the-fly image optimizer and DNS...but that's not going to help OP with his software in the least bit. They're not asking for business advice. They're asking for feedback on a service they created. That's it. If there's a feature within BunnyCDN you'd consider a "killer" feature, post it up and if you really want to be helpful, explain what it is/does in detail if it isn't immediately apparent.

  • SplitIceSplitIce Member, Host Rep

    On-upload resource optimisation is not a bad idea to consider (or particularly hard to implement). I'll add that to the ideas list. It meshes well with the planned on-upload gzip & brotli compression. Thanks @Don_Keedic.

    Yeah, we are very much not looking to compete with CDN services like BunnyCDN. Their storage tiers are cheaper than many object storages (S3), and we are using S3 to distribute to the edge. However they manage their storage, it clearly benefits from massive economies of scale and efficiency. That they achieve that pricing with robustness (assumed) and redundancy (assumed) is very impressive and real kudos to them.

    We are storing all customers' files on S3 at $20/TB to start with (not that it will be a big cost with the limits proposed) and then onto the edge servers directly. SSDs at every stage. As it is, I'm a bit nervous about having no direct 1-to-1 (preferably cold) backup of the object store (instead, if it failed we would need to rebuild from all the edges...); however, the object store forms the backup for the edge (which is the most at risk).

    Push CDN has been a feature often requested by customers, but one that we have struggled with the economic case for (and I suspect many customers have not thought it through either). The main advantage of direct CDN integration is that you can fetch resources without an additional DNS lookup or additional HTTP/1.1, HTTP/2 or HTTP/3 connections. This makes the most impact on small mandatory resources like stylesheets, fonts and JavaScript. Hence the limits being targeted.

    Part of why this feature has been considered now, of all the features on the roadmap (other than its high mention count), is that the distribution system underpinning it has wider application. And that's what the last 3 months have really mostly been about.
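The on-upload precompression idea discussed in this thread can be done once at deploy time rather than per request. A gzip-only sketch under stated assumptions (brotli would follow the same shape via the third-party brotli package; the function name is hypothetical):

```python
import gzip

def precompress(path: str) -> str:
    """Write a .gz sibling of an asset once, at upload/deploy time.

    The edge can then serve the precompressed variant to any client
    sending Accept-Encoding: gzip, with no per-request CPU cost.
    """
    gz_path = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz_path, "wb", compresslevel=9) as dst:
        dst.write(src.read())
    return gz_path
```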
