Membucket.io BETA Registration ANNOUNCEMENT - CALLING Shared Hosts - Site Accelerator cPanel Plugin


Comments

  • vovler Member
    edited August 2018

    Your installation is broken somewhere.

    No package membucket-server available.
    No package membucket-client available.

    Force-installed them with "yum install -y URL", but nothing shows up under WHM Plugins and I still can't "Authorize Server": https://i.gyazo.com/3a7d8eb4be695f884ce7657a902697bd.png

  • cyberpersons Member

    @Josephd said:
    If you're on a VPS, then you're not a shared hoster.

    If every customer gets their own cPanel license, which is an additional $15/month, then they're NOT a shared host customer. They're a private company using cPanel on a VPS.

    This product is for the Shared Hosting Provider, to offer to shared hosting customers.

    I hope this gives everyone some clarity.

    Still, the lowest-priced license for a dedicated server is $32/month, and LiteSpeed is not limited to shared hosting environments; individuals also use it heavily for their Magento or PrestaShop stores, etc.

    @willie said:

    Litespeed it seemed to me (maybe I'm wrong) addresses a different issue, which is to support apache-style configuration files (.htaccess) while keeping an nginx-like concurrency strategy. So it's more about features than performance.

    Not only Apache-style configurations, but also a built-in cache module along with cache plugins. The built-in cache module removes the need for Varnish or any other reverse-proxy-based caching software.

    Combined with application-side cache plugins, it quickly improves performance. However, the Apache drop-in replacement is only in LiteSpeed Enterprise, not OpenLiteSpeed. OpenLiteSpeed does have the same LSCache module, just without ESI support.

    For Membucket, I think it will take time for people to adapt, and the learning curve for end users will be tough.

  • jetchirag Member

    @vovler said:
    Your installation is broken somewhere.

    No package membucket-server available.
    No package membucket-client available.

    Force-installed them with "yum install -y URL", but nothing shows up under WHM Plugins and I still can't "Authorize Server": https://i.gyazo.com/3a7d8eb4be695f884ce7657a902697bd.png

    I opened a ticket for it several hours ago but didn't receive a response, and now the ticket is deleted/gone from the panel.

  • Josephd Member

    @Falzo said:
    I think it is an interesting approach / idea for trying to upsell something.

    Very cool, we thought so too!

    That said, I heavily doubt a shared hosting customer who pays like $3 a month is willing to add more than $1-3 on top of that - if ever. Especially when the buckets don't translate into a speed-up on the same scale (like adding 10 buckets = getting 10x speed).

    After the initial speed boost, this becomes a capacity issue: if you are evicting objects out of your cache, that will impede performance.

    The only other option is upgrading to a different product line, such as a VPS or a dedicated server. Because a product like Membucket hasn't been available, the industry and customers have been forced down this path of upgrading into more than they want to manage.

    Why should someone buy a 2nd or even a 3rd or 4th bucket, which most likely will not get him another big impact? I understand you could restrict it, like needing a separate bucket for each type of thing you want to cache, or something like that.

    The additional Buckets address capacity, thereby maintaining performance.

    I also get that you probably want to limit the use per domain - still, I doubt you'll find many people willing to pay another $12 per year per domain.

    We have no such domain limitations; a customer could purchase as many Buckets and Wells as your system can allocate.

    Technically a customer could share their Well across multiple sites, as long as they're accessed by the same control panel user.

    We do limit the maximum size of a Well to around 16 Buckets to alleviate high-density memory fragmentation.

    Randomly picked: a2hosting offers memcached hosting from 8€ a month, which allows users to use memcached on all their domains - that's what you're going to compete with, and where your $70 dreams most likely will end very soon :-P

    I think we addressed this perception of a ceiling.

    Do you intend to charge your clients, aka the providers that want to use and sell Membuckets, based on installs/wells/buckets?
    Let's finally get to the point of your pricing and earning model then, shall we? ;-) ;-) ;-)

    @HBAndrei said:
    If I understand it right, you're forcing your clients to sell 1 Bucket of 8MB of RAM for at least $1/mo... and that would be just to cover your licensing fees, without them turning any profit. If we take your example of a server with 32GB of RAM, having 4k Buckets, then you're expecting $4k/mo in licensing fees from that one server?


    It will be a flat rate per Bucket per month, as consumed, regardless of your pricing. This way you don't buy more than you need, and there are no per-server licensing costs preventing you from making the product available to end users.

    We want you to install it on all of your servers.

    Our licensing fee will be a fraction of the minimum MSRP, but undecided at this time.

    So it will definitely be less than $1, and you will definitely make money :-)

    Our choice to impose an MSRP is explained here, and we should note that we were inspired by ESET, which has done the same.

    Our Promise


    @willie said:

    Josephd said:

    All of our lab tests were on full vanilla stack installs, CentOS, CloudLinux, cPanel, Default EasyApache profile with PHP 5.7. ... As stated in our Q&A, pulling objects from main memory is still 20x faster than pulling straight from an SSD.

    Sure, but 1) MySQL makes good use of RAM caching itself, so I'd need to know the MySQL configuration as mentioned. Repeated page loads should result in MySQL queries being served from RAM without touching the SSD. 2) A decent SSD subsystem on a hosting server these days has >50k IOPS (probably a lot more), so even if a page load does 100 disk operations (which is way too many!) that's still just 0.002 seconds, so I still don't see 1.5s of load time unless the server itself is extremely overloaded. In the HDD era the disk ops would have mattered a lot more.

    Willie, this is a fair ask and we are working on doing exactly that.

    This test would be more believable if you can supply a complete copyable setup allowing us to reproduce it. Also HBAndrei's comment about pricing seems important. I hadn't looked at your numbers carefully: your cache buckets are just 8MB? Are we back in the 1990s already?

    As FHR said, even memcached can house approximately 60 pages of content per 1MB; this means 1 Bucket can hold over 400 pages.

    That would be sufficient to house a small site.

    In addition, each Well can hold up to 16 Buckets, allowing for over 7,000 pages of content per Well (rough arithmetic below).

    WordPress, as an example, has 32 cache groups built into the core of its code base, where each could point to its own Well.
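    The rough arithmetic behind those figures, taking FHR's ~60 pages per 1MB as given:

      # Back-of-the-envelope capacity math, assuming ~60 cached pages per 1MB (FHR's figure)
      PAGES_PER_MB=60
      BUCKET_MB=8
      BUCKETS_PER_WELL=16
      echo "Pages per Bucket: $(( PAGES_PER_MB * BUCKET_MB ))"                    # 480
      echo "Pages per Well:   $(( PAGES_PER_MB * BUCKET_MB * BUCKETS_PER_WELL ))" # 7680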

  • vovler Member
    edited August 2018

    @jetchirag Glad to hear that, now I don't have to waste my time trying to make it work.



    After thinking about this further, I don't believe the claims about this product.

    The only advantage over any normal caching solution is reduced seek time and I/O compared to a regular SSD, and even that will result in at most a 20-30ms improvement.

    I recommend staying away from this.

  • FHR Member, Host Rep

    Josephd said: 16GB RAM = 1000 max recommended buckets
    32GB RAM = 2000 max recommended buckets
    64GB RAM = 4000 max recommended buckets
    128GB RAM = 8,000 max recommended buckets

    So... 50% of server RAM for this? No, thanks.

    Thanked by: kkrajk
  • Josephd Member

    @vovler said:
    Your installation is broken somewhere.

    No package membucket-server available.
    No package membucket-client available.

    Force-installed them with "yum install -y URL", but nothing shows up under WHM Plugins and I still can't "Authorize Server": https://i.gyazo.com/3a7d8eb4be695f884ce7657a902697bd.png

    Please open a ticket; our team is currently standing by to help.


    @cyberpersons said:

    @Josephd said:
    If you're on a VPS, then you're not a shared hoster.

    If every customer gets their own cPanel license, which is an additional $15/month, then they're NOT a shared host customer. They're a private company using cPanel on a VPS.

    This product is for the Shared Hosting Provider, to offer to shared hosting customers.

    I hope this gives everyone some clarity.

    Still, the lowest-priced license for a dedicated server is $32/month, and LiteSpeed is not limited to shared hosting environments; individuals also use it heavily for their Magento or PrestaShop stores, etc.

    @willie said:

    Litespeed it seemed to me (maybe I'm wrong) addresses a different issue, which is to support apache-style configuration files (.htaccess) while keeping an nginx-like concurrency strategy. So it's more about features than performance.

    Not only Apache-style configurations, but also a built-in cache module along with cache plugins. The built-in cache module removes the need for Varnish or any other reverse-proxy-based caching software.

    Combined with application-side cache plugins, it quickly improves performance. However, the Apache drop-in replacement is only in LiteSpeed Enterprise, not OpenLiteSpeed. OpenLiteSpeed does have the same LSCache module, just without ESI support.

    For Membucket, I think it will take time for people to adapt, and the learning curve for end users will be tough.

    We agree that the learning curve will be difficult for end users, but only because most people do not understand the stack and every provider uses the term "caching" differently.


    @jetchirag said:

    @vovler said:
    Your installation is broken somewhere.

    No package membucket-server available.
    No package membucket-client available.

    Force-installed them with "yum install -y URL", but nothing shows up under WHM Plugins and I still can't "Authorize Server": https://i.gyazo.com/3a7d8eb4be695f884ce7657a902697bd.png

    I opened a ticket for it several hours ago but didn't receive a response, and now the ticket is deleted/gone from the panel.

    We sent you an email hours ago and have been waiting for a reply.

    Please check the inbox for the email address you signed up with.

  • jetchirag Member

    Thanks, I was watching the panel for a reply and the ticket isn't available anymore - technical issue?

    Anyway, I'll surely have a look tomorrow.

  • Josephd Member

    @FHR said:

    Josephd said: 16GB RAM = 1000 max recommended buckets
    32GB RAM = 2000 max recommended buckets
    64GB RAM = 4000 max recommended buckets
    128GB RAM = 8,000 max recommended buckets

    So... 50% of server RAM for this? No, thanks.

    FHR, that is a mistaken assumption.

    Memory is only consumed as it is provisioned to customers.

  • Josephd Member

    @jetchirag said:
    Thanks, I was watching the panel for a reply and the ticket isn't available anymore - technical issue?

    Anyway, I'll surely have a look tomorrow.

    Not a problem! We look forward to helping you when you are available.

  • Josephd Member
    edited August 2018

    @vovler said:
    Your installation is broken somewhere.

    No package membucket-server available.
    No package membucket-client available.

    Force-installed them with "yum install -y URL", but nothing shows up under WHM Plugins and I still can't "Authorize Server": https://i.gyazo.com/3a7d8eb4be695f884ce7657a902697bd.png

    We fixed an issue with the yum repo package not listing. We noticed it was built with too specific a target, which caused the problem.

  • robohost Member
    edited August 2018

    @Josephd said:

    @FHR said:

    Josephd said: 16GB RAM = 1000 max recommended buckets
    32GB RAM = 2000 max recommended buckets
    64GB RAM = 4000 max recommended buckets
    128GB RAM = 8,000 max recommended buckets

    So... 50% of server RAM for this? No, thanks.

    FHR, that is a mistaken assumption.

    Memory is only consumed as it is provisioned to customers.

    For a fraction of the MSRP, we need to free up 50% of the server memory?

  • willie Member
    edited August 2018

    Josephd said: Our licensing fee will be a fraction of the minimum MSRP, but undecided at this time.

    If the total cost per 64GB host node (32GB is small these days) is a significant chunk of what cPanel costs ($32/m, as someone says) or what paid LiteSpeed costs (I don't know what that is), it's likely to be a difficult sell around here. Maybe I'm wrong, but that's my guess. Where it says "low end" in LET, we really mean it ;-).

    @cyberpersons I still don't understand about Litespeed caching. Apache has mod_cache, is that different?

    Josephd said: That would be pretty sufficient to house a small site.

    Ok I'm speaking from some ignorance since I've never run a wordpress server and I do get the idea that wordpress is horrendous, but really, small wordpress sites as things are now aren't that slow, barring misconfiguration or slow plugins, afaict.

  • Josephd Member

    @robohost said:

    @Josephd said:

    @FHR said:

    Josephd said: 16GB RAM = 1000 max recommended buckets
    32GB RAM = 2000 max recommended buckets
    64GB RAM = 4000 max recommended buckets
    128GB RAM = 8,000 max recommended buckets

    So... 50% of server RAM for this? No, thanks.

    FHR, that is a mistaken assumption.

    Memory is only consumed as it is provisioned to customers.

    For a fraction of the MSRP, we need to free up 50% of the server memory?

    During the BETA, this is 100% Free to use, Free to sell, and Free to Benchmark.

    Membucket will allocate RAM on demand, 8MB (1 Bucket) at a time, until the recommended maximum or your administrator-defined maximum is reached.

    We do not consume or pre-allocate 50% of your RAM.

  • willie Member
    edited August 2018

    Josephd said: Membucket will allocate RAM on demand, 8MB (1 Bucket) at a time, until the recommended maximum or your administrator-defined maximum is reached.

    Thanks, that is helpful clarification.

    I think we're still waiting to find out how much 4000 buckets (32GB, 50% of a 64GB server) will cost in licensing. People likely want to know whether they will be able to afford the product before they spend any time testing it. I know you haven't yet decided the final number, but if it's above $1/GB as mentioned earlier, my guess is there will be considerable resistance to say the least. (OK, maybe wordpress users won't use that many buckets, but I'm used to memcached server farms using 1000s of GB of ram).

    To put that amount in perspective, I'm currently paying around $25/month for an entire 32GB dedicated server (E3-1230v3 iirc), though that's an unusual bargain.

  • Josephd Member

    @willie said:

    Josephd said: Our licensing fee will be a fraction of the minimum MSRP, but undecided at this time.

    If the total cost per 64GB host node (32GB is small these days) is a significant chunk of what cPanel costs ($32/m, as someone says) or what paid LiteSpeed costs (I don't know what that is), it's likely to be a difficult sell around here. Maybe I'm wrong, but that's my guess. Where it says "low end" in LET, we really mean it ;-).

    Membucket does not replace cPanel or LiteSpeed.

    Membucket is a plugin for cPanel, controlled through cPanel or cli.

    Membucket does not replace LiteSpeed or its specific caching mechanisms; Membucket sits at the application layer, increasing performance further. It is not our goal to say how much we improve someone else's infrastructure caching product; we are complementary caching layers, not competitors.

    There are many caching mechanisms out there, but all of them give zero granularity with which to sell and delegate in a single-server shared hosting environment, e.g. Varnish, SQL proxies, HAProxy, memcached and others.

    Membucket does not debunk other caching mechanisms, and our goal is not to say which mechanism is better than the others, except that Membucket is only comparable to memcached, because both provide caching for the executed application itself vs. post-process caching from Apache mod_cache, LiteSpeed, Varnish, etc.

    With that being said, we believe our version is far superior to memcached in functionality and security, and it is already productized to be sold as an addon/speed boost/upgrade to existing cPanel-based shared hosting end users who wish to give a nice jolt of performance to a slow site.

    @willie said:

    I think we're still waiting to find out how much 4000 buckets (32GB, 50% of a 64GB server) will cost in licensing. People likely want to know whether they will be able to afford the product before they spend any time testing it. I know you haven't yet decided the final number, but if it's above $1/GB as mentioned earlier, my guess is there will be considerable resistance to say the least. (OK, maybe wordpress users won't use that many buckets, but I'm used to memcached server farms using 1000s of GB of ram).

    If the minimum MSRP provider revenue is $1/month per Bucket (an 8MB allocation), then technically anything we charge under $1/month per Bucket is profitable for the service provider.

    Using Membucket does not raise the cost of your server. It brings in additional revenue that otherwise would not be captured.

    We will only state at this time that our price will be far less than the minimum MSRP of $1/Bucket per month, leaving plenty of that profit to the provider.

    We are confident that guaranteed profit is affordable.

  • cyberpersons Member

    @willie said:
    @cyberpersons I still don't understand about Litespeed caching. Apache has mod_cache, is that different?

    Theoretically the same, but by that logic Apache is a web server and so are LiteSpeed and NGINX, all doing the same thing.

    But how they do it is what matters. LiteSpeed and OpenLiteSpeed are built with speed in mind, and combined with cache plugins they become a good combination, because at the end of the day cache invalidation also matters, and the LSCache plugins know exactly when to invalidate, so it is possible for your page to stay cached for its lifetime.

    Then we have other PHP-side cache plugins, which you can use with Apache too, but they still involve PHP, which cannot be as fast as a server-level cache.

  • CrossBox Member, Patron Provider

    I see that you're targeting shared hosts but don't all of them these days have CloudLinux with cageFS enabled?

    "cagefsctl --addrpm" in combination with a simple daemon service (written in Python or Golang) which handles the logic of the end user creating/deleting and starting/stopping the service and rebooting users' defined services if the server gets rebooted can make your plugin not so much needed :(. Actually, we already implemented this for fun on one of our test servers. The daemon is written in Python and uses PM2 to automate things. We did this for fun, out of curiosity and to learn about some cool features that Igor and the guys at CloudLinux created. And the best thing is that all the services user creates will automatically be subjected to the CloudLinux's LVE resource limits and throttled if needed.

    The method I mentioned works not just for Memcached but for many other services like MongoDB and Redis. Redis is in my personal favorite and far better/more feature rich then Memcached. Heck, there is already a pretty popular WordPress Redis plugin if you care about WordPress so much.

    We use Redis in CrossBox intensively to store most of the users' hot cache data and it works like a beauty.

    P.S. Don't go around telling people that vanilla WP loads in over 2sec on an SSD at forums like this one, where most of the guys are tech orientated as they might get easily offended about how stupid you think they are :D

    Anyway, good luck with your product!
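    A very rough sketch of the CageFS approach described above, assuming a CloudLinux box with CageFS enabled; the Python/PM2 supervision piece is omitted, and the memory size and socket path are only illustrative:

      # As root: install memcached and expose its package inside CageFS
      yum install -y memcached
      cagefsctl --addrpm memcached
      cagefsctl --force-update

      # As the shared-hosting user (inside the cage): a per-user instance on a
      # unix socket, sized at 64MB here just as an example. LVE limits apply
      # automatically because the process runs under the user's own UID.
      memcached -d -m 64 -s ~/memcached.sock -a 0700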

  • eva2000 Veteran
    edited August 2018

    Josephd said: Membucket does not replace LiteSpeed or its specific caching mechanisms; Membucket sits at the application layer, increasing performance further. It is not our goal to say how much we improve someone else's infrastructure caching product; we are complementary caching layers, not competitors.

    There are many caching mechanisms out there, but all of them give zero granularity with which to sell and delegate in a single-server shared hosting environment, e.g. Varnish, SQL proxies, HAProxy, memcached and others.

    Membucket does not debunk other caching mechanisms, and our goal is not to say which mechanism is better than the others, except that Membucket is only comparable to memcached, because both provide caching for the executed application itself vs. post-process caching from Apache mod_cache, LiteSpeed, Varnish, etc.

    With that being said, we believe our version is far superior to memcached in functionality and security, and it is already productized to be sold as an addon/speed boost/upgrade to existing cPanel-based shared hosting end users who wish to give a nice jolt of performance to a slow site.

    The only way is to really show benchmarks/use cases for Membucket in comparison to alternative caching mechanisms. You'd probably also need to include instructions/tools so that users can reproduce those benchmarks/comparisons on their own too.

    Bit confused - is Membucket leveraging the memcached server, or something entirely different? Does Membucket have the same max key length limit that memcached has, at 250 bytes?

  • LeonDynamic Member

    I'm still confused about what this offers over and above LiteSpeed and LSCache on WordPress. Can anyone explain?

  • willie Member
    edited August 2018

    eva2000 said: The only way is to really show benchmarks/use cases for Membucket in comparison to alternative caching mechanisms.

    Well I think the comparison against raw wordpress is ok, since the proposition of this product (afaict) is that it's easier to set up than alternatives, not that it's faster. So you spend $X on licensing instead of burning $Y of your time deploying a FOSS solution. Of course that trade-off is impossible to evaluate until the OP is more forthcoming about the licensing costs. For it to be attractive (especially if you anticipate growth) you need X < Y.

    I do believe the concept of a $70/month shared hosting customer has to be seen as quite fanciful around these parts (even if it's a beautiful thought). "The product brings in more revenue" is a euphemism for hosts raising prices, which in this highly competitive market is hard to pull off. One of the most popular shared hosting plans around here is BuyShared's $5/year offer. I use it myself. Adding $1/month to that makes it $17/year, more than triple what it is now, so I can't imagine feeling tempted by it.

    I don't use wordpress though, so I'm not affected by its issues. My impression is that simple, low-traffic wordpress works ok because the total resources used are small; and larger high-traffic wordpress works ok because those sites are carefully optimized, so the worst wordpress horrors (plugin abuse etc) are at the middle levels. Maybe the main effect of caching is to bypass those plug-ins, but in that case why use them?

  • vovler Member

    @LeonDynamic said:
    I'm still confused about what this offers over and above LiteSpeed and LSCache on WordPress. Can anyone explain?

    It's used to cache pages in RAM instead of on disk. It offers a performance improvement over caching on disk, but very little if we are talking about SSDs.

    Theoretically it would improve the seek time of cached files; SSDs have a seek time of about 0.1ms. Even if you are serving 50 files and RAM had a 0ms seek time, that would only be an improvement of 5ms.

    The other thing that would improve is I/O, although that depends on the I/O limits of each shared host.

    For a 2MB webpage with a 2MB/s I/O limit, it COULD shave off a second of load time, but the real bottleneck is the network. Especially if you live far from the server, serving the page from RAM is pointless.

    If anyone wants to test the real speed of this, they can install WordPress with some caching plugin, find the directory where the cache is saved, and mount that directory into RAM (see the example after this comment).

    Here is a tutorial for that: https://www.scalescale.com/tips/nginx/mount-directory-into-ram-memory-better-performance/

    I'll test this with CyberPanel (OLS) to see how it performs.
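    A minimal version of that tmpfs trick, for anyone who wants to skip the tutorial (the account name, path and size are placeholders; tmpfs contents are lost on reboot):

      # Example only: mount a WordPress cache directory on tmpfs
      CACHE_DIR=/home/example/public_html/wp-content/cache
      mkdir -p "$CACHE_DIR"
      mount -t tmpfs -o size=256m,uid=$(id -u example),gid=$(id -g example),mode=0750 tmpfs "$CACHE_DIR"

      # To persist across reboots, add a matching line to /etc/fstab.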

  • FHR Member, Host Rep

    It's used to cache pages in RAM instead of on disk. It offers a performance improvement over caching on disk, but very little if we are talking about SSDs.

    Theoretically it would improve the seek time of cached files; SSDs have a seek time of about 0.1ms. Even if you are serving 50 files and RAM had a 0ms seek time, that would only be an improvement of 5ms.

    The other thing that would improve is I/O, although that depends on the I/O limits of each shared host.

    For a 2MB webpage with a 2MB/s I/O limit, it COULD shave off a second of load time, but the real bottleneck is the network. Especially if you live far from the server, serving the page from RAM is pointless.

    If anyone wants to test the real speed of this, they can install WordPress with some caching plugin, find the directory where the cache is saved, and mount that directory into RAM.

    Here is a tutorial for that: https://www.scalescale.com/tips/nginx/mount-directory-into-ram-memory-better-performance/

    I'll test this with CyberPanel (OLS) to see how it performs.

    You could try installing memcached and W3 Total Cache, configured with a memcached backend as well. I would be very interested in a Membucket vs W3TC+memcached comparison actually.
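    A minimal starting point for that comparison on a stock CentOS/cPanel box might look like this (the PHP extension package name is an assumption and varies by EasyApache PHP version; on some setups you would install the memcached PECL extension instead):

      # Local memcached instance (CentOS 7 style)
      yum install -y memcached
      systemctl enable --now memcached

      # PHP memcached extension for the cPanel/EA4 PHP build in use
      # (package name is version-dependent and assumed here, e.g. ea-php72-php-memcached)
      yum install -y ea-php72-php-memcached

      # Then set W3 Total Cache's page/object cache method to "Memcached"
      # and point it at 127.0.0.1:11211 in the plugin settings.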

  • MikePT Moderator, Patron Provider, Veteran
    edited August 2018

    @Josephd said:

    @FHR said:

    Josephd said: A Bucket is 8MB of RAM allocation.

    Can you tell me how you want to squeeze 4000 buckets on a 32GB server then?

    You are correct, that was a typo on my part, and will fix my post on that.

    Server RAM size:

    16GB RAM = 1000 max recommended buckets

    32GB RAM = 2000 max recommended buckets

    64GB RAM = 4000 max recommended buckets

    128GB RAM = 8,000 max recommended buckets

    @willie said:
    This thread is getting reasonable reception from members so I guess ok, but I have to say it comes across as a bit spammy. And the questions and objections seem well taken. Conventional wisdom about Wordpress these days is that the bad slowdowns come from overuse of poorly written plugins, but Wordpress can be responsive with careful configuration and plenty of sites do that.

    Also, memcached was what, a 2008 thing, to bypass HDD latency before SSD was everywhere? So that 1.5s to 300ms speedup on an SSD system makes me suspicious of the mysql config and stuff like that.

    All of our lab tests were on full vanilla stack installs, CentOS, CloudLinux, cPanel, Default EasyApache profile with PHP 5.7.

    As stated in our Q&A, pulling objects from main memory is still 20x faster than pulling straight from an SSD.

    Litespeed it seemed to me (maybe I'm wrong) addresses a different issue, which is to support apache-style configuration files (.htaccess) while keeping an nginx-like concurrency strategy. So it's more about features than performance.

    Finally if I were running a site whose performance mattered, it wouldn't occur to me to use shared hosting in the first place. I do use shared hosting myself, but only for some low traffic personal pages.

    We work alongside any web server, including LiteSpeed.

    The default EasyApache configuration isn't tweaked and doesn't benefit your benchmarks. It's still using prefork IIRC, which is being deprecated if it hasn't been already.

    Other than that, testing this on PHP 5.7 while making such claims for speed improvement is ridiculous. Do test it with MPM event or worker, PHP 7.* and MariaDB (quick checks are sketched after this comment).

    I do not see any potential in this product, to be honest.
    And considering that we have our own infrastructure, RAM, etc., it's pretty expensive. It would make sense if you hosted the memcached servers yourself, globally dispersed, etc. - not when we use our own resources and pay for the RAM, which is already expensive. I just don't see this working. The 1.5s to 300ms claim on a default EasyApache config, PHP 5.7 and I assume MySQL (though the latest version is awesome)... meh.
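    For reference, a quick way to check what a box is actually running before benchmarking (assumes httpd and php are on the PATH; on cPanel the per-account PHP handler can differ from the CLI version):

      # Which MPM is Apache actually using (prefork vs event/worker)?
      httpd -V | grep -i 'Server MPM'

      # Which PHP version is installed on the CLI?
      php -v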

  • willie Member

    1) It looks like the ramdisk made almost no difference?

    2) Is that 2.4 sec for 84 requests, i.e. about 30 msec each? That's pretty good. Or does it mean the page has 84 resources (little gifs etc)?

    3) I don't really see why a ramdisk/tmpfs should make much difference given that the files should be in the linux kernel cache after the first load, and kept there if loaded with any frequency.

    4) Both tmpfs and the kernel cache on the local machine seem way saner to me than memcached and access through a socket. I always thought the idea of memcached was to have a farm of 100s or 1000s of memcached servers so your application could have LAN access to terabytes of ram. But if it's just a few GB, these days that's nothing and you don't need remote servers for it.

  • qtwrk Member
    edited August 2018

    @Josephd said:

    Vanilla Wordpress v4.9.5 in php7 loads the index.php in 2.3 seconds on cPanel from an SSD. Running the same site with a minimal Membucket install, it began loading under 300ms.

    With all due respect, I have a different opinion about this part.

    So I did a quick test:

    Host: netcup RS2000 BF edition, 8 cores + 12GB RAM + 60GB on RAID10

    Tester: netcup VPS 500 BF edition, 2 cores + 2GB RAM

    Both servers are in the same DC and the connection between them should be 1Gbps.

    I installed CyberPanel and created two WordPress 4.9.8 sites on it, one with the Redis full-page cache and the other with LiteSpeed Cache - I saw someone mention it earlier.

    I didn't test non-cached - who doesn't use a cache nowadays anyway...

    So I ran ab -n 1 -c 1 -H "Accept-Encoding: gzip" https://myurl1/ 5 times; results:

    0.010, 0.005, 0.007, 0.010, 0.011 on localhost, average = 0.0086 seconds.

    0.013, 0.012, 0.018, 0.015, 0.015 on the second node, average = 0.0146 seconds.

    The above is for the Redis cache; now LiteSpeed Cache, same command, 5 times.

    0.004, 0.006, 0.004, 0.003, 0.004 on localhost, average = 0.0042 seconds.

    0.007, 0.007, 0.005, 0.006, 0.007 on the second node, average = 0.0064 seconds.

    Now I'll run -n 1000 with -c 100 (full command after this comment).

    1.343, 1.263, 1.279, 1.337, 1.337 on localhost, average 1.3118 seconds.

    1.446, 1.379, 1.267, 1.374, 1.352 on the second node, average 1.3636 seconds.

    For that run I instantly got several dozen PHP processes showing up - as this is a PHP-level cache, each request goes to PHP.

    Now let's go to LiteSpeed Cache.

    1.065, 1.148, 1.128, 1.104, 1.048 on localhost, average 1.0986 seconds.

    1.105, 1.208, 1.124, 1.154, 1.045 on the second node, average 1.1272 seconds.

    And during this round, I only saw OpenLiteSpeed processes eating very little CPU.

    As @vovler said, if it's an SSD, the speed is pretty close - not even humanly noticeable - but the resource usage is a totally different story.

    My point is, no matter whether it's LiteSpeed Cache, Varnish or whatever, a full-page cache that does NOT need PHP should perform better than a PHP-based cache, as invoking PHP is costly in resources.

    It's a simple, brutal, not-really-scientific benchmark, but I think it proves my point...
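    For anyone reproducing the concurrent run, the full command would be along these lines (https://myurl1/ is the placeholder URL used above):

      # 1000 requests at concurrency 100, gzip accepted - same flags as the single-request runs
      ab -n 1000 -c 100 -H "Accept-Encoding: gzip" https://myurl1/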

  • Josephd Member
    edited August 2018

    @willie said:
    1) it looks like the ramdisk made almost no difference?

    2) is that 2.4 sec for 84 requests, i.e. about 30 msec each? That's pretty good. Or does it mean the page has 84 resources (little gifs etc)?

    GTmetrix was one of the screenshots; I recognized it.

    They take the average across all of the requests, so the pages are loading in 1.6s and 2.4s.

    3) I don't really see why a ramdisk/tmpfs should make much difference given that the files should be in the linux kernel cache after the first load, and kept there if loaded with any frequency.

    This is the type of caching that LiteSpeed and Apache's mod_cache offer: they load the files into a RAM cache, and they're not very performant, as everyone can see from the screenshots.

    Also, a couple of other notes on RAM disks:

    1. Linux has a default RAM disk maximum of 16; you can tweak it up to 128, but it gets very unstable when you do. It's also a far cry from the 10,000 instances of Membucket we've tested on a single server.

    2. Let's say a RAM disk locks up: you will have to reboot the server to get that customer back online. Membucket runs all its processes in userspace, with the ability to restart caching services without disrupting neighbors.

    @FHR, take note here^, because W3 Total Cache's core operations create tiny files of pre-processed PHP pages, so it will perform roughly the same. But feel free to try it and post results further proving the point.

    We encourage it :-)

    4) Both tmpfs and the kernel cache on the local machine seem way saner to me than memcached and access through a socket. I always thought the idea of memcached was to have a farm of 100s or 1000s of memcached servers so your application could have LAN access to terabytes of ram. But if it's just a few GB, these days that's nothing and you don't need remote servers for it.

    I still fail to see the point you've brought up 3 times now. Shared hosting customers aren't interested in 1,000s of GB of RAM. If they were, they wouldn't be on shared hosting.

    They're just interested in getting their site to load faster, and if +$1/month can do that, they will pay it.

    Remember that most end users cannot tell the difference between 1MB and 1TB other than the device they most associate with those terms.

    And if it's a memory Bucket that does the job, then 1 Bucket it is. The 8MB figure is for the provider, so they can estimate how many they can sell on a box.

    Also, accessing memcached over the network is guaranteed to be slower, for 2 reasons (illustrated after this comment):

    1. All requests go through the network stack.
    2. All requests are now slowed down by network latency.

    Thank you for posting this.

    I recommend trying a comparison with and without Membucket, which is why we're offering a Free BETA.
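    To make the local-vs-network point concrete, these are the two ways a memcached-style daemon is typically exposed (the memory size, user and socket path are only illustrative):

      # Over TCP: every request traverses the network stack, even on localhost
      memcached -d -u nobody -m 64 -l 127.0.0.1 -p 11211

      # Over a unix domain socket: local clients only, no TCP/IP overhead
      memcached -d -u nobody -m 64 -s /var/run/memcached/memcached.sock -a 0766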

  • mksh Member
    edited August 2018

    @Josephd said:
    1. Linux has a default RAM disk maximum of 16; you can tweak it up to 128, but it gets very unstable when you do. It's also a far cry from the 10,000 instances of Membucket we've tested on a single server.

    Why would you need 128 RAM disks? Maybe I'm missing something here, but wouldn't a single one suffice?

    2. Let's say a RAM disk locks up: you will have to reboot the server to get that customer back online. Membucket runs all its processes in userspace, with the ability to restart caching services without disrupting neighbors.

    Why not use tmpfs then?

  • Josephd Member
    edited August 2018

    @mksh said:

    @Josephd said:
    1. Linux has a default RAM disk maximum of 16; you can tweak it up to 128, but it gets very unstable when you do. It's also a far cry from the 10,000 instances of Membucket we've tested on a single server.

    Why would you need 128 RAM disks? Maybe I'm missing something here, but wouldn't a single one suffice?

    I guess to start with, RAM disks are not that performant for this type of operation, as shown in the screenshots.

    Anyone loading slower than 1.5s per page is deprioritized by search engine indexes, and to overcome that you have to pay $$$ in advertising.

    Membucket is currently making an offer to shared hosters looking to sell performance addons/upgrades to their customers. Those customers are densely packed onto a system, because as a provider you want to keep your costs low.

    Shared hosting does not mean VPS + cPanel + a stack of caching layers; that's VPS hosting.

    2. Let's say a RAM disk locks up: you will have to reboot the server to get that customer back online. Membucket runs all its processes in userspace, with the ability to restart caching services without disrupting neighbors.

    What's wrong with tmpfs?

    tmpfs is great, but it's slightly slower than a raw RAM disk because of the filesystem virtualization overlay.
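    For anyone who wants to compare the two approaches being discussed here, a quick sketch (sizes and mount points are arbitrary):

      # Classic block-device RAM disk (brd): devices are fixed at module load time
      # (rd_nr = number of /dev/ram* devices, rd_size in KB - 1048576 KB = 1GB here)
      modprobe brd rd_nr=1 rd_size=1048576
      mkfs.ext4 /dev/ram0
      mkdir -p /mnt/ramdisk && mount /dev/ram0 /mnt/ramdisk

      # tmpfs: no block device or mkfs needed, grows on demand up to its size cap
      mkdir -p /mnt/tmpfs && mount -t tmpfs -o size=1g tmpfs /mnt/tmpfs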
