
Membucket.io BETA Registration ANNOUNCEMENT - CALLING Shared Hosts - Site Accelerator cPanel Plugin


Comments

  • JosephdJosephd Member
    edited August 2018

    @qtwrk said:

    @Josephd said:

Vanilla WordPress v4.9.5 on PHP 7 loads the index.php in 2.3 seconds on cPanel from an SSD. Running the same site with a minimal Membucket install, it loads in under 300 ms.

    With all due respect, I have a different opinion about this part.

    ...

It's a simple, brutal, not-really-scientific benchmark, but I think it proves my point ...

    Have you registered for the BETA and tested Membucket out yet?

    We're pretty confident that will finally prove the point.

    After all, we're giving it away for FREE right now.

  • @Josephd said:

    @qtwrk said:

    @Josephd said:

Vanilla WordPress v4.9.5 on PHP 7 loads the index.php in 2.3 seconds on cPanel from an SSD. Running the same site with a minimal Membucket install, it loads in under 300 ms.

    With all due respect, I have a different opinion about this part.

    ...

It's a simple, brutal, not-really-scientific benchmark, but I think it proves my point ...

    Have you registered for the BETA and tested Membucket out yet?

    We're pretty confident that will finally prove the point.

    After all, we're giving it away for FREE right now.

Yeah, I'd like to try it out, but I am not a hosting provider :(

  • williewillie Member
    edited August 2018

Josephd said: This is the type of caching that LiteSpeed offers, and Apache's mod_cache; they load the files into RAM cache and they're not very performant, as everyone can see from the screenshots.

    @qtwrk was getting around 1 millisecond per request which seems pretty good to me, unless I misunderstood something. I didn't see in those graphs where PHP used more resources though. I can believe it, but I didn't see it in those numbers.

    Josephd said: I still fail to see the point you've brought up 3 times now... Shared hosting customers aren't interested in 1,000s of GB of RAM.

    I just mean that the use case for memcached that I'm familiar with involves sharding a ram cache across a lot of physical servers. Since you're not doing that, but you mentioned that you're using memcached anyway, I'm wondering why you chose that approach instead of letting the system handle things.

Here's a test I ran on a 2GB Hetzner cloud VM just now, counting the files in the /usr tree of a Debian 9 system with a bunch of stuff installed. It takes 2.4 seconds the first time I run it, since it has to access the SSD to find and stat all the files:

    ~$ time du -a /usr|wc
      40801   81604 1915542
    
    real    0m2.427s
    user    0m0.408s
    sys     0m1.452s
    

    When I run it again right away, it's 6x faster, since all the directories, inodes etc. are cached by the kernel:

    ~$ time du -a /usr|wc
      40801   81604 1915542
    
    real    0m0.402s
    user    0m0.120s
    sys     0m0.276s
    

    It's scanning 3008 directories to find those files, fwiw:

    ~$ find /usr -type d | wc
       3008    3008  106833
    

    Similar experiences have told me in the past that the linux file cache works really well. At most you might want to keep it warm by having something access the files of interest a few times an hour. I ran a cron job for that as part of a search engine a while back, periodically reading the hot parts of the search index so they would be in cache when a query arrived. It made a big difference since the particular application frequently needed to retrieve large result sets.
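The cache effect described above can be sketched against a throwaway directory rather than a real docroot (the paths and file counts here are made up purely for illustration):

```shell
# Build a small tree, then scan it twice; the repeat scan is served from
# the kernel page cache, so it's much faster. Absolute timings vary by hardware.
d=$(mktemp -d)
for i in $(seq 1 50); do echo "data $i" > "$d/f$i"; done
time du -a "$d" | wc -l      # first pass may touch the disk
time du -a "$d" | wc -l      # repeat pass hits cached dentries/inodes
rm -rf "$d"
```

On a freshly booted machine the gap is much larger, as in the /usr numbers above.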

Also, you mention putting static assets like images in those RAM buckets, but a CDN sounds even more promising for that.

  • mkshmksh Member
    edited August 2018

    @Josephd said:

    @mksh said:

    @Josephd said:
1. Linux has a default RAM disk max of 16; you can tweak it to do 128, but it gets very unstable when you do. Also, it's a far cry from the 10,000 instances of Membucket we've tested on a single server.

Why would you need 128 RAM disks? Maybe I'm missing something here, but wouldn't a single one suffice?

I guess to start with: RAM disks aren't that performant for this type of operation, as the screenshots proved.

Any site loading slower than 1.5 s per page gets deprioritized by search engine indexes, and to overcome that you have to pay $$$ in advertising.

Membucket is currently making an offer to Shared Hosters looking to sell performance addons/upgrades to their customers. Those customers would be densely packed onto a system, because as a provider you want to keep your costs low.

Shared Hosting does not mean VPS + cPanel + a stack of caching layers; that's VPS hosting.

1. Let's say a RAM disk locks up: you will have to reboot the server to get that customer back online. Membucket runs all processes in userspace, with the ability to restart caching services without disrupting neighbors.

    What's wrong with tmpfs?

tmpfs is great, but still slightly slower than a raw RAM disk because of the filesystem virtualization overlay.

So what you are saying is basically that your lookup table is faster than FS overhead. While I don't doubt that in general, I highly question whether that's a place where you can achieve any kind of noticeable performance gain. Might be you have better cache logic, but that's pretty much it.

Also, your reply is misleading. If you are talking about @vovler's screenshots, I am pretty sure none of them used a raw ramdisk, so pointing out FS overhead as a drawback of tmpfs is bullshit. Besides, those tests were run vs. SSD, which also has FS overhead. So all in all, not making a lot of sense.

@Josephd I will gladly test it again, but tell me how you obtained those numbers.

Provider, server configuration and WordPress configuration, so I can re-create your tests.

  • FHRFHR Member, Host Rep

    @Josephd
If you want to market something, you'd better write installation instructions for it.

    I registered and followed everything on site. Packages installed. Result:
Nothing changed in cPanel/WHM, and no extra service is running or listening.

  • williewillie Member
    edited August 2018

    tmpfs and ramfs should work exactly the same way, except tmpfs contents can be swapped (but if you have enough ram this shouldn't happen, especially with a bit of ongoing warming) and you can set a maximum size of a tmpfs. By "ramdisk" I figured everyone meant tmpfs or possibly ramfs, not doing something silly like creating an actual disk image in a chunk of ram. Tmpfs and ramfs should have almost no fs overhead. But I'd still want to see tests before believing that they're significantly faster than just using the normal on-disk file system, once the contents are in the kernel cache.
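Incidentally, there is usually no need to create a ramdisk just to experiment: most Linux systems already mount a tmpfs at /dev/shm, so as a rough sketch you can measure RAM-backed file I/O there (the file name below is a throwaway):

```shell
# /dev/shm is normally tmpfs (or shm inside containers), so files written
# there live in RAM and are subject to the swapping caveat above.
df -h /dev/shm                                # shows the tmpfs mount
dd if=/dev/zero of=/dev/shm/blob bs=1M count=16 status=none
ls -lh /dev/shm/blob                          # 16M file, RAM-backed
rm -f /dev/shm/blob
```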

My laptop has a crappy old mSATA SSD on a 1.5 Gbps interface, IIRC. The SSD has a certain directory with around 240MB of text files (around 9k files). Running "grep -r" for a string in this directory takes around 5 sec with the cache cold and 0.25 sec with it warm. So the warm speed is around 1GB/s and 36k file opens/sec, and most of the time used is user CPU usage by grep. I just don't see significant fs overhead even without any explicit ramdisk.

    "find . -type f -print0 |xargs -0 cat > /dev/null" run in the disk cache directory every few minutes from a cron job should be enough to keep the kernel ram cache warm, on a node with enough ram. Once warm, repeating it takes 70 msec for the 240MB directory mentioned above, over a million file operations per second on this old i5 laptop. Doing that might be poor behaviour from an end user on a cheap shared plan since it hogs ram, but maybe it's something the host could do instead and charge extra for, as a low tech alternative to this fancy membucket module.

    So I have to ask the wordpress users here: is there significant difficulty for an end user wanting to enable a caching module/plugin for wordpress under typical existing shared hosting, without host intervention?

  • @willie said:
    So I have to ask the wordpress users here: is there significant difficulty for an end user wanting to enable a caching module/plugin for wordpress under typical existing shared hosting, without host intervention?

LiteSpeed enables the host to install the LSCache plugin on all WordPress installations, regardless of whether the user wants it.

  • @vovler said:

    @willie said:
    So I have to ask the wordpress users here: is there significant difficulty for an end user wanting to enable a caching module/plugin for wordpress under typical existing shared hosting, without host intervention?

LiteSpeed enables the host to install the LSCache plugin on all WordPress installations, regardless of whether the user wants it.

Admin can choose to enable it or not on certain vhosts, so I feel it's more like whether the admin wants it, not whether the user wants it...

  • @qtwrk said:

    @vovler said:

    @willie said:
    So I have to ask the wordpress users here: is there significant difficulty for an end user wanting to enable a caching module/plugin for wordpress under typical existing shared hosting, without host intervention?

LiteSpeed enables the host to install the LSCache plugin on all WordPress installations, regardless of whether the user wants it.

Admin can choose to enable it or not on certain vhosts, so I feel it's more like whether the admin wants it, not whether the user wants it...


    So you said exactly what I said, I'm confused

@vovler said: regardless of whether the user wants it

  • @vovler said:

    @qtwrk said:

    @vovler said:

    @willie said:
    So I have to ask the wordpress users here: is there significant difficulty for an end user wanting to enable a caching module/plugin for wordpress under typical existing shared hosting, without host intervention?

LiteSpeed enables the host to install the LSCache plugin on all WordPress installations, regardless of whether the user wants it.

Admin can choose to enable it or not on certain vhosts, so I feel it's more like whether the admin wants it, not whether the user wants it...


    So you said exactly what I said, I'm confused

@vovler said: regardless of whether the user wants it

My point is, there is a difference between the LiteSpeed plugin disregarding the user's wishes and the ADMIN disregarding the user's wishes...

  • @vovler said:
@Josephd I will gladly test it again, but tell me how you obtained those numbers.

Provider, server configuration and WordPress configuration, so I can re-create your tests.

    The provider doesn't matter. We're not here to test out providers (mileage may obviously vary).

Earlier we said that the latest vanilla WordPress takes approximately 2.1 s per page from SSD, and we were jumped on by hecklers as if we were trying to lie to people, yet your screenshots proved what we said.

You even used RAM disks and LiteSpeed caching and still obtained roughly the same results as our raw-from-SSD number.

    Test it anywhere, we're confident the numbers will prove themselves out.

It is already obvious that if we post the numbers nobody will believe them anyway, lol.

    @FHR said:
    @Josephd
If you want to market something, you'd better write installation instructions for it.

    I registered and followed everything on site. Packages installed. Result:
Nothing changed in cPanel/WHM, and no extra service is running or listening.

Remember, this is a BETA.

And FREE maybe isn't totally free: the real BETA test is our portal, registration process, and guided-tour verbiage. Thank you again for the help, and remember to submit a ticket if something doesn't seem to act quite right. We are dedicated to fixing these small registration issues and getting you testing as quickly as possible.


    Post Registration Instructions

    Since the guided instructions in the portal are not standing out as well as we originally thought, I'm posting them here and will edit the thread with the following.

    Once you have registered and successfully logged into the portal YOU MUST STILL DO THE FOLLOWING:

    1) Please go to Address Spaces and add the IP Address of your server like this:

e.g.,

192.168.1.99/32 - for a single IP

192.168.1.0/18 - or an actual network of IPs

    Any server within the IP space you define will authorize against YOUR account.

    2) Restart Membucket on your server for it to register.

    3) In the "Servers" tab here in the portal, choose your server and select "Authorize".

    This should get you up and running!

  • FHRFHR Member, Host Rep
    edited August 2018

    Josephd said: 2) Restart Membucket on your server for it to register.

    See, that's the problem. I installed the two yum packages of yours (server, client) and authorized the IP range.

    Package membucket-client-0.7.1-1.el7.x86_64 already installed and latest version
    Package membucket-server-0.7.1-1.el7.x86_64 already installed and latest version
    

    How can I restart the Membucket service, if it didn't install any service in the first place?

    # systemctl | grep -i membucket
    #
    
  • JosephdJosephd Member
    edited August 2018

    @FHR said:

    Josephd said: 2) Restart Membucket on your server for it to register.

    See, that's the problem. I installed the two yum packages of yours (server, client) and authorized the IP range.


    How can I restart the Membucket service, if it didn't install any service in the first place?


    Please open a ticket so our developers can help you directly.

    and

    systemctl status membucketd

If systemctl does not list our service, you can use systemctl daemon-reload to have it rescan for new unit files.
You can then use the systemctl start membucketd command to start the Membucket daemon for the first time.

    @qtwrk it looks like you need to follow these steps as well

  • @Josephd said:

    The provider doesn't matter. We're not here to test out providers (mileage may obviously vary).

Earlier we said that the latest vanilla WordPress takes approximately 2.1 s per page from SSD, and we were jumped on by hecklers as if we were trying to lie to people, yet your screenshots proved what we said.

You even used RAM disks and LiteSpeed caching and still obtained roughly the same results as our raw-from-SSD number.

    Test it anywhere, we're confident the numbers will prove themselves out.

It is already obvious that if we post the numbers nobody will believe them anyway, lol.

    Provide:

    • Server specs and location
    • Server configuration (if apache was used, indicate what MPM, PHP version, etc)
    • Wordpress version
    • Wordpress theme
    • ALL Wordpress plugins used, especially if any caching plugin
    • What tool did you use to get TTFB and overall load time
    • If the testing tool was self-hosted, indicate the server location
    • The results you got with your testing
• Proof that increasing buckets increases performance.

This should be in your initial post.

    Litespeed for example doesn't make claims without showing you the exact benchmark process they went through, and their results.
    https://www.litespeedtech.com/benchmarks/wordpress

If you don't provide the things above, it will make your product look scammy.

  • @Josephd said:

    @qtwrk it looks like you need to follow these steps as well

Tried that, didn't work; ticketed it in, awaiting a reply.

  • williewillie Member
    edited August 2018

    Vovler can you explain the 2nd screen shot: is it really taking 2.3s to serve one page? What is the number 84? Total number of retrievals including little gifs etc.? Where is the browser, what is the network connection, how long does it take to transfer a single 1.37MB jpeg? Can you post a timing picture (like in https://developer.mozilla.org/en-US/docs/Tools/Network_Monitor) of the page load stages? Thanks.

  • @willie said:

    Vovler can you explain the 2nd screen shot: is it really taking 2.3s to serve one page? What is the number 84? Total number of retrievals including little gifs etc.? Where is the browser, what is the network connection, how long does it take to transfer a single 1.37MB jpeg? Can you post a timing picture (like in https://developer.mozilla.org/en-US/docs/Tools/Network_Monitor) of the page load stages? Thanks.

The second image's test was run on gtmetrix.com; it is the load of the entire page.
I don't know why the number of requests differs between Pingdom and GTmetrix.

    Location & browser
    https://i.gyazo.com/760f8e57871506d84af05af7aad9f4da.png

    Gtmetrix Waterfall without Ramdisk
    https://i.gyazo.com/dd321c6821e5f41d398cd2b2b284752a.png

    Gtmetrix Waterfall with Ramdisk
    https://i.gyazo.com/b0aac27fe5617df8751aad6a158f56a8.png

  • FHRFHR Member, Host Rep

    Once I manage to get Membucket to work (waiting on support), I'm posting benchmarks.
    Going to compare stock WooCommerce, WooCommerce + W3 Total Cache disk caching, WooCommerce + Membucket.

• vovler said: The second image's test was run on gtmetrix.com

That was a test between a headless browser in Vancouver, CA and a server in Germany, i.e. with 150-ish ms of ping between the browser and the server? That surely affects the round trips needed to fetch the resources.

I'll see if I can set up a test server with mod_cache later, but I can't do it right now because of too much going on here at the moment.

  • @vovler said:

    @willie said:

    Vovler can you explain the 2nd screen shot: is it really taking 2.3s to serve one page? What is the number 84? Total number of retrievals including little gifs etc.? Where is the browser, what is the network connection, how long does it take to transfer a single 1.37MB jpeg? Can you post a timing picture (like in https://developer.mozilla.org/en-US/docs/Tools/Network_Monitor) of the page load stages? Thanks.

The second image's test was run on gtmetrix.com; it is the load of the entire page.
I don't know why the number of requests differs between Pingdom and GTmetrix.

    Location & browser
    https://i.gyazo.com/760f8e57871506d84af05af7aad9f4da.png

    Gtmetrix Waterfall without Ramdisk
    https://i.gyazo.com/dd321c6821e5f41d398cd2b2b284752a.png

    Gtmetrix Waterfall with Ramdisk
    https://i.gyazo.com/b0aac27fe5617df8751aad6a158f56a8.png

If you're doing a lot of page load speed testing, instead of the manual method with a lot of screenshots, you can use my gitools.sh script. It supports querying the Google PageSpeed Insights, GTmetrix and Webpagetest.org APIs from an SSH command line, and you get text-based results along with links to the online results (for WPT), as well as being able to send results to Slack channels: https://github.com/centminmod/google-insights-api-tools

You can even set up scheduled cronjob runs (https://github.com/centminmod/google-insights-api-tools#cronjob-scheduled-runs). I usually set up a weekly cronjob to send results to a custom Slack channel, which is searchable.

    Example output https://community.centminmod.com/threads/pagespeed-testing-via-apis-google-pagespeed-insights-gtmetrix-webpagetest-org.15103/

FYI, for page load speed I'd use Webpagetest, as you can choose from a larger selection of geographical regions for the test server as well.

  • @eva2000, that is pretty cool, but do you have any idea what is going on with that slow page load? I think of WP plugins doing silly stuff like network queries on every view, e.g. to access a geolocation server. So caching basically routes around that. Who knows though.

  • @willie said:
    @eva2000, that is pretty cool, but do you have any idea what is going on with that slow page load? I think of WP plugins doing silly stuff like network queries on every view, e.g. to access a geolocation server. So caching basically routes around that. Who knows though.

The only way to really know is to compare the page loads' respective waterfalls and requests. Webpagetest.org does this better for comparisons. See my WPT how-to, including how to compare two or more WPT test results via the history log: https://community.centminmod.com/threads/how-to-use-webpagetest-org-for-page-load-speed-testing.13859/

GTmetrix has many test servers within the same region AFAIK, so when you test a URL, you may not be hitting the exact same test server on GTmetrix's end. Same with the Webpagetest.org online test tool. At least for my gitools.sh, for most Webpagetest.org locations, especially Dulles, VA, I coded it to hit the same WPT server via the WPT API so results are more comparable.

For example, here are all the WPT test servers listed: https://www.webpagetest.org/getTesters.php - notice some locations, like Dulles, VA, have many test servers in their rotation.

If you're not hitting the exact same test server, there will be variance in page load times due to different servers' resource utilisation and demand at that particular point in time. Even if you hit the same test server there may be variables too. So with a WPT test, you can usually do 3 to 9 runs on the web page side and take the median result for better averages.

  • FHRFHR Member, Host Rep

Here are the promised test results; I hope they're somewhat readable.

TL;DR:
WordPress performs terribly without caching, as expected. Disk caching on SSD clearly wins. Membucket doesn't perform that badly.

Testing methodology: hammered using ApacheBench (ab). This is the best testing method for comparing something like this, since we want to test just the speed of the site generator itself (WordPress), not the static file serving capabilities of the web server (as WebPageTest and others would).

    Final notes: I wanted to test their native WP plugin as well, but it does not seem to work at the moment.

I also did some testing with pure memcached and it was a tiny, tiny bit slower; that is most probably caused by the communication method used though: TCP vs local socket.

  • JosephdJosephd Member
    edited August 2018

    @FHR said:
Here are the promised test results; I hope they're somewhat readable.

TL;DR:
WordPress performs terribly without caching, as expected. Disk caching on SSD clearly wins. Membucket doesn't perform that badly.

Testing methodology: hammered using ApacheBench (ab). This is the best testing method for comparing something like this, since we want to test just the speed of the site generator itself (WordPress), not the static file serving capabilities of the web server (as WebPageTest and others would).

    Final notes: I wanted to test their native WP plugin as well, but it does not seem to work at the moment.

I also did some testing with pure memcached and it was a tiny, tiny bit slower; that is most probably caused by the communication method used though: TCP vs local socket.

    Thank you for following up with some benchmarks.

We are fixing a bug with the latest release of WordPress. While we're waiting for that next release, could you run a test with W3 + Membucket?

W3 caches preprocessed PHP pages, similar to an alpha feature we have in the works. Our goal at this time isn't to take on W3 and other caching types, but rather to enhance them, since they work at different layers of the stack.

Once this round of bug fixes is released, we will follow it up with much more detailed benchmarks as well.

So far the responses from the community have been great, so I would like to extend a big thank you to everyone who has participated, and welcome to any newcomers!

FHR said: I also did some testing with pure memcached and it was a tiny, tiny bit slower; that is most probably caused by the communication method used though: TCP vs local socket.

If you run memcached on the local server it should bypass the TCP stack, but there's still all the copying of data through the socket. I'd be interested in a test with mod_socache_shmcb (which uses an mmap'd shared-memory segment as a cache, I think) if that fits in with how the rest of the stack works.
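For context, mod_socache_shmcb provides a shared-memory cache that other Apache modules consume, and in Apache 2.4 mod_cache_socache can point the HTTP cache at it. A minimal httpd.conf fragment might look like this (module paths are illustrative, and whether it fits a given stack depends on the modules involved):

```
# Load the shm cyclic-buffer cache provider and the socache-backed
# HTTP cache module (Apache 2.4)
LoadModule socache_shmcb_module modules/mod_socache_shmcb.so
LoadModule cache_socache_module modules/mod_cache_socache.so
CacheEnable socache /
CacheSocache shmcb
```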

So after a few days' struggle, I still couldn't make it work with WordPress, but I think I have gathered enough information.

It requires the memcache extension, which is not really smooth to compile on PHP 7.x.

So just in case, I installed memcached and set it to unix socket mode,

then connected to it and ran a stats command; the version info especially interests me...
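For anyone reproducing this, a sketch of running memcached on a unix socket and pulling its stats (the socket path and memory cap are arbitrary, and nc must be a variant that supports -U):

```
# -s: bind to a unix socket instead of TCP, -a: socket permissions,
# -m: memory cap in MB, -d: daemonize
memcached -d -s /tmp/memcached.sock -a 0700 -m 64
printf 'stats\r\nquit\r\n' | nc -U /tmp/memcached.sock | grep -i version
```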

Two different applications have the exact same output format and the exact same version number; this has to be a coincidence, right?

I can't make it work with WordPress. The plugin says it's connected, and checking stats I can see connections come in and traffic occur, but it couldn't cache any data. The average ab time on localhost is as high as 0.2x seconds, whereas LiteSpeed Cache is around 0.005 seconds.

  • FHRFHR Member, Host Rep

Truth is, file-based caching or server-side caching (e.g. LSCache) is always going to be faster than any memcached-based caching.
Where memcached belongs is in distributed environments, where you have, for example, 5 nodes behind a load balancer hitting one common cache.
