Comments
Your installation is broken somewhere.
No package membucket-server available.
No package membucket-client available.
Force-installed them with "yum install -y URL"; nothing shows up under WHM Plugins, and I still can't "Authorize Server": https://i.gyazo.com/3a7d8eb4be695f884ce7657a902697bd.png
Still, the lowest-priced license for a dedicated server is $32/month, and it's not only shared hosting environments: individuals also heavily use LiteSpeed for their Magento or Prestashop stores, etc.
It's not only Apache-style configuration but also a built-in cache module along with cache plugins. The built-in cache module removes the need for Varnish or any reverse-proxy-based caching software.
Combined with application-side cache plugins, it quickly improves performance. But the Apache drop-in replacement is only for LiteSpeed Enterprise, not OpenLiteSpeed. However, OpenLiteSpeed has the same LSCache module, just without ESI support.
For Membucket, I think it will take time for people to adopt it, and the learning curve for end users will be tough.
I opened a ticket for it many hours ago but didn't receive a response, and now the ticket is deleted/gone from the panel.
Very cool, we thought so too!
After the initial speed boost, this turns into a capacity issue: if you are evicting objects out of your cache, that will impede performance.
The only other option is upgrading to a different product line, such as a VPS or a dedicated server. Because a product like Membucket hasn't been available, the industry and customers have been forced down this path of upgrading into more than they want to manage.
Membucket addresses the capacity issue, thereby maintaining performance.
We have no such domain limitations, and a customer could purchase as many Buckets and Wells as your system can allocate.
Technically, a customer could share their Well across multiple sites, as long as they're accessed under the same Control Panel user.
We do limit the maximum size of Wells to around 16 Buckets, to alleviate high-density memory fragmentation.
I think we addressed this perception of a ceiling.
It will be a flat rate per Bucket per month as consumed, regardless of your pricing. This way you don't buy more than what you need, and there are no per-server licensing costs preventing you from making the product available to end users.
We want you to install it on all of your servers.
Our licensing fee will be a fraction of the minimum MSRP, but it is undecided at this time.
So it will definitely be less than $1, and you will definitely make money :-)
Our choice to impose an MSRP is explained here, and we should note that we were inspired by ESET, which has done the same.
Our Promise
Willie, this is a fair ask and we are working on doing exactly that.
As FHR said, even memcached can house approximately 60 pages of content per 1MB, which means that 1 Bucket can hold over 400 pages.
That would be pretty sufficient to house a small site.
In addition, each Well can hold up to 16 Buckets, allowing for over 7,000 pages of content per Well.
WordPress, as an example, has 32 cache groups built into the core of its code base, each of which could point to its own Well.
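Those capacity figures are simple multiplication. A quick sketch, taking the thread's ~60 pages per MB estimate and the 8MB Bucket / 16-Bucket Well sizes as givens:

```shell
# Figures quoted in this thread: ~60 cached pages per MB,
# 8MB per Bucket, up to 16 Buckets per Well.
pages_per_mb=60
mb_per_bucket=8
buckets_per_well=16

pages_per_bucket=$((pages_per_mb * mb_per_bucket))      # 480 -> "over 400 pages"
pages_per_well=$((pages_per_bucket * buckets_per_well)) # 7680 -> "over 7,000 pages"
echo "${pages_per_bucket} pages per Bucket, ${pages_per_well} pages per Well"
```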
@jetchirag Glad to hear that, now I don't have to waste my time trying to make it work.
After thinking about this further, I don't believe the claims about this product.
The only advantage over any normal caching solution is seek time and I/O versus a regular SSD, and even that will yield at most a 20-30ms improvement.
I recommend staying away from this.
So... 50% of server RAM for this? No, thanks.
Please open a ticket; our team is standing by to help.
We agree that the learning curve will be difficult for end users, mainly because most people do not understand the stack and every provider uses the term "caching" differently.
We sent you an email hours ago and have been waiting for a reply.
Please check the inbox for the email address you signed up with.
Thanks. I was watching the panel for a reply, and the ticket isn't available anymore; technical issue?
Anyway, I'll surely have a look tomorrow.
FHR, that is a mistaken assumption.
Memory is consumed as it is provisioned to the customer.
Not a problem! We look forward to helping you when you are available.
We fixed an issue with the yum repo package not listing. We noticed it was built with too specific a target, which caused the problem.
For a fraction of the MSRP, we need to free up 50% of the server memory?
If the total cost per 64GB host node (32GB is small these days) is above a significant chunk of the price of cPanel ($32/month, as someone said) or paid LiteSpeed (I don't know what that costs), it's likely to be a difficult sell around here. Maybe I'm wrong, but that's my guess. Where it says "low end" in LET, we really mean it ;-).
@cyberpersons I still don't understand LiteSpeed caching. Apache has mod_cache; is that different?
OK, I'm speaking from some ignorance since I've never run a WordPress server, and I do get the idea that WordPress is horrendous, but really, small WordPress sites as things are now aren't that slow, barring misconfiguration or slow plugins, afaict.
During the BETA, this is 100% Free to use, Free to sell, and Free to Benchmark.
Membucket will allocate RAM on demand, 8MB (1 Bucket) at a time, until the recommended maximum, or an administrator-defined maximum of your choosing, is reached.
We do not consume or pre-allocate 50% of your RAM.
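As a back-of-the-envelope check (assuming the 8MB-per-Bucket size and a 50%-of-RAM recommended ceiling, both mentioned in this thread), the maximum Bucket count for a hypothetical 64GB host node works out as:

```shell
# Hypothetical 64GB host node, 50% recommended ceiling, 8MB per Bucket
ram_mb=$((64 * 1024))           # 65536 MB total
ceiling_mb=$((ram_mb / 2))      # 32768 MB usable for Buckets
max_buckets=$((ceiling_mb / 8)) # 8 MB per Bucket
echo "${max_buckets} Buckets max"
```

That lands at 4096 Buckets, in the same ballpark as the ~4000-Bucket figure discussed elsewhere in the thread.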
Thanks, that is helpful clarification.
I think we're still waiting to find out how much 4000 buckets (32GB, 50% of a 64GB server) will cost in licensing. People likely want to know whether they will be able to afford the product before they spend any time testing it. I know you haven't yet decided the final number, but if it's above $1/GB as mentioned earlier, my guess is there will be considerable resistance to say the least. (OK, maybe wordpress users won't use that many buckets, but I'm used to memcached server farms using 1000s of GB of ram).
To put that amount in perspective, I'm currently paying around $25/month for an entire 32GB dedicated server (E3-1230v3 iirc), though that's an unusual bargain.
Membucket does not replace cPanel or LiteSpeed.
Membucket is a plugin for cPanel, controlled through cPanel or cli.
Membucket does not replace LiteSpeed or its specific caching mechanisms; Membucket sits at the application layer, increasing performance further. It is not our goal to say by how much we improve someone else's infrastructure caching product; we are complementary caching layers, not competitors.
There are many caching mechanisms out there, but all of them give zero granularity with which to sell and delegate in a shared-hosting, single-server environment: Varnish, SQLproxy, haproxy, memcached, and others.
Membucket does not debunk other caching mechanisms, and our goal is not to say which mechanism is better than the others, except that Membucket is only comparable to memcached, because both provide caching for the executed application itself, versus the post-process caching of Apache mod_cache, LiteSpeed, Varnish, etc.
With that being said, we believe our version is far superior to memcached in functionality and security, and it is already productized to be sold as an addon/speed boost/upgrade to existing cPanel-based shared hosting end users who wish to give a nice jolt of performance to a slow-performing site.
If the minimum MSRP provider revenue is $1/month per Bucket (an 8MB allocation), then technically anything under $1/month per Bucket is profitable for the service provider.
Membucket does not raise the cost of your server by using it. It does bring in additional revenue that would otherwise not be captured.
We will only state at this time that our price will be far less than the minimum MSRP of $1/Bucket per month, leaving plenty of that profit to the provider.
We are confident that guaranteed profit is affordable.
Theoretically the same, but if we take that line of debate, then Apache is a web server and LiteSpeed or NGINX do the same thing.
But how they do it is what matters. LiteSpeed and OpenLiteSpeed are built with speed in mind, and combined with cache plugins they become a good combination, because at the end of the day cache invalidation also matters, and the LSCache plugins know exactly when to invalidate, so your page can potentially stay cached for its whole lifetime.
Then there are PHP-side cache plugins, which you can use with Apache too, but they still involve PHP, which cannot be as fast as a server-level cache.
I see that you're targeting shared hosts, but don't all of them these days have CloudLinux with CageFS enabled?
"cagefsctl --addrpm", in combination with a simple daemon service (written in Python or Go) that handles the logic of the end user creating/deleting and starting/stopping the service, and restarting users' defined services if the server gets rebooted, can make your plugin not so necessary. Actually, we already implemented this for fun on one of our test servers. The daemon is written in Python and uses PM2 to automate things. We did this for fun, out of curiosity, and to learn about some cool features that Igor and the guys at CloudLinux created. And the best thing is that all the services a user creates are automatically subject to CloudLinux's LVE resource limits and throttled if needed.
The method I mentioned works not just for memcached but for many other services, like MongoDB and Redis. Redis is my personal favorite and is far better/more feature-rich than memcached. Heck, there is already a pretty popular WordPress Redis plugin, if you care about WordPress that much.
We use Redis intensively in CrossBox to store most of the users' hot cache data, and it works beautifully.
P.S. Don't go around telling people that vanilla WP loads in over 2s on an SSD on forums like this one, where most of the guys are tech-oriented, as they might easily get offended by how stupid you think they are.
Anyway, good luck with your product!
The only way is to really show benchmarks/use cases for Membucket in comparison to alternative caching mechanisms. You'd probably also need to include instructions/tools so that users can reproduce those benchmarks/comparisons on their own.
A bit confused: is Membucket leveraging the memcached server, or something entirely different? Does Membucket have the same maximum key length that memcached has, 250 bytes?
I’m still confused what this offers over and above Litespeed and LSCache on Wordpress. Can anyone explain?
Well I think the comparison against raw wordpress is ok, since the proposition of this product (afaict) is that it's easier to set up than alternatives, not that it's faster. So you spend $X on licensing instead of burning $Y of your time deploying a FOSS solution. Of course that trade-off is impossible to evaluate until the OP is more forthcoming about the licensing costs. For it to be attractive (especially if you anticipate growth) you need X < Y.
I do believe the concept of a $70/month shared hosting customer has to be seen as quite fanciful around these parts (even if it's a beautiful thought). "The product brings in more revenue" is a euphemism for hosts raising prices, which in this highly competitive market is hard to pull off. One of the most popular shared hosting plans around here is BuyShared's $5/year offer. I use it myself. Adding $1/month to that makes it $17/year, more than triple what it is now, so I can't imagine feeling tempted by it.
I don't use wordpress though, so I'm not affected by its issues. My impression is that simple, low-traffic wordpress works ok because the total resources used are small; and larger high-traffic wordpress works ok because those sites are carefully optimized, so the worst wordpress horrors (plugin abuse etc) are at the middle levels. Maybe the main effect of caching is to bypass those plug-ins, but in that case why use them?
It's used to cache pages in RAM instead of on disk. It offers a performance improvement over caching on disk, though very little if we are talking about SSDs.
Theoretically it would improve the seek time of cached files; SSDs have a seek time of about 0.1ms. Even if you are serving 50 files and RAM had a 0ms seek time, this would result in an improvement of only 5ms.
The other thing that would improve speed is IO, although that differs depending on the IO limits of each shared host.
For a 2MB web page with a 2MB/s IO limit, it COULD shave off 1 second of load time, but the real bottleneck is the network. Especially if you live far from the server, serving the page from RAM is pointless.
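The arithmetic behind those two estimates, for anyone who wants to check it (the 50-file, 0.1ms, and 2MB/s figures are the poster's assumptions, not measurements):

```shell
# Seek time: 50 files at ~0.1ms SSD seek each is the most a 0ms-seek RAM cache could save
awk 'BEGIN { printf "seek savings: %.1f ms\n", 50 * 0.1 }'

# IO: a 2MB page read at a 2MB/s IO cap spends a full second on disk reads alone
awk 'BEGIN { printf "io time: %.0f s\n", 2 / 2 }'
```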
If anyone wants to test the real speed of this, they can install WordPress with some caching plugin, find the directory where the cache is saved, and mount that directory into RAM.
Here is a tutorial for that: https://www.scalescale.com/tips/nginx/mount-directory-into-ram-memory-better-performance/
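For those who don't want to follow the link: the usual trick is a tmpfs mount over the cache directory. A minimal sketch, shown commented-out as a config fragment rather than something to paste blindly (the cache path and 256m size are hypothetical; adjust for your plugin and traffic, and note the mount requires root):

```shell
# One-off mount (as root); the path below is only an example WP cache location
# mount -t tmpfs -o size=256m tmpfs /var/www/html/wp-content/cache

# Persistent across reboots, via /etc/fstab:
# tmpfs  /var/www/html/wp-content/cache  tmpfs  size=256m  0  0
```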
Will test this with CyberPanel (OLS) to see how it performs.
It's used to cache pages in RAM instead of on disk. It offers a performance improvement over caching on disk, though very little if we are talking about SSDs.
You could try installing memcached and W3 Total Cache, configured with a memcached backend as well. I would be very interested in a Membucket vs W3TC+memcached comparison actually.
The default EasyApache configuration isn't tweaked and doesn't benefit your benchmarks. It's still using prefork, IIRC, which is being deprecated if it isn't already.
Other than that, testing this on PHP 5.7 while making such claims of speed improvement is ridiculous. Do test it with mpm_event or worker, PHP 7.*, and MariaDB.
I do not see any potential in this product, to be honest.
And considering that we have our own infrastructure, RAM, etc., it's pretty expensive. It would make sense if you hosted the memcached servers yourself, globally dispersed, etc. Not when we use our own resources and pay for RAM, which is already expensive. I just don't see this working. The 1.5s-to-300ms claim on the default EasyApache config, PHP 5.7, and I assume MySQL (though the latest version is awesome)... meh.
Following this: https://www.scalescale.com/tips/nginx/mount-directory-into-ram-memory-better-performance/
Without a RAM disk on the cache folder:
https://i.gyazo.com/a82ff91568b2c90acff3f51e29004d65.png
https://i.gyazo.com/4d2e2867f669ff66f2052d27262304ff.png
With a RAM disk on the cache folder:
https://i.gyazo.com/3218f705f073430fa64efb94f4d05d3e.png
https://i.gyazo.com/3b0321cf0b58f43576c33c3c25aa5fd2.png
Tests were run on a VMHaus 2GB VPS (UK), Newspaper theme with the default demo, OLS with the LSCache plugin.
1) it looks like the ramdisk made almost no difference?
2) is that 2.4 sec for 84 requests, i.e. about 30 msec each? That's pretty good. Or does it mean the page has 84 resources (little gifs etc)?
3) I don't really see why a ramdisk/tmpfs should make much difference given that the files should be in the linux kernel cache after the first load, and kept there if loaded with any frequency.
4) Both tmpfs and the kernel cache on the local machine seem way saner to me than memcached and access through a socket. I always thought the idea of memcached was to have a farm of 100s or 1000s of memcached servers so your application could have LAN access to terabytes of ram. But if it's just a few GB, these days that's nothing and you don't need remote servers for it.
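The per-request figure in point 2 is just division; taking the screenshot's 2.4s total over 84 requests at face value:

```shell
# 2400 ms spread over 84 requests
awk 'BEGIN { printf "%.1f ms per request\n", 2400 / 84 }'
```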
With all due respect, I have a different opinion about this part.
So I did a quick test:
Host: netcup RS2000 BF edition, 8 cores + 12GB RAM + 60GB on RAID10
Tester: netcup VPS 500 BF edition, 2 cores + 2GB RAM
Both servers are in the same DC, and the connection between them should be 1Gbps.
I installed CyberPanel and created two WP 4.9.8 sites on it, one with a Redis full-page cache and the other with LiteSpeed Cache; I saw someone mention it earlier.
I didn't test non-cached; who doesn't use a cache nowadays anyway...
So I ran the following 5 times:
ab -n 1 -c 1 -H "Accept-Encoding: gzip" https://myurl1/
Results: 0.010, 0.005, 0.007, 0.010, 0.011 on localhost; average = 0.0086 seconds.
0.013, 0.012, 0.018, 0.015, 0.015 on the second node; average = 0.0146 seconds.
The above is for the Redis cache; now for LiteSpeed Cache, same command, 5 times:
0.004, 0.006, 0.004, 0.003, 0.004 on localhost; average = 0.0042 seconds.
0.007, 0.007, 0.005, 0.006, 0.007 on the second node; average = 0.0064 seconds.
Now I will run
-n 1000 with -c 100
1.343, 1.263, 1.279, 1.337, 1.337 on localhost; average 1.3118 seconds.
1.446, 1.379, 1.267, 1.374, 1.352 on the second node; average 1.3636 seconds.
Right away I got several dozen PHP processes showing up; as this is a PHP-level cache, each request goes to PHP.
Now let's go to LiteSpeed Cache:
1.065, 1.148, 1.128, 1.104, 1.048 on localhost; average 1.0986 seconds.
1.105, 1.208, 1.124, 1.154, 1.045 on the second node; average 1.1272 seconds.
And during this round, I only saw openlitespeed processes eating very little CPU.
As @vovler said, if it's an SSD, the speeds are pretty close, not even humanly noticeable, but resource usage is a totally different story.
My point is, no matter whether it's LiteSpeed Cache, Varnish, or any other full-page cache that does NOT need PHP, it should perform better than a PHP-based cache, because invoking PHP is resource-costly.
It's a simple, brutal, not-really-scientific benchmark, but I think it proves my point...
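The averages above are straightforward to reproduce from the raw `ab` timings with an awk one-liner, e.g. for the first Redis localhost run:

```shell
# Mean of the five localhost timings from the Redis full-page cache run
printf '0.010 0.005 0.007 0.010 0.011' |
  awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; printf "%.4f seconds\n", s / NF }'
```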
GTmetrix was one of the screenshots, I recognized it.
They take the average across all of the requests, so those pages are loading in 1.6s and 2.4s each.
This is the type of caching that LiteSpeed and Apache's mod_cache offer: they load the files into cache RAM, and they're not very performant, as everyone can see from the screenshots.
Also, a couple of other notes on RAM disks: if one locks up, you have to reboot the whole server to get it back.
Linux has a default RAM disk maximum of 16; you can tweak it to 128, but it gets very unstable when you do. It's also a far cry from the 10,000 instances of Membucket we've tested on a single server.
Let's say a RAM disk locks up: you will have to reboot the server to get that customer back online. Membucket runs all processes in userspace, with the ability to restart caching services without disrupting neighbors.
@FHR take note here^, because W3 Total Cache's core operations create tiny files of pre-processed PHP pages, so it will perform roughly the same. But feel free to try, and post results further proving the point.
We encourage it :-)
I still fail to see the point you've brought up 3 times now. Shared hosting customers aren't interested in 1,000s of GB of RAM. If they were they wouldn't be on shared hosting.
They're just interested in getting their site to load faster, and if +$1/month can do that, they will.
Remember that most end users cannot tell the difference between 1MB and 1TB other than the device they most associate with those terms.
And if it's a memory Bucket that does the job, then 1 Bucket it is. The 8MB figure is knowledge for the provider, so they can estimate how many Buckets they can sell on a box.
Also, accessing memcached over the network is guaranteed to be slower, for two reasons.
Thank you for posting this.
I recommend trying a comparison with and without Membucket, which is why we're offering a Free BETA.
Why would you need 128 RAM disks? Maybe I'm missing something here, but wouldn't a single one suffice?
Why not use tmpfs then?
I guess, for starters, RAM disks are not that performant for this type of operation, as shown in the screenshots.
Anyone loading slower than 1.5s per page is deprioritized by search engine indexes, and to overcome that you have to pay $$$ in advertising.
Membucket is currently making an offer to shared hosters looking to sell a performance addon/upgrade to their customers. Those customers would be densely packed onto a system, because as a provider you want to keep your costs low.
Shared Hosting does not mean VPS + cPanel + Stack of Caching layers, that's VPS hosting.
tmpfs is great, but even it is slightly slower than a raw RAM disk because of the filesystem virtualization overlay.