High-write DB application — need advice
Hey LET, I have a question for anyone currently running a high-write DB application. We are moving our application's InnoDB database onto a dedicated server just for the DB, which will be running MariaDB 10.0.34 on Ubuntu 16.04.1.
The application is write-heavy and currently uses the standard MySQL config. I've been trawling Stack Overflow and I'm still unsure of a good configuration for MariaDB, and also for connections from the application machine to the DB server.
Any suggestions, links, guidance would be much appreciated.
The server itself is:
16 GB RAM
120 GB SSD
My main question is we want to make sure we are utilizing the dedicated server resources to the fullest.
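For a box dedicated solely to the DB, most of the tuning headroom is in a handful of InnoDB settings. A minimal my.cnf sketch for a 16 GB RAM / SSD write-heavy server — these are common starting points to benchmark from, not definitive values for your workload:

```ini
[mysqld]
# Dedicated DB box: give InnoDB most of the RAM (~60-70%)
innodb_buffer_pool_size        = 10G
# Larger redo log smooths heavy write bursts
innodb_log_file_size           = 1G
innodb_log_buffer_size         = 64M
# Avoid double-buffering through the OS page cache
innodb_flush_method            = O_DIRECT
# Raise from the default for SSD-backed storage; benchmark to confirm
innodb_io_capacity             = 2000
# Only if you can tolerate losing ~1s of commits on crash:
# innodb_flush_log_at_trx_commit = 2
```

Change one setting at a time and re-run your benchmark between changes, otherwise you won't know which knob actually helped.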
Update: I think I have this sorted now, thanks to the peepz below. I had overlooked Percona and benchmarking tools. Currently bench-testing and using Percona.
Um, do you have a question? Do the obvious stuff: monitor the slow query log in case of slowdowns, use EXPLAIN and tune any weirdness, etc.
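Enabling the slow query log is a one-off config change — a sketch, assuming the usual Debian/Ubuntu log path (adjust the path and threshold to taste):

```ini
[mysqld]
slow_query_log      = 1
slow_query_log_file = /var/log/mysql/slow.log
# Log anything slower than 1 second; lower this as you tune
long_query_time     = 1
```

Then take any query that shows up there, prefix it with `EXPLAIN`, and check that it's using the indexes you expect rather than scanning whole tables.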
Benchmarking, monitoring, and tuning are key. So start with proper monitoring first.
Percona Monitoring & Management (PMM) is a good starting point for the monitoring side.
I'd also start with MariaDB 10.1 at minimum, and even look at MariaDB 10.2/10.3 for testing and comparison against 10.1 to see which is best for your workloads.
We have already done this; we don't have any slow queries at present. I think my question is that I just want to make sure we utilize the dedicated server's resources now that it's a sole DB server. For the last 3 years the Laravel app and DB have been on the same machine.
Didn't think of EXPLAIN — will check that out, thanks.
Hammer it with heavy loads for testing, and note the bottlenecks. You are better off writing custom tests that pound the real paths in your application than using a generic web stress tool.
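The "pound the real paths" idea can be sketched in a few lines: fire the same write your app actually performs from many concurrent workers and measure throughput. Here `write_order` is a hypothetical stand-in — in a real test it would call your actual DB write path, not a sleep:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def write_order(i):
    """Hypothetical stand-in for one real application write path.
    Replace the sleep with the same INSERT/transaction your app runs."""
    time.sleep(0.001)  # simulate ~1 ms of work per write
    return i

def hammer(n_requests=200, workers=20):
    """Fire n_requests concurrent 'writes' and return (count, elapsed)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(write_order, range(n_requests)))
    elapsed = time.perf_counter() - start
    return len(results), elapsed

if __name__ == "__main__":
    done, elapsed = hammer()
    print(f"{done} writes in {elapsed:.2f}s ({done / elapsed:.0f}/s)")
```

Vary the worker count and watch where throughput stops scaling — that plateau is your bottleneck, whether it's CPU, disk fsync, or lock contention.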
Nice, I will check that out and do some benchmarking. I have seen a lot of white papers from Percona; I think I'll investigate Percona more.
This is true — we have only done generic stress testing on dev machines. Thanks for that, I'll give it a go.
If high write throughput is needed, you should consider a NoSQL DB like Apache Cassandra, which is designed for high write volumes and supports clustered configurations.
I've got the opposite situation — a high-read problem... Even with the InnoDB buffer pool set very high, MySQL still sits at 100% CPU for long periods...
Cache the data if you can — use Redis, for example, since it runs in memory. The most-read data will be served immediately; if it's hit constantly, keep it in memory, otherwise discard it from the cache and reload it the next time a user accesses the same data.
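That "load on miss, serve from memory on hit" pattern is usually called cache-aside. A minimal sketch, with a plain dict standing in for Redis and a hypothetical `load_from_db` in place of the real query:

```python
import time

cache = {}   # key -> (value, expires_at); a dict stands in for Redis here
TTL = 60     # seconds to keep an entry before reloading

def load_from_db(key):
    """Hypothetical stand-in for the expensive DB read."""
    return f"row-for-{key}"

def get(key):
    """Cache-aside: serve from cache on hit; on miss, load once and cache."""
    entry = cache.get(key)
    if entry is not None:
        value, expires = entry
        if time.time() < expires:
            return value          # hit: served from memory
        del cache[key]            # expired: discard and fall through
    value = load_from_db(key)     # miss: hit the DB once
    cache[key] = (value, time.time() + TTL)
    return value
```

With Redis the dict operations become `GET`/`SETEX` calls, but the control flow is the same.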
And refactor their entire application!?!? Does Laravel even support Cassandra? If Eloquent (the Laravel ORM) does, then maybe that's possible... otherwise, what you propose is like switching languages.
OP... it's basic, but have you run mysqltuner.pl? Just in case there's something glaringly obvious.
There is a driver that integrates Cassandra with Eloquent. I'm not sure if it's straightforward and production-ready, though; they would definitely need to change the app to some degree.
I agree with you @raindog308 — you can fine-tune MySQL and get more performance out of it, maybe even scale it, do master-master replication and try some horizontal scaling, but that is not a long-term solution to the problem. It will pop up again if they expect rapid growth.
Sometimes, when you're not sure how much longer you can push something to its limit (MySQL in this context), it may be best to redesign part or all of the app rather than keep tuning, because at some point you won't have any more headroom for tuning.
"SSD" is a broad term. I would look into NVMes for heavy writes.
Part of it could be the ORM itself. You're already going through its abstraction layer, and who knows what kind of SQL it produces. I'm not familiar with Eloquent, but fundamentally you're paying a performance price in exchange for easier development. Which isn't necessarily a bad tradeoff... until suddenly it is :-)
Thanks for the tip — that's my current situation. I have memcached in place and it works, great improvement, but on certain actions that cause the cache to be flushed, the disaster comes back until everything is cached again...
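That flush-then-meltdown pattern is the classic cache stampede: every request misses at once and they all hammer the DB together. One common mitigation is to let only one caller rebuild an expired entry while everyone else keeps serving the stale value. A sketch of that idea, with a dict standing in for memcached and a hypothetical `load_from_db`:

```python
import threading
import time

cache = {}                       # key -> (value, expires_at)
rebuild_lock = threading.Lock()  # only one rebuilder at a time

def load_from_db(key):
    """Hypothetical stand-in for the expensive rebuild query."""
    return f"fresh-{key}"

def get(key, ttl=60):
    entry = cache.get(key)
    now = time.time()
    if entry is not None:
        value, expires = entry
        if now < expires:
            return value                      # fresh hit
        # Stale: ONE caller rebuilds; everyone else serves the stale
        # value instead of stampeding the database.
        if rebuild_lock.acquire(blocking=False):
            try:
                value = load_from_db(key)
                cache[key] = (value, now + ttl)
            finally:
                rebuild_lock.release()
        return value
    # Cold miss: nothing stale to serve, must load.
    value = load_from_db(key)
    cache[key] = (value, now + ttl)
    return value
```

Staggering TTLs (adding a small random jitter per key) also helps, so a flush doesn't make everything expire in the same instant.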