high write DB Application need advice

lukesUbuntu Member
edited June 2018 in Help

Hey LET, I have a question for any users currently running a high-write DB application. We are moving our application's InnoDB database onto a dedicated server used solely for the DB, running MariaDB 10.0.34 on Ubuntu 16.04.1.

The application is write-heavy and currently uses the stock MySQL config. I've been trawling Stack Overflow, and I'm still unsure of a good configuration for MariaDB, or of how best to handle connections from the application machine to the DB server.

Any suggestions, links, guidance would be much appreciated.
The server itself is:

E3-1270v2
16 GB RAM
120 GB SSD

My main question is how to make sure we are utilizing the dedicated server's resources to the fullest.

Update: I think I have this sorted now :) Thanks to the peepz below; I had overlooked Percona and benchmarking tools. Currently bench-testing and using Percona :)
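
For anyone landing on this thread with a similar box: a write-heavy InnoDB configuration on a 16 GB dedicated server usually starts from a handful of settings like the fragment below. These values are illustrative starting points, not tuned numbers; benchmark before adopting any of them:

```ini
# /etc/mysql/my.cnf (fragment) - illustrative starting points for a
# 16 GB dedicated MariaDB server with a write-heavy InnoDB workload
[mysqld]
# Dedicated DB host: give InnoDB the bulk of RAM (~50-70%)
innodb_buffer_pool_size        = 10G
# A larger redo log absorbs write bursts (fewer checkpoint stalls)
innodb_log_file_size           = 1G
innodb_log_buffer_size         = 64M
# 1 = full durability (flush per commit); 2 trades up to a second of
# durability on OS crash for noticeably higher write throughput
innodb_flush_log_at_trx_commit = 1
# Bypass the OS page cache; the buffer pool already caches data
innodb_flush_method            = O_DIRECT
# One tablespace file per table keeps disk space reclaimable
innodb_file_per_table          = 1
```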

Comments

  • willie Member

    Um, do you have a question? Do the obvious stuff: monitor the slow query log in case of slowdowns, use EXPLAIN and tune any weirdness, etc.

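The workflow willie describes, enabling the slow query log and then EXPLAINing whatever shows up, looks roughly like this in standard MariaDB/MySQL statements (the orders table and the 500 ms threshold are made-up examples):

```sql
-- Enable the slow query log at runtime (persist the same settings
-- in my.cnf so they survive a restart)
SET GLOBAL slow_query_log = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';
SET GLOBAL long_query_time = 0.5;  -- log anything slower than 500 ms

-- Then run EXPLAIN on any query that gets logged, e.g.:
EXPLAIN SELECT id, total FROM orders WHERE customer_id = 42;
-- type=ALL means a full table scan; a missing entry under
-- possible_keys usually means an index is needed.
```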
  • eva2000 Veteran

    Benchmarking, monitoring and tuning are key, so start with proper monitoring first.

    Percona Monitoring & Management (PMM) is a good starting point on the monitoring side.

    I'd also start with MariaDB 10.1 at minimum, and even look at MariaDB 10.2/10.3, testing and comparing against 10.1 to see which is best for your workloads.

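One common way to run the version comparison eva2000 suggests is sysbench's write-only OLTP workload against each MariaDB build; the commands below assume sysbench 1.x plus a pre-created sbtest database and user (host, credentials and sizes are placeholders):

```shell
# Load a test schema (adjust host/credentials to your environment)
sysbench oltp_write_only --mysql-host=10.0.0.2 --mysql-db=sbtest \
  --mysql-user=sbtest --mysql-password=secret \
  --tables=8 --table-size=1000000 prepare

# 5-minute write-only run with 16 client threads; repeat per version
sysbench oltp_write_only --mysql-host=10.0.0.2 --mysql-db=sbtest \
  --mysql-user=sbtest --mysql-password=secret \
  --tables=8 --table-size=1000000 --threads=16 --time=300 run
```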
  • lukesUbuntu Member
    edited June 2018

    @willie said:
    Um, do you have a question? Do the obvious stuff: monitor the slow query log in case of slowdowns, use EXPLAIN and tune any weirdness, etc.

    We have already done this; we don't have any slow queries at present. I think my question is really that I want to make sure we utilize the dedicated server's resources as a sole DB server. For the last three years the Laravel app and the DB have been on the same machine.

    Didn't think of EXPLAIN, I will check that out, thanks.

  • willie Member

    Hammer it with heavy loads for testing and note the bottlenecks. You are better off writing custom tests that pound the real paths in your application than using a generic web stress tool.

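A minimal sketch of what "custom tests that pound the real paths" can look like, assuming a Python harness; the orders table, the column names and the execute_batch callback are all hypothetical stand-ins for your real write path and DB client (e.g. PyMySQL's executemany):

```python
import random
import threading

def make_order_writes(n, rng=None):
    """Build n (sql, params) pairs mimicking a real app write path."""
    rng = rng or random.Random()
    writes = []
    for _ in range(n):
        writes.append((
            "INSERT INTO orders (customer_id, total) VALUES (%s, %s)",
            (rng.randrange(1, 10000), round(rng.uniform(1, 500), 2)),
        ))
    return writes

def hammer(execute_batch, threads=8, writes_per_thread=1000):
    """Fire write batches from several threads; DB-side monitoring
    (slow log, PMM, iostat) shows where throughput flattens."""
    def worker(seed):
        # Each thread gets its own seeded RNG for reproducible load
        execute_batch(make_order_writes(writes_per_thread,
                                        random.Random(seed)))
    ts = [threading.Thread(target=worker, args=(i,))
          for i in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
```

Point execute_batch at a real connection pool against the dedicated DB box, raise threads until throughput flattens, and watch the server-side metrics while it runs.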
  • lukesUbuntu Member

    @eva2000 said:
    Benchmarking, monitoring and tuning are key, so start with proper monitoring first.

    Percona Monitoring & Management (PMM) is a good starting point on the monitoring side.

    I'd also start with MariaDB 10.1 at minimum, and even look at MariaDB 10.2/10.3, testing and comparing against 10.1 to see which is best for your workloads.

    Nice, I will check that out and do some benchmarking. I have seen a lot of white papers from Percona; I think I'll investigate Percona further.

  • lukesUbuntu Member

    @willie said:
    Hammer it with heavy loads for testing and note the bottlenecks. You are better off writing custom tests that pound the real paths in your application than using a generic web stress tool.

    This is true, we have only done generic stress testing on dev machines. Thanks for that, I'll give it a go.

  • Vita Member

    If high write throughput is needed, you should consider a NoSQL DB like Apache Cassandra, which is designed for high write rates and supports clustered configuration.

  • qtwrk Member

    I've got the opposite situation: a high-read problem. Even with the InnoDB buffer pool set very high, MySQL still sits at 100% CPU for long periods...

  • Vita Member

    @qtwrk said:
    I've got the opposite situation: a high-read problem. Even with the InnoDB buffer pool set very high, MySQL still sits at 100% CPU for long periods...

    Cache the data if you can, using Redis for example, since it runs in memory. The most-read data will be served immediately; if something is hit constantly, keep it in memory, otherwise discard it from the cache and reload it the next time a user accesses the same data.

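The cache-aside pattern Vita describes, as a minimal Python sketch; cache is anything with get/set (a dict-backed stand-in below for illustration, a redis-py client with an expiry in production), and load_row is your hypothetical DB lookup:

```python
def get_or_load(cache, key, load_row, ttl=300):
    """Cache-aside: serve hot data from memory, fall back to the DB."""
    row = cache.get(key)
    if row is not None:
        return row            # hot path: no DB round trip
    row = load_row(key)       # cold path: hit the database once
    cache.set(key, row, ttl)  # subsequent reads come from cache
    return row

class DictCache:
    """Stand-in for Redis with the same get/set shape (TTL ignored)."""
    def __init__(self):
        self._d = {}
    def get(self, key):
        return self._d.get(key)
    def set(self, key, value, ttl):
        self._d[key] = value
```

With real Redis the same shape works via r.get(key) and r.setex(key, ttl, value), plus serialization of the row.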
  • raindog308 Administrator, Veteran

    Vita said: If high write throughput is needed, you should consider a NoSQL DB like Apache Cassandra, which is designed for high write rates and supports clustered configuration.

    And refactor their entire application!? Does Laravel even support Cassandra? If Eloquent (the Laravel ORM) does, then maybe that's possible...otherwise, what you propose is like switching languages.

    OP...it's basic, but have you run mysqltuner.pl? Just in case there's something glaringly obvious.

  • Vita Member

    @raindog308 said:

    Vita said: If high write throughput is needed, you should consider a NoSQL DB like Apache Cassandra, which is designed for high write rates and supports clustered configuration.

    And refactor their entire application!? Does Laravel even support Cassandra? If Eloquent (the Laravel ORM) does, then maybe that's possible...otherwise, what you propose is like switching languages.

    OP...it's basic, but have you run mysqltuner.pl? Just in case there's something glaringly obvious.

    There is a driver that integrates Cassandra with Eloquent. I'm not sure whether it's straightforward and production-ready, though; they would definitely need to change the app to some degree.

    I agree with you @raindog308: you can fine-tune MySQL and get more performance out of it, maybe even scale it, do master-master replication and try some horizontal scaling, but that is not a long-term solution to the problem. It will pop up again if they expect rapid growth.

    Sometimes, when you're not sure how long you can keep pushing something to its limit (MySQL in this context), it's best to consider redesigning part or all of the app rather than just tuning, because at some point you will run out of headroom for tuning.
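
For the record, the master-master setup mentioned above is two servers replicating from each other, with staggered auto-increment settings so inserts on the two writable nodes never hand out the same id. The fragment below is illustrative only; the replication user and positioning/GTID setup are omitted:

```ini
# Node A my.cnf fragment; node B mirrors it with server_id = 2
# and auto_increment_offset = 2
[mysqld]
server_id                = 1
log_bin                  = mysql-bin
log_slave_updates        = 1
# Interleave auto-increment values across the two masters:
auto_increment_increment = 2   # step = number of masters
auto_increment_offset    = 1   # this node's slot
```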

  • nik Member, Host Rep

    "SSD" is a broad term. I would look into NVMe drives for heavy writes.

  • raindog308 Administrator, Veteran

    Vita said: I agree with you @raindog308: you can fine-tune MySQL and get more performance out of it, maybe even scale it, do master-master replication and try some horizontal scaling, but that is not a long-term solution to the problem. It will pop up again if they expect rapid growth.

    Part of it could be the ORM itself. You're already going through its abstraction layer, and who knows what kind of SQL it produces. I'm not familiar with Eloquent but fundamentally you're paying a performance price in exchange for easier development. Which isn't necessarily a bad tradeoff...until suddenly it is :-)

  • qtwrk Member

    @Vita said:

    @qtwrk said:
    I've got the opposite situation: a high-read problem. Even with the InnoDB buffer pool set very high, MySQL still sits at 100% CPU for long periods...

    Cache the data if you can, using Redis for example, since it runs in memory. The most-read data will be served immediately; if something is hit constantly, keep it in memory, otherwise discard it from the cache and reload it the next time a user accesses the same data.

    Thanks for the tip, that's my current situation. I have memcached in place and it works, a great improvement, but certain actions cause the cache to be flushed, and disaster comes back until things are cached again...
