SQLite3 vs MariaDB - Which to use?
I have used MariaDB and SQLite3 for my servers for a while.
I normally use SQLite3 for smaller databases on sites with around 20+ visits per day, as loading an SQLite database from disk takes a bit of time.
I go for MariaDB for larger databases, sites with higher visitor counts, or sites that need to load fast.
My understanding is that MariaDB loads the DB into RAM and runs from there, so queries are faster, while SQLite3 has to load it into RAM from disk every time it runs. Do correct me if I am wrong.
All went fine until last week...
I was using SQLite3 for an uptime monitoring website I built a while back. It would check on the servers and save the latency in the database. The DB size was about 100 MB when the site just stopped working. I suspected the 128 MB PHP memory limit I had set, so I reset the DB, and that fixed the issue.
Should I go for MariaDB in this case or is there something else causing the issue?
Comments
You are wrong there. Neither loads the whole database into RAM. Also, MariaDB can handle multiple databases, so it only opens your database when you connect to it with the database name (and as you say you are using PHP, you will have to connect to MariaDB for each page view).
Something else is causing the issue.
Isn't a 100 MB DB already too much for SQLite? For a database of that size I'd definitely switch to MariaDB/MySQL...
IIRC, doesn't SQLite take a global lock on the entire DB whenever it writes something?
Yes. But for many applications that's fine, e.g., a blog where there is little commenting.
SQLite is intended as an embedded solution, where you typically have only one process reading or writing. You can stretch it to web solutions as long as there isn't much concurrent writing.
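That write lock can be stretched a bit further with SQLite's write-ahead logging mode, which lets readers keep reading while a single writer commits. A minimal sketch using Python's stdlib sqlite3 module (the file and table names are made up for illustration):

```python
import sqlite3

# File name is illustrative; WAL only applies to on-disk databases.
conn = sqlite3.connect("uptime_demo.db")

# WAL lets readers keep reading while one writer commits, instead of
# the default rollback journal's exclusive whole-file write lock.
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

conn.execute("CREATE TABLE IF NOT EXISTS latency (ts INTEGER, ms REAL)")
conn.execute("INSERT INTO latency VALUES (strftime('%s','now'), 42.0)")
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM latency").fetchone()[0]
conn.close()
```

WAL still allows only one writer at a time; it just stops writers from blocking readers.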
An SQLite DB may very well be cached in RAM just due to OS/filesystem caching, but it doesn't explicitly do so.
MySQL caches in memory, but it doesn't explicitly load all rows of all tables in memory.
If you are already running MySQL for some sites, run it for all (if they're on the same system). There's little advantage to breaking some of them up into sqlite and others into MySQL if you're already running MySQL.
My uptime script runs every 2 minutes and it is the only script writing to the DB. So that is not an issue.
I am running MySQL on a completely different server. I wanted to avoid running it here due to the memory overhead. Although recent MariaDB is very optimized, so I might try that (with Inno disabled).
That is what I was thinking as well. I didn't realize it would grow to 100 MB just logging latency timings...
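For what it's worth, a latency log only grows unless old samples are pruned. A minimal sketch with Python's stdlib sqlite3 (the table name and 30-day retention window are invented for illustration):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # illustrative; a real monitor would use a file
conn.execute("CREATE TABLE latency (ts INTEGER, ms REAL)")

now = int(time.time())
# Two samples: one recent, one 60 days old.
conn.executemany("INSERT INTO latency VALUES (?, ?)",
                 [(now, 12.5), (now - 60 * 86400, 30.1)])

# Keep only the last 30 days of samples.
cutoff = now - 30 * 86400
conn.execute("DELETE FROM latency WHERE ts < ?", (cutoff,))
conn.commit()

# VACUUM actually returns the freed pages, shrinking the file on disk.
conn.execute("VACUUM")

remaining = conn.execute("SELECT COUNT(*) FROM latency").fetchone()[0]
conn.close()
```

Run from cron alongside the uptime check, this keeps the DB size bounded regardless of how long the monitor runs.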
Just remember that over-optimization is a key DB management problem. Why disable InnoDB? Row-locking is one of the key benefits that MySQL brings.
LOL, my mistake. I meant to say "PERFORMANCE_SCHEMA" disabled.
Well, it shows that @raindog308 has tons of experience with databases. I suggest to listen to his advice.
Some small bits and pieces I'd like to add
Most web-related databases are quite read-heavy but light on writes, so if your DB is small (hundreds of MB up to a few GB) just use SQLite and be done. If your web server/application is multitasked, just have a small "DB Manager" serialize the reads and writes.
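That "DB Manager" can be as simple as one thread owning the connection and draining a queue of writes. A minimal sketch in Python (all names are invented; this only serializes writes, not a full manager):

```python
import queue
import sqlite3
import threading

write_queue: queue.Queue = queue.Queue()

def writer_thread(db_path: str) -> None:
    # One connection, one thread: every write goes through here,
    # so nothing else ever contends for SQLite's write lock.
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS latency (ts INTEGER, ms REAL)")
    while True:
        item = write_queue.get()
        if item is None:  # sentinel: shut down
            break
        conn.execute("INSERT INTO latency VALUES (?, ?)", item)
        conn.commit()
    conn.close()

t = threading.Thread(target=writer_thread, args=("manager_demo.db",))
t.start()

# Any request handler just enqueues; it never touches the DB itself.
write_queue.put((1700000000, 18.2))
write_queue.put(None)
t.join()

# Verify the write landed.
check = sqlite3.connect("manager_demo.db")
n = check.execute("SELECT COUNT(*) FROM latency").fetchone()[0]
check.close()
```

Reads can still use their own per-thread connections, since SQLite handles concurrent readers fine.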
Alternatively, use a full SQL server and keep the indices fast, because in almost all web-server-related cases (lots of reads, few writes) the speed of searching is the secret sauce for speed.
Analyze your situation properly and use RAM wisely for smart caching.
Here it gets a bit more complicated ...
NO SQL server, none, can offer the ultimate speed. Simple reason: the comfort of SQL. Keep in mind what SQL really is. It's not a DB but rather a language to query and interact with a DB in a (more or less) standardized and (more or less) comfortable way. But that also means that any SQL DB has an interpreter running and quite a bit of internal work to do, e.g. to structure and maintain the DB for its needs and SQL conformance.
So if ultimate speed really is what you need then that almost invariably translates to not using SQL at all but rather to use a DB library directly (and lots of RAM and only very fast NVMe drives). Btw it usually also means that you run the DB locally (as opposed to on some DB server).
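As one illustration of the "DB library directly" route, Python's stdlib dbm gives raw key-value access with no SQL layer at all (file and key names are made up):

```python
import dbm

# Direct key-value access: no SQL parser or query planner in the path,
# just keyed lookups against the file (backend chosen by dbm at runtime).
with dbm.open("kv_demo", "c") as db:   # "c" = create if missing
    db["server1:latency"] = "12.5"     # dbm stores keys and values as bytes
    value = db["server1:latency"]
```

The trade-off is that you give up ad-hoc queries, joins, and type handling; the application has to encode all of that itself.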
Depends on what you want to do....
Generally speaking, a DB server makes sense if you have more than one connection going to the database at a time.
I like SQLite for its portability, and you can get quite good performance out of it if you mess with it enough. But once there's the potential of more than one DB connection/user at a time, go for a database server like MariaDB/MySQL/Postgres/..., as anything else will become a major headache/hassle.
I might be talking bullshit, so don't take my word for it.
I always had the (false?) impression that, with MySQL/MariaDB, there was less risk of DB corruption than with SQLite. Also, I have the (false?) impression that MySQL/MariaDB have better tools to recover some (or all) of the data after a DB corruption.
ps: in all events, backup-backup-backup.