tmpfs/ramdisk for MySQL DBs

Noxter Member
edited February 2013 in General

How many of you folks run your tables on tmpfs/ramdisk? Have you had any problems with it?

Comments

  • raindog308 Administrator, Veteran

    Why not just use MEMORY storage engine for MySQL?

    http://dev.mysql.com/doc/refman/5.6/en/memory-storage-engine.html
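
    For illustration, a minimal sketch of what a MEMORY table looks like (table and column names invented). Note that MEMORY tables don't support BLOB/TEXT columns and their contents vanish on restart, only the table definition survives:

    ```sql
    -- Hypothetical sessions table kept entirely in RAM via the MEMORY engine.
    CREATE TABLE sessions_mem (
        session_id CHAR(32) NOT NULL,
        user_id    INT UNSIGNED NOT NULL,
        last_seen  TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (session_id)
    ) ENGINE=MEMORY;
    ```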

  • I've toyed with RAMDISK based MySQL.

    Unsure how you ensure the data is flushed to disk for long term storage. Assuming you just want to do reads from RAM?

    You'd probably accomplish the same or better by increasing the RAM allocated to MySQL's various caches.

    There is also the MEMORY Storage Engine in MySQL:
    see: http://dev.mysql.com/doc/refman/5.0/en/memory-storage-engine.html

  • The MEMORY storage engine has a bunch of limitations, BTW. It is not a drop-in replacement for MyISAM and InnoDB. Research what it won't do.

    That said, I never noticed any real increase using it, as I give MySQL plenty of RAM, so the data is probably already cached in memory.

  • jh Member

    Very bad idea in terms of retaining your data, but as others have said, busy tables can be moved to the MEMORY engine.

  • As far as risk goes, isn't MEMORY effectively the same as running all the tables on tmpfs/ramdisk? Won't both methods lose or corrupt tables on a crash?

  • Thank you for the input, btw.

  • Read the manual @Noxter :)

    Ramdisk won't work. You need a way to mirror the data to spinny disk (in case of writes) or to mirror it from spinnies at boot/start in the case of read only tables.

    You can test your idea with symlinks, relocating tables quickly and easily.

    MySQL does great managing the tables and speed when given proper high RAM amounts.

    In testing, I saw no increase in speed on queries by using the MEMORY storage or by sym linking the tables and tricking MySQL. Just created more complexity and potential for breakage. But, our MySQL install is seasoned and optimized highly. Plus typically run it from SSD.
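
    The symlink trick can be sketched like this (paths invented; a throwaway demo directory stands in for the real MySQL datadir, and /dev/shm is used as a ready-made tmpfs so no root mount is needed). Only sane for read-only tables, since everything in tmpfs is gone after a reboot:

    ```shell
    #!/bin/sh
    # Sketch: mirror a (read-only) table directory into tmpfs, then symlink it back.
    set -e

    DATADIR=$(mktemp -d)                   # stand-in for /var/lib/mysql
    mkdir -p "$DATADIR/mydb"
    echo "demo" > "$DATADIR/mydb/t1.MYD"   # fake table file

    RAM=/dev/shm/mysql-demo-$$             # tmpfs copy (lost on reboot)
    mkdir -p "$RAM"
    cp -a "$DATADIR/mydb" "$RAM/"          # seed tmpfs from the spinny disk at boot

    mv "$DATADIR/mydb" "$DATADIR/mydb.disk"   # keep the on-disk copy around
    ln -s "$RAM/mydb" "$DATADIR/mydb"         # MySQL would now read from RAM

    cat "$DATADIR/mydb/t1.MYD"             # reads the tmpfs copy via the symlink
    ```

    For writes you'd also need something (rsync on a timer, or at shutdown) copying the tmpfs contents back to the .disk directory, which is exactly the fragile part.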

  • jh Member

    AFAIK with MEMORY tables you retain the table structure on the disk, so it's fine for sessions etc., with a ramdisk you lose everything.

  • Noxter Member
    edited February 2013

    Quite glad I opened this thread before causing some sort of downtime/data loss.

    All my colo servers run their MySQL on SSDs, however I have a dedi in RAID1 (spinning disk) that I'd like to better optimize.

    The sites on this machine run heavy image traffic, I have recently moved MPS (mod_pagespeed) to cache via tmpfs and began wondering about MySQL as the backend is powered by it.

    I have a lot of free RAM, so as you said and practice, @pubcrawler, it's heavily configured to use most of the system's RAM. MySQL is only reaching into tables on disk about 2% of the time.

    My I/O wait isn't a problem currently (0.8 average), however the spikes bother me when it gets hit with a lot of concurrent visitors (nothing you could notice surfing). Hopefully having moved mod_pagespeed (which is caching/rewriting images) to RAM will reduce my I/O wait.

    I could remote connect to another server running mysql on double 830s (about 150ms round trip though)... bad idea? Any suggestions for a server serving mostly images/php? I understand I am being anal here, but I like to keep my systems running as smooth as possible.

  • IOWAIT of 0.82 = 82ms right? or is that 820 ms? That's sluggish and delayed. I'd resolve that issue. Ran ioping for a while? Let's see some numbers.

    Question is if this is to serve image files out, why the big MySQL hit? Doing lookups to determine which file to serve out? Are you running query cache and caching queries heavily? Recommend it if you can.
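
    If going the query cache route, the relevant my.cnf knobs look roughly like this (sizes are illustrative placeholders, not recommendations):

    ```
    # my.cnf -- illustrative query cache settings (MySQL 5.x era)
    [mysqld]
    query_cache_type  = 1      # cache cacheable SELECTs
    query_cache_size  = 64M    # total cache; too large can hurt due to invalidation overhead
    query_cache_limit = 1M     # don't cache individual results bigger than this
    ```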

    Apache = mod_pagespeed? I front end with Nginx and even double stack it on something like this:

    Nginx #1: port 80 ---> Nginx #2: port 800

    Nginx #1 runs a big static cache and reverse proxy. #2 is there for sanity and reliability, uses limited RAM, and holds most of the site config. #1 gets axed when and if needed; #2 is steady and reliable.
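
    A rough sketch of that double-stack in nginx config terms (paths, sizes, and cache times invented for illustration, not working production config):

    ```
    # nginx #1 (port 80): big static cache + reverse proxy to nginx #2 (port 800)
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static:100m max_size=2g;

    server {
        listen 80;
        location / {
            proxy_cache       static;
            proxy_cache_valid 200 10m;
            proxy_pass        http://127.0.0.1:800;  # nginx #2 holds the real site config
        }
    }
    ```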

    You mentioned using a backend elsewhere. 150ms latency sucks. Database queries over the net = bad idea. Would be better to just serve the entire data payload and query on that backend if so. Of course front end cache should get hit lots on Nginx or equiv and reduce overhead and latency.

    Are you serving full pages off this implementation or just random images as needed?

    What are the page load times and total latency overall? Tested that?

    PM me if you need some clarification and ideas.

  • raindog308 Administrator, Veteran

    @Noxter said: My I/O wait isn't a problem currently (0.8 average) however the spikes bother me when it gets hit with a lot of concurrent visitors (nothing you could notice surfing). Hopefully having moved mod_pagespeed (which is caching/rewriting images) to RAM should reduce my IO wait.

    If those concurrent visitors are mostly reading, no problem. If they're doing a lot of writes, then perhaps InnoDB would help. InnoDB locks individual rows where MyISAM locks the entire table.
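
    Checking and switching engines is a one-liner each (table name hypothetical):

    ```sql
    -- See which engine a table uses, then convert a write-heavy one to InnoDB
    SHOW TABLE STATUS LIKE 'hits';
    ALTER TABLE hits ENGINE=InnoDB;  -- row-level locking instead of MyISAM's table lock
    ```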

  • @pubcrawler said: In testing, I saw no increase in speed on queries by using the MEMORY storage or by sym linking the tables and tricking MySQL. Just created more complexity and potential for breakage.

    There's a reason why there MAY not be any speed increase. If you read the MySQL manual/docs more carefully, you'll see that the way the MEMORY engine works, it still has the same or similar limitations as MyISAM: there are still table-level locks for updates ;)

  • Better I think to (a) ensure you're giving MySQL sufficient memory -- squeezing it into a low-memory footprint can really hurt performance; (b) use Xen or KVM instead of OVZ so your free memory is used for caching (and ensure you have ample free memory for that purpose); and (c) ensure your MySQL config really is optimal.
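
    For (a) and (c), the usual starting points in my.cnf are along these lines (values are placeholders to tune for your box, not recommendations):

    ```
    [mysqld]
    innodb_buffer_pool_size = 2G    # main InnoDB cache; often most of free RAM on a dedicated DB box
    key_buffer_size         = 256M  # MyISAM index cache (only matters if you use MyISAM)
    tmp_table_size          = 64M   # together with max_heap_table_size, keeps temp tables in RAM
    max_heap_table_size     = 64M
    ```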

  • I test the RAM-related stuff, @eva2000, with read-only workloads.

    I'd never do writes to RAM with MySQL. I know MySQL may/can do such a thing through config, but I like my data to have the best chance of surviving :)

  • MySQL optimization is really a greatly misunderstood thing.

  • Noxter Member
    edited February 2013

    @pubcrawler
    That's the iostat average, so 0.8%, which is fine (about 480ms out of every minute waiting on I/O).
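
    As a sanity check on that percentage-to-milliseconds conversion:

    ```shell
    # Convert an iowait percentage into milliseconds per minute of wall clock.
    awk 'BEGIN { iowait_pct = 0.8; printf "%.0f ms/min\n", iowait_pct / 100 * 60 * 1000 }'
    # prints "480 ms/min"
    ```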

    Images are marked in tables (URL points to image x), it's dynamic (PHP). No static files (other than the images). Using APC to cache PHP.

    One of my colos is currently using Apache, an nginx proxy, and mod_pagespeed, and it runs well. However, I tried the same with this system (which is running ISPConfig), and I can't seem to get nginx or Varnish running correctly. I should take some more time figuring out why (the directives are not directing correctly).

    In the future, I'm never running ISPConfig again; a bit late now that I have everything set up and running.

    The sites run fine, from lookup to page load, in the US (NY and Texas, talking about Pingdom here): 200-600ms, depending on the size of the image. Loadimpact is stable, no spikes. Webpagetest ranks straight A's minus the CDN.

    @raindog308
    Mostly reading. mysqltuner is showing 69%/31% (read/write).

    @sleddog
    Er?

    Thank you for the comments. My best bet would be to work on getting Nginx to work on this system?

  • Yeah tighten up the front end cache on this. Nginx and Varnish typically for that.

    Your read/write ratio is actually high on the write side. Are you doing logging to the database of some sort?

    ISPConfig and other panels. Blarg! They make life harder when you want to do real things on the high-performance side. You run the risk of making mods that break those tools, or of having them overwrite some custom config. At least that is my fear. Good for starting, bad for the future.

    250ms or less is ideal page time, inclusive of all DNS lookups, page load, element load, etc. Webpagetest is quite excellent for this.

  • Allocating 2GB of cache to MPS in tmpfs seems to be lowering spikes in IO. CPU is having a field day though (until it fills the cache). Really liking the idea of running it off the disks.
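
    For reference, a 2GB tmpfs cache like that is typically a single /etc/fstab line along these lines (the mount point is whatever mod_pagespeed's FileCachePath points at; the path here is invented):

    ```
    # /etc/fstab -- 2GB RAM-backed cache directory (contents lost on reboot)
    tmpfs  /var/cache/mod_pagespeed  tmpfs  defaults,size=2g,mode=0755  0  0
    ```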

    Yes, I'm regretting it now (ISPConfig). Might just pick up a storage VPS close or in the same DC to move the files over while I reformat. I might just do that this weekend.

    250ms is pretty much out of scope when your request has a 2MB image attached to it. I can't load thumbnails, I can't serve at cut-down quality, and I can't resize. I do strip metadata, though.

    Logging, yeah -- slow queries (rare). What do you mean?

  • Lots of writes to the database. Wondering why so many. I'm inferring that just from the ratio; it could also be that there aren't many reads...

    Probably a bunch of ways to improve the lookup speed of these images. I'd look at hashing things and building a simple file-to-server relationship, distributing horizontally where I could across multiple locations, CDN, etc. Given that this isn't all just upload-a-photo-and-let-us-munge type activity, which is a PITA to scale.
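
    A cheap first look at where the writes come from is the server's global counters (run twice a few minutes apart and compare the deltas):

    ```sql
    -- Rough read/write breakdown from server status counters
    SHOW GLOBAL STATUS WHERE Variable_name IN
        ('Com_select', 'Com_insert', 'Com_update', 'Com_delete');

    -- For per-query detail, the general log can be enabled briefly (heavy, turn it back off):
    -- SET GLOBAL general_log = 'ON';
    ```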

  • That is strange now that I think about it. How would I go about checking what's writing so much? It definitely shouldn't be that high.

    CDN isn't an option right now; the revenue isn't there and most likely never will be. Round-robin is an option in the future, though. Right now the system runs fine for what it does. Traffic increases weekly, so that has to say something, I guess. I have nothing unique.

    I'm just a speed freak trying to squeeze more juice or should I say, cache out of the system.

    Thanks Pubcrawler

  • If you have control over the code you're running, then http://memcached.org/ is your friend.

  • Noxter Member
    edited February 2013

    It's running with a 1GB cache. mod_pagespeed allows you to use it without coding it into your site.

  • memcached is alright. It adds more complexity than necessary and often isn't what folks think performance-wise on a mixed-use server, especially inside an unknown VPS.

    That said, we use it on our dedicated servers.

  • Personally I wouldn't use RAM for MySQL operations (other than reads); I'd just run it from SSDs.
