How do you test SSD caching?
ultimatehostings
Member
in General
What are the various ways to test the performance of an SSD cache? Hardware specs are as follows:
CPU Xeon E5-2620 (6 Cores, 12 Threads, 2.5GHz Turbo)
2 x 240 GB SSD
4 x 4 TB SATA III
128 GB DDR3 RAM ECC
LSI Hardware RAID Controller
Also what do you guys think of this spec for a host node?
Comments
Anyone?
A dd and ioping test, obviously. Results should be higher than a typical RAID 10 setup.
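For reference, a rough sketch of that kind of test (the path and sizes here are just examples; adjust for your box):

```shell
# Quick sequential write test with dd. conv=fdatasync forces a flush to disk
# so the reported speed isn't just the Linux page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync
rm -f /tmp/ddtest

# ioping measures per-request latency; it may not be installed everywhere.
if command -v ioping >/dev/null 2>&1; then
    ioping -c 10 /tmp
else
    echo "ioping not installed"
fi
```

Run each a few times and compare against numbers from a plain RAID 10 box.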
Are these any good?
@ultimatehostings what kind of SSD caching technology are you using?
@marcm
The server has a hardware RAID card with BBU and SSD caching capability. Not quite sure if this answers your question; the server was set up by the provider, I only asked them to enable SSD caching with the 2 x 240 GB SSDs.
@ultimatehostings - who is the provider / DC ?
Incero
@ultimatehostings ask Incero about CacheCade Pro 2.0
I'm sure that it's an LSI MegaRAID card. Do you think the I/O should be better than what I posted?
@ultimatehostings no, it's right on, however SSD caching is not enabled.
I see, I'll definitely contact them.
You can check whether CacheCade is enabled using the MegaCli tools.
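Something along these lines should work, assuming the LSI MegaCli utility is installed (the binary name varies between packages, commonly MegaCli64 or MegaCli):

```shell
# Sketch: look for a CacheCade virtual drive via MegaCli.
MEGACLI=$(command -v MegaCli64 || command -v MegaCli || true)
if [ -n "$MEGACLI" ]; then
    # List all logical drives on all adapters; a CacheCade virtual drive
    # (and which arrays it is associated with) shows up in this output.
    "$MEGACLI" -LDInfo -LAll -aAll
else
    echo "MegaCli not found - install it from the Broadcom/LSI support site"
fi
```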
That ioping result doesn't look good at all.
Thanks for the input, working on improving it.
That's because SSD caching isn't enabled for the RAID 10 array.
Maybe the instructions I provided were incorrect.
@Jack - You're assuming he is using Flashcache.
The card is an LSI 9271-8i with CacheCade.
@ultimatehostings reboot your machine, use the KVM, and browse the LSI BIOS to configure the SSD caching however you want. When an order has no specific details on how to set up caching, we usually set it up in read-only mode, so I don't think you would see an ioping improvement, but you would see a regular file-serving improvement (I don't think it's going to cache a block that's been hit once!).

Advanced users (who actually monitor the status/health of their SSDs via megacli) might want to enable read and write caching. However, if you enable write caching without monitoring SSD health, be prepared for RAID failure when the SSD cache drives (presumably in RAID 1) die at nearly the same time :-). With monitoring enabled you can ask us to swap out the SSDs one at a time before they fail. We've had a couple of customers using 120GB SSDs (bad idea) for caching in R/W mode who let them both fail at the same time; the older LSI BIOS at the time didn't handle the event well.
Feel free to open a ticket too, and someone can provide courtesy assistance on your unmanaged machine during regular hours to help you set up megacli, change your caching modes, etc.
p.s. awesome machine :-) way better spec than I see a lot of VPS companies running (we see a ton of people using 2 x 1TB software RAID 1, or even RAID 0!!, for their VPS hosts).
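The SSD health monitoring mentioned above can be sketched with smartmontools, which can talk to disks behind a MegaRAID controller via its megaraid passthrough. The device node and the disk IDs below are hypothetical; real IDs come from MegaCli's physical-drive listing:

```shell
# Sketch: poll SMART data for SSDs behind an LSI MegaRAID controller.
# "-d megaraid,N" addresses physical disk N through the controller.
if command -v smartctl >/dev/null 2>&1; then
    for DISK_ID in 0 1; do   # hypothetical IDs for the two cache SSDs
        smartctl -a -d megaraid,"$DISK_ID" /dev/sda || true  # /dev/sda is an example node
    done
else
    echo "smartctl not installed (package: smartmontools)"
fi
SMART_DONE=yes
```

Wiring this into a cron job that alerts on rising wear/reallocated-sector counters is the usual way to catch a cache SSD before it dies.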
I have nothing to complain about with you guys; I got this sorted with the assistance and guidance of @marcm, he's an awesome person.
A straight dd test will not invoke CacheCade; it's an algorithm, so you would have to run many dd passes across the same file for it to finally get added to the cache, just like any other SSD caching technology. The problem is it will never cache /dev/random or other devices.
Same goes for ioping.
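To illustrate the "many passes" point, here's a rough warm-up sketch (paths and sizes are just examples). The direct-I/O flag matters: without it, repeat reads are served from RAM by the Linux page cache and never even reach the controller:

```shell
# Write a test file, then re-read it several times so the controller sees
# repeated hits on the same blocks and can promote them to the SSD cache.
dd if=/dev/zero of=/tmp/cachetest bs=1M count=32 conv=fdatasync 2>/dev/null

for PASS in 1 2 3 4 5; do
    # iflag=direct bypasses the page cache; fall back to a buffered read
    # on filesystems that don't support O_DIRECT (e.g. tmpfs).
    dd if=/tmp/cachetest of=/dev/null bs=1M iflag=direct 2>/dev/null || \
        dd if=/tmp/cachetest of=/dev/null bs=1M 2>/dev/null
done
rm -f /tmp/cachetest
```

Only after some number of passes like this would you expect the read numbers to start reflecting the SSD cache rather than the spinning disks.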
I didn't think you were complaining at all! I'm just on watch here and it's quiet right now on thanksgiving night, so thought I would chime in and try to help. :-). If you need help in future just open a ticket, we're being more proactive in helping on unmanaged systems these days (during business hours when not busy).
Thanks your input is much appreciated.
I would like to thank @marcm for helping me set up SSD caching, he's a great guy and I do appreciate what he's done for me.
Slightly improved results:
This one will run great.
Thanks, it will be a KVM node.
IOPS are still too low.
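For an actual IOPS number (rather than dd's sequential throughput), fio is the usual tool. A sketch, assuming fio is installed; the filename, size, and runtime here are just examples, and direct I/O may be unsupported on some filesystems:

```shell
# Sketch: measure 4K random-read IOPS with fio. Running it twice on the same
# file should show the gap between a cold and a warmed SSD cache.
if command -v fio >/dev/null 2>&1; then
    fio --name=randread --rw=randread --bs=4k --size=64m \
        --filename=/tmp/fiotest --direct=1 --runtime=5 --time_based \
        --group_reporting || echo "fio run failed (O_DIRECT unsupported here?)"
    rm -f /tmp/fiotest
else
    echo "fio not installed"
fi
FIO_DONE=yes
```

The "iops=" figure in fio's summary line is the number to compare before and after enabling CacheCade.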
I think VPS companies should choose hardware RAID 1 or software RAID 10 for some redundancy at least. Software RAID 0 is the worst practice of them all, and you hardly have any chance of recovering data if a disk fails.
No doubt RAID 0 is a bad idea! We sell servers, not business plans. :-) Cutting corners is what happens when people rush to compete on price only (a la a ton of the VPS industry). The good hosts will survive.