Where do you draw the line on SSD performance?

This question has been on my mind for a while, since I have been trying different providers, low end and not, and I am trying to figure out where the SSD performance line is drawn. Let me explain a little. A hosting provider claims they have SSD drives in their servers, but after running a ServerBear benchmark, the I/O numbers may look like an empty or 50% used SATA server. At that point you don't know whether you're simply on an empty server or on a very busy one. So, back to the question: where do you, as a customer or as a provider, consider the I/O performance on a VM to be SSD quality? And for anyone who wants to take this further: where do you draw the lines for SSD-cached, SAS, SATA, and "too slow to be usable"?
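For reference, the sequential write that ServerBear-style benchmarks boil down to is roughly the dd run below; treat it as a sketch, since exact flags vary between scripts:

  # write ~1 GiB sequentially and force a flush before dd reports the speed
  dd if=/dev/zero of=testfile bs=64k count=16k conv=fdatasync
  rm testfile

It is exactly this kind of sequential number that an idle SATA array behind a good controller can match, which is why it cannot settle the SSD question on its own.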

Comments

  • Instead of simply looking at numbers, look at what you actually NEED and what you USE in real-world scenarios. Do your applications or deployments really need a fancy 1.5 GB/s at all times?

    Thanked by tchen
  • @blergh_ said:
    Instead of simply looking at numbers, look at what you actually NEED and what you USE in real-world scenarios. Do your applications or deployments really need a fancy 1.5 GB/s at all times?

    I totally understand why real-world numbers for your application can mean a lot more than a benchmark. But when VM providers get an SSD node and then overcrowd the server to the point that the I/O is the same as a spinning-disk machine, it does not seem to make any sense, especially if they are still trying to charge based on SSD.

  • wojons said: then overcrowd the server to the point that the I/O is the same as a spinning-disk machine, it does not seem to make any sense, especially if they are still trying to charge based on SSD.

    The merit of SSD isn't dd scores that compare to an HDD RAID with cache; that HDD RAID with cache itself isn't reality. There are also types of SSD, just like types of HDD, so speeds vary.

    At the end of the day, SSD is for faster random IOPS, where you'll most likely notice a performance improvement, not sequential speed; a quick way to measure that is sketched below.

    Having said that, a provider that is going to overcrowd will do so regardless of SSD, HDD, or whatever tech.
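    To actually see that random-IOPS difference, the usual check is a 4k random-read fio run; a minimal sketch, assuming fio is installed on the VM:

      # 4k random reads with direct I/O, so the page cache doesn't flatter the result
      fio --name=randread --rw=randread --bs=4k --size=1G \
          --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based

    An SSD-backed node should report thousands to tens of thousands of IOPS here, while an uncached SATA array typically manages a few hundred.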

  • I expect at least 100 MB/s I/O speed on an SSD VPS.

  • black Member

    300+ MB/s on dd if it's on RAID SSDs.

  • I personally don't put much stock in dd test results when it comes to performance. So long as the test shows speeds of 30 MB/s or higher, I'm fine with it. dd performance doesn't correlate 100% with actual performance; IOPS are where RAID arrays, and SSDs in general, really shine.
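    A quick complement to dd on that front is a latency probe with ioping (assuming it is available on the box):

      # ten direct-I/O latency probes against the current filesystem
      ioping -c 10 -D .

    Sub-millisecond averages usually point at SSD (or a very effective cache); several milliseconds per request is typical of spinning disks.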

  • tchen Member

    Oddly, I see more people bringing new SSDs online because it lowers the cost to build a performing node. I don't know of any quality host that's charging more because of it, unless it's specifically a provisioned performance line. There are two markets for SSD VPSs; don't confuse things by lumping them together.

  • Maounique Host Rep, Veteran

    If you really need 1 GB/s in dd tests, you put in a RAID card with 1 GB of cache and you don't need to worry about which drives are behind it; LEB people will run dd and be happy with the results, even if their server is just average. Some will even open topics to show their affiliate links and how good their dd speed is in cache.
    At the end of the day, it depends on which market you wish to compete in. On the LEB market, tricks like the above work very well. In the real world of applications, redundancy, and enterprise-level service, people are happy with 30 MB/s and 3K IOPS, except when they need a good database server, in which case they get it all cached in RAM; 64 GB of RAM is not unusual there.
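    One rough way to see past that trick is to write more data than the card can cache, bypassing the page cache as well; a sketch, assuming a 1 GB controller cache:

      # 8 GiB of direct-I/O writes, far past a 1 GB controller cache
      dd if=/dev/zero of=testfile bs=1M count=8192 oflag=direct
      rm testfile

    Once the test size dwarfs the cache, the sustained figure reflects the drives themselves rather than the controller.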

    Thanked by rds100
  • tchen said: Oddly, I see more people bringing new SSDs online because it lowers the cost to build a performing node. I don't know of any quality host that's charging more because of it, unless it's specifically a provisioned performance line. There are two markets for SSD VPSs; don't confuse things by lumping them together.

    Sure, I can see this; it lowers the number of complaints that disk I/O is super poor, and so on. And most people hosting a blog or something small don't really need the extra disk space they could get with a SATA node.

    black said: 300+ MB/s on dd if it's on RAID SSDs.
    Mark_R said: I expect at least 100 MB/s I/O speed on an SSD VPS.

    How did both of you arrive at these numbers? Is it something you test for before deploying your application on a new provider or node?

    concerto49 said: The merit of SSD isn't dd scores that compare to an HDD RAID with cache; that HDD RAID with cache itself isn't reality. There are also types of SSD, just like types of HDD, so speeds vary.

    At the end of the day, SSD is for faster random IOPS, where you'll most likely notice a performance improvement, not sequential speed.

    It is very true that SSDs provide amazing random I/O speeds that give them the edge over a SATA array. But the reason for doing benchmarks is to at least know whether it is worth installing my platform on a node, when you know what your old one delivered. It is very much based on real-world use and knowing what you need beforehand.

    Maounique said: At the end of the day, it depends on which market you wish to compete in. On the LEB market, tricks like the above work very well. In the real world of applications, redundancy, and enterprise-level service, people are happy with 30 MB/s and 3K IOPS, except when they need a good database server, in which case they get it all cached in RAM; 64 GB of RAM is not unusual there.

    The database server is the reason I opened this, since I am working on something with a very complex database. RAM can save me in most cases, since I can easily predict my customers' usage, but when things are not in cache is when things get different.

  • Maounique Host Rep, Veteran

    wojons said: The database server is the reason I opened this, since I am working on something with a very complex database. RAM can save me in most cases, since I can easily predict my customers' usage, but when things are not in cache is when things get different.

    Hmm, if MOST of the things are in cache, then a few IOPS now and then won't make much of a difference. Only if you have big DBs, such as >60 GB and fairly busy (thousands of queries a minute), always with some data which cannot be cached, do you need an SSD array, and in that case a dedi will do better. Cheap 128 GB RAM servers are easier to find than cheap 500 GB SSD arrays with a controller that knows what to do, at least 4 disks, and so on. In the case of really big and busy DBs, you need to shard them.
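    As a concrete example of "get it all cached in RAM" (assuming MySQL/InnoDB here, though no particular database has been named in this thread): the buffer pool is the knob that decides how much of the DB lives in memory, and it is easy to inspect:

      # show the current InnoDB buffer pool size in bytes; sizing it to hold
      # the whole working set is what keeps the database off the disks
      mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"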

  • Maounique said: Hmm, if MOST of the things are in cache, then a few IOPS now and then won't make much of a difference. Only if you have big DBs, such as >60 GB and fairly busy (thousands of queries a minute), always with some data which cannot be cached, do you need an SSD array, and in that case a dedi will do better. Cheap 128 GB RAM servers are easier to find than cheap 500 GB SSD arrays with a controller that knows what to do, at least 4 disks, and so on. In the case of really big and busy DBs, you need to shard them.

    Where would you recommend getting a 128 GB RAM server? I partition my data well, so in most cases I don't have to worry about large tables. My tables can get large, but luckily my database supports sharding, so splitting it across a few servers is also an option.

  • Maounique Host Rep, Veteran

    Normally, you will need some RAM for tables, some RAM for processing, and some for queries; it depends on usage patterns, and that is usually the hardest part for the DB admin to guess. If you have large databases in which only parts are heavily used, you should split them. Take Wikipedia, for example: they would have a database for running the site, replicated and cached, one central DB with the articles, and maybe another site with pictures; you get the idea. It is an art to break up the workload between servers. DB sharding is an ever-changing voodoo art, like routing optimization: every complex DB system is evolving, patterns change, areas become more used, others obsolete. You need a modular structure to be able to optimize it all the time, but guessing where to draw the line comes only from experience and intelligence. You can throw more power and more money at it; for example, when you start a project, you take some reserve and can then live for some time just from optimizing.
    A poor admin will ask for more resources very soon; a good one will take reserves and live off them for a long time (of course, if nothing unexpected happens).
    So, in short, it depends on usage. For general usage, I would take roughly twice the size of the DB in RAM if it is intensively accessed, so you can cache both the tables and the most requested query results, as well as leaving the system space to breathe.
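    To put numbers on that rule of thumb: a heavily accessed 25 GB database would call for roughly 50 GB of RAM, so a 64 GB box covers the tables, the hot query results, and headroom for the system, which is why figures like 64 and 128 GB keep coming up in this thread.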

    Thanked by gattytto
  • Dave Member

    @wojons said:
    I totally understand why real-world numbers for your application can mean a lot more than a benchmark. But when VM providers get an SSD node and then overcrowd the server to the point that the I/O is the same as a spinning-disk machine, it does not seem to make any sense, especially if they are still trying to charge based on SSD.

    I think what you need to consider here is that the same host would have AT LEAST as many, if not MORE, users on the same non-SSD node.

    When it comes to benchmarks it's very hard to compare hosts because we're all set up differently.

    For instance, when people benchmark our servers, they often compare us to AnyRandomHost and fail to take into consideration whether the host values your data integrity or whether they've tweaked the server to appear blisteringly fast. Our top priority is your data integrity, so we give you the best performance/data-integrity ratio we can.

    As for running a big DB, if you need guaranteed resources or if you think other customers are going to negatively impact you, you could go dedicated and have the whole SSD I/O to yourself.

  • Dave said: I think what you need to consider here is that the same host would have AT LEAST as many, if not MORE, users on the same non-SSD node.

    This was really the idea of the post and what I was getting at. Of course, memory is fixed and CPU is variable depending on what users are doing, but I/O is one of the first limits you run into in virtualization when running other people's workloads, as a host does. If providers can get servers with more RAM and more CPU, throw in SSDs, and fit a lot more people on them, they will; so some SSD providers will have more or less I/O than their SATA counterparts, depending on how many people they run per server and how much profit they are looking for per node.

    Dave said: As for running a big DB, if you need guaranteed resources or if you think other customers are going to negatively impact you, you could go dedicated and have the whole SSD I/O to yourself.

    My primary data is on some dedicated Delimiter servers, which so far are handling the test data pretty well; I'm going to open it up to a beta shortly and see how it handles that, but right now I'm planning the next steps. I have VPS servers for running and storing all the other types of small data sets that are important and don't require as much disk space, but are read and written to more. It's interesting how it's all set up.
