Comments
@Falzo could you shed some more light on how to interpret the benchmarks?
or could you share some good resources?
which tests are best for testing the hdd, or the VPS as a whole?
I think it's difficult to recommend anything at all, as different people prioritize different things. in my opinion it comes down to this: always compare only the same benchmarks with each other.
so for disk performance the widely used dd test might be useful to give a quick and very rough idea - if you stick to the same parameters. but you have to keep in mind that the results might heavily depend on caching and the momentary IO load of the overall system.
that's why you often see quite good numbers on freshly deployed nodes during a sale, which might degrade over time, when they get filled and put to real world use.
still this measures just transfer speed when writing lots of zeros to the disk - which is quite far away from being a real world use case.
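for reference, the dd one-liner most benchmark scripts run looks something like this (file name and size are arbitrary example choices) - the conv=fdatasync part matters, because without it dd mostly reports the speed of writing into the page cache rather than to disk:

```shell
# classic sequential write "benchmark": 1 GiB of zeros in 1 MiB blocks;
# conv=fdatasync forces a flush to disk before dd reports its speed
dd if=/dev/zero of=dd_testfile bs=1M count=1024 conv=fdatasync
rm -f dd_testfile
```

as said, this only gives a rough sequential number for that very moment - and since the input is all zeros it says nothing about compressible vs. incompressible data either.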
that said, I personally like fio much more. the problem with that is, it comes with many more options, so you need to have a look at the command (if possible) to get a clue what has actually been tested, what the result really says and whether it is comparable to other tests you might have seen.
as explained above you'll see how many IO operations you can achieve and can therefore better judge whether you'll get good performance.
think about it this way: a single 7200 rpm spinning hard drive achieves about 100-120 iops and usually also a transfer rate of up to 120 MB/s. either of these numbers can be the limit. if you read or write a large file, where the file system can grab big consecutive blocks, the transfer rate will be the limit and you won't necessarily need all the iops. whereas if you hammer that drive with only small 4k blocks, your transfer rate drops down to around 500 KB/s while all the iops are eaten up.
that is more or less what happens when you copy tons of small files - of course all sorts of caching try to keep it from dropping that much, but still, I think you get the idea.
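to put rough numbers on that (using the ~120 iops / ~120 MB/s single-drive figures from above):

```shell
# throughput = iops x blocksize
echo $((120 * 4))      # 120 iops x 4 KiB blocks -> 480 KB/s: iops are the limit
echo $((120 * 1024))   # 120 iops x 1 MiB blocks -> 122880 KB/s (~120 MB/s):
                       # here the drive's transfer rate caps out first
```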
with that in mind and back to fio: it makes sense to test with a small blocksize like 4k to see what the drive is capable of in terms of operations per second, and then do the same test with a higher blocksize, at least 64k, to see if the disk can maintain the iops or at some point runs into the transfer rate limit.
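a sketch of what such a pair of fio runs could look like - file name, size, queue depth and runtime here are arbitrary example choices, not anything standard:

```shell
# random 4 KiB writes: measures how many iops the disk can sustain
fio --name=rand4k --filename=fio_test --size=256M --rw=randwrite --bs=4k \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based --group_reporting

# same test at 64 KiB: shows whether the iops hold up
# or the transfer rate becomes the limit first
fio --name=rand64k --filename=fio_test --size=256M --rw=randwrite --bs=64k \
    --ioengine=libaio --iodepth=32 --direct=1 --runtime=30 --time_based --group_reporting

rm -f fio_test
```

note --direct=1 bypasses the page cache, which is exactly what you want when you try to see the disk rather than the host's RAM.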
the netcup example is a good one, because 1600 iops are about ten times what you could see on a single hard drive, so not bad at all. on the other hand, from a single modern ssd you could expect something like >25k, which you clearly don't get here - but it is not a dedicated server anyway.
as a conclusion I'd say it seems netcup runs some sort of array, which might involve a caching layer but probably also limit iops per VM to a certain degree to balance the performance for all users.
btw. same problem of 'interpreting results the right way' appears for networking benchmarks.
most of the widely used scripts simply wget testfiles from different testservers. so all you really measure is the download speed at that very moment. the testfiles could be just zero'd and therefore compressible, or the testservers might just be congested.
even worse, it doesn't tell anything about upload speeds... (which to be honest would be much more interesting if you want to run a heavy-traffic website, for instance)
however, if you always keep that in mind, of course you can get a rough estimate or comparison on the overall network connection to different regions or providers from it.
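for illustration, the 'network test' in most of those scripts boils down to little more than a timed download (the URL here is a made-up placeholder, not a real test server):

```shell
# timed download thrown away to /dev/null - all this measures is
# downstream speed from one test server at this very moment
wget -O /dev/null http://speedtest.example.net/100MB.test
```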
TL;DR;
I disagree and a significant part of why I disagree is also why I wrote my vpsbench.
For a start, simple (read: most) benchmarks provide results of very limited value, particularly with VPSs.
A VPS vCore is typically anything between 1/2 and 1/8 of what the hypervisor considers to be a core - which often enough actually is a (hyper-)thread. Hence the usual benchmarks, which might be halfway OK for a dedi, are pretty much a lottery with a VPS, depending a lot on the load at any given point in time.
Another problem is that scripts by their nature are very limited. One particularly bad limit is time resolution.
Coming back to the above mentioned nature of a VPS: a disk - and its performance (incl. IO) - can actually be a lot of things and be handled in quite different ways. Btw. those "tricks" aren't new at all, the drive manufacturers also use them. Example: SSDs claiming to do xyz MB/s. The truth however often is at least one level of cache. Some well renowned "speed demons" from Samsung, for instance, have 8 GB of DRAM cache. Once that's "depleted" speed drops dramatically, e.g. from 450 MB/s to 80 MB/s.
And, of course, due to the nature of a VPS, which is a shared system, one usually ends up in a dilemma: Either one really pushes the system, e.g. to see real disk performance (as opposed to a cache), or one is a socially responsible actor and avoids basically abusing the VPS (as seen by other users). That's another issue my vpsbench addresses and takes care of; using it one can both push the system and still be a socially responsible user.
Finally, at the end of the day usually we simply want to know how a given system performs real world tasks. This IO number or that processor performance number might be nice but their meaning is quite limited. My netcup root server is a good example. Looking from a number wanking perspective its disk performance seems mediocre but in real life use it's an extremely nice VPS with excellent performance. One reason (among others): It comes with lots of RAM which easily compensates for drive spindles that are much slower than SSDs.
If one wants/needs more than what a good system benchmark (as opposed to an array of specific scripts, e.g. for disk IO) can offer, one needs a clearly defined use case and must do customized benchmarking.
Side note: I think that comparability is grossly overvalued and really just good for number wanking. What is important though is repeatability and comparability within a given benchmark.
TL;DR
Most benchmark scripts are of very limited value. Find a full system benchmark, preferably one made with VPSs in mind, and make sure you understand what you are measuring. Also try to stay away from number wanking and rather keep in mind what your real world needs are.
You disagree with what exactly?
What @Falzo wrote seems to me to be a cautionary and non-dogmatic view about benchmarks, and you also seem to express a cautionary and non-dogmatic view about benchmarks. I don't see what the significant disagreement would be.
I (We) understand that you prefer the benchmark tool that you wrote (vpsbench), but you don't need to frame this preference as a significant disagreement with what @Falzo wrote.
Benchmark tools are what they are, it's important to try to understand what they do, also to understand their limitations, and also to bear in mind all of the variations of a VPS environment. This is the gist of what @Falzo wrote (going into more detail), and I don't see how you really disagree with this (unless you feel that your benchmark tool is the only one worth using).
I got myself the Netcup Johann two times for 2€ each.
@Falzo thank you for your comments
@jsg thank you for your words
@angstrom
It seems you got me quite wrong.
First, No, I do not like my benchmark because I wrote it. It's the other way around: I felt that I had to write it because I didn't find a benchmark tool that met my (relatively modest) requirements.
And that is also the basis for my disagreement with (part of) what @Falzo wrote. Of course, I fully agree with his cautionary and non-dogmatic view. Where I disagree is the usual toolset. dd for example is a useful tool for its intended purpose but it is a rather poor tool for disk benchmarking. With other tools (e.g. fio) it's similar; to be useful those tools require a level of understanding that many don't have and that is difficult to transfer/compare.
Let me give you another example: the CPU string. As nice as it looks, it's next to worthless because a provider can have it set to anything he likes. Plus, basically no software uses it. Software needing knowledge about the processor will almost invariably look at the flags - those however are often not provided by bench scripts.
@adamus007p
You are welcome. And: I agree with @angstrom that you should also keep the cautionary and non-dogmatic part of @Falzo's post in mind.
oh, I miss recurring PayPal payments here...
FWIW, what I've found is that there are good VPS providers (who provide a certain stable/consistent performance, maybe with occasional blips of degradation) and the rest.
Depending on the use case (and considering that it's a shared environment), the benchmark is useful to know on average what one can expect from the VPS and decide whether to host something specific on it or not. For anything more stringent one should of course go with a dedi where the performance is essentially consistent depending on what you alone do with it.
I of course like, respect and use different benchmarks (with varying degrees of complexity and runtime), but a large majority of the time a few quick typical script runs (dd, ioping, a few openssl commands, maybe a bzip to memory, pick-your-preferred-checksum-algo etc.) provide a baseline on which to compare performance variations (esp. over longer stretches of time).
I've very rarely seen cases where dd+ioping run very well (for IO) but the VPS has very poor IO (or vice versa) - so they are very good qualifiers IMHO.
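For completeness, the ioping half of that quick check is just something like this (path and request count are arbitrary choices):

```shell
# 10 I/O latency probes against the filesystem under the current directory;
# the per-request latency and the iops/throughput summary line are what you compare
ioping -c 10 .
```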
Of course one can run an fio and get a proper thorough number - but I'm lazy when it comes to extracting value from my idlers/VPS-lab-hosts.
Hello guys, could you advise how to read ioping results?
I couldn't find any tutorial about it.
Very quickly, without delving into too much of the details:
See this:
The numbers you care about are the
1.17 k iops
and the 292.8 MiB/s
The IOPS are I/O operations per second - the more the better (typically), and SSDs will/should have better values than HDDs.
The MiB/s is the actual throughput at that IOPS.
By way of a network analogy, think of the IOPS as latency and the MiB/s as bandwidth.
SSDs can give you high IOPS even in a shared environment (like VPSs) but the throughput is not likely to be great unless others aren't using it as much (unless there's some heavy caching/distribution/RAID etc. that is boosting the raw device number due to distribution across multiple devices etc.).
There - I've scratched the itch. You should now put in your own effort.
this is important, because it depends on the blocksize used (probably ~256k in this case)
so as written above with a smaller blocksize you might see higher IO but lower read speed and vice versa...
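you can sanity-check that blocksize guess from the two numbers themselves, since throughput ≈ iops × blocksize:

```shell
# 1.17k iops x 256 KiB per operation, converted to MiB/s
echo $((1170 * 256 / 1024))   # -> 292, matching the 292.8 MiB/s reading above
```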
Exactly. As with all benchmarks, it depends and so it is important to understand what those numbers mean (or represent) rather than blindly comparing them and to also compare/generate them for the use-case in question.
Don't forget to factor in the phase of the moon. There are quantum variations (as yet unproven or unreproducible, but I have a theory and this comment box is too small for me to describe it here) that can significantly impact your performance.
Hello @nullnothere and @Falzo, I would like to hear what you think of the VPS Eierpower 1 Ostern 2019 (NetCup). Do you think its performance is superior to a dedicated KIMSUFI Intel i3-2130 (KS-7)? My biggest need is a better-performing CPU, even at the cost of less storage space (the 750 GB is sufficient for my needs). I do not need 24x7 CPU power, but I want the CPU to be as fast as possible when I need it. Please, if possible, help me resolve this doubt.
Thank you.
https://browser.geekbench.com/v4/cpu/compare/12916401?baseline=12617387
I picked a random linux geekbench4 for that i3-2130 and compared it with the VPS 1000 G8 Plus (last advent) which is more or less the same as the Eierpower 1 (less RAM).
I'd definitely go for the netcup one in this case... it's just qemu but it still comes with AES passthrough, which the i3 doesn't have at all.
please consider me biased though, as I wouldn't go for single disk and 100 Mbit anyway.
Based on your needs: ... better-performing CPU ... CPU as fast as I can when I need it ...
They're both somewhat comparable in terms of raw numbers, but the issue with Netcup is that it is shared, so there can be times when the performance degrades (and I think they'll throttle you if you hog too much CPU). On the Kimsufi you can be sure it's all your fault.
On the netcup a disk failure won't take you down (it's backed by an array). On the KS though it depends - it's a single disk; they're good disks and you should be good to go, but it depends on your luck. Plus 100 Mbit vs GBit for network, if that matters.
Here's a quick benchmark summary:
Kimsufi:
vs
Netcup Similar Spec VPS:
Look at the multi core performance - the Netcup is much more sustained (vs the i3). I think if you look around you'll find more comprehensive benchmarks including a few Geekbenches for the Netcups and the Kimsufis - they're both pretty popular.
If I was in your position, I'd go with the Netcup as I think it is better/more reliable in the long run for CPU/Memory bound stuff (and I guess it's also cheaper). They're using much better (Gold?) CPUs and it will show unless there's a lot of contention (which is usually not the case).
Also, check this out:
https://www.cpubenchmark.net/compare/Intel-i3-2130-vs-Intel-Xeon-Gold-6148/755vs3176
I hope this helps.
Just a short detour to reality ...
Where does the data written to disk come from? Where does the data read from disk go to? Usually the answer on both a VPS and a dedi is the network - which is typically a pipe with 50 to 300 Mb/s on a VPS and a Gb/s on a dedi. Oh, and kindly note that network bandwidth is given in M|Gb/s while disk IO is given in M|GB/s.
So, sure, there are cases where disk IO is extreme, e.g. when benchmarking, haha, but in the real world (TM) the bandwidth bottleneck is the network, where even a full Gb/s translates to a real-world 100 MB (theoretical 1000/8 = 125 MB) per second. One hundred MB/s. At best.
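The bits-to-bytes arithmetic, spelled out:

```shell
# network speeds are quoted in bits/s, disk IO in bytes/s - divide by 8
echo $((1000 / 8))   # 1 Gb/s dedi port   -> 125 MB/s theoretical ceiling
echo $((100 / 8))    # 100 Mb/s VPS port  -> 12 MB/s (12.5 exactly)
```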
Usually, however, there is some processing in between. Data get read from disks, some headers are created, some wrapping is done and data get out to the network. At a rate of some ten to one hundred MB/s.
The relevant advantage of SSDs is random access. A spindle must spin, heads must move; that translates to a latency advantage of some 100:1 (and up to some 1000:1) for SSDs. However, the bigger the chunks to be read or written, the smaller that advantage becomes.
However, a similar advantage is held by RAM over SSDs (and again by L1 cache over RAM). That's why all them funny numbers carry little meaning - unless one has a clear use case. For a database on a VPS with little memory a SSD is a tremendous advantage, for a typical database on a system with lots of RAM the best solution is probably large memory caches and the advantage of SSDs becomes smaller.
For most web pages - which are basically semistatic - having more memory is increasing performance by far more than an SSD and spindles will be good enough and more economical. This can (and in many situations should) be broken down even more, e.g. looking at index vs data ratio, request distribution, read vs. write, etc., but it should be clear by now that them funny numbers don't mean a lot unless one has a use case in mind and the necessary knowledge and understanding.
I'll close with a reminder: The parties most interested in performance numbers are manufacturers like intel, Samsung, etc., simply because they absolutely need reasons for selling you new and/or more expensive products.
Good example: during my work I almost always need to compile for and test my stuff on different architectures (e.g. amd64, x86, Power, Arm, ...) and also different generations. A typical case is pre-Nehalem and "modern" (post-Nehalem) amd64, and also old 386/586/686 like VIA C3/7, AMD Geode etc. (because they are still in heavy use in many SME routers etc.). The result? Almost always even pre-Nehalems easily fill Gb/s pipes with AEAD ciphers, modern hashes (like Blake2), etc. Maybe even more interestingly, the real world performance difference between pre-Nehalem and the most current amd64 processors is far, far less than manufacturers would make you believe.
Users being fixated on performance numbers usually simply means that intel's etc. marketing people have done a good job. But your network pipe still is just a couple of 10 MB/s on a VPS and about 100 MB/s on a dedi. THAT is the relevant measure, because that's what your server is all about at the end of the day.
Both, @Falzo and @nullnothere provided good answers. I'd like to add 2 things:
(a) netcup root servers (as opposed to the simple VPS) have dedicated vCores which gets pretty close to the situation with a dedi (no significant slowing by bad neighbours, etc).
(b) a good provider's (like netcup's) VPS provides features that are very desirable and often not available with cheap dedis. Raided disks or SSDs are a good example.
Many thanks @Falzo, @nullnothere and @jsg for the help.
Even so, as I understand they do impose limits on them and can/do throttle them if you use them 24/7 - at least this is what I vaguely remember having read here on LET from various other members who did buy the Netcup Root servers around the time they first showed up here.
One other clarification, @EHRA wants to buy the
VPS Eierpower 1 Ostern 2019 (NetCup)
offer which, as I understand, is a vCORE (not to be confused with the Root Server dedicated cores) - so legitimately the performance of the VPS lineup should/will be less than the Root Server lineup. The numbers I have posted are for the vCores (i.e. the VPS lineup).
If you don't already have it, I don't think that it's available any longer. It's still available after all -- see @Falzo below!
it still is... the egg has been on the german about us page for a while.
I'll be darned -- you're right!
I decided to take the VPS Eierpower 1 Ostern 2019. Initially, I will not cancel my Kimsufi. I will test the performance of the VPS for the first two months; if the performance is good I'll cancel my kimsufi and keep only the NetCup VPS.
I hope NetCup makes an offer on its SAS Root Servers; the performance I'm getting on the RS 500 G8 SAS is very good. Root servers are the premium version of the Netcup VPS (dedicated cores and satisfaction guarantee). The ideal specifications for me would be:
Root-Server
Intel® Xeon® Gold 6140 or Intel® Xeon® E5-2680V4
8 GB DDR4 RAM (ECC)
3 dedizierte Kerne
600 GB SAS
But I believe that something like this will hardly appear.
Good shopping for everyone.
I got one on Sunday. It's a good choice.
The "cores" in netcup root servers and vservers are both vcores, I'm pretty sure. The root servers have dedicated vcores while the vservers have shared vcores. Because of hyperthreading, depending on system load a vcore can be between about 55% and 100% of a physical core.
If I were doing cpu intensive stuff I'd definitely go for the i3 dedi, where you get two physical cores (and the rest of the physical hardware) outright. I don't know what happens with netcup or other vm providers if you do sustained computations with vcores (sustained means 24/7 for weeks or months nonstop): it's possible you eventually get both vcores pinned to 1 physical core. So far I've only done long computation on dedis. It doesn't occur to me to use vps for that. The other stuff mentioned like not having RAID on kimsufi is true. Added: the i3 cores are also faster (run at higher frequency) than the rootserver, unless you need new features like AVX-512 (the i3 probably does have hardware AES).
At least on small vps, I notice a significant difference in interactive response depending on whether the server has SSD or HDD. HDDs keep getting bigger but not faster, so the constant # of iops is spread across more GB of storage. I haven't done anything on VPS that needs sustained high iops, but the latency of loading a program from HDD can be noticeable. These days I only use HDD for storage. All my small VPS except for a few lingering NAT containers are SSD now.
I don't understand LET's obsession with serial disk throughput (i.e. large file copying speed). Who copies large files on their VPS all day? Latency (access speed) is much more important since small transfers are more common than large ones. Also for databases you need iops. That hasn't been an issue for my particular stuff most of the time though.
Do you think a kimsufi (i3/i5) would be better for plex than a VPS Eierpower 1? I mostly watch videos in the original quality and sometimes bring the quality down while watching. Sometimes I just stream the file to VLC; in a scenario like this, what would be better?
I largely agree and am glad to see others being oriented at the real world (TM).
... use neither a (small) dedi nor a (no matter how good) VPS but rather systems made and rented for that purpose (AWS, etc.). If by "cpu intensive stuff" you mean things like a heavy duty/high load database, one would probably instinctively prefer a dedi, but still and anyway make a proper analysis (and find that some high grade VPS can do no worse than a 4 core dedi).
Careful there! Processor clock is not the decisive factor. L1 and L2, for example, are by far more relevant. I've just experienced that again hands-on with an old LGA 775 backup server where I swapped the processor (256 KB cache/core) for a slightly slower one (in MHz) with 2 MB/core. The "slower" (new) one is brutally faster than the old higher-MHz one.
Absolutely. But then, on a server, booting/startup - about the only time programs get loaded - is insignificant compared to months or years of running time.
On a desktop, of course, that's very different. But I largely agree for 2 other reasons: (a) cost per TB and (b) number of devices. The first one is evident: in many use cases you can get e.g. 2 RAID-1 spindles for about the same price as (or even less than) a single SSD, so spindles simply are the better solution, particularly if money is not a "free" resource. We here at LET should know that.
It's not just LET and not just disk IO numbers. Number wanking is a widespread disease nowadays. I've seen colleagues, for example, grown-up men and women, who like me have a first gen. 8 core Ryzen and who seriously feel that they "really need" a 2nd gen. 8 core Ryzen. Similarly I've seen adult people obsessed over exchanging their "slow" NVMe (~1.6 GB/s) for a "better" (faster) one.
As I already said, the way I see it they are victims of marketing. You see, intel, nvidia etc. absolutely vitally need to have some "reason" for selling ever new processors. But frankly, there is almost none. Who really suffers and is hampered in his work because his CPU is 11% slower than the newest one? And keep in mind that those "11%" are a "marketing optimized" number anyway (by carefully choosing the "load").
Don't take that personally, but if even very experienced people like you occasionally fall for those tricks, what are the chances of the 99% seeing through it?
Final word: Guys (and gals), this is LET! In this segment the relevant issue isn't "even faster than..." but rather "solid, reliable, not brutally oversold, and reasonable support". The "pearls" in this segment, at least the way I see it, is not "3.4 GHz rather than 3.0 GHz" or "NVMe with 2.5 GB/s rather than one with 1.8 GB/s". No, it's "performs reliably, uninterrupted for months with a stable network and if there is a (rare) problem it's solved quickly".
That's why I love my netcup and Prometeus VPSs.
Obviously application workloads differ. My own stuff uses general purpose compute facilities in a mostly average way, and I find passmark to be pretty accurate in predicting its processing time for given servers. I used an i3-2130 for a while and it was faster per core than the current Xeons with newer architecture but slower clock, and passmark reflects that too. Older cpus (pre Sandy Bridge) are quite a bit worse though. Post Sandy Bridge the improvements have been incremental and not that great.
EDIT: actually my i3 may have been an i3-3xxx (Ivy Bridge), which is a bit faster than 2xxx of the same frequency. It was a low end OVH server at BHS before they split off the SYS brand.
By CPU intensive I mean exactly that. Databases need lots of IO and memory and high availability and only sometimes raw cpu. Using AWS for CPU tasks on an LET budget is a bizarre concept. Transcoding 10000s of hours of video is a standard example of what I'd call cpu intensive. You don't need enormous high-memory or SSD or 100-core servers for it. If your standard LET 4-core server is too slow, use a bunch of them in parallel. If something crashes, restart it, restore from backup, or whatever. My main cpu task uses around 100MB of memory and I run 1 instance per core, so the last thing I need is a huge Xeon Platinum box with TBs of ram.
The best cpu offer right now is the Kimsufi i3 flash sale in Canada, which appears to usually be delivered as a 4-core i5-3570S, which is around 7k passmark for $15 a month, with 1x2tb disk. That is fantastic. The best "regular" cpu offers are Hetzner auction servers or EX servers if you don't mind setup fees.
To the person who cancelled an i3: if your i3 is not yet gone (i.e. cancellation still pending), you might try your workload on a Hetzner dedicated core cloud server for a few hours and see if the speed matches your i3. The Netcup machine will at best be comparable to the Hetzner (but cheaper since it's monthly billed).
Regarding HDD responsiveness: in fact, if I have an HDD vps that's idle for a week or so and then I ssh to it and type "ls", there is a noticeable pause. That's what I mean about access time. With SSD there is no pause.
I currently have a Hetzner i7-3770 and a Kimsufi i5-3570s and I just find it great to have a dedi to knock around with. Both are idle a lot of the time and I could probably get by with just one, but I use the storage too.
Hello guys, where can I find the eggs for the 1.79€ VPS 200 G8 at netcup https://www.netcup.de/bestellen/produkt.php?produkt=2000 ?
I was trying to get a VPS 500 G8 but it is sold out.