Not sure why buyvm.net ranks high
It's my first VPS: a 512M OpenVZ box at BuyVM.net. Yes, the pricing is awesome; for $5.95/month I got 512M RAM, 50G disk, and 2TB bandwidth! LEA's script also worked great to set up the box with Nginx, PHP-CGI, and MySQL. However, the performance is really not good. :-(
I keep seeing slowness on my WordPress site, which worked great on shared hosting.
Furthermore, I see many "upstream timed out (110: Connection timed out) while reading response header from upstream" errors in the log. Finally, I figured out that the I/O is slow most of the time on that node.
See the following; you'll be wowed!
BuyVM node07:
512+0 records in
512+0 records out
33554432 bytes (34 MB) copied, 16.6113 seconds, 2.0 MB/s
Hawkhost, a shared hosting (my previous hosting)
512+0 records in
512+0 records out
33554432 bytes (34 MB) copied, 0.335879 seconds, 99.9 MB/s
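For anyone who wants to reproduce this: the outputs above are consistent with a dd write test along these lines. The exact flags are my guess, inferred from the 33554432-byte total (512 blocks of 64 KiB), not quoted from the post.

```shell
# Sequential write test: 512 blocks of 64 KiB = 32 MiB (33554432 bytes),
# matching the outputs quoted above. Flags are an assumption.
dd if=/dev/zero of=ddtest.img bs=64k count=512
```

dd prints the elapsed time and the resulting MB/s figure itself; delete ddtest.img when you're done.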
What do you think, guys? When LEA lists so many cheap VPS offers here, do we need to think about the performance of the box before buying it? (I was already worried about performance; that's why I picked one listed in the top 3 by LEB.) Or am I just unlucky?
BTW, you may want to know why I switched to a VPS. The reason was that the shared hosting only provided 6G of space and 60G/month of data transfer, and the contract was about to end.
Comments
so submit a ticket asking them to move you or take a look?
@ciapps - if you're having any performance hiccups, just log a ticket and we can check it out
As we've been rolling out sales I've been trying to get back to some of the older nodes and make sure they're still doing fine.
There's a LOT for us to get done, though, so we're working hard at it all. Node07 is due quite a bit of love; it's on my TODO list for tonight. I can assure you it'll be rockin' soon enough
Francisco
The upstream errors may be due to your fastcgi settings. Are you using an opcode accelerator (e.g. XCache or APC) to speed up WordPress?
In one of my nginx configuration files, for PHP I have:
fastcgi_connect_timeout 60;
fastcgi_send_timeout 180;
fastcgi_read_timeout 180;
fastcgi_buffer_size 256k;
fastcgi_buffers 4 256k;
fastcgi_busy_buffers_size 256k;
fastcgi_temp_file_write_size 256k;
fastcgi_intercept_errors on;
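For context, directives like these normally live inside the PHP location block of the nginx server config. A hedged sketch of where they fit (the listen address and paths are my assumptions, not from the thread):

```nginx
location ~ \.php$ {
    fastcgi_pass 127.0.0.1:9000;   # assumed php-cgi listen address
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    # Raising the timeouts hides the 110 errors longer, but if the node's
    # I/O is the real bottleneck, the requests are still slow underneath.
    fastcgi_connect_timeout 60;
    fastcgi_send_timeout 180;
    fastcgi_read_timeout 180;
}
```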
@francisco. The server is down now. Hopefully, it's you working on something to make it better.
@pezzy, I used LEA's script, so I think all of those settings are at their default values. I Googled for an answer and found suggestions similar to yours. I will give it a try.
And I am not using an accelerator.
@francisco,
Server is up. It's apparently much faster, and I/O is improved too (it's 20-30MB/s now). Thanks!
I am not sure if you did something. If so, can you explain what problems node07 had? Hopefully it can stay in good shape.
Thanks again!
@caiapps - I have been using the script at http://vbtechsupport.com/920/ , but if you do use it, the OS should be a fresh install, and avoid MariaDB as it will use 400MB on its own unless it's heavily tweaked.
@caiapps - 07 needs a bit of love still
It should be more like 150MB/sec, so I'm looking to see if a disk is shot.
I'm thinking 07 didn't get some of the adjustments we put in for it; I'll know more soon.
Francisco
@pezzy, Thanks PeZzy! Too many changes for me now, but I will definitely give it a try when I get another box.
@francisco
Sounds good. I feel much better now. Thank you so much!
buyvm is awesome... I/O was 164 MB/s when I checked
@caiapps - and thank you for being patient
If you ever have a concern you can ticket, PM, tweet, IRC, carrier pigeon, etc, and we'll do our best to look after it.
Thanks again!
Francisco
Just to throw this out, if you're using some form of caching, make sure it's actually working. I know when clients have transferred sites, they usually mess up permissions and ownership so your caching may be misfiring.
Francisco, I was wondering: why is your 512/1024 package $5.95 and your 1024/2048 $12.95? HD space also only goes up by 10GB.
Hmm, I'm getting pretty good disk I/O on node07. Never had a problem with it so far; it's always been above 100MB/s.
Both my BuyVMs are around 160MB/s
node07 needed a fair bit of attention but it has been running great since the checkup.
If you're having any quirks just let me know.
Thanks for the chance!
Francisco
node09:
node37:
I've gotta admit, the way BuyVM manages to get such good IO with such cheap plans is pretty darn amazing! I get great speeds on my systems (>100MB/s)!
@alertdb - I'll check on those later to see if maybe a drive went out, but it's quite possible it was just during a busy period
Run the dd test without "conv=fdatasync" and then with it.
The results will be very different.
Yes, if I use "conv=fdatasync", I also get over a hundred. So the new question is which one we should use to test I/O.
Thanks!
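For reference, a side-by-side of the two variants being debated might look like this. The file sizes here are mine, chosen small so the test is quick; the thread's numbers came from larger writes.

```shell
# Variant 1: no sync. dd returns as soon as the data is in the page cache,
# so the reported speed can far exceed what the disk can actually sustain.
dd if=/dev/zero of=cached.img bs=64k count=256

# Variant 2: conv=fdatasync. dd calls fdatasync() once at the end, so the
# reported time includes flushing the data to disk.
dd if=/dev/zero of=synced.img bs=64k count=256 conv=fdatasync

# Clean up when done: rm cached.img synced.img
```

On a loaded node the second number drops sharply while the first can stay high, which is exactly the gap people are arguing about below.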
We tried to come up with a standard set of commands to use for comparison a few months ago, but many folks here didn't understand what we were trying to do. For some reason, disk space was important to them.
I would really like to see a set collection of commands to use for comparisons.
fdatasync is more considerate to your neighbors and drives. dsync pretty much skips any caching the drives would do, which ends up hammering them :P
Francisco
I myself have no reason to test I/O on my BuyVM boxes. I am sure they are excellent in that regard. The only problem I have is that all my BuyVM boxes perform worse than those from 123sys and hostiga with the same computing power.
@drmike
It would be a great idea if there was some kind of standard.
It IS hard to standardize the tests, as there are many varying points of view. In addition, we always see write tests but rarely see "read" tests. I suppose it's just assumed that reads will be acceptable if writes are acceptable. This is true to an extent, but it basically lumps the decent read speeds in with the excellent read speeds.
As for the dd tests, they measure sequential writes (which has its faults), and oftentimes switches like fdatasync are added in, and sometimes they're left off.
All the dsync and fdatasync options force you to wait... they aren't "real world" tests of the user experience.
In the real world, fdatasync doesn't come into play, because control is returned to the user before the data is synced.
Why should we care what the system is doing in the background, as long as it doesn't affect our performance?
Take the two dd tests above as an example: one has fdatasync on and the other doesn't. So according to these tests, on our VPS, how long does it take before my command prompt comes back after writing a 1.1GB file?
1.5 seconds, and NOT the 6.86 seconds fdatasync is claiming.
So which is the more important metric, a user who waits 1.5 seconds or a synthetic benchmark which claims it takes 6.86 seconds?
@kiloserve Agreed, but again, most folks really didn't seem to understand that.
@kiloserve None. As I see it, the dd test (with fdatasync) isn't about how fast you get your prompt back after writing a 1.1 GB file, but an easy quantification of how fast your VPS can do physical I/O.
In most cases I do not run file-I/O commands on a VPS by hand; I leave that to scripts.
When scripts copy files and work with them, they need the files on disk, not in the RAM cache. fdatasync shows exactly this. I'm primarily not interested in how fast a file can be copied into non-permanent memory. That non-permanent memory is not safe, because the file might get corrupted before it is written to disk (high load, etc.).
In my eyes, fdatasync is closer to the real-world case, because it shows the times that arise when files are copied and physically written to the disk, just like when the scripts on the VDS run on their own.
@skagerrak. File copies are two-task operations (a read of the original file and a write of the copy). File writes, by default on most Linux systems, do not use fdatasync. I think what you are referring to is more related to "read speeds" rather than write speeds, as there wouldn't be previously cached write data; only the read portion is cached beforehand. The reads are separate from the writes, and reading from cache is usually in the several-GB/s range, rather than the roughly 1GB/s range for writes.
Even reading directly from a drive, most of our 10+ disk arrays can do sequential reads in 1GB/s+ territory; so read speeds really aren't a factor.
So you will still get the 1.5-second write time for a 1 gigabyte file and not the imposed fdatasync time of 6.86 seconds. The only time you'll see 6.86 seconds to copy a 1.1 GB file is when you run benchmarks with fdatasync turned on.
@Leo, to the user, 1.5 seconds IS how fast the VPS can do physical I/O. The 6.86 seconds is how long it takes when we force the system through a synthetic benchmark. But in real life, it only takes 1.5 seconds to write a 1GB file.
Only during a benchmark forcing fdatasync does a user experience 6.86-second writes for a 1GB file. All the other times, it only takes 1.5 seconds to write a 1GB data file.
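If anyone does want to measure reads separately, as mentioned above they are rarely tested. A rough sketch (mine, not from the thread): write a file, optionally drop the page cache, then read it back. Without dropping the cache, the read mostly measures RAM speed, not the disk.

```shell
# Write a test file and flush it to disk first.
dd if=/dev/zero of=readtest.img bs=64k count=256 conv=fdatasync

# Optional, requires root: evict cached pages so the read hits the disk.
# echo 3 > /proc/sys/vm/drop_caches

# Read it back; dd reports the read throughput.
dd if=readtest.img of=/dev/null bs=64k
```

Delete readtest.img afterwards; on a shared node, run it a few times and take the median, since a single pass can land in a busy period.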
@BuyVM, sorry to hijack your thread, it's an interesting discussion but not here. Gentlemen, let's continue this in a different thread.
@kiloserve: Well, the manpage of fdatasync(2) tells it like this: "Applications that access databases or log files often write a tiny data fragment (e.g., one line in a log file) and then call fsync() immediately in order to ensure that the written data is physically stored on the harddisk. Unfortunately, fsync() will always initiate two write operations: one for the newly written data and another one in order to update the modification time stored in the inode. If the modification time is not a part of the transaction concept fdatasync() can be used to avoid unnecessary inode disk write operations."
So I read the manpage as saying that using dd with conv=fdatasync (for our testing purposes) is, if anything, a lenient way of measuring write times.
And since I rely on databases and file-write operations, measuring speed with fdatasync is closer to what actually happens.