Community Review: PixelsHost Basic OpenVZ Plan
So I really like @maxexcloo's format for these, so I'm going to follow it to the best of my ability with my own twist (except I'm just going to get straight to the meat on this one).
Here we go, first "presentation" by me (I wouldn't call it a review).
PixelsHost recently started up as a VPS company in Great Britain. I believe they were originally part of CreeperHost.net, but instead of Minecraft hosting they're now providing VPS hosting under a new brand.
Today I'll be reviewing their Basic plan which comes with these specs:
- OpenVZ
- 5GB HDD Space
- 500GB Monthly Bandwidth
- 1 × 300MHz CPU (I'll explain this later)
- 256 MB Guaranteed RAM
- 256 MB vSwap
- £3.99/month
Disclaimer: I am in no way affiliated with PixelsHost. I decided to give them a shot and purchased a plan to provide the specs to the community (their representative was unsure whether he could provide them himself). I'll be as unbiased as I can from this point on.
Basic:
So it seems the server is located in Chicago, Illinois. I didn't look too closely at the Terms of Service, but it seems they do allow IRC (as long as it's not eggdrops). They do NOT allow proxy servers, though. Operating-system-wise they offer AltLinux, ArchLinux, CentOS, Cern, Debian, Fedora, Scientific, Ubuntu, and openSUSE (they seem to have all the standard versions, including Ubuntu 12.04 and Fedora 15 and 16).
Support:
Looking only at the support tickets, I received a response 12 hours after my initial ticket (I did submit it pretty late). After the initial PayPal issue (I think the account blocks a few countries from paying, and the US could be one of them, so they should check it out), everything went pretty smoothly. Luke caught me on IRC and helped me out with a couple of problems I was having, too. I did have the bad luck of trying to get into SolusVM while they were doing some server migrations, but Luke was more than happy to set the VPS up to my specifications.
Defaults:
I installed their Debian 6 32-bit OS template, then ran the updates (apt-get update && apt-get upgrade). Nothing else has been altered.
ps aux of a fresh install here:
root@LET-test:~# ps aux
USER       PID %CPU %MEM   VSZ  RSS TTY   STAT START TIME COMMAND
root         1  0.0  0.2  2028  724 ?     Ss   16:50 0:00 init [2]
root         2  0.0  0.0     0    0 ?     S    16:50 0:00 [kthreadd/1032]
root         3  0.0  0.0     0    0 ?     S    16:50 0:00 [khelper/1032]
root       356  0.0  0.3  8668  800 ?     Ss   16:50 0:00 /usr/sbin/sasla
root       357  0.0  0.1  8668  376 ?     S    16:50 0:00 /usr/sbin/sasla
root       371  0.0  0.2  1736  644 ?     Ss   16:50 0:00 /sbin/syslogd
root       382  0.0  0.9  5140 2440 ?     Ss   16:50 0:00 /usr/sbin/apach
www-data   385  0.0  0.7  5140 1848 ?     S    16:50 0:00 /usr/sbin/apach
root       404  0.0  0.3  2288  868 ?     Ss   16:50 0:00 /usr/sbin/cron
root       539  0.0  0.6  9984 1580 ?     Ss   16:51 0:00 sendmail: MTA:
root       554  0.0  0.3  5488  952 ?     Ss   16:51 0:00 /usr/sbin/sshd
root       563  0.0  0.3  2392  864 ?     Ss   16:51 0:00 /usr/sbin/xinet
root       572  0.0  1.1  8544 3000 ?     Ss   16:53 0:00 sshd: root@pts/
root       574  0.0  0.6  2960 1604 pts/0 Ss   16:53 0:00 -bash
root       610  0.0  0.3  2348  932 pts/0 R+   17:06 0:00 ps aux
Basic Information:
/proc/cpuinfo showed 8 cores with the following:
root@LET-test:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 26
model name      : Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
stepping        : 5
cpu MHz         : 423.936
cache size      : 8192 KB
physical id     : 0
siblings        : 8
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 11
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 sse4_2 popcnt lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips        : 4533.64
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:
As for the 300MHz CPU figure they had in their post (and on their website): after talking with Luke, that isn't a maximum, it's the minimum you're guaranteed (I think they should have clarified that, though).
/proc/meminfo showed some standard results:
root@LET-test:~# cat /proc/meminfo
MemTotal:         262144 kB
MemFree:          221936 kB
Cached:            13952 kB
Active:            17828 kB
Inactive:            836 kB
Active(anon):       4644 kB
Inactive(anon):       68 kB
Active(file):      13184 kB
Inactive(file):      768 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:        262144 kB
SwapFree:         262144 kB
Dirty:                 0 kB
AnonPages:          4712 kB
Shmem:              2624 kB
Slab:               3288 kB
SReclaimable:       1600 kB
SUnreclaim:         1688 kB
Inode allocation:
root@LET-test:~# df -i
Filesystem     Inodes IUsed   IFree IUse% Mounted on
/dev/simfs    2621440 26136 2595304    1% /
tmpfs           32768     4   32764    1% /lib/init/rw
tmpfs           32768     1   32767    1% /dev/shm
vmstat:
root@LET-test:~# vmstat
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in    cs us sy id wa
 1  0      0 221896      0  13660    0    0     0     1    0 21807  0  0 99  0
Edit: I updated a few things and tried to fix some of my grammar. I seriously need to read my own work before publishing... sorry, guys.
Comments
Tests:
Each test was run three times and the middle-ranked result was picked.
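For anyone curious, that median-of-three selection is easy to script. This is just a sketch of the idea; run_test and the numbers are stand-ins, not the actual benchmark used in this review:

```shell
#!/bin/sh
# Median-of-three: run a benchmark three times, keep the middle result.
# run_test is a hypothetical stand-in for the real benchmark (dd, ioping, ...);
# here it just echoes a throughput number so the sketch is self-contained.
run_test() { echo "$1"; }

results=""
for r in 150 178 142; do
    results="$results$(run_test "$r")
"
done
# Sort the three results numerically and take the 2nd line: the median.
median=$(printf '%s' "$results" | sort -n | sed -n '2p')
echo "median: $median MB/s"
```

With a real benchmark you'd parse its output for the number first, but the sort/sed trick for picking the middle value is the same.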
The Cachefly test:
Ping tests are amazing with Google (they don't have IPv6):
Disk I/O:
ioping results:
Well, I don't own a GeekBench license, but here:
It got an overall score of 3715.
Check out the full test here: http://browser.primatelabs.com/geekbench2/view/848266
Unix Bench:
Note:
After talking with Luke for a bit, apparently I'm currently on their old hardware (L5420s), which usually has only 8 to 9 customers per node. These boxes are from CreeperHost.net and ran low-contention game-server VPSes.
They're currently in the process of setting up their first official node, which will have the following specs:
So if anyone's curious, I'm more than willing to re-run all of these tests once I get onto the new node (assuming I do).
Personal Opinion:
Obviously I'm not going to use this in any production environment until I get at least 20 MB/s I/O, and until/unless I'm switched to the newer node. After talking with the guys, I think they need some time to adjust to everything (I mean, they kinda JUST opened for business). They probably had several problems right off the bat (like SolusVM being a pain and whatnot), so I don't blame them; it's just a rocky start. Let them mature a bit and we'll see where it goes. I'll report back if I do get onto a different node.
Author Note:
Let me know how I did. This was pretty hastily done (which you can obviously assume) and I actually really wasn't planning on reviewing anything (just posting the specs onto the main thread). But yeah tell me how I can improve.
I like it.
I like it, and I remember CreeperHost from when I was an MC host. They never seemed to do too well, but hopefully this time around they'll do well. :]
@HalfEatenPie thank you for spending money and effort on this review. You're the pie!
Updated with the Unix Bench test.
Also, not a problem; for you guys, almost anything :P
@HalfEatenPie
Thank you very much for your kind and detailed review, it is highly appreciated and I think this format is perfect. I hope that you will get better results on our new machines.
@Damian
+1
@eastonch
We have been doing quite well with CreeperHost; we had one extremely bad patch where we were knocked out for three weeks, but CreeperHost is still going strong as well.
Ouch!
@Luke_H -- The MCForums were a nightmare after they implemented "approved" hosts. -- They wouldn't accept a sole proprietor's letter from HMRC to say I was "approved", considering most of them weren't even legally self-employed.
I can't remember if it was you, or another MC host, who was really bad at things... Must have been another. It's funny how many hosts actually called themselves ([insertminecraftnamehere]host)... -- [Creeper]Host, [Pigsaddle]Host, etc.
Where are your servers hosted at for CH?
@eastonch
The reason that was implemented is that a lot of VPS resellers, quick-buck hosts, and null-billing hosting companies were causing issues, and the owner of MinecraftForums had to pay out of his own pocket at one point. Legal documents and registrations were implemented to stop this. A bit of a double-edged sword, but it is effective, and you can still post in the "other hosts" section to offer wares.
Again, we did have a bad patch, but not constant "bad things".
We have a few locations available, can I re-direct you to our post so you can look over them? Here you go: http://tinyurl.com/cqf7aqy
@GetKVM_Ash
Yes, I know, ouch. Don't worry, our new servers are beast.
@Luke_H are your servers managed from within a VPS? Just setting a client up with a VPS modelled around MC hosting?
@eastonch
Your first one is correct: we create an individual VPS for each Minecraft server (an install script is run on the VPS, and it comes with a dedicated IP and a free subdomain). Users are given root SSH access as well, so more advanced users can take full advantage of the package they're buying. Not very common for Minecraft hosting, I believe?
We allow users to install other programs on their service as well, such as TeamSpeak, but we don't offer support if it breaks. We do have install scripts in the Control Panel for MySQL/Apache/Terraria/CSS :P
@Luke_H didn't uhm... Redstone Host do the same thing? Are they still around?
@eastonch
Redstonehost are the pit of the Minecraft Hosting Community, they turned into another Brohoster :P
They still accept orders since it's all automatic, and they still haven't been shut down; however, you get absolutely no refund, and all of their servers are oversold like hell.
@Luke_H -- They tried to say they were colocating inside OVH, yet they rented from OVH? Strange guys.
I never had a server from them, but they always seemed "good".
They must still be paying for their WHMCS and stuff then, and the servers.
I was aware of the drive issues for about 4 months; we've been testing the best way to improve our service at creeperhost.net and decided drive access speed was definitely our weakness.
The box above was our first EVER machine, rented when we started CreeperHost.net... Our modern ones perform much more like:
``[root@mdn3 vz]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; rm test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 6.01612 s, 178 MB/s
[root@atl5 vz]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; rm test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 5.99772 s, 179 MB/s
[root@la6 vz]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; rm test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 4.30006 s, 250 MB/s``
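As a quick sanity check on those figures (my own arithmetic, not from the post): dd reports decimal megabytes, so the rate is just bytes copied divided by elapsed seconds. For the mdn3 run:

```shell
# Verify dd's reported rate for the mdn3 run:
# 1073741824 bytes / 6.01612 s, with 1 MB = 10^6 bytes as dd uses.
awk 'BEGIN { printf "%.0f MB/s\n", 1073741824 / 6.01612 / 1000000 }'
```

which matches the 178 MB/s dd printed; the other two runs check out the same way.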
These are the same drive setup we have planned for our pixelshost.net node when it comes up; it's being custom-built for us and then shipped from CA all the way to GA, and should be ready for next week.
These boxes, except for the la6 box, are all fully loaded and honestly slightly oversold (only by a couple of GB, due to stock shortages; actual usage is still below 30% on them, however).
So I personally think the new boxes fix the issue quite well, and hopefully we'll provide a really good service on pixelshost.net; we're only using our old Chicago boxes for a 'beta' period before we retire them.
We do still use L5420/E5420 processors as we decided the extra money was better spent on improving the drives instead of switching to E3 processors. (We tried a few E3's and our customers didn't actually notice the difference.)
I'm still concerned about why that system performed THAT poorly on that test, I'll be taking a look at it later today, mind you, I haven't rebooted it or anything since October last year...
We have some rented boxes inside OVH; they're not all that good when things go wrong, and the lack of RAID arrays etc. hurts, which is why we're moving away from OVH ourselves now.
~Paul T
I like @Paul_T's answer.
So you're saying we can expect better specs by next week?
Thanks for the review, you did an excellent job :-)
My interpretation of the above quoted information:
They provided very low quality Minecraft servers if they used this system, and they were aware of its low performance way back when they started their previous venture, yet had the nerve to try to pass it off for the new one. Safe to say they don't share the priorities I expect from either my Minecraft or generic VPS provider. Thanks for the review. I won't be trying them.
For the guys at Pixelshost...tough love is what you get here, make it better and move forward. Don't take my words as an insult, take them as a challenge. Respect warranted for honesty.
Honestly, it wasn't that bad at the start. That machine is heavily loaded with Minecraft servers, and those are being replaced now that we're more established.
We were a small company with no leverage to bring prices down; we couldn't afford the 4- and 8-drive setups we use now, simple as that, so we gave people ramdrives to resolve the drive-contention issue.
I understand your notes, and we are working on them; unfortunately, the guy who builds for us in the US lives on the other side of the country from where we want the new machine to be, so we're relying on UPS actually delivering to the right place. (They sent our machine destined for Los Angeles to New York last week... Only a small mistake, I guess... They're, like, right next to each other, right?)
Thanks for the 'tough love', although my admitting the issues should show we're ready to work on them; I know of very few places that would admit a problem and then refuse to fix it.
But yes, hopefully there'll be far better performance next week. I didn't realise this particular machine was suffering so badly; I'll be looking into that later today, once I've finished migrating our SolusVM master to a new machine to cope with the extra load.
I just re-ran the reviewer's test on the node he is on and got these results:
[root@us1 vz]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; rm test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 7.14918 s, 150 MB/s
If the reviewer doesn't mind, I'd like permission to access his VPS so I can investigate the performance difference more closely.
To avoid a triple post:
I created a VPS on the same node, and ran the two tests which concerned me again.
[root@us1 vz]# vzlist -a
CTID NPROC STATUS IP_ADDR HOSTNAME
129 37 running
136 148 running
770 37 running
818 66 running
936 36 running
938 80 running
1032 12 running LET-test
1039 3 running test
[root@us1 vz]# vzctl enter 1039
entered into CT 1039
[root@test /]# wget -O /dev/null http://cachefly.cachefly.net/100mb.test
--2012-07-13 00:19:28-- http://cachefly.cachefly.net/100mb.test
Resolving cachefly.cachefly.net... 205.234.175.175
Connecting to cachefly.cachefly.net|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: `/dev/null'
100%[======================================>] 104,857,600 2.73M/s in 40s
2012-07-13 00:20:07 (2.53 MB/s) - `/dev/null'
[root@test /]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; rm test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 7.58158 s, 142 MB/s
rm: remove regular file `test'? y
[root@test /]#
I'm personally quite satisfied with the performance of the VPS, but I'm still concerned as to why the reviewer's VPS performed so poorly, and I'd like to resolve it, as this is not the service we intended to provide.
Are you saying that right now, that server had MC Servers with 10MB/s? That must lag like a bitch!
I think you've misread 142MB/sec as 10MB/sec.
Plus we used RamDrives for the MC servers world files so everything was instant.
Every provider is banging on about ramdisks; there's only so much you can do with a ramdisk!
Especially with the new MC versions eating drive space for worlds; that's why we've switched to RAID arrays with 15,000rpm SAS drives, like the machine I'm setting up as we speak:
[root@mdn5 ~]# hdparm -tT /dev/sda1
/dev/sda1:
Timing cached reads: 12258 MB in 2.00 seconds = 6138.03 MB/sec
Timing buffered disk reads: 1402 MB in 3.00 seconds = 467.21 MB/sec
I still want to know why the reviewer's VPS performed so badly; I need to wait for his permission before I start poking around, though.
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 126.118 s, 8.5 MB/s
@Paul_T -- this is what I was referring to; and you said this was on the same node as MC servers.
[root@test /]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync; rm test
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 7.58158 s, 142 MB/s
So is that one, which is a test I ran about 5 minutes ago... I'm not sure why he got such poor results. (It's the same node, as evidenced by the vzlist command in the post above.)
What's an ioping test look like from the node now?