All new registrations are manually reviewed and approved, so there may be a short delay after registration before your account becomes active.
Help test a VPS node?
Alright, I confess that I've been working on a little project for a while. I would be extremely grateful to anyone willing to assist us in testing a VPS node. This will be available until July 27, at which point we will compile statistics and remove all containers.
Please keep in mind that I am personally aware of possible points of failure in the configuration, which I want to observe. I say that to warn you: this is not intended for production use at this time and is intentionally not production ready. I do want to hear about any issues you have, and I don't want you to take it easy on the node. Crash it for all I care, but do so through heavy use, not because you took that statement as a challenge.
Here is the package:
2 CPU Cores
512MB RAM / 1024MB Burst
20GB Storage (RAID10)
100Mbit Unmetered
Denver, CO
I do ask that you agree to our Terms of Service, as this test does represent Catalyst Host's reputation with FDC Servers.
http://www.catalysthost.com/tosaup.php
Link to order: http://catalystvps.com/
Ultimately we are hopeful that this will be a small foot in the door for providing VPS services.
Warning: The VPS website is intentionally separate from Catalyst Host at this time and currently lacks a secure connection.
Comments
That's one long TOS page.
I like to cover my butt
Signed up
Get One.
I'd help you test it, but the container won't boot.
Get One.
Thank you!
Looks like the Solus master is acting up. It's hosted on a Hostigation VPS for now while I figure out what the best system to put it on is going to be.
Should be ok now.
@jarland It looks powerful.
What price are you planning to charge normally?
@airski It's hard to say right now, but definitely within LEB prices. I'm hoping the unmetered bandwidth is the selling point.
Hey, I signed up... doing the Debian upgrade dance right now. I would definitely suggest making or getting a minimal Debian template, one that doesn't come with samba, apache, and the other crap that is installed on the default template.
Thanks!
Agreed, and reinstalls seem to be unusually slow as well. Thanks for helping!
Debian minimal installed. Completely untested, straight from the OpenVZ wiki. Again with the "intentionally not production quality" thing.
Around 1 minute build time, pretty average.
root@crashingyourbox:~# cat /proc/cpuinfo | grep name
model name : AMD Phenom(tm) II X4 960T Processor
model name : AMD Phenom(tm) II X4 960T Processor
Eh, if you use these for production it would put me and many others off, but for testing I guess it's fine.
16384+0 records in
16384+0 records out
268435456 bytes (268 MB) copied, 2.05343 s, 131 MB/s
Average speed test for an almost unused node.
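For reference, output in that shape typically comes from a dd write test along these lines; the exact flags and output path are assumptions, chosen to match the 16384 records of 16k (268435456 bytes) shown above:

```shell
# Hypothetical reconstruction of the write test: 16384 blocks of 16k = 268435456 bytes.
# conv=fdatasync forces the data to disk before dd reports, so the MB/s figure
# reflects actual disk throughput rather than page-cache speed.
dd if=/dev/zero of=/tmp/ddtest bs=16k count=16384 conv=fdatasync
rm -f /tmp/ddtest
```

Without conv=fdatasync (or oflag=direct), numbers like this mostly measure RAM, so it's worth checking which variant a tester ran before comparing nodes.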
Now an actual benchmark:
Wasn't quite able to use the full 11.2Mb/s capacity - but still a good 90-95%.
Download speeds from other areas of the US were pretty crappy, I must admit; the script took a hell of a long time to complete purely because of this.
root@crashingyourbox:~# bash bench.sh.1
CPU model : AMD Phenom(tm) II X4 960T Processor
Number of cores : 2
CPU frequency : 3400.020 MHz
Total amount of ram : 1024 MB
Total amount of swap : 0 MB
System uptime : 8 min,
Download speed from CacheFly: 9.90MB/s
Download speed from Linode, Atlanta GA: 4.69MB/s < Decent
Download speed from Linode, Dallas, TX: 665KB/s < Really bad
Download speed from Linode, Tokyo, JP: 2.62MB/s < Decent
Download speed from Linode, London, UK: 442KB/s < I'm pretty sure that this isn't just a Linode problem, since we've now tested 4 locations - It appears the bandwidth quality is pretty bad.
Download speed from Leaseweb, Haarlem, NL: 179KB/s < Ouch
I actually couldn't complete the bench because of how long it was taking (I waited more than 30 minutes for each download test to complete); that's not good at all. I'll edit this later today when it completes.
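For anyone wanting to spot-check a single location without waiting on the whole script, a one-off transfer with curl gives a comparable figure. The CacheFly URL below is just a commonly used public test file, an assumption on my part, not necessarily the exact endpoint bench.sh hits:

```shell
# One-off download speed check against a single endpoint.
# The URL is an example test file; %{speed_download} reports average bytes/sec.
curl -s -o /dev/null -w 'average: %{speed_download} bytes/sec\n' \
    http://cachefly.cachefly.net/100mb.test
```

Running this a few times at different hours helps separate a congested route from a one-off bad sample.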
0% Packet loss from a good few locations
Ping statistics for 76.73.109.131:
Packets: Sent = 1002, Received = 1000, Lost = 2 (0% loss),
Traceroute for more information:
Tracing route to node01.catalysthost.com [76.73.109.131]
over a maximum of 30 hops:
1 2 ms 5 ms 4 ms BTHOMEHUB [192.168.1.254]
2 11 ms 12 ms 12 ms 217.32.145.5
3 12 ms 11 ms 12 ms 217.32.145.30
4 19 ms 18 ms 22 ms 213.120.181.190
5 18 ms 18 ms 18 ms 217.41.169.215
6 19 ms 23 ms 19 ms 217.41.169.109
7 18 ms 68 ms 18 ms acc2-10GigE-10-3-0.sf.21cn-ipp.bt.net [109.159.251.225]
8 34 ms 30 ms 28 ms core1-te0-2-4-0.ilford.ukcore.bt.net [109.159.251.141]
9 27 ms 34 ms 31 ms peer4te-0-7-0-0.telehouse.ukcore.bt.net [62.172.102.21]
10 23 ms 24 ms 24 ms 195.99.126.154
11 110 ms 111 ms * tge2-3.nyc01-1.us.as5580.net [80.94.64.234]
12 135 ms 128 ms 129 ms tge2-4.chi01-1.us.as5580.net [78.152.34.150]
13 148 ms 154 ms 146 ms tge1-1.den01-1.us.as5580.net [78.152.34.214]
14 216 ms 170 ms 239 ms fdcservers-30058-gw-1.den01-1.us.as5580.net [78.152.32.102]
15 150 ms 147 ms 146 ms node01.catalysthost.com [76.73.109.130]
16 151 ms 150 ms 151 ms node01.catalysthost.com [76.73.109.131]
So it goes through Atrato IP Networks' AS, which consists of a really confusing bunch of bandwidth providers:
http://bgp.he.net/AS5580
AS3257 Tinet SpA
AS286 KPN Internet Backbone
AS3549 Level 3 Communications, Inc. (GBLX)
AS12389 OJSC Rostelecom
AS22773 Cox Communications Inc.
AS8220 COLT Technology Services Group Limited
AS6128 Cablevision Systems Corp.
AS6830 UPC Broadband Holding B.V.
AS9121 Turk Telekomunikasyon Anonim Sirketi
The majority is 28% AS286 (KPN Internet Backbone) and 21% AS3257 (Tinet SpA), which explains the bad download speeds to certain locations.
I'll be installing some software and tearing this apart with constant copying of files and such, to apply some pressure and see how the node responds under stress. More importantly, I'll attempt to crash good old OpenVZ with the kind of sustained load it hates so much.
Further feedback: I don't think unmetered is, or should be, a good selling point. Since you're using FDC and the bandwidth seems to be very poor quality, "unmetered" wouldn't look great when you can hardly use it. That type of plan attracts heavy bandwidth users, and if they can't actually access most of that bandwidth, it would probably put them off.
I would recommend getting some better-quality bandwidth and selling metered plans; it's a much better image.
@PAD: It's possible that many people are running the download test at the same time. Give it some time to settle down and then try again?
Well, I don't believe that to be true; I ran it multiple times over a 30-45 minute period.
I still have it running - latest check just completed.
Download speed from Softlayer, Singapore: 172KB/s
@PAD Thanks, useful feedback
The CPU will be used in production, I can say that almost for sure. Its continued performance is definitely going to be the determining factor in the node's capacity and, ultimately, the virtualization and pricing choices. It's not bad, but it is definitely going to be my first bottleneck.
As for the network, I was definitely waiting for that feedback. I've got another server in FDC that is doing a lot better, so there's going to be some discussion this week.
Comparison from DataShack:
Download speed from CacheFly: 43.1MB/s
Download speed from Linode, Atlanta GA: 8.55MB/s
Download speed from Linode, Dallas, TX: 19.6MB/s
Download speed from Linode, Tokyo, JP: 7.74MB/s
Download speed from Linode, London, UK: 4.60MB/s
Download speed from Leaseweb, Haarlem, NL: 12.0MB/s
Download speed from Softlayer, Singapore: 5.08MB/s
Download speed from Softlayer, Seattle, WA: 10.6MB/s
Download speed from Softlayer, San Jose, CA: 21.2MB/s
Download speed from Softlayer, Washington, DC: 16.3MB/s
@jarland
No problem. I'll continue testing, and if I find anything new I'll send you the information.
Don't be afraid to push that CPU, I wanna hear it cry
It's been hanging around 70-73% idle quite consistently from about 10 clients to 17 clients. The total on here is 17, but I may add more later today just to stress it further. Just waiting on IPs.
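For what it's worth, an idle figure like that can be sampled without any extra tools straight from /proc/stat; this particular reading is a since-boot average rather than an instantaneous one, so expect it to move slowly under load:

```shell
# Print the CPU idle percentage since boot from /proc/stat.
# On the "cpu" line, field 5 is idle jiffies; summing all numeric fields
# gives total CPU time, so 100 * idle / total is the idle percentage.
awk '/^cpu /{t=0; for(i=2;i<=NF;i++) t+=$i; printf "%.0f%% idle\n", 100*$5/t}' /proc/stat
```

For an instantaneous view closer to what top or vmstat shows, you'd sample this twice a second apart and diff the counters.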
Download speed from CacheFly: 9.90MB/s
Download speed from Linode, Atlanta GA: 4.69MB/s
Download speed from Linode, Dallas, TX: 665KB/s
Download speed from Linode, Tokyo, JP: 2.62MB/s
Download speed from Linode, London, UK: 442KB/s
Download speed from Leaseweb, Haarlem, NL: 179KB/s
Download speed from Softlayer, Singapore: 172KB/s
Download speed from Softlayer, Seattle, WA: 659KB/s
Download speed from Softlayer, San Jose, CA: 1.02MB/s
Download speed from Softlayer, Washington, DC: 1006KB/s
Completed finally :P
Lol that's terrible. I posted above running that from DataShack, which gives me even more confidence in their network. Oh well, that's what these two weeks are for
I will not be available for several hours. Been awake entirely too long. I won't be setting anything to wake me on failure.
So far so good from my perspective. The CPU has actually exceeded my expectations, despite flash and qemu hammering it. The billing site (catalystvps.com) will go through a migration sometime within the next 24 hours. SolusVM master will follow in a few days.
Appreciate you guys helping me break this thing in.
Later today I'll post a status page here so I don't have to bump this thread to communicate with you guys, sorry to the mods for that. Just restarted all containers, working on IPv6.
sweet, was about to update this thread about the reboot
@jarland
I'm seeing some weird networking issue... might want to hit up FDC about it:
http://sprunge.us/ZfFj (from nyc - catalyst)
http://sprunge.us/RjKS (catalyst - nyc)
@kbar Yeah, I had to reboot because I hadn't enabled IPv6 in the kernel. Everyone has an IPv6 address assigned now. That networking issue is not pleasing me at all. I did a bunch of tests from their Chicago location and compared them to Denver, so they can see for themselves that something is horribly wrong here.
This page will be updated with any issues, tests, or changes to the node.
http://jarlanddonnell.com
Honesty is key. If I'm selling access to this node later, I want you guys to know how I operate. Input is welcome.
Thanks for the help testing, guys. Going to do a little cleaning up and hardening of the system, as well as re-evaluating some things. The containers will be brought down today. The network issues were cleared up to reasonable levels, and the system handled everything far better than expected. Hoping to have a marketable product in the next few days.
sweet!
Go go go! I want one!