Help us test our "LowEndBox" servers
Hello,
I'm fairly new to the LET community, so I should introduce myself. Hi, I'm Eric Andrews, the owner of CubixCloud Web Services. We're currently working to update our website, services, and team. I thought I would find more interest from the LowEndTalk community than by posting on WebHostingTalk. (I hope this is allowed on LET, hehe)
We need about 5 users to run some tests on our new Chicago OpenVZ servers. (This information is not currently available on our website.) I would like to know your thoughts on overall speed, network, control, etc. Posting your results in this forum would probably benefit anyone interested, as well as help us get a good idea of what's going on from a customer point of view. If you have any suggestions, compliments, complaints, etc., we would love to hear them! The first 5 to e-mail me with interest will get access to a 1GB "LowEndServer".
Please let me know if you have any questions about this beta test or about the servers; we would be happy to answer them here.
E-mail: [email protected]
Comments
Thanks for the emails and all of the interest - a few more servers are still available.
I have sent you an email but have yet to receive a reply.
Sorry guys for the late replies, I had something to do. Getting back to replies now
We have received more than enough interest in this. Thanks for your help - the LET community is great.
Just got set up; I'll post a quick benchmark:
I/O looks a bit low for a new server, but I guess that's because everyone is running tests at the same time.
Might be that it's running RAID 1.
These servers are running RAID1 with 1TB drives. Could you suggest another solution?
ioping as well:
I'll run the dd again in a few hours, as well.
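For anyone who wants to reproduce these numbers, here's a rough sketch of the commands being used in this thread. The `count` is reduced so the sketch runs quickly (the posts above use `bs=16k count=64k` for a 1 GiB write), and ioping may need to be installed first:

```shell
# Sequential-write benchmark as used in this thread; conv=fdatasync makes
# dd wait until the data actually hits disk instead of just the page cache.
# count is reduced from the thread's 64k (1 GiB) so this sketch runs fast.
dd if=/dev/zero of=dd_test bs=16k count=2k conv=fdatasync 2>&1 | tee dd_result.txt
rm -f dd_test

# Disk latency check, if ioping happens to be installed.
if command -v ioping >/dev/null 2>&1; then
    ioping -c 10 .
fi
```

On a real node, bump the count back up so the write is large enough to defeat caching.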
We're still answering e-mail questions and sending out the login details to everyone who has e-mailed us. Sorry for the delay. Thanks to all so far for posting benchmark and other results
@EricCubixCloud that is rather worrying for a host to be asking if there is a better option than raid 1.
Raid 10?
Or I suppose RAID 1 is fine if you only ever intend to put <20 VPSes per node, but given that it's OpenVZ and you describe 1GB as an LEB, that's highly doubtful.
If you really want to test your node, put 50 users on it and see how well it does.
Ioping looks good.
I understand what you're saying about RAID - I shouldn't have worded it that way, what I was really asking was your opinion. Thanks for your input.
Our 24GB i7 nodes will have a maximum of 30 users.
To all who are interested, I should have mentioned that our 1GB servers will be around $12. We can let them go as a LowEnd sale for $7.
To all who I gave a 256MB, 512MB, or 768MB test server to: here is our regular pricing.
256MB: $4
512MB: $7
768MB: $9
1GB: $12
Please let me know if you have any more questions. All thoughts and suggestions are taken into consideration. Please keep in mind we're still in a testing stage.
Why do all these beta test things have such a low user limit, yet more people always seem to get in?
Good question! Sorry, I figured 5 was enough but once I saw the interest I figured many more opinions and suggestions would be beneficial. We did have to cut it off at a certain point though.
Have you e-mailed me?
ah ok...
But not 30 on raid 1 though I hope
Honestly, RAID 1 is not that bad as long as you have decent hard drives and you keep the read/write hogs to a minimum; otherwise you will have issues.
I also disagree. We are running 40 containers on this node and it is RAID 1.
dd if=/dev/zero of=test bs=16k count=64k conv=fdatasync && rm -rf test
65536+0 records in
65536+0 records out
1073741824 bytes (1.1 GB) copied, 10.9559 seconds, 98.0 MB/s
top - 19:04:06 up 6 days, 2:37, 1 user, load average: 1.79, 1.61, 1.36
Tasks: 1323 total, 2 running, 1320 sleeping, 0 stopped, 1 zombie
Cpu(s): 8.2%us, 8.9%sy, 0.0%ni, 81.6%id, 0.7%wa, 0.0%hi, 0.5%si, 0.0%st
Mem: 16391800k total, 15138052k used, 1253748k free, 1064156k buffers
Swap: 18481144k total, 1608k used, 18479536k free, 9933072k cached
But not 30 on raid 1 though I hope
I agree, thanks so much for your input.
Additionally, you could upgrade to at least RAID 5/6 with a couple more drives and double or triple the number of containers you can comfortably fit on a modern node.
Thanks for your suggestion. We're going to see how it goes for now - around 30 users. The speeds have proven to be quite good considering it is a software RAID 1 without a big cache like most expensive RAID card setups (with those, the file being written sticks in the cache and shows an I/O speed of nearly 300MB/s or whatever it may be; however, that does not really affect actual performance from what I've seen).
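A quick way to see the cache effect being described here is to run the same dd write with and without `conv=fdatasync`. This is only an illustrative sketch with a small file; the gap gets much bigger with a RAID card's write-back cache in the path:

```shell
# Without fdatasync the write can complete into the page cache, so dd may
# report an inflated speed; with conv=fdatasync it waits for the disk.
# Small sizes here so the sketch runs fast; real benchmarks use ~1 GiB.
dd if=/dev/zero of=cache_test bs=64k count=512 2>&1 | tail -n 1
dd if=/dev/zero of=cache_test bs=64k count=512 conv=fdatasync 2>&1 | tail -n 1
rm -f cache_test
```

The first summary line will usually show a much higher rate than the second on a box with free RAM, which is exactly why the benchmarks in this thread include `conv=fdatasync`.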
People around here like their large numbers, even though they'll never actually have any realistic utilization of them
We began with web hosting and some other services. Now we're finally starting up a new node for VPS - focusing more on quality, and rather than working with other companies, we're now building our own custom servers and such.
I have many other domains too - what does registration date mean nowadays? :P
Speeds seem pretty solid from what I've seen so far... downloaded the 200MB Cachefly file just to be different :P
--21:45:43-- http://cachefly.cachefly.net/200mb.test
Resolving cachefly.cachefly.net... 205.234.175.175
Connecting to cachefly.cachefly.net|205.234.175.175|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 209715200 (200M) [application/octet-stream]
Saving to: '200mb.test'
100%[=======================================>] 209,715,200 71.8M/s in 2.8s
21:45:46 (71.8 MB/s) - '200mb.test' saved [209715200/209715200]
From what I've been seeing, I/O has been pretty consistent with these results too:
[root@ChiBeta233 ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 12.0929 seconds, 88.8 MB/s
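As a sanity check on numbers like these, the rate dd reports is just bytes divided by elapsed seconds, with MB meaning 10^6 bytes. A one-liner to verify it:

```shell
# dd's reported rate = bytes / seconds, with MB = 10^6 bytes.
# 1073741824 bytes in 12.0929 s should match dd's "88.8 MB/s".
awk 'BEGIN { printf "%.1f MB/s\n", 1073741824 / 12.0929 / 1000000 }'
```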
About to head off to bed, but I'll be running lots more tests tomorrow. Hope everything goes well, Eric!
Hey Liam! Glad to hear this - those speeds look great
This is great - please run as many tests as you would like. Enjoy!
Here are my initial results. Not bad...
http://pastie.org/3731947
My results are at https://piratenpad.org/ro/r.bjxyLxFqxjMwOHJ9 (maybe I will update some in a few hours)
Thanks!
Great results so far. Thanks for putting in the comments there. I see some users making use of almost the full 1Gbps
In, out or both? I'm wondering if anyone is testing the Rx side.
Test file? @EricCubixCloud
@EricCubixCloud: How about some more test slots, this time for a small fee, say 1 "testing" month at 50% off or 75% off, etc.?
I don't understand why more LEB providers don't use RAID 5/6/60 and instead prefer software RAID 1 or RAID 10; the former gives you redundancy AND blazing read+write speeds. I assume it's because hardware controllers coupled with the decent number of drives needed are expensive, and it's cheaper/faster to roll a 2- or 4-disk software RAID setup?