Comments
ServerBear, can you post it to my inbox?
[root@backup ~]# wget freevps.us/downloads/bench.sh -O - -o /dev/null|bash
CPU model : Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
Number of cores : 4
CPU frequency : 1600.000 MHz
Total amount of ram : 2048 MB
Total amount of swap : 0 MB
System uptime : 16:06,
Download speed from CacheFly: 11.1MB/s
Download speed from Linode, Atlanta GA: 7.72MB/s
Download speed from Linode, Dallas, TX: 7.15MB/s
Download speed from Linode, Tokyo, JP: 6.13MB/s
Download speed from Linode, London, UK:
Download speed from Leaseweb, Haarlem, NL: 4.63MB/s
Download speed from Softlayer, Singapore: 2.64MB/s
Download speed from Softlayer, Seattle, WA: 9.65MB/s
Download speed from Softlayer, San Jose, CA: 3.10MB/s
Download speed from Softlayer, Washington, DC: 10.6MB/s
I/O speed : 36.0 MB/s
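Note the 1600.000 MHz reading against a 3.20 GHz E3-1230: bench.sh-style scripts report the current clock from /proc/cpuinfo, and an idle core is simply scaled down by the frequency governor, so that number alone isn't a fault. A quick way to check, assuming the cpufreq sysfs interface is exposed (it often isn't inside a VPS):

```shell
# Current frequency as bench.sh-style scripts see it (first core only)
awk -F': ' '/cpu MHz/ {print $2; exit}' /proc/cpuinfo
# Governor and current/max frequency in kHz, when cpufreq is available
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq 2>/dev/null
cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq 2>/dev/null
```

Under load the governor ramps the core back up, which matches the 3201 MHz shown in the later runs.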
Killed everything (webmin, proftpd) on the server, then ran again:
[root@backup ~]# wget freevps.us/downloads/bench.sh -O - -o /dev/null|bash
CPU model : Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
Number of cores : 4
CPU frequency : 3201.000 MHz
Total amount of ram : 2048 MB
Total amount of swap : 0 MB
System uptime : 16:25,
Download speed from CacheFly: 11.1MB/s
Download speed from Linode, Atlanta GA: 9.10MB/s
Download speed from Linode, Dallas, TX: 8.92MB/s
Download speed from Linode, Tokyo, JP: 3.39MB/s
Download speed from Linode, London, UK:
Download speed from Leaseweb, Haarlem, NL: 5.05MB/s
Download speed from Softlayer, Singapore: 2.87MB/s
Download speed from Softlayer, Seattle, WA: 9.55MB/s
Download speed from Softlayer, San Jose, CA: 3.13MB/s
Download speed from Softlayer, Washington, DC: 10.3MB/s
I/O speed : 11.7 MB/s
Ouch.
I can't work out what's going on - the server really doesn't seem to be performing well. Memory usage is all over the place, and I don't know why, as nothing is running at the moment.
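When a box feels busy with "nothing running", it's worth confirming from inside the VPS what actually holds memory. A generic sketch, not specific to this node:

```shell
# Memory summary, then the five biggest resident-memory processes
free -m
ps aux --sort=-rss | head -n 6
```

If the top entries are just sshd and the shell, the pressure is coming from neighbours on the node, not from the container itself.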
Edit: rebooting the VPS as it seems to have ground to a halt.
Edit2: wondering if there's something up, as SolusVM is taking ages to do anything.
Edit3: it's back up; re-running the original bench.sh. I consoled in first and killed everything, then restarted SSH, so that's all that should be running.
[root@backup ~]# wget freevps.us/downloads/bench.sh -O - -o /dev/null|bash
CPU model : Intel(R) Xeon(R) CPU E31230 @ 3.20GHz
Number of cores : 4
CPU frequency : 3201.000 MHz
Total amount of ram : 2048 MB
Total amount of swap : 0 MB
System uptime : 8 min,
Download speed from CacheFly: 11.3MB/s
Download speed from Linode, Atlanta GA: 9.24MB/s
Download speed from Linode, Dallas, TX: 8.47MB/s
Download speed from Linode, Tokyo, JP: 6.19MB/s
Download speed from Linode, London, UK:
Download speed from Leaseweb, Haarlem, NL: 5.37MB/s
Download speed from Softlayer, Singapore: 2.56MB/s
Download speed from Softlayer, Seattle, WA: 7.89MB/s
Download speed from Softlayer, San Jose, CA: 3.05MB/s
Download speed from Softlayer, Washington, DC: 7.71MB/s
I/O speed : 72.4 MB/s
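I/O swinging between 11.7 and 72.4 MB/s across runs is typical of a contended shared node: a single dd pass is very noisy. Running a few passes and comparing them gives a fairer picture (a sketch; `test` is just a scratch file name):

```shell
# Three 256 MiB sequential write passes; dd prints the MB/s figure on its last line
for i in 1 2 3; do
  dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync 2>&1 | tail -n 1
  rm -f test
done
```

If the three figures differ wildly, the bottleneck is neighbour activity rather than the disk itself.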
SolusVM isn't hosted on the node you're on. I'm monitoring the node right now to see if the performance is improving. Do you mind working directly with me via live chat?
I've just got ServerBench running - it seems to be running better now, I'll be back at my desk in about 10 minutes, so I'll hit you up on Live Chat when I get back.
http://serverbear.com/benchmark/2012/11/09/KIRGPb4VVR46dx7w
We really need to stop running benches at the same time. I think I'm going to ban the scripts on the servers, because although I know you all want to know what you're getting, you're killing the server, and when 3-4 of you run benchmarks at the same time you just get a bad bench anyway.
^ mine had only started when I posted, it doesn't explain previous issues. I've cancelled it anyway now that serverbear has posted.
These jerks abuse the CPU by running stupid and useless dd tests; that's all they do rather than actually using the box.
Much better now Crystal
[root@backup ~]# free -m
total used free shared buffers cached
Mem: 2048 44 2003 0 0 15
-/+ buffers/cache: 28 2019
Swap: 0 0 0
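The "-/+ buffers/cache" row is the one that matters: subtract buffers and cache from "used" and almost the whole 2048 MB is actually free. Recomputing it from the Mem: row above (old procps layout, where buffers and cached are separate columns):

```shell
# Old procps layout: Mem: total used free shared buffers cached
# real_used = used - buffers - cached; real_free = free + buffers + cached
echo "Mem: 2048 44 2003 0 0 15" |
  awk '{print "real used:", $3-$6-$7, "MB / real free:", $4+$6+$7, "MB"}'
# -> real used: 29 MB / real free: 2018 MB
# (free's own row says 28/2019 only because it rounds each column to MB separately)
```

So the "all over the place" memory was mostly the kernel's disk cache, which it gives back on demand.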
Kid, scratch that, don't reply. I've just seen some of your other posts.
@Dean I'm glad you're happy with it
This post attracted a good amount of people. As of right now I have to stop all Chicago orders until I get more IPs. I don't order huge blocks at a time; that keeps costs down for you guys. I'll update this thread once I get the IPs routed.
So how are the UGVPSes folks ordered working out for them?
Anyone care to share their 24 hour experience?
Meh, don't do that; judge by the next 24 hours from now. Everyone killed the node with all the damn benchmarks and then wondered why it was "slow". It seems CPU and I/O have stabilized now.
1073741824 bytes (1.1 GB) copied, 14.6533 s, 73.3 MB/s
1073741824 bytes (1.1 GB) copied, 15.1379 s, 70.9 MB/s
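Those two lines are almost certainly the classic LET dd write test: 64k blocks x 16k count is exactly the 1073741824 bytes shown. A sketch of that one-liner (run sparingly; as noted above, it hammers the node's disk):

```shell
# Write 1 GiB sequentially; fdatasync forces data to disk before dd reports speed
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
rm -f test
```

Without `conv=fdatasync` (or `oflag=direct`) the figure mostly measures the page cache, not the disk.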
Download speed from CacheFly: 10.7MB/s
Download speed from Linode, Atlanta GA: 9.05MB/s
Download speed from Linode, Dallas, TX: 6.96MB/s
Download speed from Linode, Tokyo, JP: 3.08MB/s
Download speed from Linode, London, UK:
Download speed from Leaseweb, Haarlem, NL: 3.02MB/s
Download speed from Softlayer, Singapore: 2.46MB/s
Download speed from Softlayer, Seattle, WA: 6.16MB/s
Download speed from Softlayer, San Jose, CA: 2.89MB/s
Download speed from Softlayer, Washington, DC: 9.98MB/s
I/O speed : 75.6 MB/s
Got more IPs opening Chicago back up =D
"Currently right now our datacenter is experiencing a massive DDOS attack. They are working to resolve this issue as soon as possible. You may notice connection issues to any service in Chicago at this time."
@wdq, where did that message about the DDoS come from?
@pubcrawler An email from UGVPS.
I apologize about this, we are working to get this resolved ASAP.
It seems to have been resolved. We will continue to monitor this until we are confident the issue is over. I sent out another mass update. I appreciate everyone's understanding and I apologize for the inconvenience.