Comments
For some reason, after I forgot to pay my bills, they all went down!
What kind of companies are these?
My BuyVM 128MB had been up for 137 days before I took it down to reinstall about a week ago.
Until recently, my other VPSes were with ChicagoVPS... I don't recall the uptime (it was quite good) but obviously it's back to zero now.
root@fw03:/root# uptime
5:39PM up 650 days, 10:51, 1 user, load averages: 0.27, 0.13, 0.10
OpenBSD on VMware
[root@nuova2 ~]# w
17:16:26 up 629 days, 4:49, 7 users, load average: 0.19, 0.07, 0.01
CentOS on Xen
[root@db01 ~]# w
17:51:11 up 408 days, 21:57, 4 users, load average: 0.12, 0.20, 0.18
[root@db02 ~]# w
17:47:00 up 408 days, 22:14, 1 user, load average: 0.00, 0.00, 0.00
CentOS on VMware
BuyVM 128MB OVZ
BuyVM 256MB OVZ
BuyVM 250GB Storage
GetKVM 236MB
KimSufi mVS
httpZoom 64MB
Hudson Valley Host 64MB
StormVZ 256MB
XenVZ
XenVZ wins big, although I think the HVH box had an uptime of about 140+ days before the recent hurricane knocked them offline.
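If you want to tally lines like the ones quoted above yourself, here's a minimal Python sketch that parses an `uptime`/`w` header line into days up. The format it expects ("up N days, HH:MM", with the "N days," part optional) is an assumption based on the typical procps/BSD output shown in this thread:

```python
import re

def uptime_days(line):
    """Parse an `uptime`/`w` header line into total days up.

    Expects the common format "... up [N days,] HH:MM, ..." as seen
    in typical `uptime` output. Returns days as a float (fractional
    part from HH:MM), or None if no uptime is found.
    """
    m = re.search(r'up\s+(?:(\d+)\s+days?,\s+)?(\d+):(\d+)', line)
    if not m:
        return None
    days = int(m.group(1) or 0)
    hours, minutes = int(m.group(2)), int(m.group(3))
    return days + (hours * 60 + minutes) / (24 * 60)
```

For example, feeding it the db01 line above ("up 408 days, 21:57") gives a little under 409 days.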
Not wanting to start a flamewar, but do those long uptimes show that people don't like to keep their systems up to date? Most of my VPSes today have a maximum of 30 days of uptime due to OS patching and release upgrades. The highest uptime I've ever had is about 6 months or so.
My VPSes with the best uptime are Prometeus at 99.99% and EDIS (CH), also at 99.99% (rounded). Both were only down for 10-20 minutes over the past 6 months.
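As a sanity check on those percentages, here's a trivial Python sketch for converting downtime minutes into availability. Note that 99.99% over roughly 6 months (~182 days) allows about 26 minutes of downtime, so 10-20 minutes does indeed round to 99.99%:

```python
def availability(downtime_minutes, period_days):
    """Percent availability, given total downtime over a period in days."""
    total_minutes = period_days * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

# 20 minutes down over ~182 days works out to about 99.992%.
```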
IPXcore 32mb (VPN)
22:05:49 up 90 days, 6:36, 1 user, load average: 0.00, 0.00, 0.00
FrontRangeHosting
22:07:27 up 23 days, 2:09, 1 user, load average: 0.00, 0.00, 0.00
My 2 Prometeus boxes are both at 6 days (restarted them...)
Not necessarily. Updating a kernel is rarely driven by security issues; since the VPS hardware doesn't change, there is little need for new hardware support, and the only other reason would be that some apps require newer kernel features, which is rare too.
Uptime of 30 days is not great, but of course anything that stays up more than 10 days at a time with only a few minutes of downtime is good at these prices. I think the average is much lower than 30 days, but that's because most VPSes are OVZ, which is not particularly stable: isolation is low, so one abuser can break the whole node.
root@dallas1 [~]# uptime
21:39:46 up 50 days,
UrPad VPS.
You must be using a different Linux kernel to the rest of us
https://dl.dropbox.com/u/1998652/uptimes_again.png
I had a Kimsufi 2G for about 6 months straight without a reboot or network downtime. I just replaced it with the cheaper one though.
NY BuyVM and Quickpacket have been working best for me. Not a fanboy, but the BuyVM just seems snappier than any LEB I've used so far.
@herbyscrub, what monitoring software is that?
Uptime since last reimage:
BuyVM SJ, OVZ - not really sure what reset the uptime, but it was probably my fault:
ChicagoVPS LA, OVZ - uptime since the last memory replacement, or whatever that maintenance was; I'm not really sure:
Regarding long uptime... I, for one, can say that the box I have that's been up for over a year is way out of date on kernel patches. It's running Ubuntu 10.04 with kernel 2.6.32, which by itself isn't that bad, but I'm sure it has several vulnerabilities. Everything else on it is up to date, though. Well, as up to date as it can be on a 2-year-old LTS release. It's not accessible from the internet, so I can live with whatever kernel vulnerabilities are present.
This is pretty much how I feel about it. I'm not going to update to every new kernel, but if I've tested one and find value in a bug fix, then I'll update. They've fixed some issues with OpenVPN in the last 2 or 3 kernels, so I've updated. I don't value raw uptime numbers over stability upgrades; then again, a high uptime number does mean stability, so preference plays a bigger role than I give it credit for. Even so, I don't like to leave well enough alone, I like to improve, but I'm not going to upgrade a kernel without plenty of notice.
Ubuntu gets free patches from Ksplice. I still have my old Ksplice account from before the Oracle acquisition, so I keep using it on all my nodes.
So jealous.
This is why I think OVZ should be kept separate.
For example, if I run my own kernel, I update it only when I need to. KVM/VMware/Xen guests don't need many updates: they don't carry the zoo of modules and extra machinery the poor OVZ kernel must cope with, and they don't have to do everything the OVZ kernel does.
When a kernel has to handle every possible kind of usage on a node, it has to stay up to date, because privilege escalation (the kind of exploit a kernel usually suffers from, along with DoS) works much better against it. On KVM, the VM barely touches the host kernel, and updating userspace utilities doesn't require rebooting the VMs most of the time.
OVZ nodes will have to reboot far more often for kernel updates (unless you use Ksplice and the like), and they also crash more because of bugs in the modules.
I frankly believe the guys at OpenVZ are miracle workers: they make a hard-working, patched-all-over kernel stable enough to be considered production-grade material. Nevertheless, it will never be as stable as the others.
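As an aside, if you're not sure which of these virtualization types your box is on, a rough check of well-known /proc markers works in a pinch. This is a sketch based on typical setups (OpenVZ containers usually expose /proc/user_beancounters, Xen guests /proc/xen); these paths are assumptions, not guarantees:

```python
import os

def virt_hint():
    """Rough guess at virtualization type from common /proc markers.

    OpenVZ containers typically expose /proc/user_beancounters, and
    Xen guests typically have /proc/xen. Anything else (KVM, VMware,
    bare metal) falls through to 'unknown' here.
    """
    if os.path.exists('/proc/user_beancounters'):
        return 'openvz'
    if os.path.exists('/proc/xen'):
        return 'xen'
    return 'unknown'
```

It won't distinguish KVM from bare metal, but it's enough to tell whether you're on the shared OVZ kernel being discussed here.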