How often do you maintain/update/patch your servers?
1 - Not including options like unattended-upgrade, how often do you maintain/update/patch your servers?
2 - Not including kernel updates, how often do you reboot your servers?
3 - Bonus question for the brave: Confession time: What is the longest you've gone without updating/patching a server?
Comments
Like once a month:
apt-get update && apt-get upgrade -y && reboot
I’m lazy.
I do it when I'm awake at night, like when I can't sleep or have been out drinking.
That way, people (or in the worst case, my clients) don't notice.
> scrolls Twitter
> sees a bug report on a pretty important library like OpenSSH
>> ignores
> redditors are talking about it
>> meh... they exaggerate a lot
> the incident reaches top of HN
>> PANIK!
alright...guess we have to apt upgrade and cross our fingers hoping nothing breaks
> [Morgan Freeman voice] But stuff did break
But that's the past. I am more up-to-date now. I swear! :P
At some point things become pretty stable, so as long as it works I usually don't touch it.
So that would be like uh... 6 years ago.
Sometimes I still reboot though, even though it does nothing. Well, it's like spamming refresh on the desktop.
I'm not sure how long it was unpatched for, but I had a KS-1 with an uptime of 1040 days. I only realised how massively out of date it was when I came to install something and discovered that the Debian 8 repo had been removed because it was EOL'd.
Oh, and another story that's not mine, but from the sysadmins of another department in the same organisation. This was back in the day with Solaris 2.6, where you could apply kernel patches to a running system as well as on disk, so it didn't need a reboot immediately. Because these machines were in constant use, we only ever actually rebooted them when they needed hardware upgrades. One of the main servers was finally rebooted after about 2-3 years of incremental patches, and they discovered that one patch had failed and the entire lot had to be backed out and re-applied. The machine was out of action for over a day.
Probably not often enough. As long as it is running stable and I don't get notified of security issues, I can go many months without updating.
Shared servers because I'm dumb and lazy lol
Never. Yolo.
I update every 3 months without a reboot, and every 6 months with a reboot.
Once a month
apt update && apt upgrade -y && apt autoremove -y
But seriously...
The real answer is: not as often as I should, even though 99.999% of the time you can run apt -y upgrade unattended and it works just fine because Debian. Debian is the way. Debian is life.
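If you trust Debian upgrades that much, the obvious next step is scheduling them. A minimal sketch of a root crontab entry (the schedule, log path, and options here are illustrative assumptions; the packaged unattended-upgrades is the more robust way to do this):

```shell
# Hypothetical root crontab entry: non-interactive upgrade at 04:17 every
# Sunday, keeping your existing config files if a conffile prompt comes up.
17 4 * * 0  apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get -y -o Dpkg::Options::="--force-confold" upgrade >> /var/log/auto-upgrade.log 2>&1
```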
I never update and my ip is 192.168.0.1
Try to hack me skids
In the evening, every second Tuesday of the month.
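"Second Tuesday of the month" is a classic cron gotcha: when both day-of-month and day-of-week fields are restricted, cron runs on *either* match, not both. The usual workaround is to restrict the date range and test the weekday in the command itself. A sketch (times and commands are illustrative):

```shell
# The 8th-14th always contains exactly one Tuesday, so run on those dates
# and let the command check the weekday (date +%u prints 2 for Tuesday).
# Note: % must be escaped as \% inside a crontab line.
0 20 8-14 * * [ "$(date +\%u)" = 2 ] && apt-get update -qq && apt-get -y upgrade
```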
1 - once a day
2 - never; only reboot when there is a kernel update.
3 - 4 days (was sick)
1) Whenever
cron-apt
sends me an e-mail about new updates. I am too lazy to manually check for updates.
2) I don't understand the question. Why should I intentionally reboot a server if there are no pending kernel updates?
3) I won't tell.
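For anyone unfamiliar with cron-apt: it periodically runs apt actions and mails you the result, driven by a small config file. A minimal sketch, assuming Debian's cron-apt package (variable names follow its config; the values here are made-up examples and defaults may differ by version):

```shell
# /etc/cron-apt/config (hypothetical values)
MAILTO="admin@example.com"   # where the notification mail goes
MAILON="upgrade"             # only mail when upgrades are available
```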
I have some checks in Icinga that alert me whenever a server has a critical update or 5 non-critical ones. Usually happens once or twice a month.
Production servers rarely go above 50-60 days of uptime.
Servers maybe a couple of years, but I've seen routers and firewalls with uptime close to a decade.
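A check like the Icinga one described above can be sketched as a small Nagios/Icinga-style plugin. This is a hypothetical illustration, not the stock check_apt from monitoring-plugins: it reads saved `apt-get -s dist-upgrade` output from a file, counts pending packages, and exits with the conventional 0/1/2 plugin codes. The threshold and "security" match are assumptions.

```shell
#!/bin/sh
# Hypothetical "pending updates" check in the Nagios/Icinga plugin style.
# Argument: a file containing `apt-get -s dist-upgrade` simulation output.

WARN_AT=5  # warn at 5 non-critical updates, like the post above

check_pending() {
    # Each pending package shows up as a line starting with "Inst ".
    total=$(grep -c '^Inst ' "$1")
    # Crude heuristic: treat anything from a *security* repo as critical.
    security=$(grep '^Inst ' "$1" | grep -ci 'security')
    if [ "$security" -gt 0 ]; then
        echo "CRITICAL: $security security update(s) pending ($total total)"
        return 2
    elif [ "$total" -ge "$WARN_AT" ]; then
        echo "WARNING: $total update(s) pending"
        return 1
    fi
    echo "OK: $total update(s) pending"
    return 0
}
```

Usage would be something like `apt-get -s dist-upgrade > /tmp/pending && check_pending /tmp/pending`, wired into Icinga as a check command.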
Every single day I'm updating or patching something somewhere. It's the most time-consuming chore. I've outsourced a lot of it now; it's much better for one's health.
Honestly, I never update anything, as I eventually will screw something up on my box, and just reinstall the OS instead of fixing the issue...
I have a problem.
As a proud owner of my first VPS I regularly updated the server as is recommended with apt update. Imagine my face when about a year later I found out about apt upgrade.....
It's like painting the Golden Gate Bridge.
1) never
2) never
3) 24 hours
More like repairing a fence so there's no hole a thief could get in through.
Well @jlet88 what are your answers?
I'm of the opinion that there is no "right" answer to this, which is supported by some IT friends I trust. I take it seriously, but I've never found a pattern/policy I feel 100% comfortable with yet TBH. That's why I posted the thread out of curiosity about what other folks do, and it's really interesting to see the responses.
For the last few years it depends on:
A - how busy I am with clients and projects
B - if I've read any terrifying security reports recently
C - what kind of stack I'm running on the server, and
D - what clients are on the server (i.e., is the server just for me tinkering around, or do I have a paying client or critical project on the server?).
But generally, it goes something like this:
1 - About once every 1-4 weeks. Average is about once every 2 weeks.
2 - Depends on the weather. Or my mood. Or an old fashioned irrational habit that every once in a while I gotta kick the jukebox to make sure it works. Clears out the cobwebs, right? Averages about once every 3-4 months. There's a nice zen-like feeling with a freshly booted server.
3 - Highly embarrassed about this, but a long time ago I forgot I had a virtual server and left it running about a whole year without touching it. Logged in, updated it, and to my shock and delight it still worked like a charm. But I was a little paranoid about it and I didn't trust it entirely after that, so I eventually wiped it and gave it a fresh OS install.
BTW, thanks everyone for posting your responses. It's been really interesting and also entertaining to read your comments.
If you have changed something where there's a decent risk of it not behaving correctly on reboot, it's better to do a controlled shutdown and test it through a reboot cycle while you still remember what you did and at the very least how to back it out if you can't immediately fix it.
To be fair, I usually only make these kinds of changes when setting up a machine for the first time, before the server is getting any traffic. But e.g. I have custom firewall scripts that forward services to VMs and whitelist traffic to specific places from certain VMs. You can test them and have reasonable confidence that they're working correctly without rebooting, but IMHO it's better to stop the service for 60 seconds to verify behaviour on a reboot once you're pretty sure it'll be fine.
If you have a redundancy strategy in place anyway, e.g. multiple haproxy instances on different machines handing out work to their closest backend and falling back onto the further-away ones, then losing that server for a minute at a time you control is a hit worth taking, compared to a prolonged outage after, say, a power failure when you discover your system doesn't boot any more.
@ralf - thank you, perfectly said!
Usually I don't, but when a new version of PHP or Caddy is released, I'll run an update to make sure I get the latest version.