Comments
Your logs show around one SSH attempt per second; it seems doubtful this alone caused the server outage. More likely you've just spotted it for the first time because you were looking more closely.
If you've taken that step, then unless it's filling the disk or chewing up CPU, it seems best to just ignore it as background noise.
Yeah, my intention was to check for any possible hardware-related cause, and then I saw this.
As of now it's fine, no effect on the system yet.
You're going to keep getting more and more of these attacks; it's as if they have a database of IPs/ports with SSH active that they share on the black market.
Eventually you will attract a decent hacker who will simply take over your system.
Easy solution: SSH keys, or simply change the SSH port.
Change the port to a random number.
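A minimal sketch of both suggestions combined, as an /etc/ssh/sshd_config excerpt (2222 is an arbitrary example port, not a recommendation):

```
# /etc/ssh/sshd_config (excerpt) - example values only
Port 2222                        # any random high port
PasswordAuthentication no        # keys only
PubkeyAuthentication yes
PermitRootLogin prohibit-password
```

After editing, keep your current session open and test a fresh connection before logging out, in case the new config locks you out. Reload with `sudo systemctl reload sshd` (the service may be named `ssh` on Debian/Ubuntu).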
Nope, most of the attacks against SSH on my hobby projects come from China, Russia, and the IPs of big popular providers (OVH, Linode, DO, RackNerd, BuyVM).
I didn't check auth.log for a month, and then I realized 6 GB of free space had disappeared into the journald logs... I checked what the hell had happened: auth.log had grown past 4 GB in just one month. I tried to parse the logs, and it was hell...
CrowdSec: strongly recommended. Fast, simple, has a web interface, and so on.
One-line script setup.
The results are pretty interesting. It filtered everything in just one day.
Turn off IPv4 inbound and enjoy the peace.
IPv6 is much quieter now.
logrotate ?
We all started somewhere. There have been some good tips in this thread; put them to good use.
I have the image somewhere, as far as I remember. There was a multi-vector attack for some unknown reason. Really, it was some kind of absurdity. I've never seen anything like it.
About logrotate -> nah, I prefer to shrink the journalctl logs directly, plus a few other extremely useful things.
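The exact commands got lost above, but assuming the author meant journald's built-in vacuum options, a typical way to cap the journal looks like:

```shell
# shrink the systemd journal immediately (needs root)
sudo journalctl --vacuum-size=500M    # keep at most ~500 MB of journal
sudo journalctl --vacuum-time=2weeks  # and/or drop entries older than 2 weeks

# to make the cap permanent, set in /etc/systemd/journald.conf:
#   SystemMaxUse=500M
# then: sudo systemctl restart systemd-journald
```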
But CrowdSec surprised me a lot. Really lightweight. Damn, I'm in love with GoLang tools; almost everything I've tried from that ecosystem has been really useful.
Coming back to the security part... this is a very complex thing to get right.
I do not agree with the claim that "security through obscurity" is always bad.
My experience of being harassed online and traced/attacked with hundreds of Gbit/s back in 2014-2015 taught me, at least personally, that most "hackers" are extremely stupid, and that basic hiding techniques, closed ports, and WAF/proxy/GRE or other tunnels work best.
Coming back to the SSH problem: with CrowdSec it's not a problem at all.
Also, there is a basic tool for quickly checking for misconfigurations:
https://cisofy.com/lynis/
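Per the Lynis docs, a full local audit is a single command:

```shell
# run a full system audit; findings and suggestions are printed
# and also logged under /var/log/lynis.log
sudo lynis audit system
```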
Also: this shit happened directly against my small server. And they lied here xD. They did not filter the attack.
Here is another provider that I switched to like an idiot at the time (x4b).
I purchased a 100gb plan and got nullrouted with x4b.net (but those guys were pretty good for that time, to be honest).
Back then I purchased 5-6 different DDoS protection services (DDoS-Guard, Voxility, and others whose names I've really forgotten, some CN servers and the like, plus many more; at that time few providers offered good DDoS protection out of the box, unlike now). All of them fell under the pressure of gigantic DDoS attacks from some Russian military motherfucker who tried to blackmail me and force me to do things I would never agree to, until I started to trick the hacker by providing a proxy and hiding various basic info. And voila, the attacks were mitigated...
Yeah, there were other attack vectors after the DDoS, like XSS/SQL injection, but that bullshit was easy to spot with the simple
https://goaccess.io/
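GoAccess can turn web-server access logs into a quick report; a typical invocation looks like this (the log path and COMBINED format are assumptions about your setup):

```shell
# build a standalone HTML report from an nginx/apache combined-format log
goaccess /var/log/nginx/access.log --log-format=COMBINED -o report.html
```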
I'm sharing my experience, maybe a wrong one, but in total I've faced more than 100 attacks (different vectors) against my hobby projects (which don't even generate any income).
Between 2014 and 2017 I faced literally every new threat vector that would become popular just weeks after it first hit me.
So, back to the SSH problem: this is just my own opinion, but I think switching the port and adding a firewall and CrowdSec will mostly solve it.
Have a great day. Hope this helped someone.
My suggestion would be to disable SSH on IPv4 and use SSH over IPv6 only. No more probes in your lifetime.
Using a different SSH port kind of works, but it's security by obscurity.
What you should do is whitelist the IP addresses/hosts that are allowed to connect to SSH on that port.
A bit harder to do, but the more secure option.
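A sketch of that whitelist with ufw (the admin IP 203.0.113.10 and port 2222 are placeholders for your own values):

```shell
sudo ufw default deny incoming                                # drop everything by default
sudo ufw allow from 203.0.113.10 to any port 2222 proto tcp   # your IP/host only
sudo ufw enable
```

Make sure your own allow rule is in place before enabling, or you'll lock yourself out.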
If you just change the SSH port and leave password authentication on, then you're relying on security by obscurity. But changing the SSH port in addition to locking it down to keys is hardening.
Fewer probes (a smaller attack surface), fewer wasted resources, and smaller logs with less noise to dig through are what you want.
Hi!
The log snippets you posted are mostly related to ssh connection attempts and not to the "halt" problem.
What exactly did you observe? What time were the observations?
Could the problem have been a power interruption?
Could the problem have been a network connectivity interruption?
Which OS is the server running? Which version?
Apparently the server might have been running and not "off" when Dedispec looked at it because they "rebooted" it?
Maybe you could post more of the logs which would show more about what was happening on the server at the time you saw the halt problem?
Best wishes and kindest regards!
Tom
Tailscale or a jump server can help a lot.
Yes.
I discovered it because I suddenly couldn't connect to any VM on the server, neither via SSH nor via the web. That was 9am UTC+8, so I had to wait quite a while.
I actually don't know what kind of interruption it was, since @Dedispec doesn't provide any IPMI or control panel; it looked like a halt from my end.
PVE 7
Maybe, I will try to confirm with them.
Unfortunately, I cleared out the logs last night after I had hardened my security.
Is there some security issue with giving out a public key that I don't know about? Please explain. It's a public key, and it doesn't work without its private key pair.
There is no issue with handing out your public key - the clue is in the name.
Never use port 22 for SSH... change it and configure a firewall.
I think you should close the connection port and use a whitelist policy. Open ports are the wrong idea.
I tried out CrowdSec because of this thread, and even with only the SSH scenario enabled it was randomly using around 7% of my CPU. That's not much, but it's somehow much heavier than fail2ban (Python).
Welcome to fail2ban
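For reference, a minimal fail2ban jail for sshd is just a few lines of INI (values here are illustrative, not recommendations):

```
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 3       # ban after 3 failed logins
findtime = 10m     # ...within a 10-minute window
bantime  = 1h
```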
Hmm, 7% is quite heavy for a weak CPU; I'll probably remove it from my KS-1.
Does it behave the same for you? I might have run into some bug. (I'm only getting around 2-3 malicious IPs per hour.)
Just checked my three servers: the KS-1 is at 3.9%, and the other two, more powerful ones are at 0%. So maybe it's on your end.
Tiny Chinese probes violating your backend without detection.
guanxi lol, probably Cantonese and low income (not being racist, because I'm Chinese)
I also recommend rejecting every incoming IP on your changed SSH port and using port knocking to allow your own.
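One common way to implement that, sketched with the iptables `recent` module (ports 1234 and 2222 are arbitrary examples; tools like knockd do the same thing more flexibly):

```shell
# a hit on "knock" port 1234 records the source IP...
iptables -A INPUT -p tcp --dport 1234 -m recent --name KNOCK --set -j DROP
# ...which is then allowed to reach SSH on 2222 for 30 seconds
iptables -A INPUT -p tcp --dport 2222 -m recent --name KNOCK --rcheck --seconds 30 -j ACCEPT
iptables -A INPUT -p tcp --dport 2222 -j DROP
```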
So the conclusion is:
Getting probed by Chinese people once a second has zero effect on your backend.
That's normal. Turn off ICMP echo to reduce their attention so they probe less.
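Dropping ICMP echo as suggested is a one-line sysctl (note it also breaks legitimate ping monitoring):

```
# /etc/sysctl.d/99-no-ping.conf
net.ipv4.icmp_echo_ignore_all = 1
# apply with: sudo sysctl --system
```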
Like throwing a sausage up a hallway.