Some Chinese IPs keep probing my ssh port


Comments

  • @jason5545 said:
    My Dedispec server suddenly halted this morning. I contacted them and after a few hours it got rebooted. I decided to investigate further, because it's quite rare for a Linux server to halt itself.

    Your logs show roughly one SSH attempt per second; it seems doubtful this alone caused the outage. More likely you've just spotted it for the first time because you were looking more closely.

    so far I've set up: log in with keys only...

    If you've taken this step, then unless it's filling the disk or chewing up CPU, it's best to just ignore it as background noise.
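
    For anyone following along, key-only login comes down to a couple of standard sshd_config lines. A minimal sketch (the service name varies by distro):

    # /etc/ssh/sshd_config
    PubkeyAuthentication yes
    PasswordAuthentication no
    PermitRootLogin prohibit-password

    # validate the config, then reload (service may be "ssh" on Debian/Ubuntu)
    sudo sshd -t && sudo systemctl restart sshd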

  • @cochon said:

    @jason5545 said:
    My Dedispec server suddenly halted this morning. I contacted them and after a few hours it got rebooted. I decided to investigate further, because it's quite rare for a Linux server to halt itself.

    Your logs show roughly one SSH attempt per second; it seems doubtful this alone caused the outage. More likely you've just spotted it for the first time because you were looking more closely.

    Yeah, my intention was to check for any possible cause related to the hardware, and then I saw this.

    so far I've set up: log in with keys only...

    If you've taken this step, then unless it's filling the disk or chewing up CPU, it's best to just ignore it as background noise.

    As of now, it's fine, no effect on the system yet.

  • reliablevps_us Member, Patron Provider

    You're going to keep getting more and more of these attacks; it's like they have a database of IPs/ports with SSH active that they share on the black market.
    Eventually you will attract a decent hacker who will simply take over your system.

    Easy solution: SSH keys, or just keep changing the SSH port.

    Thanked by 1jason5545
  • qzydustin Member
    edited April 2022

    Change the port to a random number.
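
    A minimal sketch of that (the port number here is only an example; allow it through your firewall before restarting sshd so you don't lock yourself out):

    # /etc/ssh/sshd_config
    Port 22222    # example only, pick your own unused high port

    sudo sshd -t && sudo systemctl restart sshd   # service may be "ssh" on Debian/Ubuntu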

    Thanked by 1jason5545
  • @jason5545 said: which is quite rare?

    Nope, most of the SSH attacks against my hobby projects come from China, Russia, and the IPs of popular big providers (OVH, Linode, DO, RackNerd, BuyVM).

    I didn't check auth.log for a month, and then I realized that 6 GB of free space had disappeared into the journald logs... I checked what had happened -> auth.log alone was over 4 GB (in just one month). I tried to parse the logs, and that was hell...

    @jason5545 said: so what should I do next?

    CrowdSec: strongly recommended. Fast, simple, has a web interface, and so on.

    One-line script setup.

    The results are pretty interesting. It filtered everything in just one day.
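
    For anyone curious, a rough sketch of what the setup looks like on Debian/Ubuntu (package names assume the CrowdSec repository is already configured; check the official docs for the current install steps):

    sudo apt install crowdsec crowdsec-firewall-bouncer-iptables
    sudo cscli metrics          # parser/bucket statistics
    sudo cscli decisions list   # IPs currently banned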

    Thanked by 2jason5545 SinV
  • Turn off IPv4 inbound and enjoy the peace.
    IPv6 is much quieter now.

    Thanked by 1yoursunny
  • @desperand said:
    I checked what had happened -> auth.log alone was over 4 GB (in just one month).

    logrotate?
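
    (On Debian/Ubuntu auth.log is normally rotated already; if it isn't on your setup, a sketch of a drop-in such as /etc/logrotate.d/auth-custom would be:)

    /var/log/auth.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
    }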

  • Zyra Member

    We all started somewhere. There have been some good tips in this thread; put them to good use :smile:

    @cochon said: logrotate?

    I have the image somewhere, as far as I remember. It was a multi-vector attack for some unknown reason. Really, it was kind of absurd; I'd never seen anything like that.
    About logrotate -> nah, I like to shrink the journal via:

    journalctl --vacuum-size 100M
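
    To keep the journal capped permanently (rather than vacuuming by hand), the equivalent journald setting is:

    # /etc/systemd/journald.conf
    SystemMaxUse=100M

    sudo systemctl restart systemd-journald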

    Plus other extremely useful things.

    But CrowdSec surprised me a lot. Really lightweight. Damn, I'm in love with Go tools; almost everything I've tried written in Go has been good.

    For example, some really useful things are:
    Getting back to the security part... this is a very complex thing to get right...
    I don't agree with the blanket statement that "security through obscurity" is always bad.

    My experience of being bullied on the internet and traced/attacked with hundreds of Gbit/s back in 2014-2015 showed me, at least personally, that most "hackers" are extremely stupid, and that basic hiding techniques, closed ports, WAF / proxy / GRE or other tunnels work best.

    Getting back to the SSH problem: with CrowdSec it's not a problem at all.

    Also, there's a basic tool for quickly checking for misconfigurations:
    https://cisofy.com/lynis/
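
    Typical usage is a single command run as root; it prints warnings, suggestions, and a hardening index at the end:

    sudo lynis audit system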

    Also:

    this shit happened directly against my small server. And he lied here xD, they did not filter the attack.

    Here is another provider that I switched to like an idiot at the time (x4b).
    I purchased a 100gb plan, and I got nullrouted with x4b.net (but these guys were pretty good for that time, to be honest).

    At the time I purchased 5-6 different DDoS protection providers (DDoS-Guard, Voxility, and others whose names I forget, CN Servers or something like that, plus many more; back then not many providers offered good DDoS protection out of the box the way a lot do now). All of them fell under the pressure of gigantic DDoS attacks from some Russian military motherfucker who tried to blackmail me and force me to do things I would not agree to do in principle, until I started to trick the hacker by providing a proxy and hiding various basic info. And voila, the attacks were mitigated...

    Yeah, there were other attack vectors after the DDoS, like XSS / SQL injection, but that nonsense was handled with simple

    https://goaccess.io/

    plus a trigger for sending instant notifications, which almost instantly solved the problem and helped me find the holes in my webapp and close them via strict web-server rules, then a WAF, then something else, and so on.
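
    (For reference, a typical goaccess invocation; the log path and format here are just examples:)

    goaccess /var/log/nginx/access.log --log-format=COMBINED -o /var/www/html/report.html --real-time-html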

    I'm sharing my experience, maybe a wrong one, but in total I've faced more than 100 attacks (different vectors) against my hobby projects (which don't even generate any income).

    I faced literally every new threat vector that later got popular, usually weeks after the attacks had initially happened to me, back in 2014-2017.

    So, getting back to the SSH problem, this is just my own opinion, but I think switching the port, adding a firewall, and CrowdSec will mostly solve the problem.

    Have a great day. Hope this helped someone.

  • My suggestion would be to disable SSH on IPv4 and use SSH on IPv6. No more probes again in your lifetime.
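
    A sketch of what that looks like in sshd_config (make sure your own IPv6 connectivity is solid first, or you will lock yourself out):

    # /etc/ssh/sshd_config
    AddressFamily inet6
    # or bind to one specific v6 address instead:
    # ListenAddress 2001:db8::1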

    Using a different SSH port kind of works, but it's security by obscurity.
    What you should do is whitelist the IP addresses/hosts that are allowed to connect to SSH via that port.
    A bit harder to do but the more secure option.
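
    For example with plain iptables (a sketch; 203.0.113.10 is a placeholder for your own address, and adjust the port if you moved SSH):

    iptables -A INPUT -p tcp --dport 22 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP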

    Thanked by 1Nekki
    If you just change the SSH port and leave password authentication on, then you're relying on security by obscurity. But changing the SSH port in addition to locking it down to keys hardens the security.

    Fewer probes (a smaller attack surface), fewer wasted resources, and smaller logs with less noise to dig through are what you want.

    Thanked by 1_MS_
  • Not_Oles Moderator, Patron Provider

    @jason5545 said: My Dedispec server just suddenly went halt this morning,

    Hi!

    The log snippets you posted are mostly related to ssh connection attempts and not to the "halt" problem.

    • What exactly did you observe? What time were the observations?

    • Could the problem have been a power interruption?

    • Could the problem have been a network connectivity interruption?

    • Which OS is the server running? Which version?

    Apparently the server might have been running and not "off" when Dedispec looked at it because they "rebooted" it?

    Maybe you could post more of the logs which would show more about what was happening on the server at the time you saw the halt problem?

    Best wishes and kindest regards! <3

    Tom

  • nfn Veteran

    Tailscale or a jump server can help a lot.
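
    The jump-server part can be as simple as an ~/.ssh/config entry (the host names and addresses here are placeholders):

    Host myserver
        HostName 198.51.100.20
        User admin
        ProxyJump jumpbox.example.com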

  • jason5545 Member
    edited April 2022

    @Not_Oles said:

    @jason5545 said: My Dedispec server just suddenly went halt this morning,

    Hi!

    The log snippets you posted are mostly related to ssh connection attempts and not to the "halt" problem.

    Yes.

    • What exactly did you observe? What time were the observations?

    I discovered it because I suddenly couldn't connect to any VM on the server, neither via SSH nor via the web. It was 9am UTC+8, so I had to wait quite a while.

    • Could the problem have been a power interruption?

    • Could the problem have been a network connectivity interruption?

    I actually don't know what kind of interruption it was, since @Dedispec doesn't provide any IPMI or control panel; it looked like a halt from my end.

    • Which OS is the server running? Which version?

    PVE 7

    Apparently the server might have been running and not "off" when Dedispec looked at it because they "rebooted" it?

    Maybe, I will try to confirm with them.

    Maybe you could post more of the logs which would show more about what was happening on the server at the time you saw the halt problem?

    Unfortunately, I cleared out the logs last night after I had made sure my security was tightened up.

    Best wishes and kindest regards! <3

    Tom

    Thanked by 1Not_Oles
  • @Otus9051 said: No. 3, too scary to give public key to others, too lazy to change it afterwards

    Is there some security issue with giving out a public key that I don't know about? Please explain. It's a public key, and it doesn't work without its private key pair.

  • @DanSummer said:

    @Otus9051 said: No. 3, too scary to give public key to others, too lazy to change it afterwards

    Is there some security issue with giving out a public key that I don't know about? Please explain. It's a public key, and it doesn't work without its private key pair.

    There is no issue with handing out your public key; the clue is in the name.

  • Never use port 22 for SSH... change it and configure a firewall.

    Thanked by 1jason5545
  • I think you should close the connection port and use a whitelist policy. Open ports are the wrong idea.

    Thanked by 1jason5545
  • Tried out CrowdSec because of this thread and even with only SSH enabled it was randomly using like 7% of my CPU. It's not that much, but it's somehow much heavier than fail2ban (python)

    Thanked by 1jason5545
  • cha Member

    Welcome to fail2ban
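
    A minimal sshd jail sketch, e.g. dropped into /etc/fail2ban/jail.local:

    [sshd]
    enabled  = true
    maxretry = 5
    findtime = 10m
    bantime  = 1h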

  • @MallocVoidstar said:
    Tried out CrowdSec because of this thread and even with only SSH enabled it was randomly using like 7% of my CPU. It's not that much, but it's somehow much heavier than fail2ban (python)

    Hmm, 7% is quite heavy for a weak CPU, I probably will remove it from my KS1.

  • @jason5545 said:

    @MallocVoidstar said:
    Tried out CrowdSec because of this thread and even with only SSH enabled it was randomly using like 7% of my CPU. It's not that much, but it's somehow much heavier than fail2ban (python)

    Hmm, 7% is quite heavy for a weak CPU, I probably will remove it from my KS1.

    Does it have the same behavior for you? I might have run into some bug. (I'm only getting like 2-3 malicious IPs per hour.)

  • @MallocVoidstar said:

    @jason5545 said:

    @MallocVoidstar said:
    Tried out CrowdSec because of this thread and even with only SSH enabled it was randomly using like 7% of my CPU. It's not that much, but it's somehow much heavier than fail2ban (python)

    Hmm, 7% is quite heavy for a weak CPU, I probably will remove it from my KS1.

    Does it have the same behavior for you? I might have run into some bug. (I'm only getting like 2-3 malicious IPs per hour.)

    Just checked my three servers: the KS-1 is at 3.9%, and the other two more powerful ones are at 0%. So maybe it's on your end.

    Thanked by 1MallocVoidstar
  • Tiny Chinese probes violating your backend without detection.

  • @jason5545 said:
    My Dedispec server suddenly halted this morning. I contacted them and after a few hours it got rebooted. I decided to investigate further, because it's quite rare for a Linux server to halt itself.
    I ran sudo journalctl -b -1 -e, and this is what I got:
    Apr 26 15:52:39 s123348 kernel: perf: interrupt took too long (6190 > 6170), lowering kernel.perf_event_max_sample_rate to 32250
    Apr 26 15:54:00 s123348 sshd[620333]: Received disconnect from 116.252.87.31 port 46099:11: Bye Bye [preauth]
    Apr 26 15:54:00 s123348 sshd[620333]: Disconnected from authenticating user root 116.252.87.31 port 46099 [preauth]
    Apr 26 15:54:01 s123348 sshd[620353]: Received disconnect from 116.252.87.31 port 46192:11: Bye Bye [preauth]
    Apr 26 15:54:01 s123348 sshd[620353]: Disconnected from authenticating user root 116.252.87.31 port 46192 [preauth]
    Apr 26 15:54:03 s123348 sshd[620459]: Invalid user ubnt from 116.252.87.31 port 46245
    Apr 26 15:54:03 s123348 sshd[620459]: Received disconnect from 116.252.87.31 port 46245:11: Bye Bye [preauth]
    Apr 26 15:54:03 s123348 sshd[620459]: Disconnected from invalid user ubnt 116.252.87.31 port 46245 [preauth]
    Apr 26 15:54:04 s123348 sshd[620463]: Received disconnect from 116.252.87.31 port 46281:11: Bye Bye [preauth]
    Apr 26 15:54:04 s123348 sshd[620463]: Disconnected from authenticating user root 116.252.87.31 port 46281 [preauth]
    Apr 26 15:54:06 s123348 sshd[620465]: Received disconnect from 116.252.87.31 port 46320:11: Bye Bye [preauth]
    Apr 26 15:54:06 s123348 sshd[620465]: Disconnected from authenticating user root 116.252.87.31 port 46320 [preauth]
    Apr 26 15:54:07 s123348 sshd[620467]: Received disconnect from 116.252.87.31 port 46419:11: Bye Bye [preauth]
    Apr 26 15:54:07 s123348 sshd[620467]: Disconnected from authenticating user root 116.252.87.31 port 46419 [preauth]
    Apr 26 15:54:09 s123348 sshd[620559]: Received disconnect from 116.252.87.31 port 46550:11: Bye Bye [preauth]
    Apr 26 15:54:09 s123348 sshd[620559]: Disconnected from authenticating user root 116.252.87.31 port 46550 [preauth]
    Apr 26 15:54:10 s123348 sshd[620561]: Received disconnect from 116.252.87.31 port 46629:11: Bye Bye [preauth]
    Apr 26 15:54:10 s123348 sshd[620561]: Disconnected from authenticating user root 116.252.87.31 port 46629 [preauth]
    Apr 26 15:54:12 s123348 sshd[620580]: Received disconnect from 116.252.87.31 port 46677:11: Bye Bye [preauth]
    Apr 26 15:54:12 s123348 sshd[620580]: Disconnected from authenticating user root 116.252.87.31 port 46677 [preauth]
    Apr 26 15:54:13 s123348 sshd[620645]: Received disconnect from 116.252.87.31 port 46724:11: Bye Bye [preauth]
    Apr 26 15:54:13 s123348 sshd[620645]: Disconnected from authenticating user root 116.252.87.31 port 46724 [preauth]
    Apr 26 15:54:15 s123348 sshd[620653]: Received disconnect from 116.252.87.31 port 46760:11: Bye Bye [preauth]
    Apr 26 15:54:15 s123348 sshd[620653]: Disconnected from authenticating user root 116.252.87.31 port 46760 [preauth]
    Apr 26 15:54:16 s123348 sshd[620655]: Received disconnect from 116.252.87.31 port 46811:11: Bye Bye [preauth]
    Apr 26 15:54:16 s123348 sshd[620655]: Disconnected from authenticating user root 116.252.87.31 port 46811 [preauth]
    Apr 26 15:54:19 s123348 sshd[620730]: Received disconnect from 116.252.87.31 port 46925:11: Bye Bye [preauth]
    Apr 26 15:54:19 s123348 sshd[620730]: Disconnected from authenticating user root 116.252.87.31 port 46925 [preauth]
    Apr 26 15:54:21 s123348 sshd[620749]: Received disconnect from 116.252.87.31 port 47120:11: Bye Bye [preauth]
    Apr 26 15:54:21 s123348 sshd[620749]: Disconnected from authenticating user root 116.252.87.31 port 47120 [preauth]
    Apr 26 15:54:22 s123348 sshd[620759]: Received disconnect from 116.252.87.31 port 47166:11: Bye Bye [preauth]
    Apr 26 15:54:22 s123348 sshd[620759]: Disconnected from authenticating user root 116.252.87.31 port 47166 [preauth]
    Apr 26 15:54:25 s123348 sshd[620822]: Received disconnect from 116.252.87.31 port 47205:11: Bye Bye [preauth]
    Apr 26 15:54:25 s123348 sshd[620822]: Disconnected from authenticating user root 116.252.87.31 port 47205 [preauth]
    Apr 26 15:54:26 s123348 sshd[620824]: Received disconnect from 116.252.87.31 port 47280:11: Bye Bye [preauth]
    Apr 26 15:54:26 s123348 sshd[620824]: Disconnected from authenticating user root 116.252.87.31 port 47280 [preauth]
    Apr 26 15:54:28 s123348 sshd[620826]: Received disconnect from 116.252.87.31 port 47391:11: Bye Bye [preauth]
    Quite interestingly, I've seen you guys say that most of the abusers come from DO, Linode, or cloud providers, but this one seems to come from CHINANET Guangxi on a residential IP, which is quite rare?
    So what should I do next?
    So far I've set up: log in with keys only, and only allow my IP and my VPS to log in. Am I missing something else?
    I was also considering reporting abuse, but since it's not a provider or hosting company, I have no idea how to do it.
    Thanks

    Guangxi lol, prob Cantonese and low income (not being racist cause I'm Chinese)

  • cpsd Member

    I also recommend rejecting every incoming IP to your changed SSH port and using port knocking to allow yours.
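
    With knockd, a sketch along the lines of its stock example config (the knock sequence and SSH port are placeholders; pair it with a default-drop rule on the SSH port):

    # /etc/knockd.conf
    [openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn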

    Thanked by 1jason5545
  • @jason5545 said:

    @cochon said:

    @jason5545 said:
    My Dedispec server suddenly halted this morning. I contacted them and after a few hours it got rebooted. I decided to investigate further, because it's quite rare for a Linux server to halt itself.

    Your logs show roughly one SSH attempt per second; it seems doubtful this alone caused the outage. More likely you've just spotted it for the first time because you were looking more closely.

    Yeah, my intention was to check for any possible cause related to the hardware, and then I saw this.

    so far I've set up: log in with keys only...

    If you've taken this step, then unless it's filling the disk or chewing up CPU, it's best to just ignore it as background noise.

    As of now, it's fine, no effect on the system yet.

    So the conclusion is:
    Getting probed by Chinese people once a second has zero effect on your backend. :D

    Thanked by 1jason5545
  • That's normal. Turn off ICMP echo to draw less attention from probes :smile:
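
    The toggle for that is a sysctl (add it to /etc/sysctl.conf to make it persistent):

    sudo sysctl -w net.ipv4.icmp_echo_ignore_all=1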

  • @turdski said:

    @jason5545 said:

    @cochon said:

    @jason5545 said:
    My Dedispec server suddenly halted this morning. I contacted them and after a few hours it got rebooted. I decided to investigate further, because it's quite rare for a Linux server to halt itself.

    Your logs show roughly one SSH attempt per second; it seems doubtful this alone caused the outage. More likely you've just spotted it for the first time because you were looking more closely.

    Yeah, my intention was to check for any possible cause related to the hardware, and then I saw this.

    so far I've set up: log in with keys only...

    If you've taken this step, then unless it's filling the disk or chewing up CPU, it's best to just ignore it as background noise.

    As of now, it's fine, no effect on the system yet.

    So the conclusion is:
    Getting probed by Chinese people once a second has zero effect on your backend. :D

    Like throwing a sausage up a hallway.

    Thanked by 1turdski