Bitninja Abuse Reports


Comments

  • @Maounique said:
    That being said, from UCEPROTECT et al. through to bitninja, I hold in contempt any list which requests money or trials for "removal". Maybe some people make more money from this than through a donation mechanism or a legitimate product, but I do not think that is enough of a justification.

    We are already working on the automatic delist mechanism and currently accept delist requests via email. Dealing with the reports on our side is completely non-profit; it is a side effect of the service rather than the service itself.

    We only charge for BitNinja Pro, the server security product we develop (we also offer a free version of it, btw).

  • @Layer said:
    @bitninja_george what's the name of the hosting company you're running?

    It's web-server.hu and here is the about page: https://web-server.hu/tarhely/rolunk/

    Can I run a web proxy on your hosting and try to browse sites/forums with a fake user agent like Googlebot?

    No, it is not allowed on our platform. If you try it, the outbound WAF will detect and intercept the request and alert our sysadmins with the path of the web-proxy script and your user ID.

    Can I expect that your dedicated server will be suspended?

    For what?

  • ricardo Member
    edited November 2016

    If you do simple requests...

    Thanks, your description therein sounds a lot more like what I have observed for myself. If you could change the 200 HTTP thing, I think that would be an improvement to your system. Cheers.

    There are countless examples where you can say UA spoofing is bad, and I could say that it's reasonable to use different UAs in different circumstances without malice intended. To each their own.

  • bitninja_george said: I still think spoofing the user agent is malicious by itself

  • @ricardo said:

    If you could change the 200 HTTP thing, I think that would be an improvement to your system. Cheers.

    Sure, it's on the way.

  • @AnthonySmith said:
    Could you show us an example of your abuse report?

    The following is an example mail for data collected from the [fake] open relay:

    Hello,

    This is a notification of unauthorized use of systems or networks.

    On October XX, 2016, a total of 1 IP address from your networks tried
    to relay mail through my server without permission. After examining the log,
    they are suspected to be compromised botnet computers.

    The connection log is included below for your reference. Each line lists the
    date, time, time zone, attacker IP, attacker's network name (as found in
    WHOIS), local IP, and local TCP port number of a relay attempt. To prevent
    this mail from getting too big, only 5 relay attempts from each
    attacker IP are included.

    If you regularly collect IP traffic information for your network, you will see
    the listed IPs connecting to TCP port 25 of the local IP at the times logged, and
    I suspect that they also connected to TCP port 25 of many other IPs.

    Please notify the owners of those botnet computers so that they can take
    appropriate action to clean their computers, before even more severe incidents,
    like data leakage, DDoS, and the rumored NSA spying through hijacked botnets,
    arise. This also helps prevent botnets from taking up your network bandwidth.

    Full internet email headers of the first relay attempts from those IPs,
    logged on the local IP which they tried to abuse, are also included below for
    your reference.

    Chih-Cherng Chin (Please follow: @[twitter account, but I don't really twit much])
    Daily Botnet Statistics
    [URL, a daily web log for this]

    *** DISAN: a proposed framework for botnet mitigation
    *** [URL for the topic above]
    *** If you can read Chinese, I described my approach to botnet detection
    *** and notification here:
    *** [URL for the topic above]

    ---- connection log (time zone is UTC; sent to [abuse contact of attacker's IP]) ----

    date => time => TZ => attacker IP => network name => local IP => local TCP port#

    2016-10-XX YY:YY:YY UTC XXX.XXX.XXX.XXX NETWORK_NAME [My server's IP] 25

    ---- internet email headers ----
    [spam mail header]
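
    The connection-log format above is regular enough to parse mechanically. A minimal sketch in Python, with the field layout taken from the report's own description (the example line and its values are made up):

    import re
    from datetime import datetime, timezone

    # One relay-attempt line, per the report's own description:
    # date => time => TZ => attacker IP => network name => local IP => local TCP port#
    LOG_LINE = re.compile(
        r"(?P<date>\d{4}-\d{2}-\d{2}) (?P<time>\d{2}:\d{2}:\d{2}) UTC "
        r"(?P<attacker_ip>\S+) (?P<network>\S+) (?P<local_ip>\S+) (?P<port>\d+)$"
    )

    def parse_relay_attempt(line):
        """Return the fields of one relay-attempt line, or None on no match."""
        m = LOG_LINE.match(line.strip())
        if m is None:
            return None
        fields = m.groupdict()
        fields["timestamp"] = datetime.strptime(
            fields.pop("date") + " " + fields.pop("time"), "%Y-%m-%d %H:%M:%S"
        ).replace(tzinfo=timezone.utc)
        fields["port"] = int(fields["port"])
        return fields

    # Made-up example in the documented format:
    print(parse_relay_attempt("2016-10-05 04:12:33 UTC 203.0.113.7 EXAMPLE-NET 198.51.100.2 25"))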

  • @Layer said:
    @chihcherng did you wake up one day and say "Hey, I should detect malware-infected computers and send abuse emails"? Are you a hosting provider too?

    The botnet was thought to be a difficult problem to tackle around 2007. Perhaps it still is. I thought so, too, initially. But with experience gained from my other little projects, I started to form different opinions. I shared my thoughts with my friends several times, but they did not believe it would work. What better way to prove I was right than to put my theory into practice myself? So yes, I woke up one day in 2009, and I have been detecting malware-infected computers and sending abuse notifications ever since.

    I am not a hosting provider.

  • @bitninja_george said:

    @ricardo said:

    If you could change the 200 HTTP thing, I think that would be an improvement to your system. Cheers.

    Sure, it's on the way.

    Can you provide Majestic the IP ranges where we should not bother you?
    Or do you still want to lose time?

  • Master_Bo Member
    edited December 2016

    As @Maounique mentioned, all possibly dangerous traffic sources should be taken into account, regardless of origin.

    Personally, I collect IPs that are generously provided by would-be hackers and botnets that scan the systems I monitor. I don't even need Snort or the like; logwatch alone gives me enough to keep systems basically guarded against the majority of threats.
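
    A minimal sketch of that idea, assuming a Debian-style /var/log/auth.log (the path and the sshd "Failed password" message format are assumptions; adjust for your distro):

    import re
    from collections import Counter

    # Count failed SSH logins per source IP. This is the same signal
    # logwatch surfaces in its daily summaries.
    FAILED_SSH = re.compile(r"Failed password for .+ from (\d{1,3}(?:\.\d{1,3}){3})")

    hits = Counter()
    with open("/var/log/auth.log") as log:
        for line in log:
            m = FAILED_SSH.search(line)
            if m:
                hits[m.group(1)] += 1

    # Repeat offenders are candidates for a watch list or a firewall rule.
    for ip, count in hits.most_common(20):
        print(f"{count:6d}  {ip}")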

    So, if the data-mining part of BitNinja's activity is used as a basis for a less intrusive "Internet protection activity", that could be quite fine.

    Personally, I never pay anyone offering to keep my systems clean on a paid basis. But I would perhaps use the provided IPs of known perpetrators to keep their traffic watched.

  • @inthecloudblog said:

    Can you provide Majestic the IP ranges where we should not bother you?
    Or do you still want to lose time?

    BitNinja sets up these honeypots automatically on our users' servers. We currently have 1,500+ servers, so that alone is more than 5,000 IPs, I think. And it changes every day: new servers come online, servers are stopped, and IPs are assigned to a server or deactivated.

    Can you make sure to load /robots.txt before any other requests? It would solve the whole problem. We even have a line dedicated to you in the robots.txt :-)
    This is the content of the robots.txt currently:

    User-agent: *
    Disallow: /
    
    User-agent: MJ12bot
    Disallow: /
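
    For reference, honoring that file before any other request takes only a few lines with Python's standard library. A minimal sketch (the host is a placeholder):

    from urllib import robotparser

    # Fetch and parse robots.txt before touching anything else on the host.
    # "example.com" is a placeholder; MJ12bot is the UA singled out above.
    rp = robotparser.RobotFileParser()
    rp.set_url("http://example.com/robots.txt")
    rp.read()

    # With the robots.txt quoted above, both checks print "disallowed".
    for ua in ("MJ12bot", "SomeOtherBot"):
        verdict = "allowed" if rp.can_fetch(ua, "http://example.com/page") else "disallowed"
        print(ua + ": " + verdict)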
    
  • Here's the latest MUPPETRY from bitninja.

    I had a VPS suspended because of a so-called 'attack'. I explained that the bitninja guys are clearly high and got the VPS unsuspended.

    • I sent 6 requests to my own shared server over a period of six hours

    • my own script (not a honeypot), over six hours, same user agent, same erroneous 200 response and captcha.

    • Same ridiculous "you are attacking other servers" email.

    @bitninja_george please include the actual IP that is supposedly being attacked, in your reports.

    Please also change your email from 'you are attacking other servers' to 'you are hosting someone who writes programs' or some other surreal (yet more correct) explanation. People have grabbed web pages and sent cross-domain requests from the command line since about 1993.

    It'd be appreciated if you could also list which hosting providers use your service somewhere, so I can avoid them.

    Sending requests as a 'non-human' once an hour to a page I wrote myself is not in itself "malicious", as your emails put it.

    Seething...

  • ricardo said: please include the actual IP that is supposedly being attacked, in your reports.

    That would actually help many.

  • AnthonySmith Member, Patron Provider

    Complete amateurs who don't even know what they don't know; that is the most dangerous type.

    It's like having nothing but a handful of CCNAs managing a large-scale network: they know just enough to destroy everything, with no idea why.

  • hawc Moderator, LIR
    edited February 2017

    robots.txt is a disaster, and should be ignored. If you don't want someone accessing something.... DONT PUT IT ON THE INTERNET.

    Precisely one reason comes to mind to have ROBOTS.TXT, and it is, incidentally, stupid: to prevent robots from triggering processes on the website that should not be run automatically. A dumb spider or crawler will hit every URL linked, and if a site allows users to activate a link that causes resource hogging or otherwise deletes/adds data, then a ROBOTS.TXT exclusion makes perfect sense while you fix your broken and idiotic configuration.

  • bitninja_george Member
    edited February 2017

    Hi,

    @ricardo said:

    • I sent 6 requests to my own shared server over a period of six hours

    • my own script (not a honeypot), over six hours, same user agent, same erroneous 200 response and captcha.

    Yeah, we experimented with changing the response from 200 to 5xx but then real users had difficulties with the captcha, as some browsers didn't render the page at all in case of a 5xx response.

    @bitninja_george please include the actual IP that is supposedly being attacked, in your reports.

    We worry about info leakage if we included the IPs; bad guys could use them for malicious purposes. We plan to set up a self-service interface for requesting incident data, but we are currently a bit overwhelmed with some new features. It's planned for Q2, after WHD.

    It'd be appreciated if you could also list which hosting providers use your service somewhere, so I can avoid them.

    It is pretty easy to detect our CAPTCHA page. You can check for it before doing the request, or check the robots.txt.
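
    Since the interception page is also served with 200, a client can only detect it from the response body. Roughly, a pre-flight check could look like this (the marker strings are assumptions, as the exact content of the CAPTCHA page isn't documented here):

    import urllib.request

    # The interception page returns "200 OK", so the status code is useless
    # as a signal; look for captcha markers in the body instead. The marker
    # strings below are assumptions, not documented values.
    CAPTCHA_MARKERS = ("bitninja", "captcha")

    def looks_intercepted(url):
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read(65536).decode("utf-8", errors="replace").lower()
        return any(marker in body for marker in CAPTCHA_MARKERS)

    print(looks_intercepted("http://example.com/"))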

  • ricardo Member
    edited February 2017

    bitninja_george said: Yeah, we experimented with changing the response from 200 to 5xx but then real users had difficulties with the captcha, as some browsers didn't render the page at all in case of a 5xx response.

    So you're still hijacking legitimate content with your captcha and fooling user-agents into believing the response was 200 OK. Not good.

    We worry about info leakage if we included the IPs; bad guys could use them for malicious purposes

    That's ridiculous, and there's clearly another reason; anyone who thinks about this for more than 10 seconds understands why. You're not hiding anything by not providing the IP; you're just making it more awkward for everyone involved. Attention-seeking by most standards.

    Let me be blunt: you and your team either don't know what you're doing, or you're not being honest here. I think it's the latter. Your emails are simply marketing spam and your filters are unrealistic. You're saying any non-human traffic is malicious, and you have your own arbitrary list of what you think isn't.

    I have hundreds of shared hosts and hundreds of VPSes, and I get forwarded a report that malicious requests are being sent from a VPS. All you're providing in your 'report' is an obfuscated hostname, one I sign up with at practically every shared hosting provider (which helps me know they're not a reseller), so I have no information to go on about which of your servers is supposedly being 'attacked'.

    Since no attack is happening, all your software is doing is being a massive nuisance and implying that something illegal is going on. Rather than get on a high horse about defamation, marketing under false pretenses, or whatever else, I'd rather know which shared hosts run bitninja so I can avoid this waste of time.

  • @hawc said:
    robots.txt is a disaster, and should be ignored. If you don't want someone accessing something.... DONT PUT IT ON THE INTERNET.

    I think robots.txt is a very good measure of a crawler. All well-written crawlers honor it, so the rest... well, they are not that well written :-) or are even harmful. In most of the cases we see they are harmful, and in some cases they are poorly implemented.

  • @bitninja_george said:

    @hawc said:
    robots.txt is a disaster, and should be ignored. If you don't want someone accessing something.... DONT PUT IT ON THE INTERNET.

    I think robots.txt is a very good measure of a crawler. All well-written crawlers honor it, so the rest... well, they are not that well written :-) or are even harmful. In most of the cases we see they are harmful, and in some cases they are poorly implemented.

    If you generate an abuse report for a crawler that's poorly implemented, causing the user to have his VPS suspended, then I think something else is poorly implemented as well.

  • ricardo Member
    edited February 2017

    @bitninja_george Thank you for the private message. I am not running a 'crawler', I am making requests to one page on one site, once an hour, from a VPS I own to a shared server I have, to a script I wrote.

    Please stop framing this as a 'poorly written crawler' and review the points I've raised.

    If there were truly malicious activity, you'd be suspending the shared server package instead of sending out your silly email because you don't like non-human traffic.

  • AnthonySmith Member, Patron Provider

    ricardo said: Please stop framing this as a 'poorly written crawler' and review the points I've raised.

    That is literally his only defense; if he acknowledged anything else, he would have to acknowledge his own inability, which in turn leads us back to https://en.wikipedia.org/wiki/Dunning–Kruger_effect

  • Clouvider Member, Patron Provider

    Are you going to try to sell this thing during WHD? One stand to avoid.

  • ricardo Member
    edited February 2017

    His only redemption at the moment is that he's not totally ignoring what's being written. I think there's definitely a mix of ineptitude and marketing.

    He doesn't want to admit that they're banning all crawlers/bots/automated requests apart from his whitelisted ones, and that's about as sophisticated as their software gets. They clearly don't realise this breaks a lot of things and is more of a hindrance than a help; mod_security probably does a far better job. Add their inflammatory email on top... this stinks.

    I think some more pertinent information should rank for search engine queries relating to this product. Retitling the thread to "Bitninja Abuse Reports - Marketing Scam?" would be a nice start.

  • Sorry Ricardo, I did not mean to offend you.
    At the moment bitninja has a set of rules, and if an IP breaks a rule, it generates an incident. In case of an incident, the IP is flagged to be greylisted on that server. All traffic from greylisted IPs is logged and generates further incidents. So if we find out what the first incident was and remove your IP from the greylist, bitninja won't interfere with your requests. That's why I asked you in a private message to send me the IP, so we can find out what the initial reason was.
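
    Roughly, the flow looks like this (an illustrative sketch, not our actual code; all names are made up):

    from collections import defaultdict

    # Illustrative model of the incident/greylist flow described above.
    class Greylist:
        def __init__(self):
            self.greylisted = set()
            self.incidents = defaultdict(list)

        def handle_request(self, ip, request, breaks_rule):
            if ip in self.greylisted:
                # All traffic from a greylisted IP is logged as a further incident.
                self.incidents[ip].append(request)
                return "captcha"
            if breaks_rule:
                # First violation: record the incident and greylist the IP.
                self.incidents[ip].append(request)
                self.greylisted.add(ip)
                return "captcha"
            return "pass"

        def delist(self, ip):
            # Once delisted, requests pass through untouched again.
            self.greylisted.discard(ip)

    gl = Greylist()
    print(gl.handle_request("198.51.100.7", "GET /", breaks_rule=True))   # captcha
    print(gl.handle_request("198.51.100.7", "GET /", breaks_rule=False))  # still captcha
    gl.delist("198.51.100.7")
    print(gl.handle_request("198.51.100.7", "GET /", breaks_rule=False))  # pass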

    @bitninja_george Thank you for the private message. I am not running a 'crawler', I am making requests to one page on one site, once an hour, from a VPS I own to a shared server I have, to a script I wrote.

    We plan to introduce AI to make better decisions about incoming incidents.

    What you can do to solve the issue is either ask your shared hosting provider to delist your VPS IP from the bitninja greylist and whitelist it, or send the IP directly to me.

  • @bitninja_george how about an IP block? That would be much, much easier.

  • @ricardo Care to name the provider who suspended your VPS because of the bitninja report?

    bitninja_george said: At the moment bitninja has a set of rules, and if an IP breaks a rule, it generates an incident.

    As this thread has demonstrated, those rules appear to be broken. If they're broken, you should probably fix them.

  • AnthonySmith Member, Patron Provider

    bitninja_george said: We plan to introduce

    You have been saying crap like that for years. Time to shut it down until you can fix it; you're causing havoc right now. At the very least, stop all notices and just block and list.

  • Oliver Member, Host Rep

    Good thread. Would not read again but will not forward such "reports" to customers anymore and will send all upstreams here if necessary. +++
