New Relic server monitoring will be no more after November 14, 2017

Tom Member
edited July 2017 in General

Just got this email:

This is an important message related to the status of your account.


Dear Customer,

We are writing to let you know about some important changes that may affect some of the New Relic products and features you currently use.

Over the past 12 months, New Relic has expanded our Digital Intelligence Platform to infrastructure monitoring with the introduction of New Relic Infrastructure. We have also built out our alerting capabilities with the introduction of dynamic targeting, baselines, and many other updates. 

In order to continue delivering platform innovations, New Relic Servers and our legacy alerting features will reach End-of-Life (EOL) on May 15th, 2018 if you have a paid subscription to one or more of our products. If you do not have any paid subscription to our products, the EOL date will be November 14, 2017.

After the EOL date, New Relic Servers and the legacy alerting features will no longer be available in the New Relic user interface and data processing will stop. Also, New Relic alerts will be available only to paying accounts and will be removed from non-paying accounts starting on the EOL date.

New Relic Infrastructure and New Relic Alerts represent much higher value to all our customers and are better suited to meet your needs both now and in the future. In addition, they represent the kind of innovation and forward thinking you have come to expect from New Relic. It is New Relic’s goal to make this process as seamless as possible for you, and to provide as much visibility as we can into what you can expect during this process.

We understand that technology changes can be difficult, so we have posted more information about this in our Online Technical Community. We have also prepared a Frequently Asked Questions document that will answer many of your questions.

If you have any further questions or concerns, simply reply to this email to get in touch with your dedicated account manager who will be able to help you plan for a smooth transition to New Relic Infrastructure and New Relic Alerts.

More information…

Frequently Asked Questions

Thank you,

The New Relic Team

From what I understand, their server monitoring is EOLing - i.e. shutting down. Sucks; guess I need to try out NIXStats again - hopefully it doesn't eat my server with the Python agent :-)

Thanked by: raindog308

Comments

  • leapswitch Patron Provider, Veteran

    +1 for zabbix for people looking at a self-hosted alternative.

  • The final push for building my own dashboard: Grafana + the prometheus.io TSDB.

  • Zerpy Member

    Time to deploy Zabbix I guess :'D
    I've, however, asked for a quote for the "infrastructure" product - let's see what they come up with.

    I do run nixstats on a few boxes, however I've experienced a few memory leaks and moments where the agent suddenly stops reporting but keeps running (goes defunct) - like I have one right now:

    It's been 4 days, 6 hours and 6 minutes since we've received data.
    

    And it seems to report incorrectly on a bunch of things - such as saying I have 35 terabytes of NVMe storage on a server.

  • Tom Member

    such as saying I have 35 terabytes of NVMe storage on a server.

    I wish!

    Thanked by: Zerpy
  • Hxxx Member

    When stockholders start pushing.

  • sin Member

    What's one of the easiest self-hosted monitoring solutions to set up?

  • WSS Member

    @sin said:
    What's one of the easiest self-hosted monitoring solutions to set up?

    ping

    Thanked by: sin
  • edited July 2017

    Icinga/Zabbix or an ELK stack with Beats agents. Pick your definition of easy.

  • Pity. It was a decent monitoring tool. Not that hard to replace with something homebrew though.

  • eva2000 Veteran

    I use nixstats, nodequery and nginx amplify as well as newrelic. Guess I have to say bye to newrelic and say hello to netdata https://github.com/firehol/netdata

    I'm working on a netdata addon installer for my Centmin Mod LEMP stack https://community.centminmod.com/threads/addons-netdata-sh-new-system-monitor-addon-part-2.12239/ :)
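
    For anyone wanting to kick the tires first, the netdata README at the time had a one-line installer - a sketch, verify against their current docs before piping anything to bash:

    # install netdata via its kickstart script (runs as root, review it first)
    bash <(curl -Ss https://my-netdata.io/kickstart.sh)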

    Thanked by: NanoG6
  • NanoG6 Member

    netdata looks awesome, gonna try it soon on my servers.

  • SplitIce Member, Host Rep

    netdata is awesome to use, horrible to secure.

  • M66B Veteran

    @SplitIce said:
    netdata is awesome to use, horrible to secure.

    netdata looks interesting, so can you please tell us why? (Maybe it can be fixed somehow?)

  • SplitIce Member, Host Rep

    The model it uses of an HTTP server per physical server makes it hard to lock down.

  • Zerpy Member

    @SplitIce said:
    The model it uses of an HTTP server per physical server makes it hard to lock down.

    No - it's not hard at all.
    There's something called a "Firewall" - it's used to restrict access to certain things :-)
    Have a default drop policy, and allow only certain IPs ;)
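
    For example, a minimal iptables sketch (the admin IP here is a placeholder; 19999 is netdata's default port):

    # allow one trusted admin/monitoring host to reach netdata
    iptables -A INPUT -p tcp --dport 19999 -s 203.0.113.10 -j ACCEPT
    # drop the netdata port for everyone else
    iptables -A INPUT -p tcp --dport 19999 -j DROP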

  • SplitIce Member, Host Rep
    edited July 2017

    @Zerpy That's easy if you just do per-IP access. But it gets complex for more control.

    Need centralised authentication for multiple staff? Not possible with an L4 firewall.

    Need usernames and passwords in addition? Not possible with an L4 firewall.

  • Zerpy Member

    @SplitIce said:
    @Zerpy That's easy if you just do per-IP access. But it gets complex for more control.

    Need centralised authentication for multiple staff? Not possible with an L4 firewall.

    Need usernames and passwords in addition? Not possible with an L4 firewall.

    Then you can do basic auth :-) Still rather easy.

  • SplitIce Member, Host Rep

    @Zerpy

    Please explain where netdata supports basic authentication, let alone centralised authentication sources. Hint - it doesn't: https://github.com/firehol/netdata/issues/70

  • Zerpy Member
    edited July 2017

    @SplitIce said:
    @Zerpy

    Please explain where netdata supports basic authentication, let alone centralised authentication sources. Hint - it doesn't: https://github.com/firehol/netdata/issues/70

    A lot of servers run some kind of webserver such as nginx, apache, litespeed, caddy or similar - all have the concept of basic auth.

    And even if you don't, you can still make netdata send the data to a central location (they support multiple backends), and there you can do your complex authentication if you need it.

    There are so many ways of doing authentication in front of netdata.

    Credits to @maldovia for even finding https://github.com/firehol/netdata/wiki/Running-behind-nginx ;)
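
    Roughly along the lines of that wiki page, a sketch (hostname and credentials are placeholders; netdata is assumed to listen on 127.0.0.1:19999):

    # create credentials and put nginx with basic auth in front of netdata
    htpasswd -bc /etc/nginx/netdata.htpasswd admin 'changeme'
    cat > /etc/nginx/conf.d/netdata.conf <<'EOF'
    server {
        listen 80;
        server_name netdata.example.com;
        auth_basic "netdata";
        auth_basic_user_file /etc/nginx/netdata.htpasswd;
        location / {
            proxy_pass http://127.0.0.1:19999;
            proxy_set_header Host $host;
        }
    }
    EOF
    nginx -t && systemctl reload nginx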

  • SplitIce Member, Host Rep

    Yup, you can deploy nginx to every server you want to monitor. I guess I don't want another two reasonably large services (3 if you include the central authentication backend) to support on every server regardless of server role. Management & deployment nightmare.

    And unless they have made changes recently, you are mistaken about the backend location. For netdata, the dashboard is served from the monitored server.

  • Zerpy Member
    edited July 2017

    @SplitIce said:
    Yup, you can deploy nginx to every server you want to monitor. I guess I don't want another two reasonably large services (3 if you include the central authentication backend) to support on every server regardless of server role. Management & deployment nightmare.

    And unless they have made changes recently, you are mistaken about the backend location. For netdata, the dashboard is served from the monitored server.

    https://github.com/firehol/netdata/wiki/netdata-backends

    Now let me quote you word for word, just to, you know, make you feel awful:

    @SplitIce said:
    I guess I don't want another two reasonably large services

    I don't know when nginx became a large service, but anyway, let me tell you some ways to do netdata monitoring with central authentication:

    • Servers might already run some kind of webserver, making authentication easy since it's often built in.

    • If you don't run a webserver, you could have a central server running nginx, apache, haproxy or any other piece of software that supports basic authentication (and central authentication if you can think outside the box).

    With the second method you only have to allow connections to netdata from the proxy machine, since the users connecting to the proxy will probably connect over 443 or 80 (if people are cheap and do not wanna make the large service even larger).

    @SplitIce said:
    Management & deployment nightmare.

    There's no nightmare managing or deploying authentication in front of netdata - if you have central authentication, it means you're running a decent-sized infrastructure that needs different kinds of ACLs:

    • You have probably automated deployments of your software and configuration using tools like Chef, Puppet, Ansible or similar.
    • If using a proxy server you can even make the forwarding to each netdata instance dynamic (wildcard subdomains, plus a bit of nginx magic to proxy to the actual hostnames). Let's assume the FQDNs of your infra look something like:
    dbm1.ams1.prod.corp.com
    dbs1.ams2.prod.corp.com
    web1.ams1.prod.corp.com
    web1.fra1.stg.corp.com
    

    you can set up your proxy server to be like:
    $machine.$pop.$env.netdata.corp.com

    then do an internal proxy_pass http://$machine.$pop.$env.corp.com:19999

    That means adding machines requires no configuration changes on your netdata proxy / authentication service.
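
    In nginx terms that could look roughly like this - a sketch under the naming scheme above (the resolver IP is a placeholder; proxy_pass with variables needs one):

    cat > /etc/nginx/conf.d/netdata-proxy.conf <<'EOF'
    server {
        listen 80;
        # capture $machine.$pop.$env from the requested hostname
        server_name ~^(?<machine>[^.]+)\.(?<pop>[^.]+)\.(?<env>[^.]+)\.netdata\.corp\.com$;
        auth_basic "netdata";
        auth_basic_user_file /etc/nginx/netdata.htpasswd;
        location / {
            resolver 10.0.0.2;  # internal DNS, placeholder
            proxy_pass http://$machine.$pop.$env.corp.com:19999;
        }
    }
    EOF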

    Also nginx is tiny anyway :-)

    You can also use haproxy and just do a new ansible/puppet/chef run to update your haproxy config.

    Really if you think the above is a lot of overhead and large services, then you clearly have never worked with infrastructure :--)

    Assuming you're just a small business, you might not have central authentication but still want basic auth - you can either go for the proxy server, or use a small binary such as caddy or nginx anyway; everything can be statically compiled, so you can just wget the nginx or caddy binary and run it with your configuration.

    I'm currently running nginx as a small SSL proxy in front of r1soft; it takes a whopping 2.1 megabytes of memory.

    My netdata agent consumes 16.4 times more memory than nginx does.

    You seem to have big concerns about something super simple - and I'm not sure you know how big nginx actually is (a 1.3 megabyte binary) - it's only like 0.5 megabytes bigger than netdata itself.

    HUGE services lel.

  • eva2000 Veteran
    edited July 2017

    Zerpy said: My netdata agent consumes 16.4 times more memory than nginx does.

    my pidstat output with netdata, with Linux kernel same-page merging (KSM) enabled = ~75% reduction in memory usage:

    Average:      UID       PID    %usr %system  %guest    %CPU   CPU  Command
    Average:        0         9    0.00    0.11    0.00    0.11     -  rcu_sched
    Average:        0        51    0.00    0.09    0.00    0.09     -  ksmd
    Average:        0        64    0.00    0.02    0.00    0.02     -  kworker/0:1
    Average:        0       442    0.00    0.05    0.00    0.05     -  xfsaild/dm-1
    Average:        0       687    0.00    0.02    0.00    0.02     -  haveged
    Average:        0       691    0.09    0.02    0.00    0.11     -  rngd
    Average:        0       692    0.02    0.00    0.00    0.02     -  irqbalance
    Average:      996      1500    0.18    0.32    0.00    0.50     -  mysqld
    Average:     1001      1945    0.00    0.02    0.00    0.02     -  memcached
    Average:        0     13833    0.00    0.02    0.00    0.02     -  kworker/4:1
    Average:        0     15746    0.00    0.02    0.00    0.02     -  kworker/1:0
    Average:        0     15897    0.00    0.02    0.00    0.02     -  sshd
    Average:      995     17563    1.04    0.41    0.00    1.45     -  netdata
    Average:      995     17590    0.30    0.30    0.00    0.59     -  python
    Average:      995     17615    0.43    0.14    0.00    0.57     -  apps.plugin
    Average:     1000     18670    0.09    0.20    0.00    0.30     -  nginx
    Average:     1000     18671    0.09    0.11    0.00    0.20     -  nginx
    Average:     1000     18673    0.18    0.32    0.00    0.50     -  nginx
    Average:        0     19080    0.27    1.11    0.00    1.39     -  pidstat
    Average:      995     19085    0.02    0.02    0.00    0.05     -  bash
    

    and

    Average:      UID       PID  minflt/s  majflt/s     VSZ    RSS   %MEM  Command
    Average:        0         1      0.02      0.00  190860   3816   0.20  systemd
    Average:        0       692      1.00      0.00   19168   1188   0.06  irqbalance
    Average:      996      1500    829.43      0.00  583848 175540   9.32  mysqld
    Average:      995     17563     30.66      0.00  198070  33549   1.78  netdata
    Average:      995     17590     68.88      0.00  182988  19068   1.01  python
    Average:     1000     18670      0.02      0.00  137960  52480   2.79  nginx
    Average:     1000     18673      0.05      0.00  137960  52480   2.79  nginx
    Average:        0     19080    434.39      0.00  108280   1178   0.06  pidstat
    Average:      995     19085     27.62      0.00    9636   1492   0.08  bash
    

    Netdata is very resource-efficient and you can control its resource consumption. It will use:

    • some spare CPU cycles, usually just 1-3% of a single core (check Performance),
    • the RAM you want it to have (check Memory Requirements), and
    • no disk I/O at all, apart from its logging (check Log Files). Of course it saves its DB to disk when it exits and loads it back when it starts.

    I'm using netdata bound to 127.0.0.1 with an nginx reverse proxy + HTTP authentication + CSF firewall via my netdata.sh installer script, which auto-enables KSM support when detected.
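
    For reference, KSM can be flipped on at runtime - these are the values netdata's docs suggest:

    # enable kernel same-page merging and relax the scan interval
    echo 1 > /sys/kernel/mm/ksm/run
    echo 1000 > /sys/kernel/mm/ksm/sleep_millisecs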

    SplitIce said: And unless they have made changes recently, you are mistaken about the backend location. For netdata, the dashboard is served from the monitored server.

    netdata used to be self-contained, so that all these functions were handled entirely by each server. The changes we made allow each netdata to be configured independently for each function. So, each netdata can now act as:

    • a self contained system, much like it used to be.
    • a data collector, that collects metrics from a host and pushes them to another netdata (with or without a local database and alarms).
    • a proxy, that receives metrics from other hosts and pushes them immediately to other netdata servers. netdata proxies can also be store and forward proxies meaning that they are able to maintain a local database for all metrics passing through them (with or without alarms).
    • a time-series database node, where data are kept, alarms are run and queries are served to visualise the metrics.
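
    A minimal sketch of that collector-to-central setup via netdata's stream.conf (the API key GUID and parent address are placeholders):

    # on each monitored box ("child"): push metrics to the central netdata
    cat > /etc/netdata/stream.conf <<'EOF'
    [stream]
        enabled = yes
        destination = 10.0.0.5:19999
        api key = 11111111-2222-3333-4444-555555555555
    EOF

    # on the central box ("parent"): accept children presenting that key
    cat >> /etc/netdata/stream.conf <<'EOF'
    [11111111-2222-3333-4444-555555555555]
        enabled = yes
    EOF
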
    Thanked by: Zerpy
  • SplitIce Member, Host Rep
    edited July 2017

    @Zerpy

    That's quite the process.

    Netdata backends store historical data and are accessed by the netdata server on each server. It's the netdata model. Each server provides the netdata service; you access it via $hostname:port and go.

    As a result each server needs nginx, needs central authentication, needs access to the VPN network running LDAP, etc. BTW, these are all things that themselves need monitoring.

    There are more definitions of big than just RAM or binary size. It's no classic standalone monitoring agent: it has dependencies, it has to be kept up to date (a large security target) and it has quite the configuration file (esp. for central authentication, where you likely have Lua and OpenResty).

    Anyway this is my 2c from someone who does manage a few hundred servers, all of which are monitored. Centralized monitoring is far better.

    Not that I'm dissing netdata - the metrics it collects, the performance of the engine and the UI design are amazing. But to be adopted outside of hobby projects it needs to be easily securable. And one server (e.g. a monitoring server) is easier than hundreds.

    P.S. Please don't perpetuate the practice of using service passwords on something that is distributed to multiple servers. Unless of course you enjoy changing them all when someone leaves.

    Thanked by: vimalware
  • eva2000 Veteran

    SplitIce said: Anyway this is my 2c from someone who does manage a few hundred servers, all of which are monitored. Centralized monitoring is far better.

    very true indeed

    Thanked by: vimalware
  • Zerpy Member

    @eva2000 said:

    Zerpy said: My netdata agent consumes 16.4 times more memory than nginx does.

    my pidstat output with netdata, with Linux kernel same-page merging (KSM) enabled = ~75% reduction in memory usage

    Yeah - it was more the point that nginx isn't exactly a "large service" as @SplitIce put it.

    @SplitIce said:
    As a result each server needs nginx, needs central authentication, needs access to the VPN network running LDAP, etc. BTW, these are all things that themselves need monitoring.

    No, each server doesn't need nginx to provide authentication.

    Central server with nginx, proxy_pass requests - job done.

    @SplitIce said:
    Anyway this is my 2c from someone who does manage a few hundred servers, all of which are monitored. Centralized monitoring is far better.

    Doesn't matter if you've managed hundreds or thousands of servers (which I also have, but anyway) - sure, central monitoring makes sense - you can get the same with Netdata by actually sending the data to central storage.
    Automation makes it easy to deploy - and anyway, central monitoring systems need an agent sending data as well, or you'd have to query using SNMP - so you'd still have security concerns regardless.
    Netdata exposes a tiny webserver; you can firewall it off and only allow certain systems to access it.

    @SplitIce said:
    But to be adopted outside of hobby projects it needs to be easily securable.

    It's super easy:

    • Have a default block policy on your firewalls
    • Only allow IPs that are trusted (such as a central netdata server or netdata proxy)
    • Do authentication if you need to (using nginx or similar, which can run on a single server)
    • Keep stuff up to date, like you should with any software

    @SplitIce said:
    P.S. Please don't perpetuate the practice of using service passwords on something that is distributed to multiple servers. Unless of course you enjoy changing them all when someone leaves.

    It's automated.

  • SplitIce Member, Host Rep

    @Zerpy That's a lot of work by my definition. And, from past experience, management too.

    I've said my piece and you are welcome to your opinion. This direction of conversation is detracting from the point of this thread. So alas adieu.

  • MikePT Veteran

    @Zerpy said:
    Time to deploy Zabbix I guess :'D
    I've, however, asked for a quote for the "infrastructure" product - let's see what they come up with.

    I do run nixstats on a few boxes, however I've experienced a few memory leaks and moments where the agent suddenly stops reporting but keeps running (goes defunct) - like I have one right now:

    > It's been 4 days, 6 hours and 6 minutes since we've received data.
    > 

    And it seems to report incorrectly on a bunch of things - such as saying I have 35 terabytes of NVMe storage on a server.

    Have you reported this issue? Is the agent updated?

    Feel free to reach out to @vfuse and he will investigate this.
