New Relic server monitoring will be no more after November 14, 2017
Just got this email:
This is an important message related to the status of your account.
Dear Customer,
We are writing to let you know about some important changes that may affect some of the New Relic products and features you currently use.
Over the past 12 months, New Relic has expanded our Digital Intelligence Platform to infrastructure monitoring with the introduction of New Relic Infrastructure. We have also built out our alerting capabilities with the introduction of dynamic targeting, baselines, and many other updates.
In order to continue delivering platform innovations, New Relic Servers and our legacy alerting features will reach End-of-Life (EOL) on May 15th, 2018 if you have a paid subscription to one or more of our products. If you do not have any paid subscription to our products, the EOL date will be November 14, 2017.
After the EOL date, New Relic Servers and the legacy alerting features will no longer be available in the New Relic user interface and data processing will stop. Also, New Relic alerts will be available only to paying accounts and will be removed from non-paying accounts starting on the EOL date.
New Relic Infrastructure and New Relic Alerts represent much higher value to all our customers and are better suited to meet your needs both now and in the future. In addition, they represent the kind of innovation and forward thinking you have come to expect from New Relic. It is New Relic’s goal to make this process as seamless as possible for you, and to provide as much visibility as we can into what you can expect during this process.
We understand that technology changes can be difficult, so we have posted more information about this in our Online Technical Community. We have also prepared a Frequently Asked Questions document that will answer many of your questions.
If you have any further questions or concerns, simply reply to this email to get in touch with your dedicated account manager who will be able to help you plan for a smooth transition to New Relic Infrastructure and New Relic Alerts.
More information…
Frequently Asked Questions
Thank you,
The New Relic Team
From what I understand, their server monitoring is EOLing - i.e. shutting down. Sucks, guess I need to try out NIXStats again, hopefully it doesn't eat my server with the Python agent :-)
Comments
+1 for Zabbix for people looking for a self-hosted alternative.
The final push for building my own dashboard: Grafana + prometheus.io TSDB.
Time to deploy Zabbix I guess :'D
I've however asked for a quote for moving to the "Infrastructure" product; let's see what they come up with.
I do run nixstats on a few boxes, but I've experienced a few memory leaks and moments where the agent suddenly stops reporting but still runs (goes defunct) - like I have one right now:
And it seems to report incorrectly on a bunch of things - such as saying I have 35 terabytes of NVMe storage on a server.
I wish!
When stockholders start pushing.
What's one of the easiest self-hosted monitoring solutions to set up?
ping
Icinga/Zabbix or an ELK stack with Beats agents. Pick your definition of easy.
Pity. It was a decent monitoring tool. Not that hard to replace with something homebrew, though.
I use nixstats, nodequery and nginx amplify as well as newrelic. Guess I have to say bye to newrelic and hello to netdata https://github.com/firehol/netdata
Working on a netdata addon installer for my Centmin Mod LEMP stack: https://community.centminmod.com/threads/addons-netdata-sh-new-system-monitor-addon-part-2.12239/
netdata looks awesome. gonna try it soon on my servers
netdata is awesome to use, horrible to secure.
netdata looks interesting, so can you please tell us why? (Maybe it can be fixed somehow?)
The model it uses, an HTTP server per physical server, makes it hard to lock down.
No - it's not hard at all.
There's something called a "Firewall" - it's used to restrict access to certain things :-)
Have a default drop policy, and allow only certain IPs
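For example, a minimal iptables sketch of that idea (19999 is netdata's default port; the allowed IP here is just a placeholder):

    # allow a trusted monitoring host, drop everyone else on the netdata port
    iptables -A INPUT -p tcp --dport 19999 -s 203.0.113.10 -j ACCEPT
    iptables -A INPUT -p tcp --dport 19999 -j DROP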
@Zerpy That's easy if you just want per-IP access control, but it gets complex for anything more fine-grained.
Need centralised authentication for multiple staff? Not possible with an L4 firewall.
Need usernames and passwords in addition? Not possible with an L4 firewall.
Then you can do basic auth :-) Still rather easy.
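For instance, generating the credentials file for basic auth is a one-liner (the path and username below are just examples):

    # htpasswd comes from apache2-utils / httpd-tools; -c creates the file
    htpasswd -c /etc/nginx/netdata.htpasswd admin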
@Zerpy
Please explain where netdata supports basic authentication, let alone centralised authentication sources. Hint - it doesn't: https://github.com/firehol/netdata/issues/70
A lot of servers already run some kind of webserver such as nginx, apache, litespeed, caddy or similar - all of which have the concept of basic auth.
And even if you don't - you can still make netdata send its data to a central location (they support multiple backends), and do your complex authentication there if you need it.
There's so many ways of doing authentication in front of netdata.
Credits to @maldovia for even finding https://github.com/firehol/netdata/wiki/Running-behind-nginx
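Roughly along the lines of that wiki page, a minimal nginx sketch (hostname, cert paths and htpasswd file are placeholders):

    server {
        listen 443 ssl;
        server_name netdata.example.com;

        ssl_certificate     /etc/ssl/netdata.crt;
        ssl_certificate_key /etc/ssl/netdata.key;

        location / {
            # basic auth in front of the unauthenticated netdata dashboard
            auth_basic "netdata";
            auth_basic_user_file /etc/nginx/netdata.htpasswd;
            proxy_pass http://127.0.0.1:19999;
            proxy_set_header Host $host;
        }
    }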
Yup, you can deploy nginx to every server you want to monitor. I guess I don't want another two reasonably large services (3 if you include the central authentication backend) to support on every server regardless of server role. A management & deployment nightmare.
And unless they have made changes recently, you are mistaken about the backend location. For netdata, the dashboard is served from the monitored server.
https://github.com/firehol/netdata/wiki/netdata-backends
Now let me quote you word for word, just to, you know, make you feel awful:
I don't know when nginx became a large service, but anyway, let me tell you ways to monitor using netdata with central authentication:
Servers might already run some kind of webserver, which makes authentication easy since it's often built in.
If you don't run a webserver, you could have a central server running nginx, apache, haproxy or any other piece of software that supports basic authentication (and central authentication, if you can think outside the box).
With the second method you only have to allow connections to netdata from the proxy machine, since the users connecting to the proxy will probably connect over 443 or 80 (if people are cheap and don't wanna make the large service even larger).
There's no nightmare in managing or deploying authentication in front of netdata - if you have central authentication, it means you're running a decent-sized infrastructure that needs different kinds of ACLs:
you can set up your proxy server to be like:
$machine.$pop.$env.netdata.corp.com
then do an internal
proxy_pass http://$machine.$pop.$env.corp.com:19999
That means adding machines requires no configuration changes on your netdata proxy / authentication service.
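A rough nginx sketch of that pattern (the corp.com domains, internal resolver IP, cert paths and htpasswd file are all placeholders; note proxy_pass with variables needs a resolver):

    server {
        listen 443 ssl;
        # capture machine/pop/env from the requested hostname
        server_name ~^(?<machine>[^.]+)\.(?<pop>[^.]+)\.(?<env>[^.]+)\.netdata\.corp\.com$;

        ssl_certificate     /etc/ssl/netdata.crt;
        ssl_certificate_key /etc/ssl/netdata.key;

        auth_basic "netdata";
        auth_basic_user_file /etc/nginx/netdata.htpasswd;

        location / {
            resolver 10.0.0.2;  # internal DNS, required because proxy_pass uses variables
            proxy_pass http://$machine.$pop.$env.corp.com:19999;
        }
    }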
Also nginx is tiny anyway :-)
You can also use haproxy and just do a new ansible/puppet/chef run to update your haproxy config.
Really, if you think the above is a lot of overhead and large services, then you clearly have never worked with infrastructure :--)
Assuming you're just a small business, you might not have central authentication, but still basic auth - you can either still go for the proxy server, or make use of a minor binary such as caddy or nginx anyway; everything can be statically compiled, so you can just wget the nginx or caddy binary and run off your configuration.
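e.g. a complete Caddyfile for this, in the Caddy v1 syntax of the time (hostname and credentials are placeholders):

    netdata.example.com
    basicauth / admin hunter2
    proxy / 127.0.0.1:19999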
I'm currently running nginx as a small SSL proxy in front of r1soft, and it takes a whopping 2.1 megabytes of memory.
My netdata agent consumes 16.4 times more memory than nginx does.
You seem to have big concerns about something super simple - and I'm not sure you know how big nginx actually is (a 1.3 megabyte binary) - it's only like .5 megabyte bigger than netdata itself.
HUGE services lel.
My pidstat for netdata with Linux kernel same-page merging (KSM) enabled shows a ~75% reduction in memory usage
and
Netdata is very resource efficient and you can control its resource consumption. It will use:
I'm using netdata bound to 127.0.0.1 with an nginx reverse proxy + HTTP authentication + CSF firewall via my netdata.sh installer script, which auto-enables KSM support when detected.
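For anyone wanting to replicate that by hand, the two relevant pieces are roughly as follows (option name per the netdata docs of that era; KSM values from the netdata wiki):

    # /etc/netdata/netdata.conf - listen on loopback only, nginx sits in front
    [global]
        bind to = 127.0.0.1

    # enable kernel same-page merging (run as root)
    echo 1 > /sys/kernel/mm/ksm/run
    echo 1000 > /sys/kernel/mm/ksm/sleep_millisecs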
@Zerpy
That's quite the process.
Netdata backends store historical data, but the dashboard is still served by the netdata daemon on each server. It's the netdata model: each server provides the netdata service, you access it via $hostname:port and go.
As a result each server needs nginx, needs central authentication, needs access to the VPN network running LDAP, etc. BTW, these are all things that themselves need monitoring.
There are more definitions of big than just RAM or binary size. It's no classic standalone monitoring agent. It has dependencies, it has to be kept up to date (a large security target), and it has quite the configuration file (esp. for central authentication, where you likely have Lua and OpenResty).
Anyway, this is my 2c from someone who manages a few hundred servers, all of which are monitored. Centralized monitoring is far better.
Not that I am dissing netdata - the metrics it collects, the performance of the engine and the UI design are amazing. But to be adopted outside of hobby projects it needs to be easily securable. And one server (e.g. a monitoring server) is easier than hundreds.
P.S. Please don't perpetuate the practice of using service passwords on something that is distributed to multiple servers. Unless of course you enjoy changing them all when someone leaves.
very true indeed
Yeah - it was more the point that nginx isn't exactly a "large service" as @SplitIce put it.
No, each server doesn't need nginx to provide authentication.
Central server with nginx, proxy_pass requests - job done.
Doesn't matter if you manage hundreds or thousands of servers (which I also have, but anyway) - sure, central monitoring makes sense - and you can get the same with netdata by actually sending the data to central storage (sketched below).
Automation makes it easy to deploy - and anyway, central monitoring systems need an agent sending data as well, or you'd have to query using SNMP - so you'd still have security concerns regardless.
Netdata exposes a tiny webserver, you can firewall it off and only allow certain systems to access it.
It's super easy
It's automated.
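A sketch of that streaming bit in netdata.conf (assuming a graphite backend at a placeholder address; opentsdb and json backends are also supported):

    # /etc/netdata/netdata.conf - ship metrics to central storage
    [backend]
        enabled = yes
        type = graphite
        destination = 10.0.0.5:2003
        update every = 10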
@Zerpy That's a lot of work by my definition. And from past experience, a management burden too.
I've said my piece and you are welcome to your opinion. This direction of conversation is detracting from the point of this thread. So alas, adieu.
Have you reported this issue? Is the agent up to date?
Feel free to reach out to @vfuse and he will investigate this.