Like many others here, I have a home lab with various containers like FreshRSS, Ampache…

I also have a Netdata dashboard to monitor CPU, temperatures, disk usage… that sometimes sends me alerts without me having configured anything, e.g. too much CPU used for more than 15 minutes.

However, it doesn’t seem to cover log monitoring, or at least not in the way I want. I have a job and can’t dedicate thousands of hours to building something myself, nor to deeply configuring some software stack.

All I want is for my services to be monitored log-wise: a single Docker container where you could mount multiple log directories, with a simple interface that filters through the logs based on their type/name (e.g. nginx logs aren’t treated the same way as kernel or auth logs, but without me having to configure more than the source type) and tells me if something is weird or outright bad (e.g. someone logged in).
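For illustration, the mount pattern described above might look something like this. This is a hypothetical sketch: `some-log-viewer` is a placeholder image name, not a real project, and the host paths are just examples.

```shell
# Hypothetical sketch: mount several host log directories read-only
# into one log-viewer container. "some-log-viewer" is a placeholder.
docker run -d \
  --name log-viewer \
  -p 8080:8080 \
  -v /var/log/nginx:/logs/nginx:ro \
  -v /var/log/auth.log:/logs/auth.log:ro \
  -v /srv/freshrss/logs:/logs/freshrss:ro \
  some-log-viewer
```

The `:ro` suffix keeps the mounts read-only, so the viewer can never touch the original logs.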

Does this exist without installing Grafana + Prometheus + this and that, doing a shit ton of configuration, and crying?

  • Spooky Mulder@twun.io · edited · 1 day ago

    In years past, I’ve used Elasticsearch and Kibana. The learning curve is steep and the system resource requirements warrant a dedicated machine, but once you get it dialed, it’s really effective as a centralized logging server.
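    To sketch what that setup looks like in practice: logs are usually shipped into Elasticsearch by a collector such as Filebeat. A minimal config (the paths and host below are example values, not a recommendation) looks roughly like:

    ```yaml
    # Minimal Filebeat sketch — paths and host are example values.
    filebeat.inputs:
      - type: filestream
        paths:
          - /var/log/nginx/*.log
          - /var/log/auth.log

    output.elasticsearch:
      hosts: ["http://localhost:9200"]
    ```

    Kibana then sits on top of the Elasticsearch indices for searching and dashboards.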

    Prometheus and Grafana are for time-series data (metrics), not logs. If you’re already getting that from netdata, don’t bother with these, as they’d be redundant with what you have.

    syslog is about as idiomatic as it gets for log management on Linux, but I don’t have enough experience using it effectively to give any pointers there. If you don’t really know what you want yet, and just want to collect logs from all the things and see them in one place so you can begin to make sense of them and refine from there, then syslog seems like an excellent place to start.
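    To give a feel for the kind of per-source filtering the OP describes, here’s a minimal, hypothetical Python sketch that routes log lines to different checks based on the source name. The patterns and severity labels are assumptions for illustration, not a real tool.

    ```python
    import re
    from typing import Optional

    # Hypothetical per-source rules: map a log source name to regex
    # patterns that should be flagged. Illustrative, not exhaustive.
    RULES = {
        "auth": [
            (re.compile(r"Accepted (password|publickey) for (\S+)"), "notice: login"),
            (re.compile(r"Failed password for (invalid user )?(\S+)"), "warning: failed login"),
        ],
        "nginx": [
            # Matches a 5xx status code in a combined-format access log line.
            (re.compile(r'" (5\d\d) '), "warning: server error"),
        ],
        "kernel": [
            (re.compile(r"Out of memory"), "critical: OOM killer"),
        ],
    }

    def classify(source: str, line: str) -> Optional[str]:
        """Return a severity label if the line matches a rule for this source."""
        for pattern, label in RULES.get(source, []):
            if pattern.search(line):
                return label
        return None
    ```

    For example, `classify("auth", "Accepted password for alice from 10.0.0.2")` returns `"notice: login"`, while lines matching no rule return `None` and can be ignored.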