Like many others here, I have a home lab with various containers such as FreshRSS, Ampache…
I also have a Netdata dashboard to monitor CPU, temperatures, disk usage… It sometimes sends me alerts without my having configured anything, e.g. too much CPU used for more than 15 minutes.
However, it doesn’t seem to cover log monitoring, or at least not in the way I want. I have a job and can’t dedicate thousands of hours to building something myself, nor to deeply configuring some software stack.
All I want is for my services to be monitored log-wise: a single Docker container where you could mount multiple log directories, with a simple interface that filters through the logs based on their type/name (e.g. nginx logs aren’t treated the same way as kernel or auth logs, but without me having to configure more than the source type) and tells me if something is weird or outright bad (e.g. someone logged in).
Does it exist without installing Grafana + Prometheus + this and that, doing a shit ton of configuration, and crying?
One nice thing about Grafana, though, is that you get logs, metrics and monitoring in the same package. You can use Loki as the actual log store, and it’s easy to integrate with the likes of journald and Docker.
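To give an idea of the effort involved, here is a minimal sketch of a Promtail config that ships both mounted log files and the systemd journal to a Loki instance. The Loki URL, file paths and label names are assumptions; adjust them to your setup.

```yaml
# Minimal Promtail config pushing logs to Loki.
# The loki:3100 URL and the log paths below are placeholders.
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  # Plain log files mounted into the container
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          __path__: /var/log/*.log
  # The systemd journal, if it is mounted into the container
  - job_name: journal
    journal:
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: unit
```

That is roughly the whole configuration surface for basic ingestion; the labels you attach here are what you later filter on in LogQL.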
Yes, you will have to spend some time learning LogQL, but it can be very handy when you don’t have metrics (or don’t want to implement them) and still want some useful data out of your logs.
After all, text logs are just very raw, unstructured events in time. You may think that you only look at them occasionally, when things break, and you would be correct. But if you want to alert on them, that often means going from raw logs to structured data. Loki’s LogQL does exactly that, and it’s still ten times easier to manage than the Elastic stack.
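As an example of that raw-to-structured step, take the “someone logged in” case from the question. Assuming auth logs are ingested with the labels below (an assumption about your scrape config, not a given), a LogQL query like this turns matching log lines into a countable series you can alert on:

```logql
# Count accepted SSH logins in the auth log over the last 5 minutes.
# The {job="varlogs", filename="..."} selector is an assumed label set.
count_over_time(
  {job="varlogs", filename="/var/log/auth.log"}
    |= "Accepted password"
  [5m]
)
```

Hook that up to a Loki alerting rule with a `> 0` threshold and you get a notification on every login, with no metrics instrumentation anywhere.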
VictoriaMetrics has its own logging product now too, and while I haven’t tried it yet, VictoriaMetrics for metrics is probably the best thing to happen since Prometheus, especially for resource-constrained home labs.