Using Grafana Loki as a Centralized Logging Solution
Today, I’ll explain how to use Grafana Loki as a centralized logging solution for all your Docker containers. As your infrastructure grows and more containers are added, troubleshooting via logs becomes increasingly important. You might need to diagnose issues like a database failure or an SSL certificate that didn’t renew. Personally, I used to check logs with the docker logs command, but this approach isn’t efficient. Imagine trying to filter logs for a specific time window — like 12:00-12:05 UTC on May 5 — or investigating issues that span multiple containers, such as when a database failure causes an error on an Nginx container. Instead of manually piecing logs together from different machines, it’s more efficient to store all logs centrally, enabling simultaneous searches across all containers. With Loki, you can set up alerts, filter specific log entries using regex, and much more.
In this post, I’ll walk you through how I set up a centralized logging solution for all my Docker containers using Grafana Loki.
Step 1: Install Loki
I recommend installing Loki with Docker Compose. Here is Grafana’s default docker-compose.yaml file for Loki:
Grafana Loki Docker Compose Documentation
And here is the code itself:
Grafana Loki Docker Compose YAML
Since we’ll use the Loki Docker logging driver instead of Promtail (Loki’s log-collection agent), we can comment that service out. I’ll also add volume configurations so Loki and Grafana data survive container restarts:
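For reference, here’s a minimal sketch of what the adjusted file can look like. It’s based on the compose example from Grafana’s documentation; the image tags, volume names, and mount paths are my own assumptions, so adapt them to your setup (in particular, check where your Loki config actually stores its data):

```yaml
version: "3"

networks:
  loki:

volumes:
  loki-data:
  grafana-data:

services:
  loki:
    image: grafana/loki:2.9.2            # pin whichever Loki version you use
    command: -config.file=/etc/loki/local-config.yaml
    ports:
      - "3100:3100"
    volumes:
      - loki-data:/loki                  # persist Loki's index and chunks (path per the default config)
    networks:
      - loki

  # Promtail isn't needed here, since the Docker driver pushes logs directly:
  # promtail:
  #   image: grafana/promtail:2.9.2
  #   volumes:
  #     - /var/log:/var/log
  #   command: -config.file=/etc/promtail/config.yml
  #   networks:
  #     - loki

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana    # persist dashboards and data sources
    networks:
      - loki
```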
Step 2: Configure Containers to Send Logs to Loki
Once the Loki container is running, configure your containers to push logs to Loki. First, install Grafana’s Loki Docker driver plugin and restart the Docker engine:
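The driver ships as a Docker plugin. Installing it and restarting the engine looks like this (the install command comes from the Loki Docker driver documentation; the restart assumes a systemd-based host):

```bash
# Install the Loki logging driver plugin under the short alias "loki"
docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions

# Restart the Docker engine so it picks up the new logging driver
sudo systemctl restart docker
```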
Verify that the plugin is installed:
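Docker’s plugin listing is enough for this check:

```bash
# Lists installed plugins with their name, description, and enabled state
docker plugin ls
```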
You should see the newly installed loki plugin in the list, with ENABLED set to true.
Next, configure each container to send logs to Loki by adding these lines in your docker-compose file (replace loki-ip with the actual IP of your Loki server):
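A sketch for a single service might look like this; nginx is just a placeholder service name, and the loki-retries / loki-batch-size options are optional tuning knobs from the driver’s documented option list:

```yaml
services:
  nginx:
    image: nginx:latest
    logging:
      driver: loki
      options:
        # Replace loki-ip with the address of your Loki server
        loki-url: "http://loki-ip:3100/loki/api/v1/push"
        loki-retries: "5"
        loki-batch-size: "400"
```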
Alternatively, configure Docker to send logs from all containers by creating an /etc/docker/daemon.json file (again, replace loki-ip with the actual IP of your Loki server):
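Something along these lines (JSON doesn’t allow comments, so remember to swap in your real address; the batch-size option is optional):

```json
{
  "log-driver": "loki",
  "log-opts": {
    "loki-url": "http://loki-ip:3100/loki/api/v1/push",
    "loki-batch-size": "400"
  }
}
```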
After making these changes, recreate your containers to start logging to Loki. With Docker Compose, run the following:
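For example (use docker-compose with a hyphen if you’re still on the older standalone binary):

```bash
# Recreate the containers so they start using the Loki logging driver
docker compose up -d --force-recreate
```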
If you chose the daemon.json approach, restart the Docker service:
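On a systemd-based host:

```bash
sudo systemctl restart docker
```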
Loki doesn’t pull logs; instead, Docker pushes logs to Loki, so make sure the Docker host can reach Loki on port 3100 (the default). You can test connectivity with telnet from the Docker host:
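For example (replace loki-ip as before):

```bash
# A "Connected to ..." message means the port is reachable;
# press Ctrl+] and type "quit" to exit telnet
telnet loki-ip 3100
```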
Step 3: View Logs in Grafana
Now you should be able to see logs in Grafana. Go to the “Explore” section, and make sure “Loki” is selected in the top-left dropdown menu:
Then, click on “Label Browser” and select the appropriate label. In this example, it’s compose_project => random-logger. Then click “Show logs”:
After clicking “Show logs,” your container logs should appear in the panel.
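Once logs are flowing, you can also type LogQL queries directly into the query field instead of using the Label Browser. As a small example, assuming the compose_project label from this walkthrough, this keeps only lines that match a case-insensitive “error” regex:

```logql
{compose_project="random-logger"} |~ "(?i)error"
```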
That’s it! At this point, you’ve successfully set up Grafana with Loki, and your Docker containers should be sending logs to it. For the next steps, you might consider setting up data retention policies in Loki and creating custom dashboards — I’ll leave that as a homework exercise.