
Setting up Monitoring and Logging


7.1 Setting Up Monitoring with Prometheus

In this step, we'll check out how to set up monitoring and logging for our multi-container app. This will help us track the state and performance of services, as well as collect and analyze logs for troubleshooting.

Goal: Collect metrics from services and visualize them to monitor the performance and state of the application.

Installing and configuring Prometheus

In this example, Prometheus runs using Docker. This ensures cross-platform compatibility and allows us to deploy monitoring on any operating system that supports Docker.

1. Create a directory for Prometheus configuration:

Terminal

mkdir prometheus
cd prometheus

2. Create the configuration file prometheus.yml:

Yaml

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'flask-app'
    static_configs:
      - targets: ['backend:5000']
  - job_name: 'node-app'
    static_configs:
      - targets: ['frontend:3000']
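
Note: Prometheus can only scrape what the targets actually expose. The config above assumes the Flask backend and the Node frontend each serve a /metrics endpoint (for example, via libraries such as prometheus-flask-exporter and prom-client; exactly how your services are instrumented may differ). Once the stack is up, a quick sanity check might look like this:

Terminal

# assumes both services expose /metrics on their main ports; adjust if yours differ
curl http://localhost:5000/metrics
curl http://localhost:3000/metrics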

3. Create a Dockerfile for Prometheus:

Dockerfile

FROM prom/prometheus
COPY prometheus.yml /etc/prometheus/prometheus.yml

4. Add Prometheus to compose.yaml:

Yaml

version: '3'

services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    networks:
      - task-network

  backend:
    build: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - database
    networks:
      - task-network
    environment:
      - DATABASE_URL=postgresql://taskuser:taskpassword@database:5432/taskdb

  database:
    image: postgres:13
    environment:
      - POSTGRES_DB=taskdb
      - POSTGRES_USER=taskuser
      - POSTGRES_PASSWORD=taskpassword
    networks:
      - task-network
    volumes:
      - db-data:/var/lib/postgresql/data

  nginx:
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - frontend
      - backend
    networks:
      - task-network

  prometheus:
    build: ./prometheus
    ports:
      - "9090:9090"
    networks:
      - task-network

networks:
  task-network:
    driver: bridge

volumes:
  db-data:
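
At this point you can start the monitoring part of the stack and check that Prometheus has picked up its scrape targets:

Terminal

# build and start Prometheus together with the services it scrapes
docker compose up -d --build backend frontend prometheus
# then open http://localhost:9090/targets; both jobs should be listed
# (they'll show as UP once the services actually expose /metrics)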

7.2 Installing and Setting up Grafana

1. Create a directory for Grafana configuration:

Terminal

mkdir grafana
cd grafana

2. Create a Dockerfile for Grafana:

Dockerfile

FROM grafana/grafana

3. Add Grafana to compose.yaml:

Yaml

  grafana:
    build: ./grafana
    ports:
      - "3033:3000"
    depends_on:
      - prometheus
    networks:
      - task-network
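
Note that Grafana listens on port 3000 inside the container; it's published on host port 3033 because port 3000 is already taken by the frontend. After you start the containers (next step), a quick way to check that Grafana is responding is its health endpoint:

Terminal

curl -s http://localhost:3033/api/health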

4. Configure Grafana to work with Prometheus:

  1. Run containers using docker compose up.
  2. Go to the Grafana web interface at http://localhost:3033.
  3. Log in using the default credentials (admin/admin).
  4. Navigate to "Configuration" -> "Data Sources" and add a new data source.
  5. Select "Prometheus" and specify URL http://prometheus:9090.
  6. Save the settings.
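
After saving, you can optionally confirm that the data source was registered via Grafana's HTTP API (this assumes the default admin/admin credentials are still in place):

Terminal

curl -s -u admin:admin http://localhost:3033/api/datasources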

Creating Dashboards in Grafana

  1. Create a new dashboard:
    • Go to "Create" -> "Dashboard".
    • Click "Add new panel".
    • In the "Query" section, select Prometheus as the data source.
    • Enter a PromQL query to fetch metrics. For example, to monitor CPU usage:

    Promql

    rate(container_cpu_usage_seconds_total[1m])

  2. Configure the panel:
    • Choose the graph type (e.g., "Graph").
    • Adjust the panel settings (title, legend, axes, etc.).
    • Click "Apply" to save the panel.
  3. Create additional panels:
    • Repeat the steps to create panels for other metrics, such as memory, network, and disk usage.
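
If a panel stays empty, check the raw data first. Keep in mind that container_* metrics like the one in the example are exported by cAdvisor, which isn't part of this Compose file; the built-in up metric, on the other hand, exists for every scrape target, so it's a good smoke test. You can query Prometheus directly over its HTTP API:

Terminal

# 1 means the last scrape succeeded, 0 means the target is down
curl -s 'http://localhost:9090/api/v1/query?query=up'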

7.3 Setting Up ELK Stack

In this step, we'll set up logging using the ELK Stack (Elasticsearch, Logstash, Kibana).

Goal: collect, store, and analyze logs from our services.

Installing and configuring Elasticsearch

Adding Elasticsearch to compose.yaml:

Yaml

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.15.0
    environment:
      - discovery.type=single-node
      # security is on by default in Elasticsearch 8.x; disable it for this
      # local demo so Logstash and Kibana can connect over plain HTTP
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
    networks:
      - task-network
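
After starting this service, Elasticsearch should answer on port 9200 (the first start can take a while; with security disabled as above, no credentials are needed):

Terminal

curl http://localhost:9200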

Installing and configuring Logstash

Step 1. Create a directory for Logstash configuration:

Terminal

mkdir logstash
cd logstash

Step 2. Create the logstash.conf configuration file:

Text

input {
  beats {
    port => 5044
  }
}
filter {
  json {
    source => "message"
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "docker-logs-%{+YYYY.MM.dd}"
  }
}

Step 3. Create a Dockerfile for Logstash:

Dockerfile

FROM docker.elastic.co/logstash/logstash:8.15.0
COPY logstash.conf /usr/share/logstash/pipeline/logstash.conf

Step 4. Add Logstash to compose.yaml:

Yaml

  logstash:
    build: ./logstash
    ports:
      - "5044:5044"
    depends_on:
      - elasticsearch
    networks:
      - task-network
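
Logstash can take some time to start. Once the container is up, check its logs and make sure the pipeline started and port 5044 (the Beats input) is listening:

Terminal

docker compose up -d --build elasticsearch logstash
docker compose logs -f logstash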

Installing and configuring Kibana

Add Kibana to compose.yaml:

Yaml

  kibana:
    image: docker.elastic.co/kibana/kibana:8.15.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - task-network
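
Kibana usually needs a minute to connect to Elasticsearch. You can watch its readiness via the status API, or simply open http://localhost:5601 in a browser:

Terminal

curl -s http://localhost:5601/api/status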

Installing Filebeat to gather logs

Step 1. Create a directory for Filebeat configuration:

Terminal

mkdir filebeat
cd filebeat

Step 2. Create the filebeat.yml configuration file:

Yaml

filebeat.inputs:
# the "docker" input was removed in Filebeat 8.x; read the JSON log files
# written by the Docker json-file logging driver instead
- type: container
  paths:
    - '/var/lib/docker/containers/*/*.log'
processors:
  - add_docker_metadata: ~
output.logstash:
  hosts: ["logstash:5044"]

Step 3. Create a Dockerfile for Filebeat:

Dockerfile

FROM docker.elastic.co/beats/filebeat:8.15.0
COPY filebeat.yml /usr/share/filebeat/filebeat.yml

Step 4. Add Filebeat to compose.yaml:

Yaml

  filebeat:
    build: ./filebeat
    # run as root and mount the Docker log directory so Filebeat can read
    # container logs; the socket is only needed for add_docker_metadata
    user: root
    volumes:
      - "/var/lib/docker/containers:/var/lib/docker/containers:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    depends_on:
      - logstash
    networks:
      - task-network
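
When the whole chain (Filebeat -> Logstash -> Elasticsearch) is working, new indices matching the docker-logs-* pattern from logstash.conf should appear in Elasticsearch. You can verify this from the host, and then add the same pattern as a data view in Kibana ("Stack Management" -> "Data Views"):

Terminal

curl -s 'http://localhost:9200/_cat/indices?v' | grep docker-logs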

7.4 Full File

Full compose.yaml file

Yaml

version: '3'

services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    networks:
      - task-network
        
  backend:
    build: ./backend
    ports:
      - "5000:5000"
    depends_on:
      - database
    networks:
      - task-network
    environment:
      - DATABASE_URL=postgresql://taskuser:taskpassword@database:5432/taskdb
        
  database:
    image: postgres:13
    environment:
      - POSTGRES_DB=taskdb
      - POSTGRES_USER=taskuser
      - POSTGRES_PASSWORD=taskpassword
    networks:
      - task-network
    volumes:
      - db-data:/var/lib/postgresql/data
        
  nginx:
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - frontend
      - backend
    networks:
      - task-network
        
  prometheus:
    build: ./prometheus
    ports:
      - "9090:9090"
    networks:
      - task-network
        
  grafana:
    build: ./grafana
    ports:
      - "3033:3000"
    depends_on:
      - prometheus
    networks:
      - task-network
        
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.15.0
    environment:
      - discovery.type=single-node
      # security is on by default in Elasticsearch 8.x; disable it for this
      # local demo so Logstash and Kibana can connect over plain HTTP
      - xpack.security.enabled=false
    ports:
      - "9200:9200"
    networks:
      - task-network
        
  logstash:
    build: ./logstash
    ports:
      - "5044:5044"
    depends_on:
      - elasticsearch
    networks:
      - task-network
        
  kibana:
    image: docker.elastic.co/kibana/kibana:8.15.0
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
    networks:
      - task-network
        
  filebeat:
    build: ./filebeat
    # run as root and mount the Docker log directory so Filebeat can read
    # container logs; the socket is only needed for add_docker_metadata
    user: root
    volumes:
      - "/var/lib/docker/containers:/var/lib/docker/containers:ro"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
    depends_on:
      - logstash
    networks:
      - task-network
        
networks:
  task-network:
    driver: bridge
        
volumes:
  db-data:
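
To bring up the entire stack and check the web interfaces (assuming all the directories and Dockerfiles from the previous steps are in place):

Terminal

docker compose up -d --build
# Prometheus: http://localhost:9090
# Grafana:    http://localhost:3033
# Kibana:     http://localhost:5601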