Prometheus built-in basic authentication and TLS

GitHub: https://github.com/prometheus/prometheus/pull/8316

From v2.24.0, basic authentication and TLS support are built into Prometheus.

webconfig.yml

tls_server_config:
  cert_file: /etc/prometheus/prometheus.cert
  key_file: /etc/prometheus/prometheus.key

basic_auth_users:
  admin: $2y$12$/B1Z0Ohq/g9z/BlD30mi/uRDNdBRs/VrtAZrJDtY73Ttjc8RYHJ2O
  • Start Prometheus with the web config file:
./prometheus --web.config.file=webconfig.yml
  • Prometheus will then be accessible over HTTPS with basic auth (admin/admin).
  • The password must be bcrypt-hashed – https://bcrypt-generator.com, or see the htpasswd sketch below.
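
A bcrypt hash can also be generated locally and the setup verified with curl; a quick sketch, assuming the user admin, the default port 9090 and a self-signed certificate (htpasswd comes from httpd-tools / apache2-utils):

# generate a bcrypt hash for user "admin" (prompts for the password)
htpasswd -nBC 12 admin

# verify HTTPS + basic auth (-k skips verification of the self-signed cert)
curl -k -u admin:admin https://localhost:9090/metrics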

More : https://github.com/roidelapluie/prometheus/blob/5b4f46a348ae3bc143629f25f0f997f39f30c2c2/docs/configuration/https.md

Prometheus blackbox exporter in Kubernetes

1. Deploy the blackbox exporter to Kubernetes with the manifests below (an apply command follows the YAML)

prometheus-blackbox.yml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-blackbox-exporter
  labels:
    app: prometheus-blackbox-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-blackbox-exporter
  template:
    metadata:
      labels:
        app: prometheus-blackbox-exporter
    spec:
      restartPolicy: Always
      containers:
        - name: blackbox-exporter
          image: "prom/blackbox-exporter:v0.15.1"
          imagePullPolicy: IfNotPresent
          args:
            - "--config.file=/config/blackbox.yaml"
          ports:
            - containerPort: 9115
          volumeMounts:
            - mountPath: /config
              name: prometheus-config
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-blackbox-exporter

---
kind: Service
apiVersion: v1
metadata:
  name: prometheus-blackbox-exporter
  labels:
    app: prometheus-blackbox-exporter
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 9115
      protocol: TCP
  selector:
    app: prometheus-blackbox-exporter

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-blackbox-exporter
  labels:
    app: prometheus-blackbox-exporter
data:
  blackbox.yaml: |
    modules:
      http_2xx:
        http:
          no_follow_redirects: false
          preferred_ip_protocol: ip4
          valid_http_versions:
          - HTTP/1.1
          - HTTP/2
          valid_status_codes: []
        prober: http
        timeout: 5s
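
To deploy the manifests (a quick sketch; the filename matches the one above):

kubectl apply -f prometheus-blackbox.yml
kubectl get pods,svc -l app=prometheus-blackbox-exporter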

2. In Prometheus, add a blackbox scrape job to the prometheus.yml file, as sketched below
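
A minimal sketch of the scrape job, assuming Prometheus runs in the same cluster and reaches the exporter by its service name; the probe target URL is a placeholder, and the job name matches the query in step 3:

scrape_configs:
  - job_name: 'web1'
    metrics_path: /probe
    params:
      module: [http_2xx]              # module defined in blackbox.yaml above
    static_configs:
      - targets:
          - https://example.com       # placeholder URL to probe
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: prometheus-blackbox-exporter:9115   # blackbox exporter service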

3. Prometheus query

probe_http_status_code{job="web1"}

Set up an Elasticsearch cluster with 3 nodes

1. Update /etc/hosts on all 3 nodes

192.168.0.50 elk1.local
192.168.0.51 elk2.local
192.168.0.52 elk3.local

Note: At least 2 of the 3 nodes must be up to keep the cluster healthy (master election requires a majority).

2. Install Elasticsearch on all 3 nodes

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

echo '[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md' > /etc/yum.repos.d/elasticsearch.repo

### Install Elasticsearch
yum -y install elasticsearch

### Enable Elasticsearch at boot
systemctl enable elasticsearch

3. Edit /etc/elasticsearch/elasticsearch.yml on each node and set the cluster name (e.g. elk-cluster)

cluster.name: elk-cluster
node.name: elk1.local
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.0.50
discovery.seed_hosts: ["elk1.local", "elk2.local", "elk3.local"]
cluster.initial_master_nodes: ["elk1.local", "elk2.local", "elk3.local"]

Change only node.name and network.host on the other 2 Elasticsearch nodes; see the example below.
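
For instance, on elk2.local only these two lines change (addresses taken from the /etc/hosts entries in step 1):

node.name: elk2.local
network.host: 192.168.0.51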

4. Restart the Elasticsearch service on all 3 nodes

systemctl restart elasticsearch

After the restart, one master node will be elected.

5. Check which node is the master in the Elasticsearch cluster

curl -X GET "192.168.0.50:9200/_cat/master?v&pretty"

More : https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-master.html

More information about setting up cluster : https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html

Prometheus Pushgateway to monitor running processes (docker ps)

1. Deploy Pushgateway to Kubernetes

pushgateway.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pushgateway-deployment
  labels:
    app: pushgateway
    env: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pushgateway
      env: prod
  template:
    metadata:
      labels:
        app: pushgateway
        env: prod
    spec:
      containers:
      - name: pushgateway-container
        image: prom/pushgateway
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: "128Mi"
            cpu: "200m"
          limits:
            memory: "256Mi"
            cpu: "200m"
        ports:
        - containerPort: 9091
---
kind: Service
apiVersion: v1
metadata:
  name: pushgateway-service
  labels:
    app: pushgateway
    env: prod
spec:
  selector:
    app: pushgateway
    env: prod
  ports:
  - name: pushgateway
    protocol: TCP
    port: 9091
    targetPort: 9091
    nodePort: 30191
  type: NodePort
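
To deploy it (a quick sketch; the filename matches the one above):

kubectl apply -f pushgateway.yml
kubectl get pods,svc -l app=pushgateway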

2. Add a Pushgateway scrape job to /etc/prometheus/prometheus.yml, as sketched below
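
A minimal sketch of the scrape job, assuming Prometheus reaches the Pushgateway through the node IP and NodePort used in the examples below:

scrape_configs:
  - job_name: 'pushgateway'
    honor_labels: true            # keep the job/instance labels pushed by clients
    static_configs:
      - targets: ['192.168.0.183:30191']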

3. Push the running Docker container and image counts to Pushgateway with the bash script below, and add it to crontab

job="docker_status"

running_docker=$(docker ps | wc -l)
docker_images=$(docker images | wc -l)

cat <<EOF | curl --data-binary @- http://192.168.0.183:30191/metrics/job/$job/instance/$(hostname)
# TYPE running_docker counter
running_docker $running_docker
docker_images $docker_images
EOF

4. The pushed data can then be visualized in Prometheus and on the Pushgateway web UI.
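
For example, the metrics pushed by the script above can be queried in Prometheus (assuming the Pushgateway scrape job from step 2, with honor_labels enabled):

running_docker{job="docker_status"}
docker_images{job="docker_status"}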

Python code:

import requests

job_name = 'cpuload'
instance_name = 'web1'
payload_key = 'cpu'
payload_value = '10'

# POST a single sample ("cpu 10") to the Pushgateway group job/cpuload, instance/web1
response = requests.post(
    'http://192.168.0.183:30191/metrics/job/{j}/instance/{i}'.format(j=job_name, i=instance_name),
    data="{k} {v}\n".format(k=payload_key, v=payload_value),
)
# print(response.status_code, response.text)

Pushgateway PowerShell example:

Invoke-WebRequest "http://192.168.0.183:30191/metrics/job/jenkins/instance/instace_name -Body "process 1`n" -Method Post
$process1 = (tasklist /v | Select-String -AllMatches 'Jenkins' | findstr 'java' | %{ $_.Split('')[0]; }) | Out-String
if($process1 -like "java.exe*"){
   write-host("This is if statement")
   Invoke-WebRequest "http://192.168.0.183:30191/metrics/job/jenkins/instance/instace_name" -Body "jenkins_process 1`n" -Method Post
}else {
   write-host("This is else statement")
   Invoke-WebRequest "http://192.168.0.183:30191/metrics/job/jenkins/instance/instace_name" -Body "jenkins_process 0`n" -Method Post
}

Prometheus and Grafana installation and configuration

1. Download the latest stable version of Prometheus from https://prometheus.io/download/
wget https://github.com/prometheus/prometheus/releases/download/v2.16.0/prometheus-2.16.0.linux-amd64.tar.gz

tar -xzf prometheus-2.16.0.linux-amd64.tar.gz
mv prometheus-2.16.0.linux-amd64 /etc/prometheus

# the binaries now live under /etc/prometheus, so move them from there
mv /etc/prometheus/prometheus /usr/local/bin/
mv /etc/prometheus/promtool /usr/local/bin/
mv /etc/prometheus/tsdb /usr/local/bin/

# consoles/ and console_libraries/ already sit under /etc/prometheus after the move above
mkdir -p /var/lib/prometheus

2. Create Prometheus service

echo '[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
Restart=on-failure

ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --storage.tsdb.retention.time=30d

[Install]
WantedBy=multi-user.target' > /etc/systemd/system/prometheus.service

3. Configure /etc/prometheus/prometheus.yml

global:
  scrape_interval:     30s
  evaluation_interval: 30s

scrape_configs:
  - job_name: 'server1'
    static_configs:
    - targets: ['localhost:9100']

  - job_name: 'server2'
    static_configs:
    - targets: ['192.168.0.150:9100']
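
The configuration can be validated with promtool before starting the service:

promtool check config /etc/prometheus/prometheus.yml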

4. Download the latest stable node_exporter from https://prometheus.io/download/

wget https://github.com/prometheus/node_exporter/releases/download/v1.0.0-rc.0/node_exporter-1.0.0-rc.0.linux-amd64.tar.gz

tar -xzf node_exporter-1.0.0-rc.0.linux-amd64.tar.gz

mv node_exporter-1.0.0-rc.0.linux-amd64/node_exporter /usr/local/bin/

5. Create service for node_exporter

echo '[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
Restart=on-failure
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
' >/etc/systemd/system/node_exporter.service

6. Enable and start prometheus and node_exporter service


systemctl daemon-reload
systemctl enable node_exporter
systemctl enable prometheus
systemctl start prometheus
systemctl start node_exporter
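
A quick check that both services came up (standard systemd and HTTP checks; node_exporter listens on port 9100):

systemctl status prometheus node_exporter
curl -s http://localhost:9090/-/healthy
curl -s http://localhost:9100/metrics | head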

7. Prometheus will be available at http://server_ip_address:9090

8. Install Grafana

echo '[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt' >  /etc/yum.repos.d/grafana.repo

yum install grafana

systemctl start grafana-server
systemctl enable grafana-server

9. Grafana will be accessible at http://server_ip_address:3000

10. CPU, RAM and disk usage queries for Prometheus and Grafana

#CPU:

100 - (avg by (job) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

#RAM:

100 * avg by (job) (1 - ((avg_over_time(node_memory_MemFree_bytes[10m]) + avg_over_time(node_memory_Cached_bytes[10m]) + avg_over_time(node_memory_Buffers_bytes[10m])) / avg_over_time(node_memory_MemTotal_bytes[10m])))

#DISK usage:

100 - avg by (job) ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} * 100) / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"})

#DISK read IO:
avg by (job) (irate(node_disk_read_bytes_total{device="sda"}[5m]) / 1024 / 1024)

#DISK write IO:
avg by (job) (irate(node_disk_written_bytes_total{device="sda"}[1m]) / 1024 / 1024)