Jenkins build id and name with git commit message

  1. Install the build-name-setter plugin: https://plugins.jenkins.io/build-name-setter/
  2. Create a pipeline that checks out the repo, captures the latest commit message, and sets it as the build description:

pipeline
{
    agent any
    stages
    {
        stage('Git-checkout')
        {
            steps
            {
                sh '''
                rm -rf simple-storage-solution
                git clone https://github.com/initedit-project/simple-storage-solution.git
                cd simple-storage-solution
                git log -1 --pretty=format:"%s" > ../gitcommitmsg.txt
                '''
                script
                {
                    def gitcommitmsg = readFile(file: 'gitcommitmsg.txt')
                    buildDescription gitcommitmsg
                }
            }
        }
    }
}

Output: the build description under each build number now shows the latest git commit message.

NFS server in Linux

# Ubuntu/Debian
apt-get install nfs-kernel-server
systemctl start nfs-server
systemctl enable nfs-server

# CentOS
yum install nfs-utils
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

nano /etc/exports
### For specific ip
/var/html 192.168.0.150(rw,sync,no_root_squash)
### For all ip
/var/html *(rw,sync,no_root_squash)

exportfs -r   # re-export all entries in /etc/exports
exportfs -a   # export all directories
exportfs      # list current exports

mount -t nfs 192.168.0.150:/var/html  /var/html
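
To make the mount persist across reboots, an /etc/fstab entry can be added (a minimal sketch, assuming the same export and mount point as above):

echo '192.168.0.150:/var/html /var/html nfs defaults 0 0' >> /etc/fstab
mount -a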

### List the exports available on an NFS server
showmount -e 192.168.0.150

Docker Swarm cluster configuration

swarm-master-01 = 192.168.0.150
swarm-node-01 = 192.168.0.151
swarm-node-02 = 192.168.0.152

On swarm-master-01:

yum install docker
systemctl disable firewalld
systemctl stop firewalld

docker swarm init --advertise-addr 192.168.0.150

# After init, docker prints a join command for the worker nodes, e.g.:

docker swarm join --token SWMTKN-1-3xrfrgwy67vm0dmel94fveuqvg9ngsv8qt5jysl31xfv16c0gq-55tzlxjtezu59l4mw4hxjo3h9 192.168.0.150:2377

On swarm-node-01 and swarm-node-02:

yum install docker
systemctl disable firewalld
systemctl stop firewalld

docker swarm join --token SWMTKN-1-3xrfrgwy67vm0dmel94fveuqvg9ngsv8qt5jysl31xfv16c0gq-55tzlxjtezu59l4mw4hxjo3h9 192.168.0.150:2377
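
Back on swarm-master-01, cluster membership can be verified:

docker node ls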

Install swarm dashboard

https://github.com/charypar/swarm-dashboard

# compose.yml
version: "3"

services:
  dashboard:
    image: charypar/swarm-dashboard
    volumes:
    - "/var/run/docker.sock:/var/run/docker.sock"
    ports:
    - 8080:8080
    environment:
      PORT: 8080
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager

# Deploy the swarm dashboard stack

docker stack deploy -c compose.yml svc

#Dashboard will be accessible on http://master_ip:8080

Deploy a service in the swarm cluster

docker service create -p 8881:80 --name httpd --replicas 2 httpd
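
A quick check that the service is running and reachable; the published port is available on every node through the routing mesh, so any node IP works:

docker service ls
docker service ps httpd
curl http://192.168.0.150:8881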

Prometheus and Grafana installation and configuration

  1. Download the latest stable version of Prometheus from https://prometheus.io/download/
wget https://github.com/prometheus/prometheus/releases/download/v2.16.0/prometheus-2.16.0.linux-amd64.tar.gz

tar -xzf prometheus-2.16.0.linux-amd64.tar.gz
cd prometheus-2.16.0.linux-amd64

mv prometheus promtool tsdb /usr/local/bin/

mkdir -p /etc/prometheus /var/lib/prometheus
mv consoles console_libraries prometheus.yml /etc/prometheus/

2. Create Prometheus service

echo '[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
Restart=on-failure

ExecStart=/usr/local/bin/prometheus \
  --config.file=/etc/prometheus/prometheus.yml \
  --storage.tsdb.path=/var/lib/prometheus \
  --web.console.templates=/etc/prometheus/consoles \
  --web.console.libraries=/etc/prometheus/console_libraries \
  --storage.tsdb.retention.time=30d

[Install]
WantedBy=multi-user.target' > /etc/systemd/system/prometheus.service

3. Configure /etc/prometheus/prometheus.yml

global:
  scrape_interval:     30s
  evaluation_interval: 30s

scrape_configs:
  - job_name: 'server1'
    static_configs:
    - targets: ['localhost:9100']

  - job_name: 'server2'
    static_configs:
    - targets: ['192.168.0.150:9100']
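
The configuration can be validated with promtool (installed in step 1) before starting the service:

promtool check config /etc/prometheus/prometheus.yml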

4. Download the latest stable node_exporter from https://prometheus.io/download/

wget https://github.com/prometheus/node_exporter/releases/download/v1.0.0-rc.0/node_exporter-1.0.0-rc.0.linux-amd64.tar.gz

tar -xzf node_exporter-1.0.0-rc.0.linux-amd64.tar.gz

mv node_exporter-1.0.0-rc.0.linux-amd64/node_exporter /usr/local/bin/

5. Create service for node_exporter

echo '[Unit]
Description=Node Exporter
Wants=network-online.target
After=network-online.target

[Service]
User=root
Group=root
Type=simple
Restart=on-failure
ExecStart=/usr/local/bin/node_exporter

[Install]
WantedBy=multi-user.target
' >/etc/systemd/system/node_exporter.service

6. Enable and start the prometheus and node_exporter services

systemctl daemon-reload
systemctl enable node_exporter
systemctl enable prometheus
systemctl start prometheus
systemctl start node_exporter

7. Prometheus will be available at http://server_ip_address:9090
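
A quick sanity check once both services are up (assumes node_exporter is running locally):

curl -s http://localhost:9100/metrics | head
curl -s http://localhost:9090/api/v1/targets | grep '"health"'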

8. Install Grafana

echo '[grafana]
name=grafana
baseurl=https://packages.grafana.com/oss/rpm
repo_gpgcheck=1
enabled=1
gpgcheck=1
gpgkey=https://packages.grafana.com/gpg.key
sslverify=1
sslcacert=/etc/pki/tls/certs/ca-bundle.crt' >  /etc/yum.repos.d/grafana.repo

yum install grafana

systemctl start grafana-server
systemctl enable grafana-server

9. Grafana will be accessible at http://server_ip_address:3000
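
The Prometheus data source can be added in the Grafana UI (Configuration > Data Sources), or as a sketch via Grafana's HTTP API, assuming the default admin:admin credentials:

curl -s -X POST -u admin:admin -H 'Content-Type: application/json' \
  -d '{"name":"Prometheus","type":"prometheus","url":"http://localhost:9090","access":"proxy"}' \
  http://localhost:3000/api/datasources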

10. CPU, RAM, and disk usage queries for Prometheus and Grafana

#CPU:

100 - (avg by (job) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

#RAM:

100 * avg by (job) (1 - ((avg_over_time(node_memory_MemFree_bytes[10m]) + avg_over_time(node_memory_Cached_bytes[10m]) + avg_over_time(node_memory_Buffers_bytes[10m])) / avg_over_time(node_memory_MemTotal_bytes[10m])))

#DISK usage:

100 - avg by (job) ((node_filesystem_avail_bytes{mountpoint="/",fstype!="rootfs"} * 100) / node_filesystem_size_bytes{mountpoint="/",fstype!="rootfs"})

#DISK read IO (MB/s):
avg by (job) (irate(node_disk_read_bytes_total{device="sda"}[5m]) / 1024 / 1024)

#DISK write IO (MB/s):
avg by (job) (irate(node_disk_written_bytes_total{device="sda"}[1m]) / 1024 / 1024)

Install Kubernetes with 2 nodes on CentOS 7

Prerequisites:
– Disable swap
– Disable SELinux
– Disable firewalld (optional if all required Kubernetes rules are added)

Servers IP:
kmaster01 = 192.168.0.10
knode1 = 192.168.0.11
knode2 = 192.168.0.12

#Disable swap
swapoff -a

#Comment out the swap entry in /etc/fstab if it is not already commented
sed -i '/\sswap\s/ s/^\([^#]\)/#\1/' /etc/fstab


#Disable SElinux
setenforce 0
sed -i  's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

#Disable firewalld
systemctl stop firewalld
systemctl disable firewalld

#iptables conf
echo 'net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1' > /etc/sysctl.d/k8.conf

sysctl --system
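
If the two net.bridge keys are rejected, the br_netfilter kernel module is probably not loaded; loading it (and persisting it across reboots) is a common extra step:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf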

Add Kubernetes Repository

echo '[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
' >  /etc/yum.repos.d/kubernetes.repo

On kmaster01

yum install -y kubelet kubeadm kubectl docker
systemctl enable kubelet
systemctl start kubelet
systemctl enable docker
systemctl start docker

hostnamectl set-hostname kmaster01

kubeadm init

When kubeadm init completes, run the commands below to configure kubectl, and save the kubeadm join command printed at the end of the init output:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Apply weave network:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

On knode1:

yum install -y kubelet kubeadm kubectl docker
systemctl enable kubelet
systemctl start kubelet
systemctl enable docker
systemctl start docker

hostnamectl set-hostname knode1

#Join command
kubeadm join 192.168.0.10:6443 --token lfh49o.f9na1435g8vs1fmt \
     --discovery-token-ca-cert-hash sha256:0064f08a4c0ef36e454b683f61a68e0bf78d9cdb45f7905128c68b28fc2a5b3e

On knode2:

yum install -y kubelet kubeadm kubectl docker
systemctl enable kubelet
systemctl start kubelet
systemctl enable docker
systemctl start docker

hostnamectl set-hostname knode2

#Join command
kubeadm join 192.168.0.10:6443 --token lfh49o.f9na1435g8vs1fmt \
     --discovery-token-ca-cert-hash sha256:0064f08a4c0ef36e454b683f61a68e0bf78d9cdb45f7905128c68b28fc2a5b3e
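
Back on kmaster01, confirm that both workers joined and became Ready:

kubectl get nodes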

nginx-app.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80

nginx-svc.yml

apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    run: nginx-svc
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx

kubectl apply -f nginx-app.yml
kubectl apply -f nginx-svc.yml
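
To verify, list the pods and curl the assigned NodePort on any node (the node IP below is an assumption based on the server list above):

kubectl get pods -l run=my-nginx
NODEPORT=$(kubectl get svc nginx-svc -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.0.11:$NODEPORT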

Jenkins pipeline using secret text and username/password credentials

pipeline
{
    agent { label 'master' }
    stages
    {
        stage('check_credentials')
        {
            steps
            {
                // Demo only: this writes the secrets to a plain-text file
                withCredentials([usernamePassword(credentialsId: 'c1', usernameVariable: 'uname', passwordVariable: 'upass')])
                {
                    sh '''
                       echo "usernamePassword = $uname $upass" > test.txt
                    '''
                }
                withCredentials([usernameColonPassword(credentialsId: 'c1', variable: 'uname')])
                {
                    sh '''
                       echo "usernameColonPassword = $uname" >> test.txt
                    '''
                }
                withCredentials([string(credentialsId: 't1', variable: 'TOKEN')])
                {
                    sh '''
                       echo "string = $TOKEN" >> test.txt
                    '''
                }
                sh 'cat test.txt'
            }
        }
    }
}

Simple custom fields in Logstash – ELK

sample log :

abc-xyz 123
abc-xyz 123
abc-xyz 123

Send a sample log line to the ELK server (uses bash's /dev/tcp):

echo -n "abc-xyz 123" >/dev/tcp/ELK_SERVER_IP/2001

custom1.conf

input {
    tcp
    {
        port => 2001
        type => "syslog"
    }
}

filter {
  grok {
    match => {"message" => "%{WORD:word1}-%{WORD:word2} %{NUMBER:number1}"}
  }

  #add tag (note: add_tag takes an array, not a hash)
  mutate {
    add_tag => [ "%{word1}" ]
  }

  #add custom field
  mutate {
    add_field => { "logstype" => "sample" }
  }
}

output {
    if [type] == "syslog" {
        elasticsearch {
            hosts => ["localhost:9200"]
            index => "custom1-%{+YYYY.MM.dd}"
        }
        #for debug
        #stdout {codec => rubydebug}
    }
}

Run logstash to collect the logs

logstash -f custom1.conf
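
After sending a sample line (see the echo command above), the indexed document can be checked in Elasticsearch:

curl -s 'http://localhost:9200/custom1-*/_search?pretty&size=1'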

NOTE:
– If number1 was already indexed as a string, delete the old index so the new numeric mapping takes effect.
– If add_field is used with a field name that grok already extracted, the field will end up with duplicate values.