Kong Functions (pre-function plugin)

The pre-function plugin lets you dynamically run Lua code from Kong before other plugins in each phase.

For example: clearing the Content-Type header when the request body is blank.

local k_request = kong.request.get_header("Content-Type")

if k_request and (k_request == "application/json") then
  local check_head = kong.request.get_raw_body()
  if (not check_head) or (check_head == "") then
    kong.service.request.clear_header("Content-Type")
  end
end
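
A minimal sketch of registering the snippet above through the Admin API. Assumptions: Kong 2.x Admin API on localhost:8001, a service named example-service, and the Lua saved as clear-blank-content-type.lua; older Kong releases use config.functions instead of the per-phase config.access array.

# Attach the Lua above as a pre-function running in the access phase
# (service name, file name and Admin API address are assumptions)
curl -i -X POST http://localhost:8001/services/example-service/plugins \
  --data "name=pre-function" \
  --data-urlencode "config.access[1]@clear-blank-content-type.lua"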

RabbitMQ Shovel – sync/move queue messages

  • Export the queue definitions from the source and import them into the destination RabbitMQ
  • Enable the Shovel plugin by running these commands inside the container
rabbitmq-plugins enable rabbitmq_shovel
rabbitmq-plugins enable rabbitmq_shovel_management
  • Create the shovel under Admin > Shovel Management

Note: the source/destination URI should not end with a trailing /. amqp://host/ is parsed as the empty-named virtual host rather than the default / vhost, which produces the "access to virtual host was refused" error shown below.

  • Destination: amqp://
  • queue: k1
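
As an alternative to the management UI, the same shovel can be declared from the CLI as a dynamic shovel parameter; a sketch using the queue k1 from above, with placeholder broker URIs and credentials:

# Declare a dynamic shovel named "s2" moving messages from queue k1 on the
# source broker to queue k1 on the destination (URIs are placeholders)
rabbitmqctl set_parameter shovel s2 \
  '{"src-protocol": "amqp091", "src-uri": "amqp://user:pass@source-rabbitmq", "src-queue": "k1",
    "dest-protocol": "amqp091", "dest-uri": "amqp://user:pass@dest-rabbitmq", "dest-queue": "k1"}'

# Confirm the parameter was created
rabbitmqctl list_parameters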

Error:

2023-11-09 15:24:08.911950+00:00 [error] <0.7052.0>                {child_type,worker}]
2023-11-09 15:24:08.928523+00:00 [error] <0.7502.0> Shovel 's2' failed to connect (URI: amqp://rabbitmqcluster-prod.test/): access to target virtual host was refused
2023-11-09 15:24:08.928734+00:00 [error] <0.7502.0> Shovel 's2' has no more URIs to try for connection
2023-11-09 15:24:08.929101+00:00 [error] <0.7502.0> Shovel 's2' could not connect to source


AD integration with linux ssh login and sudo access

ad_join.sh

#!/bin/bash

# Exit if the server is already joined to a domain
if [[ $(realm list) != "" ]]; then
    echo "This server is already joined to a domain."
    realm list | head -n 1
    exit 0
fi

update_sssd_config() {
    # Log in as "user" instead of "user@domain" and use /home/%u home dirs
    sed -i 's/use_fully_qualified_names = True/use_fully_qualified_names = False/g' /etc/sssd/sssd.conf
    sed -i 's|/home/%u@%d|/home/%u|g' /etc/sssd/sssd.conf
    systemctl restart sssd
}

restrict_ssh_access_group() {
    # Only root and members of ssh-access-group may log in over SSH
    if ! grep -q "updated_by_ad_join" /etc/ssh/sshd_config; then
        echo "###############updated_by_ad_join.sh###############" >> /etc/ssh/sshd_config
        echo "AllowGroups root ssh-access-group" >> /etc/ssh/sshd_config
        systemctl restart sshd
    fi
}

sudo_access_level_group() {
    # Map AD groups to tiered sudo command sets
    if ! grep -q "updated_by_ad_join" /etc/sudoers; then
        echo "###############updated_by_ad_join.sh###############" >> /etc/sudoers
        echo "Cmnd_Alias SUDO_ACCESS_LEVEL1 = /usr/bin/ls, /usr/bin/cat" >> /etc/sudoers
        echo "Cmnd_Alias SUDO_ACCESS_LEVEL2 = /usr/bin/vi, /usr/bin/nano" >> /etc/sudoers

        echo "%sudo-group-level1 ALL=(ALL) NOPASSWD: SUDO_ACCESS_LEVEL1" >> /etc/sudoers
        echo "%sudo-group-level2 ALL=(ALL) NOPASSWD: SUDO_ACCESS_LEVEL2" >> /etc/sudoers
        echo "%sudo-group-full-access ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
    fi
}

# Check OS family (RHEL-based distros only) and join the domain
if grep -Eq "centos|redhat|fedora|rhel|oracle|rocky" /etc/os-release; then
    yum install -y sssd realmd oddjob oddjob-mkhomedir adcli samba-common samba-common-tools krb5-workstation openldap-clients
    realm join -vvv --user=administrator ad.example.com

    # Call functions
    update_sssd_config
    restrict_ssh_access_group
    sudo_access_level_group
fi
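
A quick post-join check, using the ad.example.com domain from the script and a hypothetical AD account jdoe:

# Confirm the machine is joined and that AD users resolve locally
realm list
id jdoe
getent passwd jdoe

# SSH login should work for AD users in ssh-access-group and be refused for others
ssh jdoe@<server-ip>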

Build a Haskell static binary with Docker

Why?
– Reduce attack surface
– Reduce Docker image size

hola.hs

{-# LANGUAGE OverloadedStrings #-}
import Web.Scotty

import Data.Monoid (mconcat)

main = scotty 3000 $
    get "/:word" $ do
        beam <- param "word"
        html $ mconcat ["<h1>Scotty, ", beam, " me up!</h1>"]

alpine.Dockerfile

FROM haskell:8 AS build
WORKDIR /opt
RUN cabal update
RUN cabal install --lib scotty
COPY hola.hs .
#RUN ghc --make -threaded hola.hs  -o hola
RUN ghc --make -threaded -optl-static -optl-pthread hola.hs -o hola

FROM alpine:3.15.0
RUN addgroup -S group1 && adduser -S user1 -G group1
USER user1
WORKDIR /opt
COPY --from=build /opt/hola .
EXPOSE 3000
CMD ["/opt/hola"]
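
To build and try it out; the image tag hola-static and the test path /beam below are arbitrary choices:

# Build the two-stage image and run the statically linked binary on Alpine
docker build -f alpine.Dockerfile -t hola-static .
docker run --rm -d -p 3000:3000 hola-static

# Smoke test
curl http://localhost:3000/beam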

More on haskell static binary –

ELK on docker-compose

version: '2.2'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0
    container_name: elasticsearch
    environment:
      discovery.type: "single-node"
    volumes:
      - /root/elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  kibana:
    image: docker.elastic.co/kibana/kibana:7.12.0
    container_name: kibana
    environment:
      elasticsearch.hosts: "elasticsearch:9200"
    ports:
      - 5601:5601
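
To verify the stack once it is up (host ports as mapped above):

docker-compose up -d

# Elasticsearch should report cluster health
curl http://localhost:9200/_cluster/health?pretty

# Kibana UI should respond on 5601
curl -I http://localhost:5601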

Prometheus inbuilt basic authentication and TLS

github : https://github.com/prometheus/prometheus/pull/8316

From v2.24.0, basic authentication and TLS support are built into Prometheus.

webconfig.yml

tls_server_config:
  cert_file: /etc/prometheus/prometheus.cert
  key_file: /etc/prometheus/prometheus.key

basic_auth_users:
  admin: $2y$12$/B1Z0Ohq/g9z/BlD30mi/uRDNdBRs/VrtAZrJDtY73Ttjc8RYHJ2O

  • Start Prometheus with the web config file
./prometheus --web.config.file=webconfig.yml
  • Prometheus is then served over HTTPS with basic auth (admin/admin)
  • The password must be a bcrypt hash – https://bcrypt-generator.com
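
The bcrypt hash can also be generated locally with htpasswd, and the endpoint tested with curl; this sketch assumes the default port 9090 and a self-signed certificate:

# Generate a bcrypt hash for the "admin" user (htpasswd ships with apache2-utils/httpd-tools)
htpasswd -nbBC 12 admin admin

# Unauthenticated request should be rejected, authenticated one should succeed
curl -k https://localhost:9090/-/healthy
curl -k -u admin:admin https://localhost:9090/-/healthy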

More : https://github.com/roidelapluie/prometheus/blob/5b4f46a348ae3bc143629f25f0f997f39f30c2c2/docs/configuration/https.md

Update Jenkins timezone

There are many ways to update the Jenkins timezone.

Verify:

cat /etc/timezone
ls -ltr /etc/localtime
date

Centos/Redhat:

#update /etc/sysconfig/jenkins

JENKINS_JAVA_OPTIONS="-Dorg.apache.commons.jelly.tags.fmt.timeZone=Asia/Calcutta -Duser.timezone=Asia/Calcutta"

Debian/ubuntu :

#update /etc/default/jenkins

JAVA_ARGS="-Dorg.apache.commons.jelly.tags.fmt.timeZone=Asia/Calcutta -Duser.timezone=Asia/Calcutta"

Jenkins script console (takes effect immediately, but does not persist across a restart):

System.setProperty('user.timezone', 'Asia/Calcutta')
System.setProperty('org.apache.commons.jelly.tags.fmt.timeZone', 'Asia/Calcutta')
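
After restarting the service, one quick way to confirm the running JVM picked up the flags (assuming Jenkins runs as a systemd service named jenkins):

systemctl restart jenkins

# The running Java process should show the timezone -D flags
ps -ef | grep java | grep -oi 'timezone=[^ ]*'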

Install metrics-server in Kubernetes

Why?
– Get node/pod CPU and RAM usage
– Needed for the Horizontal Pod Autoscaler (HPA); see the example after the install steps
– Lightweight

git clone https://github.com/kubernetes-sigs/metrics-server.git
  • Edit metrics-server/manifests/base/deployment.yaml and add the lines below to args
args:
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
          - --kubelet-use-node-status-port #Deprecated metrics-server:v0.3.7
          - --kubelet-insecure-tls
kubectl apply -f metrics-server/manifests/base
  • To get node metrics run kubectl top node
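
Once metrics-server is reporting, metrics and an HPA can be exercised from the CLI; the deployment name web below is hypothetical:

# Verify metrics are flowing
kubectl top node
kubectl top pod -A

# Autoscale a (hypothetical) "web" deployment between 2 and 5 replicas at 70% CPU
kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=5
kubectl get hpa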

Jenkins skip stages using git branch name regex


def git_url = 'https://github.com/initedit/simple-storage-solution.git'
def git_branch = 'master'

pipeline
{
    agent
    {
        label 'master'
    }

    stages
    {
        stage('skip1')
        {
            // Run this stage only when the branch name does NOT start with "dev"
            when {
                expression {
                    echo git_branch
                    def notDevBranch = !(git_branch =~ /^dev([a-zA-Z0-9]*)/)
                    return notDevBranch
                }
            }
            steps {
                echo "if dev branch it will skip"
            }
        }
    }
}

Regex:
^dev([a-zA-Z0-9]*) = starts with dev
dev([a-zA-Z0-9]*) = contains dev

More : https://e.printstacktrace.blog/groovy-regular-expressions-the-definitive-guide/

Jenkins pipeline timeout and buildDiscarder – best practice

1. Add timeout – to stop a pipeline from running indefinitely
2. Add buildDiscarder – to keep old builds from consuming disk space

pipeline 
{

    agent { label 'master' }

    options {
        buildDiscarder(logRotator(numToKeepStr: '5'))
        timeout(time: 10, unit: 'SECONDS')
        timestamps()
    }

    stages {
        stage('sleep') {
            steps {
                sh '''
                echo sleeping
                sleep 60
                '''
            }
        }
    }
}