DevOps questions

Docker

  • Explain the Docker lifecycle.
  • Explain/write a simple Dockerfile start to end (a sample follows this list).
  • What are layers in Docker? How do you reduce the number of layers?
  • What are CMD and ENTRYPOINT? Can we use both? When do you use each?
  • What is a multi-stage Dockerfile?
  • What is the difference between ADD and COPY, and between RUN and CMD?
  • How do you optimize Docker image size?
  • How do you harden a Dockerfile?
  • Which tools are used to scan Dockerfiles and images?
  • What is a distroless image? When do you use one?
  • How do you verify a distroless image with cosign?
  • Have you used tini? Why?
  • Can a container run a different kernel version than the host?
  • What are the best practices for writing a Dockerfile?
  • What is Docker BuildKit?
  • What is docker-compose?
  • How does one container talk to another in docker-compose?
  • What types of networks are available in docker-compose?
  • How do you auto-restart a Docker container?
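
A minimal multi-stage Dockerfile touching several of the questions above (layers, multi-stage builds, ENTRYPOINT vs CMD). This is a sketch; the Go application and its --port flag are hypothetical:

# build stage: compile with the full toolchain
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN go build -o /app .

# final stage: small distroless runtime, no shell or toolchain
FROM gcr.io/distroless/base-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]    # fixed executable
CMD ["--port=8080"]    # default arguments, overridable at docker run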

Jenkins

  • Explain your Jenkins CI/CD pipeline.
  • Explain a Jenkins DevSecOps pipeline.
  • How did you integrate SonarQube with your pipeline?
  • What are the ways to start a build (types of build triggers)?
  • How do you add a Jenkins worker node? What are the other ways to do it?
  • How do you manage credentials in Jenkins?
  • What is the difference between a webhook and poll SCM?
  • What plugins have you used in Jenkins?
  • How do you take a backup of Jenkins?
  • Have you written declarative or scripted pipelines? What is the difference?
  • How do you pass an environment variable from one stage to another? (see the sketch after this list)
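
A minimal sketch for the last question: assigning to env inside a script block makes the value visible to later stages of a declarative pipeline.

pipeline {
    agent any
    stages {
        stage('set') {
            steps {
                script {
                    // values assigned to env persist across stages
                    env.APP_VERSION = '1.2.3'
                }
            }
        }
        stage('use') {
            steps {
                echo "deploying version ${env.APP_VERSION}"
            }
        }
    }
}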

Kubernetes

  • What are the components of Kubernetes?
  • What are a Pod, a Deployment, and a DaemonSet?
  • When do you use a DaemonSet?
  • What is a sidecar container? When do you use one?
  • What is a static pod?
  • How have you integrated CI/CD with Jenkins?
  • What are kube-proxy and CoreDNS?
  • How do you expose your service externally?
  • What happens if a Kubernetes control-plane node goes down?
  • Have you upgraded your Kubernetes cluster? What are the steps? Explain.
  • How do you add security to a Kubernetes cluster?
  • How do you scan your Kubernetes deployment files?
  • What are the best practices for writing a Kubernetes deployment file? Explain its components.
  • What are taints, tolerations, node affinity, pod affinity, and anti-affinity?
  • What is a CNI? Which CNI have you used? Why choose one CNI over another?
  • What scaling methods are available by default in Kubernetes? What is required to set up HPA?
  • What is VPA? What kinds of metrics can it use, and from where?
  • How do pods communicate with each other?
  • How do the Kubernetes metrics components (metrics-server, kube-state-metrics) work?
  • How do you troubleshoot a pod stuck in Pending or ImagePullBackOff?
  • What is a service mesh? When do you use one?
  • What is mTLS? How do you secure pod-to-pod communication?
  • What are PV and PVC? What is a StorageClass? (a PVC sketch follows this list)
  • If a PV is 50 GB and a 20 GB PVC is bound to it, can the remaining 30 GB be used by a new PVC?
  • What are PVC access modes and the reclaim policy?
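
A minimal PVC sketch for the storage questions above; the StorageClass name is an assumption and depends on your cluster. Note that a PV binds to exactly one PVC, so the remaining 30 GB of a 50 GB PV cannot be claimed by a second PVC.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce            # mounted read-write by a single node
  storageClassName: standard   # hypothetical class name
  resources:
    requests:
      storage: 20Gi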

Terraform

  • What is a Terraform backend?
  • Where do you store your Terraform state file? Why?
  • Have you created a custom module?
  • Explain your IaC Terraform pipeline.

Ansible

  • What is an Ansible playbook?
  • What is an Ansible role?
  • Which modules have you used in Ansible?
  • What is register in Ansible?
  • What is the difference between static and dynamic inventory?
  • How does Ansible work? How does it authenticate?
  • What is Ansible Galaxy?
  • What is Ansible Vault? How do you store your credentials?
  • Write a playbook to install httpd and enable it (a sample follows this list).
  • What is Ansible Tower?
  • What are handlers in Ansible?
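
A sample playbook for the httpd question (a sketch, assuming a RHEL/CentOS host group named web):

---
- hosts: web
  become: yes
  tasks:
    - name: Install httpd
      yum:
        name: httpd
        state: present

    - name: Start and enable httpd
      service:
        name: httpd
        state: started
        enabled: yes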

General questions

  • What do you mean by CI and CD?
  • What is the difference between continuous delivery and continuous deployment?
  • What is your Git branching strategy? How do you resolve merge conflicts?
  • How do you optimize cost on AWS? (AMD/ARM instances, Cloud Custodian, AWS Lambda, S3 static websites, S3 lifecycle policies)
  • How would you host a static website?
  • An application is receiving millions of requests; how would you manage it?
  • What is blue-green deployment? What tools are required? What is the strategy?
  • What is unit testing? How do you configure quality gates and code coverage in SonarQube?

AWS questions

  • What is the AWS Lambda execution time limit?
  • How many VPCs and subnets can we create?
  • How do you secure your website and endpoints?
  • What is a CloudFront distribution?
  • How do you recover an EC2 instance that will not boot because of a misconfigured /etc/fstab?

Linux questions

  • How do you extend disk space in Linux? LVM? LVM striping? (an LVM sketch follows this list)
  • What are the RAID types?
  • What are the top 10 commands you use for daily tasks?
  • What are SUID, SGID, the sticky bit, and other special permissions in Linux?
  • Which is faster, cp or mv? Why?
  • "command not found" – how do you troubleshoot it?
  • What is swap memory? Why and how do you configure it?
  • How do you update the time in Linux?
  • Explain the Linux boot process.
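
A typical LVM extension flow for the first question (a sketch, assuming a new disk /dev/sdb, an existing volume group vg0, and an ext4 root filesystem):

pvcreate /dev/sdb                     # initialize the new disk as a physical volume
vgextend vg0 /dev/sdb                 # add it to the volume group
lvextend -l +100%FREE /dev/vg0/root   # grow the logical volume
resize2fs /dev/vg0/root               # grow the filesystem (use xfs_growfs for XFS)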

Install Linkerd in Kubernetes

– Install the linkerd CLI

curl -sL https://run.linkerd.io/install | sh
export PATH=$PATH:$HOME/.linkerd2/bin
echo 'export PATH=$PATH:$HOME/.linkerd2/bin' >> ~/.bashrc
linkerd version
linkerd check --pre

– Install Linkerd on the Kubernetes cluster

linkerd install | kubectl apply -f -

# It will take some time for the control-plane deployments to become ready

kubectl -n linkerd get deploy

– Linkerd dashboard

Update the linkerd-web deployment and add your host IP (e.g. 192.168.0.183) to the enforced-host regex:

containers:
- args:
  - -api-addr=linkerd-controller-api.linkerd.svc.cluster.local:8085
  - -grafana-addr=linkerd-grafana.linkerd.svc.cluster.local:3000
  - -controller-namespace=linkerd
  - -log-level=info
  - -enforced-host=^(192\.168\.0\.183|localhost|127\.0\.0\.1|linkerd-web\.linkerd\.svc\.cluster\.local|linkerd-web\.linkerd\.svc|\[::1\])(:\d+)?$

– Update the linkerd-web service type to NodePort
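
One way to do this (a sketch, assuming the dashboard service is named linkerd-web in the linkerd namespace):

kubectl -n linkerd patch svc linkerd-web -p '{"spec": {"type": "NodePort"}}'
kubectl -n linkerd get svc linkerd-web   # note the assigned NodePort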

  • Inject linkerd
# Inject all the deployments in the default namespace.
kubectl get deploy -o yaml | linkerd inject - | kubectl apply -f -

This adds a linkerd.io/inject: enabled annotation to each pod template.
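
A deployment can also opt in manually by carrying the annotation on its pod template:

spec:
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled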

Create AWS EC2 and ALB with Terraform – userdata

– Download terraform from https://www.terraform.io/downloads.html

unzip terraform_0.13.4_linux_amd64.zip
mv terraform /usr/bin/

– Set up and configure the AWS CLI
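
A minimal setup (a sketch, assuming you already have an IAM access key):

aws configure                  # prompts for access key, secret key, region (ap-south-1), output format
aws sts get-caller-identity    # verify the credentials work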

– Create a file ec2.tf

provider "aws" {
  region = "ap-south-1"
}

resource "aws_key_pair" "ap-web-01" {
  key_name   = "ap-web-01"
  public_key = "YOUR_SSH_PUB_KEY"
}

resource "aws_instance" "ap-web-01" {
  ami = "ami-086c142842468ba9d"
  instance_type = "t4g.micro"
  key_name = "ap-web-01"
  security_groups = [aws_security_group.ap-web-01.name]
  user_data       = file("userdata.sh")

  tags = {
    Name = "ap-web-01"
    env = "prod"
    owner = "admin"
  }

}

resource "aws_security_group" "ap-web-01" {
  name        = "ap-web-01"
  description = "ap-web-01 inbound traffic"

  ingress {
    description = "all"
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "ap-web-01"
  }
}

– Create a file alb.tf

#target group
resource "aws_lb_target_group" "web1-tg" {
  name     = "web1-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = "vpc-01cf98f5afb156c90"
  target_type = "instance"
}

#target group attachment
resource "aws_lb_target_group_attachment" "web1-tg-attach" {
  target_group_arn = aws_lb_target_group.web1-tg.arn
  target_id        = aws_instance.ap-web-01.id
  port             = 80
}

#alb
resource "aws_lb" "web1-alb" {
  name               = "web1-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.ap-web-01.id]
  subnets            = ["subnet-093a2ddfcb7bc30b1", "subnet-0475d9e26dfdc9d00", "subnet-0274975b4af3513ee"]

  tags = {
    Environment = "web1-alb"
  }
}

#alb listener
resource "aws_lb_listener" "web1-alb-listener" {
  load_balancer_arn = aws_lb.web1-alb.arn
  port              = "80"
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.web1-tg.arn
  }
}

– Create a file userdata.sh

#!/bin/bash
sudo apt-get update
sudo apt-get install -y nginx
sudo systemctl start nginx
sudo systemctl enable nginx
echo "<h1>hola Terraform</h1>" | sudo tee /var/www/html/index.html

– Deploy

terraform init
terraform plan
terraform apply -auto-approve

– Clean up

terraform destroy

More : https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/instance

Jenkins pipeline timeout and buildDiscarder – best practice

1. Add timeout – to stop a pipeline from running indefinitely
2. Add buildDiscarder – to stop old builds from consuming disk space

pipeline {

    agent { label 'master' }

    options {
        buildDiscarder(logRotator(numToKeepStr: '5'))   // keep only the last 5 builds
        timeout(time: 10, unit: 'SECONDS')              // abort any run longer than 10s
        timestamps()
    }

    stages {
        stage('sleep') {
            steps {
                // the 60s sleep exceeds the 10s timeout, so this build is aborted
                sh '''
                echo sleeping
                sleep 60
                '''
            }
        }
    }
}

Simple cicd pipeline in Gitlab with runner

1. Install gitlab-runner on CentOS 7

wget https://gitlab-runner-downloads.s3.amazonaws.com/latest/rpm/gitlab-runner_amd64.rpm

rpm -ivh gitlab-runner_amd64.rpm

systemctl status gitlab-runner

More : https://docs.gitlab.com/runner/install/

2. Get the GitLab URL and registration token for the runner

https://gitlab.com/<username>/<project_name> > Settings > CI/CD > Runners

Note: this token has been revoked; yours will be different.

3. Register the runner with the gitlab-runner register command, as below
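
A non-interactive registration sketch using the shell executor; the URL and token come from step 2:

gitlab-runner register \
  --non-interactive \
  --url https://gitlab.com/ \
  --registration-token <YOUR_TOKEN> \
  --executor shell \
  --tag-list ci \
  --description "ci-runner"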

4. Create a .gitlab-ci.yml in your git project's root directory

stage1:
  tags:
  - ci
  script:
    - echo stage 1

stage2:
  tags:
  - ci
  script:
    - echo stage 2

tags: must match the tags used during runner registration

Scale to Zero with OpenFaaS serverless deployment

Create an OpenFaaS serverless deployment:

1. Update the faas-idler deployment in Kubernetes from dryRun=true to dryRun=false (see the sketch after step 2)

2. While deploying the OpenFaaS function, add the label “com.openfaas.scale.zero=true”

sudo faas-cli deploy -f python-fn.yml  --label "com.openfaas.scale.zero=true"
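
For step 1, one way is to edit the deployment directly (a sketch, assuming faas-idler runs in the openfaas namespace and takes -dry-run as a container argument):

kubectl -n openfaas edit deployment faas-idler
# in the container args, change:
#   - -dry-run=true
# to:
#   - -dry-run=false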

https://docs.openfaas.com/architecture/autoscaling/

Kubernetes highly available cluster using 3 master nodes

1. Set up a TCP load balancer using nginx (192.168.0.50)

load_module /usr/lib64/nginx/modules/ngx_stream_module.so;
events { }

stream {

upstream kapi {
	server 192.168.0.51:6443;
	server 192.168.0.52:6443;
	server 192.168.0.53:6443;
}

server {
	listen 8888;
	proxy_pass kapi;
    }

}

2. Run the below commands on all master nodes

yum install -y kubelet kubeadm kubectl docker
systemctl enable kubelet
systemctl start kubelet
systemctl enable docker
systemctl start docker

3. Run the below command on the first master (192.168.0.51)

kubeadm init --control-plane-endpoint "192.168.0.50:8888" --upload-certs

It will print the commands to join the other master nodes and the worker nodes.

4. Join the other 2 masters (192.168.0.52, 192.168.0.53)

 kubeadm join 192.168.0.50:8888 --token hvlnv8.6r90i8d04cs23sii \
    --discovery-token-ca-cert-hash sha256:bc6fe39f98c7ae6cd8434bd8ade4eb3b15b45e151af37595e4be0a9fdfcfdcc4 \
    --control-plane --certificate-key 3659353b0a256650fb0c1a0357cb608d07e3bdc8ce8b64fa995bcb814c131fa6

Note: your token and hashes will differ.

5. Get cluster info

kubectl cluster-info

kubectl get node

Skip stages in Jenkins

pipeline {
    agent {
        label 'master'
    }
    parameters {
        string(name: 'skip_stage', description: 'skip stage', defaultValue: 'skip')
    }
    stages {
        stage("test") {
            when {
                // skipped while the parameter keeps its default value 'skip'
                expression { params.skip_stage != "skip" }
            }
            steps {
                echo 'This runs only when skip_stage is not "skip"'
            }
        }
    }
}

More when condition : https://gist.github.com/merikan/228cdb1893fca91f0663bab7b095757c

Set up an elasticsearch cluster with 3 nodes

1. Update /etc/hosts on all 3 nodes

192.168.0.50 elk1.local
192.168.0.51 elk2.local
192.168.0.52 elk3.local

Note: at least 2 of the 3 nodes must be up to keep the cluster healthy (master election requires a quorum of 2).

2.Install elasticsearch on all 3 nodes

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

echo '[elasticsearch-7.x]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md' > /etc/yum.repos.d/elasticsearch.repo

### Install elasticsearch
yum -y install elasticsearch

### Enable elasticsearch at boot
systemctl enable elasticsearch

3. Edit /etc/elasticsearch/elasticsearch.yml on each node and set the cluster name (e.g. elk-cluster)

cluster.name: elk-cluster
node.name: elk1.local
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 192.168.0.50
discovery.seed_hosts: ["elk1.local", "elk2.local", "elk3.local"]
cluster.initial_master_nodes: ["elk1.local", "elk2.local", "elk3.local"]

Change only node.name and network.host on the other 2 elasticsearch nodes.

4. Restart the elasticsearch service on all 3 elasticsearch nodes

systemctl restart elasticsearch

After the restart, one master node will be elected.

5. Check the master node in the elasticsearch cluster

curl -X GET "192.168.0.50:9200/_cat/master?v&pretty"

More : https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-master.html

More information about setting up cluster : https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery.html

Jenkins build id and name with git commit message

  1. Install the build-name-setter plugin: https://plugins.jenkins.io/build-name-setter/
pipeline {
    agent any
    stages {
        stage('Git-checkout') {
            steps {
                sh '''
                rm -rf simple-storage-solution 2>&1
                git clone https://github.com/initedit-project/simple-storage-solution.git
                cd simple-storage-solution
                git log -1 --pretty=format:"%s" > ../gitcommitmsg.txt
                '''
                script {
                    def gitcommitmsg = readFile(file: 'gitcommitmsg.txt')
                    buildDescription gitcommitmsg
                }
            }
        }
    }
}
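
The build-name-setter plugin also provides a buildName step; to set the display name as well as the description, the script block could be (a sketch):

script {
    def gitcommitmsg = readFile(file: 'gitcommitmsg.txt')
    buildName "#${BUILD_NUMBER} - ${gitcommitmsg}"
    buildDescription gitcommitmsg
}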
