kubectl custom query for CPU/Memory Requests and Limits

kubectl custom-columns query to get the CPU and memory requests and limits of every deployment, across all namespaces, as comma-separated output:

kubectl get deploy -A -o=custom-columns='Namespace:.metadata.namespace,Name:.metadata.name,Request_CPU:.spec.template.spec.containers[0].resources.requests.cpu,Limit_CPU:.spec.template.spec.containers[0].resources.limits.cpu,Request_Memory:.spec.template.spec.containers[0].resources.requests.memory,Limit_Memory:.spec.template.spec.containers[0].resources.limits.memory' | sed 1d | tr -s '[:blank:]' ','
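
Note that the custom-columns query above only reads the first container (containers[0]). A jsonpath variant that walks every container in each deployment (a sketch producing the same comma-separated output):

kubectl get deploy -A -o jsonpath='{range .items[*]}{.metadata.namespace},{.metadata.name}{range .spec.template.spec.containers[*]},{.resources.requests.cpu},{.resources.limits.cpu},{.resources.requests.memory},{.resources.limits.memory}{end}{"\n"}{end}'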

kubectl replace variable value across deployments

# list deployment names in the current namespace
deployments=$(kubectl get deploy | awk '{print $1}' | sed 1d)

for deploy in $deployments
do
    kubectl get deploy $deploy -o yaml > _tmp_store.yml
    # grab the current value of the NEW_RELIC_APP_NAME env var
    value_to_be_replaced=$(kubectl get deploy $deploy -o yaml | grep -A 1 'NEW_RELIC_APP_NAME' | grep value | awk -F 'value: ' '{print $2}')
    echo "value_to_be_replaced: $value_to_be_replaced"
    if [[ $value_to_be_replaced == "" ]]; then
        echo "=====================$deploy no change =========================="
    else
        # swap "stage" for "perf" in the value and re-apply the deployment
        replaced_value=$(echo $value_to_be_replaced | sed 's/stage/perf/g')
        echo "replaced_value: $replaced_value"
        cat _tmp_store.yml | sed "s/$value_to_be_replaced/$replaced_value/g" | kubectl apply -f -
        echo "=====================$deploy done =========================="
    fi
done

In-Place Pod Vertical Scaling in Kubernetes

  • Enable the InPlacePodVerticalScaling feature gate; on a kubeadm control plane the kube-apiserver manifest is at:

/etc/kubernetes/manifests/kube-apiserver.yaml
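
A minimal snippet of the flag to add under the kube-apiserver command (as an alpha feature, the same gate generally has to be enabled on the other components, e.g. the kubelet, for resizes to take effect):

# /etc/kubernetes/manifests/kube-apiserver.yaml (snippet)
spec:
  containers:
  - command:
    - kube-apiserver
    - --feature-gates=InPlacePodVerticalScaling=true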

nginx.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resizePolicy:
        - resourceName: cpu
          restartPolicy: NotRequired
        - resourceName: memory
          restartPolicy: NotRequired
        resources:
          limits:
            memory: "100Mi"
            cpu: "100m"
          requests:
            memory: "100Mi"
            cpu: "100m"

Patch the pod in place (the resize applies only to the Pod object, not to the Deployment template):

kubectl patch pod nginx-94675f6cf-9bxzj --patch '{"spec":{"containers":[{"name":"nginx", "resources":{"requests":{"cpu":"200m"}, "limits":{"cpu":"200m"}}}]}}'
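
To confirm the resize took effect in place, inspect the pod's resources and restart count (the pod name is the one patched above):

kubectl get pod nginx-94675f6cf-9bxzj -o jsonpath='{.spec.containers[0].resources}'
kubectl get pod nginx-94675f6cf-9bxzj -o jsonpath='{.status.containerStatuses[0].restartCount}'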

rancher desktop – libraries: libbz2.so.1.0: cannot open shared object

Error on Fedora:

libraries: libbz2.so.1.0: cannot open shared object file: No such file or directory": exit status 127

  • Look for a libbz2.so.1 file on your system
sudo find / -name 'libbz2.so.1*'
  • List the files to see which version is available
[root@fedora]# ls /usr/lib64/libbz2*
libbz2.so.1.0    libbz2.so.1.0.8
  • Create a symlink with the name the loader expects
ln -s /usr/lib64/libbz2.so.1.0.8 /usr/lib64/libbz2.so.1.0
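
If the loader still cannot find the library after the symlink, refreshing the dynamic linker cache may help:

sudo ldconfig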

Kubernetes deployment scale up/down with bash

Scale down deployments on the weekend:

####scale down####
namespaces="test,test2"
IFS=","    # split the namespace list on commas

for namespace in $namespaces
do
    # skip deployments that are already at 0/0
    deployments=$(kubectl get deploy -n $namespace | grep -v '0/0' | awk '{print $1}' | sed 1d | tr '\n' ' ')
    IFS=" "
    for deploy in $deployments
    do
        replicas="$(kubectl get deploy $deploy -o=custom-columns='REPLICAS:spec.replicas' -n $namespace | sed 1d | tr '\n' ' ')"
        echo "namespace: $namespace deploy: $deploy replicas: $replicas"
        # remember the weekday replica count in a label, then scale to zero
        kubectl label deploy $deploy weekdays-replicas=$replicas -n $namespace --overwrite=true
        kubectl scale --replicas=0 deploy $deploy -n "$namespace" || true
    done
done

Scale up:

####scale up####
namespaces="test,test2"
IFS=","
for namespace in $namespaces
do
    deployments=$(kubectl get deploy -n $namespace | awk '{print $1}' | sed 1d | tr '\n' ' ')
    IFS=" "
    for deploy in $deployments
    do
        # read the weekday replica count back from the label set by the scale-down script
        replicas="$(kubectl get deploy $deploy -o=custom-columns='REPLICAS:metadata.labels.weekdays-replicas' -n $namespace | sed 1d | tr '\n' ' ')"
        kubectl scale --replicas=$replicas deploy $deploy -n "$namespace" || true
    done
done
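
To run these on a schedule, a crontab sketch (the script paths and times are assumptions):

# scale down Friday 20:00, scale up Monday 06:00
0 20 * * 5 /opt/scripts/scale-down.sh >> /var/log/scale-down.log 2>&1
0 6 * * 1 /opt/scripts/scale-up.sh >> /var/log/scale-up.log 2>&1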

Read the secrets data from etcd in Kubernetes

  • Find the etcd process id
ps -ef | grep etcd
  • Go to the etcd process's file-descriptor directory
cd /proc/2626577/fd
  • List the files and look for “/var/lib/etcd/member/snap/db”
ls -ltr | grep db
  • To read any secret currently created by a user in Kubernetes
#create secret

kubectl create secret generic secret1 --from-literal=secretname=helloworld

#read secret directly from etcd

cat /var/lib/etcd/member/snap/db | strings | grep secret1 -C 10
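
The same record can also be read through etcdctl instead of the raw db file (cert paths are placeholders; /registry/secrets/<namespace>/<name> is how Kubernetes lays out secret keys in etcd):

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=<trusted-ca-file> --cert=<cert-file> --key=<key-file> \
  get /registry/secrets/default/secret1 | hexdump -C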

Encrypting Secret Data at Rest https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/

https://jonathan18186.medium.com/certified-kubernetes-security-specialist-cks-preparation-part-8-runtime-security-system-9f705872c17

CKS Practice questions 2023

  • Create a RuntimeClass named sandboxed with handler runsc and run a new pod using the sandboxed runtime with image nginx (see the sketch after this list)
  • Set min TLS version to VersionTLS12 and cipher to TLS_AES_128_GCM_SHA256 for the kubelet and kube-apiserver
  • etcd with --cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
  • Node authorization to minimize cluster roles; remove the clusterrole and anonymous access
  • ImagePolicyWebhook with default deny; add the correct backend endpoint URL in the webhook kubeconfig file
  • Auditing with maxage=10, rotate=5
  • Falco runtime rule with output format %evt.time,%user.name,%user.uid,%proc.name
  • Network policy default deny; pod with name and namespace selectors
  • Create a service account, bind it with a role/clusterrole binding, create a pod, and delete the unused service account
  • Create a secret and mount it into a pod read-only
  • Create a service account with automountServiceAccountToken off
  • Create a pod using the AppArmor profile at /root/profile; podname=xyz, image=nginx
  • Analyse 2 issues in a Dockerfile and a Deployment file
  • Scan images with trivy and delete the pod whose image has critical-severity findings
  • Fix kube-bench report findings for kube-apiserver, kubelet, and kube-controller-manager
  • Upgrade the Kubernetes cluster from 1.25.4 to 1.26.0
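
A minimal sketch for the first item (the pod name sandboxed-nginx is an assumption):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: sandboxed
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-nginx
spec:
  runtimeClassName: sandboxed
  containers:
  - name: nginx
    image: nginx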

Backup and Restore etcd snapshot for Kubernetes

1. Create a deployment to verify the restore at the end
k create deploy nginx-test --image=nginx

2. Take a snapshot, updating the cert paths as per /etc/kubernetes/manifests/etcd.yaml

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
--cacert=<trusted-ca-file> \
--cert=<cert-file> --key=<key-file> \
  snapshot save /tmp/etcd.backup
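
Optionally verify the snapshot before touching the cluster (etcdctl ships a snapshot status subcommand):

ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd.backup --write-out=table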

3. Stop kubelet

systemctl stop kubelet

4. Stop kube-apiserver and etcd by moving their static pod manifests

mv /etc/kubernetes/manifests/kube-apiserver.yaml /root/
mv /etc/kubernetes/manifests/etcd.yaml /root/

5. Restore the etcd.backup

ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 snapshot restore etcd.backup

This creates a “default.etcd” directory in the current directory

[root@lp-k8control-1 etcd]# ls default.etcd/
member

6. Check the etcd data directory path (/var/lib/etcd) in /etc/kubernetes/manifests/etcd.yaml

[root@lp-k8control-1 default.etcd]# ls /var/lib/etcd
member

7. Copy the member directory content from default.etcd to /var/lib/etcd
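
One way to do the copy (a sketch; it assumes etcd is stopped and the old data directory can be discarded):

rm -rf /var/lib/etcd/member
cp -r default.etcd/member /var/lib/etcd/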

8. Start kube-apiserver and etcd by restoring their manifests

mv /root/kube-apiserver.yaml /etc/kubernetes/manifests/kube-apiserver.yaml
mv /root/etcd.yaml /etc/kubernetes/manifests/etcd.yaml

9. Restart the kubelet service

systemctl restart kubelet

10. Verify that the nginx deployment created in step 1 is restored

k get deploy

Custom DaemonSet command based on host IP in Kubernetes

Why?
– When we need to add some extra functionality to a DaemonSet based on which worker node it is running on

apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: custom-daemonset
  name: custom-daemonset
spec:
  selector:
    matchLabels:
      app: custom-daemonset
  template:
    metadata:
      labels:
        app: custom-daemonset
    spec:
      containers:
      - command:
        - /bin/bash
        - -c
        - |
          echo "$STARTUP_SCRIPT" > /tmp/STARTUP_SCRIPT.sh
          /bin/bash /tmp/STARTUP_SCRIPT.sh
        env:
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: STARTUP_SCRIPT
          value: |
            #!/bin/bash
            if [ "$HOST_IP" == "192.168.0.184" ]; then
              echo "HOST_IP is $HOST_IP"
            else
              echo "HOST_IP does not match $HOST_IP"
            fi
            sleep 600
        image: nginx
        imagePullPolicy: IfNotPresent
        name: custom-daemonset
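
After applying, the per-node behavior can be checked from the pod logs (the file name custom-daemonset.yml is an assumption):

kubectl apply -f custom-daemonset.yml
kubectl get pods -l app=custom-daemonset -o wide
kubectl logs -l app=custom-daemonset --prefix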

Ref : https://github.com/kubernetes/kubernetes/issues/24657#issuecomment-577747926