Pod affinity, readiness, and liveness in Kubernetes

Why:

node/pod affinity: attracts pods to nodes (or other pods) with matching labels; the example below uses node affinity.
readiness probe: checks that the container is ready before any traffic is sent to it.
liveness probe: checks container health and restarts the container when the probe keeps failing.

kubectl get nodes --show-labels

kubectl label nodes <node-name> <label-key>=<label-value>

kubectl label nodes lp-knode-02 disk=ssd
kubectl label nodes lp-knode-02 nodename=lp-knode-02
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd-affinity-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpd-affinity
  template:
    metadata:
      name: httpd-affinity-deployment
      labels:
        app: httpd-affinity
        env: prod
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: disk
                operator: In
                values:
                - ssd
      containers:        
      - name: httpd-node-affinity
        image: httpd
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "256Mi"
            cpu: "100m"
        ports:
        - name: httpd-port
          containerPort: 80
        livenessProbe:
          httpGet:
            path: /index.html
            port: 80
            httpHeaders:
            - name: Custom-Header
              value: custom1
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 5
          failureThreshold: 3
        readinessProbe:
          exec:
            command:
            - cat
            - /usr/local/apache2/htdocs/index.html
          initialDelaySeconds: 10
          periodSeconds: 10
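
To apply and verify (assuming the manifest above is saved as httpd-affinity.yml; the filename is my choice):

kubectl apply -f httpd-affinity.yml
kubectl get pods -l app=httpd-affinity -o wide    # the NODE column should show the node labeled disk=ssd
kubectl describe pods -l app=httpd-affinity       # failed liveness/readiness probes show up under Events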

More :
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

Install metrics-server in Kubernetes

WHY?
– Get node/pod CPU and RAM usage
– Needed for the Horizontal Pod Autoscaler (HPA)
– Lightweight

git clone https://github.com/kubernetes-sigs/metrics-server.git
  • Edit metrics-server/manifests/base/deployment.yaml and add the lines below to the container args:
args:
  - --cert-dir=/tmp
  - --secure-port=4443
  - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
  - --kubelet-use-node-status-port #Deprecated metrics-server:v0.3.7
  - --kubelet-insecure-tls
kubectl apply -f metrics-server/manifests/base
  • To get node metrics, run kubectl top node
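
With metrics flowing, the HPA mentioned above can be tried against any deployment; a quick sketch, assuming the httpd-affinity-deployment from the earlier section:

kubectl autoscale deployment httpd-affinity-deployment --cpu-percent=50 --min=1 --max=3
kubectl get hpa    # TARGETS should show live CPU usage once metrics-server is up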

Elastic APM monitoring for a JavaScript app on Kubernetes

1.apm-server.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: apm-deployment
  labels:
    app: apm-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apm-deployment
  template:
    metadata:
      labels:
        app: apm-deployment
        env: prod
    spec: 
      containers:
        - name: apm-deployment
          image: "elastic/apm-server:7.9.0"
          imagePullPolicy: IfNotPresent
          env:
          - name: REGISTRY_STORAGE_DELETE_ENABLED
            value: "true"
          volumeMounts:
          - name: apm-server-config
            mountPath: /usr/share/apm-server/apm-server.yml
            subPath: apm-server.yml    
          ports:
            - containerPort: 8200
      volumes:
        - name: apm-server-config
          configMap:
            name: apm-server-config


---
kind: Service
apiVersion: v1
metadata:
  name: apm-deployment-svc
  labels:
    app: apm-deployment-svc
spec:
  type: NodePort
  ports:
    - name: http
      port: 8200
      protocol: TCP
      nodePort: 30010
  selector:
    app: apm-deployment

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: apm-server-config
  labels:
    app: apm-server
data:
  apm-server.yml: |-
    apm-server:
      host: "0.0.0.0:8200"
      rum:
        enabled: true  
    output.elasticsearch:
      hosts: elasticsearch-service:9200

Note:
1. Replace the Elasticsearch host as per your config
2. Only the RUM (JS) module is enabled
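
To deploy, apply the manifest and check that the APM server answers on the NodePort (node IP as used in the RUM snippet below):

kubectl apply -f apm-server.yml
curl http://192.168.0.183:30010/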

2. Add the code below to a JS/HTML file that is loaded on every page, e.g. index.html

<script src="elastic-apm-rum.umd.min.js" crossorigin></script>
<script>
  elasticApm.init({
    serviceName: 'test-app1',
    serverUrl: 'http://192.168.0.183:30010',
  })
</script>

<body>
    This is test-app1
</body>

Note:
1. Replace serverUrl
2. Download elastic-apm-rum.umd.min.js from GitHub

3. Kibana dashboard for APM

We can also monitor the performance of apps written in other languages.

Prometheus blackbox exporter in Kubernetes

prometheus-blackbox.yml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-blackbox-exporter
  labels:
    app: prometheus-blackbox-exporter
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus-blackbox-exporter
  template:
    metadata:
      labels:
        app: prometheus-blackbox-exporter
    spec:
      restartPolicy: Always
      containers:
        - name: blackbox-exporter
          image: "prom/blackbox-exporter:v0.15.1"
          imagePullPolicy: IfNotPresent
          args:
            - "--config.file=/config/blackbox.yaml"
          ports:
            - containerPort: 9115
          volumeMounts:
            - mountPath: /config
              name: prometheus-config
      volumes:
        - name: prometheus-config
          configMap:
            name: prometheus-blackbox-exporter

---
kind: Service
apiVersion: v1
metadata:
  name: prometheus-blackbox-exporter
  labels:
    app: prometheus-blackbox-exporter
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 9115
      protocol: TCP
  selector:
    app: prometheus-blackbox-exporter

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-blackbox-exporter
  labels:
    app: prometheus-blackbox-exporter
data:
  blackbox.yaml: |
    modules:
      http_2xx:
        http:
          no_follow_redirects: false
          preferred_ip_protocol: ip4
          valid_http_versions:
          - HTTP/1.1
          - HTTP/2
          valid_status_codes: []
        prober: http
        timeout: 5s
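
To deploy and test a probe by hand (example.com is just a placeholder target; run the curl from a second terminal while the port-forward is active):

kubectl apply -f prometheus-blackbox.yml
kubectl port-forward svc/prometheus-blackbox-exporter 9115:9115
curl 'http://localhost:9115/probe?module=http_2xx&target=https://example.com'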

2. In Prometheus, update the prometheus.yml file with a scrape job for the blackbox exporter; a sketch follows.
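
This sketch assumes Prometheus runs in the same cluster and can resolve the prometheus-blackbox-exporter service; the job name web1 matches the query in step 3, and https://example.com is a placeholder target:

scrape_configs:
  - job_name: 'web1'
    metrics_path: /probe
    params:
      module: [http_2xx]
    static_configs:
      - targets:
          - https://example.com
    relabel_configs:
      - source_labels: [__address__]
        target_label: __param_target
      - source_labels: [__param_target]
        target_label: instance
      - target_label: __address__
        replacement: prometheus-blackbox-exporter:9115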

3. Prometheus query

probe_http_status_code{job="web1"}

Rancher proxy rule in httpd with WebSocket Secure (wss)

<VirtualHost *:80>
	ServerName rancher.initedit.com
	Redirect permanent / https://rancher.initedit.com/
	RewriteEngine on
	RewriteCond %{SERVER_NAME} =rancher.initedit.com
	RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
</VirtualHost>

<VirtualHost *:443>
    ServerName rancher.initedit.com
    AllowEncodedSlashes on
    SSLEngine On
    SSLProxyEngine On
    RewriteEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off
    RequestHeader set X-Forwarded-Proto "https"
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule /(.*)   wss://192.168.0.183:8443/$1 [P,L]
    RewriteCond %{HTTP:Upgrade} !=websocket [NC]
    RewriteRule /(.*)   https://192.168.0.183:8443/$1 [P,L]
    ProxyPassReverse / https://192.168.0.183:8443/
    ProxyPreserveHost On
    SSLCertificateFile /etc/letsencrypt/live/rancher.initedit.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/rancher.initedit.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/rancher.initedit.com/fullchain.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
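
The rules above need mod_proxy, mod_proxy_http, mod_proxy_wstunnel, mod_rewrite, mod_headers and mod_ssl loaded; a quick way to check on a CentOS-style httpd:

httpd -M | grep -E 'proxy|rewrite|headers|ssl'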

More info : https://stackoverflow.com/questions/27526281/websockets-and-apache-proxy-how-to-configure-mod-proxy-wstunnel

Docker secure private registry with https behind Apache proxy using letsencrypt

1.docker-ui.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-registry-deployment
  labels:
    app: registry
    env: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker-registry
      env: prod
  template:
    metadata:
      labels:
        app: docker-registry
        env: prod
    spec:
      containers:
      - name: docker-registry-container
        image: registry:2
        imagePullPolicy: IfNotPresent
        env:
          - name: REGISTRY_STORAGE_DELETE_ENABLED
            value: "true"
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "512Mi"
            cpu: "200m"
        volumeMounts:
          - name: registry-data
            mountPath: /var/lib/registry
          - name: config-yml
            mountPath: /etc/docker/registry/config.yml
            subPath: config.yml   
        ports:
        - containerPort: 5000
      volumes:
        - name: registry-data
          nfs:
            server: 192.168.0.184
            path: "/opt/nfs1/docker_registry"
        - name: config-yml
          configMap:
           name: docker-registry-conf     
              

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: docker-registry-conf
data:
  config.yml: |+
    version: 0.1
    log:
      fields:
        service: registry
    storage:
      filesystem:
        rootdirectory: /var/lib/registry
    http:
      addr: 0.0.0.0:5000
      secret: asecretforlocaldevelopment
      headers:
        X-Content-Type-Options: [nosniff]
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3

---
kind: Service
apiVersion: v1
metadata:
  name: docker-registry-service
  labels:
    app: docker-registry
    env: prod
spec:
  selector:
    app: docker-registry
    env: prod
  ports:
  - name: docker-registry
    protocol: TCP
    port: 5000
    targetPort: 5000

#Docker ui
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-ui-deployment
  labels:
    app: dockerui
    env: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dockerui
      env: prod
  template:
    metadata:
      labels:
        app: dockerui
        env: prod
    spec:
      containers:
      - name: dockerui-container
        image: joxit/docker-registry-ui:static
        imagePullPolicy: IfNotPresent
        env:
          - name: REGISTRY_URL
            value: "http://docker-registry-service:5000"
          - name: DELETE_IMAGES
            value: "true"
          - name: REGISTRY_TITLE
            value: "Docker-UI"
        resources:
          requests:
            memory: "512Mi"
            cpu: "200m"
          limits:
            memory: "512Mi"
            cpu: "200m"
        ports:
        - containerPort: 80 

---
kind: Service
apiVersion: v1
metadata:
  name: dockerui-service
  labels:
    app: dockerui
    env: prod
spec:
  selector:
    app: dockerui
    env: prod
  ports:
  - name: dockerui
    protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30005
  type: NodePort
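
Apply the manifest (filename as used above) and confirm the registry and the UI are running:

kubectl apply -f docker-ui.yml
kubectl get pods,svc -l app=docker-registry
kubectl get pods,svc -l app=dockerui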

2.Apache proxy rule

htpasswd -c /etc/httpd/admin-htpasswd admin

<VirtualHost *:80>
ServerName docker.initedit.com
RewriteEngine on
RewriteCond %{SERVER_NAME} =docker.initedit.com
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
</VirtualHost>

<VirtualHost *:443>
<Location />
    AuthName authorization
    AuthType Basic
    require valid-user
    AuthUserFile '/etc/httpd/admin-htpasswd'
</Location>
    ServerName docker.initedit.com
    AllowEncodedSlashes on
    RewriteEngine on
    SSLEngine On
    SSLProxyEngine On
    ProxyPreserveHost On
    RequestHeader set X-Forwarded-Proto "https"
    ProxyPass /  http://192.168.0.183:30005/
    ProxyPassReverse / http://192.168.0.183:30005/
    SSLCertificateFile /etc/letsencrypt/live/docker.initedit.com/cert.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/docker.initedit.com/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/docker.initedit.com/fullchain.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>

Note: Add htpasswd for basic authentication
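
Because the registry sits behind basic auth, log in once with the htpasswd user before tagging and pushing:

docker login docker.initedit.com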

docker tag docker.io/busybox docker.initedit.com/busybox1
docker push docker.initedit.com/busybox1

3. You can delete images from the UI or with docker_reg_tool. After deleting, run the command below inside the registry container to garbage-collect the unreferenced layers and reclaim the space:

docker exec -it name_of_registry_container bin/registry garbage-collect /etc/docker/registry/config.yml

Scale to zero with OpenFaaS serverless deployment

Create an OpenFaaS serverless deployment

1. Update the faas-idler deployment in Kubernetes from dryRun=true to dryRun=false (see the sketch below)
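
One way to make that change, assuming OpenFaaS was installed into the usual openfaas namespace:

kubectl -n openfaas edit deployment faas-idler
# change the dry-run value from true to false in the container args, then save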

2. While deploying the OpenFaaS function, add the label "com.openfaas.scale.zero=true":

sudo faas-cli deploy -f python-fn.yml  --label "com.openfaas.scale.zero=true"

https://docs.openfaas.com/architecture/autoscaling/

Setup GlusterFS on CentOS 7 and use it in Kubernetes

glusterfs1 – 10.10.10.1
glusterfs2 – 10.10.10.2

Add the entries below to /etc/hosts on both servers:
10.10.10.1 glusterfs1
10.10.10.2 glusterfs2

– Add a 10GB disk to both servers (e.g. /dev/sdb)

– On both servers

yum install centos-release-gluster -y 

mkdir -p /bricks/brick1
mkfs.xfs  /dev/sdb

echo "/dev/sdb /bricks/brick1 xfs defaults 1 2" >> /etc/fstab
mount -a 

yum install glusterfs-server -y
systemctl enable glusterd
systemctl start glusterd
systemctl status glusterd

-On glusterfs1

gluster peer probe glusterfs2

-On glusterfs2

gluster peer probe glusterfs1
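
– Verify the peering on either node before creating the volume

gluster peer status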

-On any one server

gluster volume create gv0 replica 2 glusterfs1:/bricks/brick1/gv0 glusterfs2:/bricks/brick1/gv0

gluster volume start gv0

gluster volume info

-Verify glusterfs mount

mkdir /mnt/gv0 
mount -t glusterfs glusterfs1:/gv0 /mnt/gv0 

Use GlusterFS in a Kubernetes deployment

-Install the GlusterFS client on all Kubernetes nodes

yum install centos-release-gluster -y 
yum install glusterfs -y

glusterfs-nginx-deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gluster-nginx
spec:
  selector:
    matchLabels:
      run: gluster-nginx
  replicas: 1
  template:
    metadata:
      labels:
        run: gluster-nginx
    spec:
      containers:
      - name: gluster-nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: "/mnt/glusterfs"
          name: glusterfsvol
      volumes:
      - name: glusterfsvol
        glusterfs:
          endpoints: glusterfs-cluster
          path: gv0   # Gluster volume name; the server IPs come from the glusterfs-cluster Endpoints object below
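
The volume above refers to an Endpoints object named glusterfs-cluster, which is not part of the deployment manifest; a minimal sketch listing both Gluster nodes (the port value is arbitrary but required):

apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster
subsets:
  - addresses:
      - ip: 10.10.10.1
      - ip: 10.10.10.2
    ports:
      - port: 1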

Running SonarQube with PostgreSQL on Kubernetes

Note:
– For SonarQube 8, set sysctl -w vm.max_map_count=262144 on the host machine
– Move all extension jars from the container to your extensions dir

1.postgres.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment
  labels:
    app: postgres
    env: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
      env: prod
  template:
    metadata:
      labels:
        app: postgres
        env: prod
    spec:
      containers:
      - name: postgres-container
        image: postgres
        imagePullPolicy: IfNotPresent
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "200m"
        env:
          - name: POSTGRES_PASSWORD
            value: "PASSWORD"  
        volumeMounts:
          - name: postgres-data
            mountPath: /var/lib/postgresql/data
        ports:
        - containerPort: 5432    
      volumes:
        - name: postgres-data
          nfs:
            server: 192.168.0.184
            path: "/opt/nfs1/postgres/data"
---
kind: Service
apiVersion: v1
metadata:
  name: postgres-service
  labels:
    app: postgres
    env: prod
spec:
  selector:
    app: postgres
    env: prod
  ports:
  - name: postgres
    protocol: TCP
    port: 5432
    targetPort: 5432
    nodePort: 30432
  type: NodePort

2.sonarqube.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sonarqube-deployment
  labels:
    app: sonarqube
    env: prod
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sonarqube
      env: prod
  template:
    metadata:
      labels:
        app: sonarqube
        env: prod
    spec:
      containers:
      - name: sonarqube-container
        image: sonarqube:7.7-community
        imagePullPolicy: IfNotPresent
        env:
          - name: SONARQUBE_JDBC_USERNAME
            value: postgres
          - name: SONARQUBE_JDBC_PASSWORD
            value: "PASSWORD"
          - name: SONARQUBE_JDBC_URL
            value: jdbc:postgresql://postgres-service:5432/sonar
        resources:
          requests:
            memory: "1024Mi"
            cpu: "500m"
          limits:
            memory: "2048Mi"
            cpu: "1000m"
        volumeMounts:
          - name: sonarqube-conf
            mountPath: /opt/sonarqube/conf
          - name: sonarqube-data
            mountPath: /opt/sonarqube/data
          - name: sonarqube-logs
            mountPath: /opt/sonarqube/logs
          - name: sonarqube-extensions
            mountPath: /opt/sonarqube/extensions      
        ports:
        - containerPort: 9000    
      volumes:
        - name: sonarqube-conf
          nfs:
            server: 192.168.0.184
            path: "/opt/nfs1/sonarqube/conf"
        - name: sonarqube-data
          nfs:
            server: 192.168.0.184
            path: "/opt/nfs1/sonarqube/data"
        - name: sonarqube-logs
          nfs:
            server: 192.168.0.184
            path: "/opt/nfs1/sonarqube/logs"
        - name: sonarqube-extensions
          nfs:
            server: 192.168.0.184
            path: "/opt/nfs1/sonarqube/extensions"
---
kind: Service
apiVersion: v1
metadata:
  name: sonarqube-service
  labels:
    app: sonarqube
    env: prod
spec:
  selector:
    app: sonarqube
    env: prod
  ports:
  - name: sonarqube
    protocol: TCP
    port: 9000
    targetPort: 9000
    nodePort: 30900
  type: NodePort

Note: Create the sonar database in Postgres (the JDBC URL above expects a database named sonar).
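
One way to create it from the running Postgres pod (the pod name is a placeholder):

kubectl exec -it <postgres-pod> -- psql -U postgres -c "CREATE DATABASE sonar;"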

More info : https://stackoverflow.com/questions/16825331/disallow-anonymous-users-to-access-sonar