Tuesday, May 21, 2024

K8s All Topics Details

 https://github.com/devopsproin/certified-kubernetes-administrator/tree/main

Sunday, May 19, 2024

PV Demo Using HostPath and EBS

Follow Below Link For HostPath Demo:

https://srdev.hashnode.dev/managing-persistent-volumes-in-your-deployment
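In addition to the linked demo, a minimal hostPath PersistentVolume and matching claim can be sketched as below (the names and the /mnt/data path are illustrative assumptions, not taken from the demo; hostPath is suitable for single-node/demo clusters only):

```yaml
# hostpath-pv.yaml — PV backed by a directory on the node (demo use only)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hostpath-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
# Claim that binds to the PV above (requested size and access mode must be satisfiable)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hostpath-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

Apply with kubectl apply -f hostpath-pv.yaml and check binding with kubectl get pv,pvc.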

Follow Below Guide For EBS PV and PVC Demo:

Installing and Configuring AWS EBS CSI Driver for Kubernetes Cluster with Dynamic Provisioning of EBS Volumes


This guide provides the steps to install and configure the AWS EBS CSI Driver on a Kubernetes cluster to enable the use of Amazon Elastic Block Store (EBS) volumes as persistent volumes.

Prerequisites

  • A running Kubernetes cluster.
  • A Kubernetes version of 1.20 or greater.
  • An AWS account with access to create an IAM user and obtain an access key and secret key.

Installing Helm

Helm is a package manager for Kubernetes that simplifies the deployment and management of applications.

  • Run the following commands to install Helm:
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

Installing AWS EBS CSI Driver

  • Create a secret to store your AWS access key and secret key using the following command:
kubectl create secret generic aws-secret \
    --namespace kube-system \
    --from-literal "key_id=${AWS_ACCESS_KEY_ID}" \
    --from-literal "access_key=${AWS_SECRET_ACCESS_KEY}"
  • Add the AWS EBS CSI Driver Helm chart repository:
helm repo add aws-ebs-csi-driver https://kubernetes-sigs.github.io/aws-ebs-csi-driver
helm repo update
  • Deploy the AWS EBS CSI Driver using the following command:
helm upgrade --install aws-ebs-csi-driver \
    --namespace kube-system \
    aws-ebs-csi-driver/aws-ebs-csi-driver
  • Verify that the driver has been deployed and the pods are running:
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-ebs-csi-driver

Provisioning EBS Volumes


https://docs.aws.amazon.com/eks/latest/userguide/ebs-sample-app.html

  • Create a storageclass.yaml file and apply it using the following command:
kubectl apply -f storageclass.yaml
  • Create a pvc.yaml file and apply it using the following command:
kubectl apply -f pvc.yaml
  • Create a pod.yaml file and apply it using the following command:
kubectl apply -f pod.yaml
  • Verify that the EBS volume has been provisioned and attached to the pod:
kubectl exec -it app -- df -h /data

The output of the above command should show the mounted EBS volume and its available disk space.
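For reference, the three manifest files used above could look like the following sketch, based on the AWS sample-app pattern linked earlier. The names ebs-sc, ebs-claim, and app are assumptions and must match across the files (app matches the kubectl exec command above):

```yaml
# storageclass.yaml — dynamic provisioning through the EBS CSI driver
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
---
# pvc.yaml — claim that triggers EBS volume creation when a pod consumes it
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ebs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 4Gi
---
# pod.yaml — pod that mounts the claim at /data and writes a timestamp every 5s
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: centos
      command: ["/bin/sh", "-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: ebs-claim
```

Because of WaitForFirstConsumer, the EBS volume is only created once the pod is scheduled, in the same availability zone as its node.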





Wednesday, May 15, 2024

Ingress Demo Using Minikube

 https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/

Sunday, May 12, 2024

EKS Cluster Setup

 https://medium.com/@mudasirhaji/setup-kubernetes-cluster-on-amazon-eks-56cbbadace04

Wednesday, May 8, 2024

K8S Cluster Setup Steps (AWS Amazon Linux)

=======================================================================

Master

1  hostnamectl set-hostname k8master

2  yum install docker -y

3  systemctl start docker; systemctl enable docker

4  sudo setenforce 0

5  Add K8S repo:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

6  yum repolist

7  sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes --disableplugin=priorities

8  kubeadm init

   --> Note down the kubeadm join command printed at the end; workers use it to join the cluster.

9  Set up kubeconfig so kubectl can reach the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

   (Alternatively, as root: export KUBECONFIG=/etc/kubernetes/admin.conf)

10  Install a pod network add-on, e.g. Calico (v3.14 below is an older manifest; pick a Calico release that supports your Kubernetes version):

kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml

11  kubectl get pods -n kube-system

12  kubectl get nodes   --> nodes become Ready once the network add-on is up



Worker:

1  hostnamectl set-hostname k8worker1

2  yum install docker -y

3  systemctl start docker; systemctl enable docker

4  sudo setenforce 0

5  Add K8S repo:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF

6  yum repolist

7  sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes --disableplugin=priorities

8  Use the join command to connect to the control plane (take the command and token from the master's kubeadm init output), e.g.:

kubeadm join 172.31.63.238:6443 --token 15zdfy.iogdr2v5ngur6cwo \
        --discovery-token-ca-cert-hash sha256:6fbc63b2d51467ec36482022d207af7672e1330168ceee673eb4182538f324fb



Minikube Setup Steps on AWS Amazon EC2 instance

Pre-requisites:

https://minikube.sigs.k8s.io/docs/start/  

===============

yum install docker -y

systemctl enable docker

systemctl start docker

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64

minikube version

minikube start --driver=docker --force

minikube status

curl -LO https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl

sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

kubectl version

Deploy Sample Pod:

https://kubernetes.io/docs/concepts/workloads/pods/
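The sample manifest from the linked docs can be saved locally (e.g. as nginx-pod.yaml, a name chosen here for illustration) and applied with kubectl apply -f nginx-pod.yaml:

```yaml
# nginx-pod.yaml — minimal sample Pod from the Kubernetes docs
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.14.2
      ports:
        - containerPort: 80
```

Verify it is running with kubectl get pods.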

 

Wednesday, May 1, 2024

Multimanager Docker Swarm Setup

Step 1) Create AWS EC2 instances and install Docker on them.


Step 2) Initialize the swarm cluster using the below command

#docker swarm init

or

#docker swarm init --advertise-addr <public/private ip>



Step 3) A worker join token is generated automatically when you initialize the swarm cluster.
If you need to regenerate it, use the below command.
 #docker swarm join-token worker

Step 4) Generate a manager token using the below command
  #docker swarm join-token manager

Step 5) Log in on another manager host and execute the swarm join command generated by the above command.

Step 6) If you restart a server, there is no need to manually rejoin the cluster; the node rejoins automatically.

Step 7) If you want to re-initialize the swarm cluster, follow the below steps.

docker swarm leave --force    --> remove the node from the old cluster
systemctl stop docker         --> stop docker
rm -rf /var/lib/docker/swarm  --> remove existing swarm configuration
systemctl start docker        --> start docker
docker swarm init             --> initiate a new cluster










 

Sample Game App Deployment on EKS cluster

 https://padmakshi.medium.com/setting-up-an-eks-cluster-and-deploying-a-game-application-a-step-by-step-guide-08790e0be117