Monday, March 27, 2023

K8s Secret

 In Kubernetes, a Secret is an object that allows you to store and manage sensitive information, such as passwords, API keys, and certificates. 

Secrets are stored in a cluster, and they can be accessed by Pods or other Kubernetes objects.


To create a Secret from a YAML manifest, you encode the sensitive values as base64 and place them under the data field. (Alternatively, the stringData field or the kubectl create secret command accepts plain-text values and encodes them for you.)
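The encoded values used in the manifest below can be produced with the standard base64 tool:

```shell
# Encode the values that go into the Secret's data field.
# Note -n: without it, echo appends a newline and the encoding changes.
echo -n 'username' | base64   # dXNlcm5hbWU=
echo -n 'password' | base64   # cGFzc3dvcmQ=

# Decode to verify what is stored:
echo 'dXNlcm5hbWU=' | base64 --decode   # username
```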

Here's an example YAML file that creates a Secret named "mysecret" with a username and password:


apiVersion: v1

kind: Secret

metadata:

  name: mysecret

type: Opaque

data:

  username: dXNlcm5hbWU= # base64-encoded "username"

  password: cGFzc3dvcmQ= # base64-encoded "password"




In this example, the Secret is of type "Opaque," which means Kubernetes treats its contents as arbitrary user-defined data. The data field contains the base64-encoded username and password.


Once you have created a Secret, you can reference it in your Pod's YAML file using environment variables or volumes. 


For example, to use the username and password from the "mysecret" Secret as environment variables in a Pod, you could add the following to your Pod's YAML file:


env:

- name: USERNAME

  valueFrom:

    secretKeyRef:

      name: mysecret

      key: username

- name: PASSWORD

  valueFrom:

    secretKeyRef:

      name: mysecret

      key: password


This would create two environment variables in the Pod named "USERNAME" and "PASSWORD," with values equal to the decoded contents of the "username" and "password" keys in the "mysecret" Secret, respectively.
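Secrets can also be consumed as files instead of environment variables. A minimal sketch, assuming the same "mysecret" Secret (the mount path is an arbitrary choice): each key in the Secret becomes a file under the mount path.

```yaml
# Pod spec fragment: mount "mysecret" so the container can read
# /etc/creds/username and /etc/creds/password
volumes:
- name: creds
  secret:
    secretName: mysecret
containers:
- name: app
  image: nginx
  volumeMounts:
  - name: creds
    mountPath: /etc/creds   # arbitrary mount path
    readOnly: true
```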

Sunday, March 26, 2023

K8s ConfigMap


A ConfigMap stores configuration settings that your Kubernetes Pods consume.

In Kubernetes, a ConfigMap is a resource object that stores configuration data in key-value pairs. 

This data can be used by containers running in a pod to configure applications and services. 

ConfigMaps provide a way to separate configuration data from application code, making it easier to manage configuration across multiple environments and deployments.

ConfigMaps can be created manually using the kubectl command-line tool or using YAML configuration files. 

Once a ConfigMap is created, it can be mounted as a volume or used as environment variables in a container. This allows applications to be configured without needing to rebuild or redeploy them.
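As a sketch of the volume approach (the ConfigMap name and mount path here are assumptions), each key of the ConfigMap appears as a file in the mounted directory:

```yaml
# Pod spec fragment: expose ConfigMap keys as files under /etc/config
volumes:
- name: config
  configMap:
    name: app-config        # hypothetical ConfigMap name
containers:
- name: app
  image: nginx
  volumeMounts:
  - name: config
    mountPath: /etc/config
    readOnly: true
```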

ConfigMaps can be used to store any type of configuration data, including database connection strings, API endpoints, and application settings. 

They can be updated without restarting the associated containers: values consumed as mounted files are refreshed automatically (after a short delay), while values injected as environment variables are only read at container start. ConfigMaps can also be used in conjunction with Secrets to manage sensitive configuration data such as passwords and API keys.


How does a ConfigMap work?

A ConfigMap is a dictionary of key-value pairs that store configuration settings for your applications.

First, create a ConfigMap in your cluster by tweaking our sample YAML to your needs.

Second, consume the ConfigMap in your Pods and use its values.



Example:

This YAML creates a ConfigMap with the key database set to mongodb, and the database_uri and other keys set to the values given in the example code.
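A sketch of such a ConfigMap (the metadata name and the database_uri value are assumptions for illustration):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-configmap
data:
  database: mongodb
  database_uri: mongodb://localhost:27017   # assumed value
```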



Using a ConfigMap in Environment Variables

The key to adding your ConfigMap as environment variables to your pods is the envFrom property in your Pod’s YAML.
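A minimal sketch of envFrom (assuming a ConfigMap named app-config): every key in the ConfigMap becomes an environment variable in the container.

```yaml
# Pod spec fragment: inject all keys of a ConfigMap as environment variables
containers:
- name: app
  image: nginx
  envFrom:
  - configMapRef:
      name: app-config      # hypothetical ConfigMap name
```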




Friday, March 17, 2023

Minikube Setup Steps

Step-1) ubuntu@ip-172-31-3-187:~$ sudo apt-get update

Hit:1 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease

Get:2 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease [119 kB]

Get:3 http://security.ubuntu.com/ubuntu jammy-security InRelease [110 kB]

Get:4 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease [107 kB]

Get:5 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy/universe amd64 Packages [14.1 MB]

Get:6 http://security.ubuntu.com/ubuntu jammy-security/main amd64 Packages [692 kB]

Get:7 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy/universe Translation-en [5652 kB]

Get:8 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy/universe amd64 c-n-f Metadata [286 kB]

Get:9 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy/multiverse amd64 Packages [217 kB]

Get:10 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy/multiverse Translation-en [112 kB]

Get:11 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy/multiverse amd64 c-n-f Metadata [8372 B]

Get:12 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 Packages [948 kB]

Get:13 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main Translation-en [205 kB]

Get:14 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 c-n-f Metadata [13.7 kB]

Get:15 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 Packages [684 kB]

Get:16 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/restricted Translation-en [107 kB]

Get:17 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/restricted amd64 c-n-f Metadata [584 B]

Get:18 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 Packages [890 kB]

Get:19 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe Translation-en [177 kB]

Get:20 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 c-n-f Metadata [18.1 kB]

Get:21 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 Packages [24.1 kB]

Get:22 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/multiverse Translation-en [6312 B]

Get:23 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/multiverse amd64 c-n-f Metadata [444 B]

Get:24 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/main amd64 Packages [40.7 kB]

Get:25 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/main Translation-en [9800 B]

Get:26 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/main amd64 c-n-f Metadata [392 B]

Get:27 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/restricted amd64 c-n-f Metadata [116 B]

Get:28 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/universe amd64 Packages [19.5 kB]

Get:29 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/universe Translation-en [14.0 kB]

Get:30 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/universe amd64 c-n-f Metadata [392 B]

Get:31 http://security.ubuntu.com/ubuntu jammy-security/main Translation-en [142 kB]

Get:32 http://security.ubuntu.com/ubuntu jammy-security/main amd64 c-n-f Metadata [8832 B]

Get:33 http://security.ubuntu.com/ubuntu jammy-security/restricted amd64 Packages [642 kB]

Get:34 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports/multiverse amd64 c-n-f Metadata [116 B]

Get:35 http://security.ubuntu.com/ubuntu jammy-security/restricted Translation-en [100 kB]

Get:36 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 Packages [710 kB]

Get:37 http://security.ubuntu.com/ubuntu jammy-security/universe Translation-en [115 kB]

Get:38 http://security.ubuntu.com/ubuntu jammy-security/universe amd64 c-n-f Metadata [13.8 kB]

Get:39 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 Packages [19.4 kB]

Get:40 http://security.ubuntu.com/ubuntu jammy-security/multiverse Translation-en [4068 B]

Get:41 http://security.ubuntu.com/ubuntu jammy-security/multiverse amd64 c-n-f Metadata [240 B]

Fetched 26.3 MB in 5s (5744 kB/s)

Reading package lists... Done

Step 2) 

ubuntu@ip-172-31-3-187:~$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 45.8M  100 45.8M    0     0  56.9M      0 --:--:-- --:--:-- --:--:-- 56.8M

ubuntu@ip-172-31-3-187:~$ ls -l

total 46912

-rw-rw-r-- 1 ubuntu ubuntu 48037888 Mar 18 03:37 kubectl

ubuntu@ip-172-31-3-187:~$ chmod +x ./kubectl

ubuntu@ip-172-31-3-187:~$ sudo mv ./kubectl /usr/local/bin/kubectl

ubuntu@ip-172-31-3-187:~$ sudo apt-get install docker.io -y

Reading package lists... Done

Building dependency tree... Done

Reading state information... Done

The following additional packages will be installed:

  bridge-utils containerd dns-root-data dnsmasq-base pigz runc ubuntu-fan

Suggested packages:

  ifupdown aufs-tools cgroupfs-mount | cgroup-lite debootstrap docker-doc rinse zfs-fuse

  | zfsutils

The following NEW packages will be installed:

  bridge-utils containerd dns-root-data dnsmasq-base docker.io pigz runc ubuntu-fan

0 upgraded, 8 newly installed, 0 to remove and 53 not upgraded.

Need to get 72.4 MB of archives.

After this operation, 287 MB of additional disk space will be used.

Get:1 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy/universe amd64 pigz amd64 2.6-1 [63.6 kB]

Get:2 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 bridge-utils amd64 1.7-1ubuntu3 [34.4 kB]

Get:3 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 runc amd64 1.1.4-0ubuntu1~22.04.1 [4241 kB]

Get:4 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 containerd amd64 1.6.12-0ubuntu1~22.04.1 [34.4 MB]

Get:5 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy/main amd64 dns-root-data all 2021011101 [5256 B]

Get:6 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/main amd64 dnsmasq-base amd64 2.86-1.1ubuntu0.1 [354 kB]

Get:7 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates/universe amd64 docker.io amd64 20.10.21-0ubuntu1~22.04.2 [33.2 MB]

Get:8 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy/universe amd64 ubuntu-fan all 0.12.16 [35.2 kB]

Fetched 72.4 MB in 17s (4327 kB/s)

Preconfiguring packages ...

Selecting previously unselected package pigz.

(Reading database ... 63605 files and directories currently installed.)

Preparing to unpack .../0-pigz_2.6-1_amd64.deb ...

Unpacking pigz (2.6-1) ...

Selecting previously unselected package bridge-utils.

Preparing to unpack .../1-bridge-utils_1.7-1ubuntu3_amd64.deb ...

Unpacking bridge-utils (1.7-1ubuntu3) ...

Selecting previously unselected package runc.

Preparing to unpack .../2-runc_1.1.4-0ubuntu1~22.04.1_amd64.deb ...

Unpacking runc (1.1.4-0ubuntu1~22.04.1) ...

Selecting previously unselected package containerd.

Preparing to unpack .../3-containerd_1.6.12-0ubuntu1~22.04.1_amd64.deb ...

Unpacking containerd (1.6.12-0ubuntu1~22.04.1) ...

Selecting previously unselected package dns-root-data.

Preparing to unpack .../4-dns-root-data_2021011101_all.deb ...

Unpacking dns-root-data (2021011101) ...

Selecting previously unselected package dnsmasq-base.

Preparing to unpack .../5-dnsmasq-base_2.86-1.1ubuntu0.1_amd64.deb ...

Unpacking dnsmasq-base (2.86-1.1ubuntu0.1) ...

Selecting previously unselected package docker.io.

Preparing to unpack .../6-docker.io_20.10.21-0ubuntu1~22.04.2_amd64.deb ...

Unpacking docker.io (20.10.21-0ubuntu1~22.04.2) ...

Selecting previously unselected package ubuntu-fan.

Preparing to unpack .../7-ubuntu-fan_0.12.16_all.deb ...

Unpacking ubuntu-fan (0.12.16) ...

Setting up dnsmasq-base (2.86-1.1ubuntu0.1) ...

Setting up runc (1.1.4-0ubuntu1~22.04.1) ...

Setting up dns-root-data (2021011101) ...

Setting up bridge-utils (1.7-1ubuntu3) ...

Setting up pigz (2.6-1) ...

Setting up containerd (1.6.12-0ubuntu1~22.04.1) ...

Created symlink /etc/systemd/system/multi-user.target.wants/containerd.service → /lib/systemd/system/containerd.service.

Setting up ubuntu-fan (0.12.16) ...

Created symlink /etc/systemd/system/multi-user.target.wants/ubuntu-fan.service → /lib/systemd/system/ubuntu-fan.service.

Setting up docker.io (20.10.21-0ubuntu1~22.04.2) ...

Adding group `docker' (GID 122) ...

Done.

Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.

Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /lib/systemd/system/docker.socket.

Processing triggers for dbus (1.12.20-2ubuntu4.1) ...

Processing triggers for man-db (2.10.2-1) ...

Scanning processes...

Scanning linux images...


Running kernel seems to be up-to-date.


No services need to be restarted.


No containers need to be restarted.


No user sessions are running outdated binaries.


No VM guests are running outdated hypervisor (qemu) binaries on this host.

ubuntu@ip-172-31-3-187:~$

ubuntu@ip-172-31-3-187:~$ ls -l

total 0


Step 3) Download minikube

ubuntu@ip-172-31-3-187:~$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current

                                 Dload  Upload   Total   Spent    Left  Speed

100 77.3M  100 77.3M    0     0  51.2M      0  0:00:01  0:00:01 --:--:-- 51.2M

ubuntu@ip-172-31-3-187:~$ ls -l

total 79212

-rw-rw-r-- 1 ubuntu ubuntu 81109436 Mar 18 03:40 minikube-linux-amd64

ubuntu@ip-172-31-3-187:~$ mv minikube-linux-amd64 minikube


ubuntu@ip-172-31-3-187:~$ sudo install minikube /usr/local/bin/minikube


ubuntu@ip-172-31-3-187:~$ minikube version

minikube version: v1.29.0

commit: ddac20b4b34a9c8c857fc602203b6ba2679794d3



Step 4) add current user in docker group

ubuntu@ip-172-31-3-187:~$ sudo usermod -aG docker $USER && newgrp docker


Step 5) Start Minikube

ubuntu@ip-172-31-3-187:~$ minikube start

😄  minikube v1.29.0 on Ubuntu 22.04 (xen/amd64)

✨  Automatically selected the docker driver. Other choices: ssh, none

📌  Using Docker driver with root privileges

👍  Starting control plane node minikube in cluster minikube

🚜  Pulling base image ...

💾  Downloading Kubernetes v1.26.1 preload ...

    > preloaded-images-k8s-v18-v1...:  397.05 MiB / 397.05 MiB  100.00% 60.18 M

    > gcr.io/k8s-minikube/kicbase...:  407.19 MiB / 407.19 MiB  100.00% 21.24 M

🔥  Creating docker container (CPUs=2, Memory=2200MB) ...

🐳  Preparing Kubernetes v1.26.1 on Docker 20.10.23 ...

    ▪ Generating certificates and keys ...

    ▪ Booting up control plane ...

    ▪ Configuring RBAC rules ...

🔗  Configuring bridge CNI (Container Networking Interface) ...

    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5

🔎  Verifying Kubernetes components...

🌟  Enabled addons: default-storageclass, storage-provisioner

🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

ubuntu@ip-172-31-3-187:~$ kubectl get nodes

NAME       STATUS   ROLES           AGE   VERSION

minikube   Ready    control-plane   25s   v1.26.1

ubuntu@ip-172-31-3-187:~$

Kubernetes POD

 

What are Pods in Kubernetes?

What are Kubernetes Pods? How can we use them to run our containers? How can we interact with them using kubectl and YAML?

What are Pods?

What’s the difference between single container and multi-container Pods?

How can we create Pods in Kubernetes?

apiVersion: v1
kind: Pod
metadata:
  name: nginx-2
  labels:
    name: nginx-2
    env: production
spec:
  containers:
  - name: nginx
    image: nginx

How can we deploy and interact with our Pods using kubectl?

kubectl apply -f mypod.yaml
kubectl get pods
kubectl port-forward mypod 8080:80
kubectl delete pod mypod
kubectl delete deployment mydeployment

How can we ensure that our Pods in Kubernetes are healthy?
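A common answer is liveness and readiness probes: the kubelet restarts a container whose liveness probe fails, and only routes traffic to a Pod once its readiness probe passes. A minimal sketch (the probe paths and timings here are illustrative assumptions):

```yaml
# Pod spec fragment: container with HTTP health probes
containers:
- name: nginx
  image: nginx
  livenessProbe:          # restart the container if this fails
    httpGet:
      path: /             # assumed health endpoint
      port: 80
    initialDelaySeconds: 5
    periodSeconds: 10
  readinessProbe:         # withhold Service traffic until this passes
    httpGet:
      path: /
      port: 80
    periodSeconds: 5
```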

How to Spin Up a K8s Cluster


Currently, several services around the globe provide different Kubernetes implementations. Among the most popular ones, you will find managed offerings such as Amazon EKS, Google GKE, and Azure AKS, along with local tools such as Minikube and kind.

K8s Initial Namespaces


Kubernetes starts with four initial namespaces:

default
Kubernetes includes this namespace so that you can start using your new cluster without first creating a namespace.
kube-node-lease
This namespace holds Lease objects associated with each node. Node leases allow the kubelet to send heartbeats so that the control plane can detect node failure.
kube-public
This namespace is readable by all clients (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.
kube-system
The namespace for objects created by the Kubernetes system.


Thursday, March 9, 2023

Deploy Application on Swarm Cluster

[root@ip-172-31-15-174 ~]# cat myservice.yml

version: '3'

services:

  my-app:

    image: nginx:latest

    deploy:

      replicas: 3

      restart_policy:

        condition: on-failure

    ports:

      - "8081:80" 



[root@ip-172-31-15-174 ~]# docker stack deploy --compose-file myservice.yml test-app

Creating network test-app_default

Creating service test-app_my-app


[root@ip-172-31-15-174 ~]# docker ps -a

CONTAINER ID   IMAGE                      COMMAND                  CREATED         STATUS                          PORTS

   NAMES

1050478c859a   nginx:latest               "/docker-entrypoint.…"   2 minutes ago   Exited (0) About a minute ago

   test-app_my-app.1.ifnzpj756lq6f27g3h7fdvrxz

0f94419281dc   nginx:latest               "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes                    80/tcp

   test-app_my-app.2.vjo1lcb3tismhc4psq21xe555

880fe2587512   nginx:latest               "/docker-entrypoint.…"   2 minutes ago   Up 2 minutes                    80/tcp

   test-app_my-app.3.j2wonx6yr7v3vejt65xv7rfqa

11d0593d925d   dockersamples/visualizer   "/sbin/tini -- node …"   8 minutes ago   Up 8 minutes (healthy)          0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   strange_ritchie

[root@ip-172-31-15-174 ~]#

[root@ip-172-31-15-174 ~]#

[root@ip-172-31-15-174 ~]#

[root@ip-172-31-15-174 ~]# docker service ls

ID             NAME              MODE         REPLICAS   IMAGE          PORTS

1daq9y4d70yr   test-app_my-app   replicated   2/3        nginx:latest   *:8081->80/tcp

Wednesday, March 8, 2023

Docker Swarm V/s Kubernetes

 



  1. Architecture: Docker Swarm is built into the Docker ecosystem, while Kubernetes is a standalone tool that can be run on any infrastructure.

  2. Scalability: While both tools are designed to be scalable, Kubernetes is better suited for managing large, complex containerized applications.

  3. Flexibility: Kubernetes is more flexible than Docker Swarm, as it can be configured and extended to meet the needs of any organization.

  4. Complexity: Kubernetes is a more complex tool than Docker Swarm, which means it can take longer to learn and set up.


Docker Compose V/s Docker Swarm

 Compose and Swarm are both tools provided by Docker for managing and deploying containerized applications, but they have different use cases.

Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to define your application’s services, networks, and volumes in a single file, and then use that file to create and start all the services that make up your application. Docker Compose is useful for development and testing environments where you need to spin up multiple containers quickly.

Docker Swarm, on the other hand, is a native clustering and orchestration tool for Docker. It allows you to manage a cluster of Docker nodes and deploy containerized applications across that cluster. Docker Swarm provides features like service discovery, load balancing, and rolling updates, making it a good choice for production environments where high availability and scalability are critical.

In summary, Docker Compose is ideal for single host environments, while Docker Swarm is designed for more complex multi-host environments where scaling and high availability are important.

Tuesday, March 7, 2023

Docker Swarm Installation

 Install docker on Master and worker node and master node execute docker swarm init command.

[root@ip-172-31-34-29 ~]# docker swarm init

Swarm initialized: current node (xk88ov6mds5qdmzaj2obkfwyp) is now a manager.

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-58apbdwqjurmocvp1mlbgu85w8340g8beh65ztuvjw43x30cqp-bqthz5iqs32vtu0tiuc6efqhr 172.31.34.29:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.


Add worker node and manager in swarm cluster.

docker swarm join-token worker

docker swarm join-token manager

Remove manager and worker node:

docker swarm leave

#docker node ls ==> to check swarm cluster node details.


===============

To install a specific Docker Compose release, substitute the version tag (for example, v2.16.0) into the download URL:

sudo curl -L "https://github.com/docker/compose/releases/download/v2.16.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose



step-1) Create 2 nodes


step 2) One master and one worker


step 3) Make sure Docker is installed on both hosts.


step 4) #docker --version



step 5) execute below commands on master node


   #docker swarm init


step 6) execute below command on worker node:

docker swarm join --token SWMTKN-1-58apbdwqjurmocvp1mlbgu85w8340g8beh65ztuvjw43x30cqp-bqthz5iqs32vtu0tiuc6efqhr 172.31.34.29:2377  ==> Please check your own docker swarm init command output for the token.



step 7) execute below commands on master node:

   #docker node ls

   #docker run -it -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer

   (https://github.com/dockersamples/docker-swarm-visualizer)


step 8) create a yml file and deploy it in the swarm cluster.


[root@master ~]# cat sample.yml

version: '3'


services:

  bb-app:

    image: nginx

    ports:

      - "8000:80"

  


 docker stack deploy -c sample.yml demo

 =========================================


docker swarm init  ==> run the join command it prints on each worker node (open port 2377 first)

docker node ls

docker service ls

docker run -it -d -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock dockersamples/visualizer

docker ps -a

docker service create --name nginxweb -p 8081:80 nginx

docker service create --name nginxweb1 -p 8082:80 --replicas 5 nginx

docker service ps nginxweb1

docker service scale nginxweb1=7

docker service scale nginxweb1=1

docker node update --availability drain docker

docker node update --availability active docker

docker node ls

 

Docker Compose Installation

 Docker Compose is a tool that was developed to help define and share multi-container applications. With Compose, we can create a YAML file to define the services and with a single command, can spin everything up or tear it all down.


Download docker-compose:

1. Run this command to download the current stable release of Docker Compose:

$  sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

2. Apply executable permissions to the binary:

chmod +x /usr/local/bin/docker-compose

3. #ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose

[root@ip-172-31-34-29 DockerDemo]# docker-compose --version

docker-compose version 1.29.2, build 5becea4c


To run an nginx container manually:

root@ip-172-31-34-29:~$ docker container run -itd nginx

But to create the same container using a docker-compose file, we need to create a YAML file with vi docker-compose.yaml; a snippet is shown below.

version: '3'

services:

  nginxwebapp:

    image: nginx

    ports: 

      - "8000:80"

Version:- The property is defined by the specification for backward compatibility, but it is only informative.

Services:- It defines the applications or containers that need to be created. In the example above, under nginxwebapp, it defines the first container, which will be built from the nginx image and expose ports "8000:80". By giving the image name as nginx we specify that the container should be built from nginx; any image name can be used as required. The ports property defines the host-to-container port mapping: port 8000 on the host maps to port 80 of the container.
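The docker-compose up output further below was produced from a different compose file with two services (webapp1 and webapp2, on host ports 8000 and 8001); a sketch of a file that would yield those container names and port mappings, with service names inferred from the output:

```yaml
version: '3'
services:
  webapp1:
    image: nginx
    ports:
      - "8000:80"
  webapp2:
    image: nginx
    ports:
      - "8001:80"
```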


[root@ip-172-31-34-29 DockerDemo]# docker-compose up -d

Starting dockerdemo_webapp2_1 ... done

Starting dockerdemo_webapp1_1 ... done

[root@ip-172-31-34-29 DockerDemo]# docker ps

CONTAINER ID   IMAGE     COMMAND                  CREATED          STATUS         PORTS                                   NAMES

80ea921350ed   nginx     "/docker-entrypoint.…"   50 seconds ago   Up 2 seconds   0.0.0.0:8001->80/tcp, :::8001->80/tcp   dockerdemo_webapp2_1

d679f04c516f   nginx     "/docker-entrypoint.…"   50 seconds ago   Up 2 seconds   0.0.0.0:8000->80/tcp, :::8000->80/tcp   dockerdemo_webapp1_1


[root@ip-172-31-34-29 DockerDemo]# docker-compose down

Stopping dockerdemo_webapp2_1 ... done

Stopping dockerdemo_webapp1_1 ... done

Removing dockerdemo_webapp2_1 ... done

Removing dockerdemo_webapp1_1 ... done

Removing network dockerdemo_default


Docker Swarm V/s Docker Compose:

The difference between Docker Swarm and Docker Compose is that Compose is used for configuring multiple containers on the same host. Docker Swarm is different in that it is a container orchestration tool: it lets you run containers across multiple hosts, similar to Kubernetes.





Monday, March 6, 2023

Shell and Executable form

 Exec Form:

Dockerfile:

FROM ubuntu

RUN apt-get update && apt-get install -y nginx

EXPOSE 80

VOLUME My_Vol

ENTRYPOINT ["nginx", "-g", "daemon off;"]  ==> executable form


Login to Container and check nginx process:

[root@ip-172-31-34-29 DockerDemo]# docker exec -it 6633147f038d sh

# top

top - 17:59:43 up 5 days, 16:01,  0 users,  load average: 0.07, 0.04, 0.01

Tasks:   4 total,   1 running,   3 sleeping,   0 stopped,   0 zombie

%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

MiB Mem :    964.8 total,    175.9 free,    176.1 used,    612.7 buff/cache

MiB Swap:      0.0 total,      0.0 free,      0.0 used.    641.1 avail Mem


  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND

    1 root      20   0   55200  12036  10400 S   0.0   1.2   0:00.20 nginx

    7 www-data  20   0   55524   3260   1324 S   0.0   0.3   0:00.00 nginx

   13 root      20   0    2888    968    880 S   0.0   0.1   0:00.19 sh

   18 root      20   0    7300   3476   2932 R   0.0   0.4   0:00.00 top



Shell Form:


Dockerfile:

[root@ip-172-31-34-29 DockerDemo]# cat Dockerfile

FROM ubuntu

RUN apt-get update && apt-get install -y nginx

EXPOSE 80

VOLUME My_Vol

#ENTRYPOINT ["nginx", "-g", "daemon off;"]

ENTRYPOINT nginx -g "daemon off;"



[root@ip-172-31-34-29 DockerDemo]# docker exec -it 13af7484a21d  sh

# top

top - 18:02:06 up 5 days, 16:03,  0 users,  load average: 0.00, 0.02, 0.00

Tasks:   5 total,   1 running,   4 sleeping,   0 stopped,   0 zombie

%Cpu(s):  0.0 us,  0.0 sy,  0.0 ni,100.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st

MiB Mem :    964.8 total,    176.3 free,    175.3 used,    613.2 buff/cache

MiB Swap:      0.0 total,      0.0 free,      0.0 used.    641.9 avail Mem


  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND

    1 root      20   0    2888    956    864 S   0.0   0.1   0:00.19 sh

    7 root      20   0   55200  12004  10364 S   0.0   1.2   0:00.00 nginx

    8 www-data  20   0   55524   3376   1440 S   0.0   0.3   0:00.00 nginx

    9 root      20   0    2888    988    896 S   0.0   0.1   0:00.19 sh

   14 root      20   0    7304   3304   2760 R   0.0   0.3   0:00.00 top

   

   

[root@ip-172-31-34-29 DockerDemo]# docker ps

CONTAINER ID   IMAGE         COMMAND                  CREATED              STATUS          PORTS     NAMES

13af7484a21d   shellform     "/bin/sh -c 'nginx -…"   About a minute ago   Up 59 seconds   80/tcp    relaxed_borg

6633147f038d   executeform   "nginx -g 'daemon of…"   3 minutes ago        Up 3 minutes    80/tcp    gallant_neumann


Exec form

This is the preferred form for CMD and ENTRYPOINT instructions.

The shell form runs the command through a shell (/bin/sh -c), so the shell becomes the main process and the command runs as its child. The exec form runs the executable directly as the main process (PID 1), as the top output above shows.

RUN V/S CMD V/S ENTRYPOINT

Docker images and layers

When Docker runs a container, it runs an image inside it. This image is usually built by executing Docker instructions, which add layers on top of an existing image or OS distribution.

OS distribution is the initial image and every added layer creates a new image.

The final Docker image resembles an onion, with an OS distribution inside and a number of layers on top of it.

For example, your image can be built by installing a number of deb packages and your application on top of Ubuntu 14.04 distribution.


RUN

The RUN instruction allows you to install your application and the packages required for it. It executes any commands on top of the current image and creates a new layer by committing the results. Often you will find multiple RUN instructions in a Dockerfile.

RUN has two forms:

RUN <command> (shell form)

RUN ["executable", "param1", "param2"] (exec form)



CMD

The CMD instruction allows you to set a default command, which will be executed only when you run the container without specifying a command. If the container runs with a command, the default command is ignored. If a Dockerfile has more than one CMD instruction, all but the last are ignored.


CMD has three forms:


CMD ["executable","param1","param2"] (exec form, preferred)

CMD ["param1","param2"] (sets additional default parameters for ENTRYPOINT in exec form)

CMD command param1 param2 (shell form)


Let’s have a look how CMD instruction works. The following snippet in Dockerfile

CMD echo "Hello world" 

when container runs as docker run -it <image> will produce output

Hello world

but when container runs with a command, e.g., docker run -it <image> /bin/bash, CMD is ignored and bash interpreter runs instead:

root@7de4bed89922:/#


--------------

ENTRYPOINT

The ENTRYPOINT instruction allows you to configure a container that will run as an executable. It looks similar to CMD, because it also allows you to specify a command with parameters. The difference is that the ENTRYPOINT command and parameters are not ignored when the Docker container runs with command line parameters. (There is a way to override ENTRYPOINT with the --entrypoint flag, but it is unlikely that you will need it.)

ENTRYPOINT has two forms:

ENTRYPOINT ["executable", "param1", "param2"] (exec form, preferred)

ENTRYPOINT command param1 param2 (shell form)

Be very careful when choosing ENTRYPOINT form, because forms behaviour differs significantly.

Exec form

Exec form of ENTRYPOINT allows you to set commands and parameters and then use either form of CMD to set additional parameters that are more likely to be changed. ENTRYPOINT arguments are always used, while CMD ones can be overwritten by command line arguments provided when Docker container runs. For example, the following snippet in Dockerfile

ENTRYPOINT ["/bin/echo", "Hello"]

CMD ["world"]

when container runs as docker run -it <image> will produce output

Hello world

but when container runs as docker run -it <image> John will result in

Hello John

----


Use RUN instructions to build your image by adding layers on top of initial image.

Prefer ENTRYPOINT to CMD when building executable Docker image and you need a command always to be executed. Additionally use CMD if you need to provide extra default arguments that could be overwritten from command line when docker container runs.

Choose CMD if you need to provide a default command and/or arguments that can be overwritten from the command line when the docker container runs.


Sample Game App Deployment on EKS cluster

 https://padmakshi.medium.com/setting-up-an-eks-cluster-and-deploying-a-game-application-a-step-by-step-guide-08790e0be117