Kubernetes


I. Introduction

1. Kubernetes Overview

- Kubernetes = popular container orchestrator
 * it lets you schedule containers on a cluster of machines
 * you can run multiple containers on one machine
 * you can run long running services (like web applications)
 * Kubernetes will manage the state of those containers:
   ** Can start the container on specific nodes
   ** Will restart a container when it gets killed
   ** Can move containers from one node to another node
- you can run Kubernetes anywhere:
  *on-premise (own datacenter)
  *public (Google cloud, AWS)
  *hybrid: public & private
-highly modular
-open source
-backed by Google
-Container Orchestration = Make many servers act like one
-Released by Google in 2015, maintained by large community
-Runs on top of Docker (usually) as a set of APIs in containers
-Provides an API/CLI to manage containers across servers
-many clouds provide it for you
-many vendors make a "distribution" of it
-Kubernetes abstracts away the hardware infrastructure and exposes your whole datacenter as a single enormous computational resource. It allows you to deploy and run your software components without having to know about the actual servers underneath. When deploying a multi-component application, Kubernetes deploys each component and enables it to easily find and communicate with all the other components of your application.


-rkt - an alternative to Docker that also works with Kubernetes


desired state - Kubernetes automatically reconciles the cluster back to the desired state

a) Why is Kubernetes needed, and which Docker issues does it solve?

-> use case: we have node1 and node2; on both of them there are Docker containers with the same IP addresses

- containers on node1 cannot communicate with containers on node2
-all the containers on a node share the host IP space and must coordinate which ports they use on that node
-If a container must be replaced, it will require a new IP address and any hard-coded IP addresses will break.

They are trying to get a collection of nodes to behave like a single node.
-How does the system maintain state?
-How does work get scheduled?

b) Kubernetes or Swarm 
-Kubernetes and Swarm are both container orchestrators
-Both are solid platforms with vendor backing
-Swarm: easier to deploy/manage
-Kubernetes: more features and flexibility

Advantages of Swarm
-Comes with Docker, single vendor container platform
-Easiest orchestrator to deploy/manage yourself
-Follows 80/20  rule, 20% of features for 80% of use cases
-Runs anywhere Docker does:
  * local, cloud, datacenter
  * ARM, Windows, 32-bit
-Secure by default
-Easier to troubleshoot

Advantages of Kubernetes
-Clouds will deploy/manage Kubernetes for you
-Infrastructure vendors are making their own distributions
-Widest adoption and community
-Flexible: Covers widest set of use cases
-"Kubernetes first" vendor support
-"No one ever got fired for buying IBM"
   * Picking a solution is not 100% rational
   * Trendy, will benefit your career
   * CIO/CTO Checkbox
- You can run Kubernetes anywhere:
  *on-premise (own datacenter)
  *public (Google Cloud, AWS)
  *hybrid: public & private
-highly modular
-open source
-backed by Google




2. How Kubernetes is built

Kubernetes - the whole orchestration system; "K8s" (pronounced "kates") or "Kube" for short

Kubectl - CLI to configure Kubernetes and manage apps; "cube control" is the official pronunciation

Node - single server in the Kubernetes cluster

a) master node
- can be one or more
-The Kubernetes master is responsible for maintaining the desired state for your cluster. When you interact with Kubernetes, such as by using the kubectl command-line interface, you are communicating with your cluster's Kubernetes master.
-The "master" refersto a collection of processes managing the cluster state. Typically, these processes are all run on a single node in the cluster, and this node is also referred to as the master. The master can also be replicated for high availability and redudancy.
-All master nodes run the Control Plane and is comprised of the following components:
  *kube-apiserver
  *kube-scheduler
  *kube-controller-manager
  * and possibly a cloud-controller-manager

b) worker node
-can be zero or more

Kubelet - Kubernetes agent running on nodes

ControlPlane - Set of containers that manage the cluster (masters)
   * Includes API server, scheduler, controller manager, etcd, and more
   * Sometimes called the "master"


Kubernetes cluster

a) master

-network - responsible for assigning IP addresses

-CoreDNS

-Kubelet - this is the kubernetes service

- kubectl 

- kubeadm



-API Server - The communication hub for all cluster components. It exposes the Kubernetes API. https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/ This is the front end for the Kubernetes control plane. All API calls are sent to this server, and the server sends commands to the other services.
  *1. Authenticate User; 2. Validate Request; 3. Retrieve data; 4. Update ETCD; 5. Scheduler; 6. Kubelet;

  cat /etc/systemd/system/kube-apiserver.service

   ps aux | grep kube-apiserver

   kubectl get pods -n kube-system

-Scheduler - Assigns your app to a worker node. Auto-detects which pod to assign to which node based on resource requirements, hardware constraints, etc. https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/
  * it only decides which pod goes where
     ** 1. Filter Nodes
     ** 2. Rank Nodes
  * Resource Requirements and Limits
  * Taints and Tolerations
  *  node Selectors/Affinity

   cat /etc/kubernetes/manifests/kube-scheduler.yaml

   ps aux | grep kube-scheduler


-Control Manager / Controller Manager - Maintains the cluster. Handles node failures, replacing components, maintaining the correct amount of pods, etc. https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
 *Operates the cluster controllers:
    ** Node controller - Responsible for noticing and responding when nodes go down.
       *** Node Monitor Period = 5s
       *** Node Monitor Grace Period = 40s #after that the node is marked unreachable
       *** Pod Eviction Timeout = 5m #time the node gets to come back before its pods are evicted and recreated elsewhere
    ** Replication controller - Responsible for maintaining the correct number of pods for every replication controller object in the system.
    ** Endpoints controller - Populates the Endpoints object (i.e. joins services and pods)
    ** Service Account & Token Controllers - Creates default accounts and API access tokens for new namespaces.
    ** Deployment-Controller
    ** Namespace-Controller
    ** CronJob
    ** Job-Controller
    ** PV-Protection-Controller
    ** Replicaset
    ** StatefulSet
    ** Service-Account-Controller
    ** PV-Binder-Controller
  * Functions:
     ** Watch status
     ** Remediate Situation

  cat /etc/kubernetes/manifests/kube-controller-manager.yaml

  cat /etc/systemd/system/kube-controller-manager.service

  ps aux | grep kube-controller-manager




-Etcd - key-value store for Kubernetes objects; it uses the Raft consensus algorithm, so run an odd number of masters. Data store that stores the cluster configuration. https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/. When an object is created, that object's state is stored here. Etcd acts as the reference for the cluster state. If the cluster differs from what is indicated here, the cluster is changed to match.
  * What is ETCD? - ETCD is a distributed reliable key-value store that is Simple, Secure & Fast
  * What is a Key-Value Store?
{
  "name": "John Doe",
  "age": 34,
  "location": "New York", 
  "salary": 300000
}
  * Install etcd
      1) Download Binaries
      2) Extract
      3) Run etcd Service
  * by default it runs on port 2379
  * etcd client :

      ./etcdctl set key1 value1 # creates an entry in the db (etcdctl v2 syntax; v3 uses 'put' - see the sketch at the end of this etcd section)

  * How to get started quickly?
  * How to operate ETCD?
  * What is a distributed system?
  * How ETCD Operates?
  * RAFT Protocol
  * Best practices on number of nodes
  *ETCD in Kubernetes
     **it stores information about:
       *** Nodes
       *** PODs
       *** Configs
       *** Secrets
       *** Accounts
       *** Roles
       *** Bindings
       *** Others
     ** set up manually or using kubeadm

     kubectl exec etcd-master -n kube-system -- etcdctl get / --prefix --keys-only


     ** Registry
       *** minions
       *** pods
       *** replicasets
       *** deployments
       *** roles
       *** secrets

      ** ETCD in an HA Environment
       *** set up one etcd member per master node - to make sure they know about each other, the following parameter has to be added in etcd.service

       --initial-cluster controller-0=https://${CONTROLLER0_IP}:2380,controller-1=https://${CONTROLLER1_IP}:2380

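      ** Newer etcdctl versions default to the v3 API; a hedged sketch of the same listing on a kubeadm cluster (the pod name etcd-master and the certificate paths are kubeadm defaults and may differ on your cluster; on etcd < 3.4 also prepend ETCDCTL_API=3):

      kubectl exec etcd-master -n kube-system -- etcdctl \
        --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/server.crt \
        --key=/etc/kubernetes/pki/etcd/server.key \
        get /registry --prefix --keys-only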




b) node1
- pods - are the smallest unit of Kubernetes and can be either a single container or a group of containers. All containers in a pod have the same IP address, and all pods in the cluster have unique IPs in the same IP space.

- kubectl 
  * https://learnk8s.io/blog/kubectl-productivity?utm_campaign=DevOps%20and%20Docker%20Updates&utm_medium=email&utm_source=Revue%20newsletter

- kubeadm


-kubelet - Runs and manages the containers on the node and talks to the API server. https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/ This is the primary node agent that runs on each node. It uses a PodSpec, a provided object that describes a pod, to monitor the pods on its node. The kubelet checks the state of its pods and ensures that they match the spec.
  * function:
    ** Register Node
    ** Create PODs
    ** Monitor Node & PODs
  * Kubeadm does not deploy the kubelet - it has to be installed on each node separately !!!
  * kubelet.service

    ps aux | grep kubelet


-kube-proxy - Load balances traffic between application components. https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/ This runs on the nodes and provides network connectivity for services on the nodes that connect to the pods. Services must be defined via the API in order to configure the proxy

- container runtime - The program that runs your containers (Docker, rkt, containerd). https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/ This is the container manager; it can be any container runtime that is compliant with the Open Container Initiative (such as Docker). When Kubernetes needs to instantiate a container inside a pod, it interfaces with the container runtime to build the correct type of container.



c) node 2

What communicates with what
API Server -> etcd
Scheduler -> API Server
Controller Manager -> API Server
kube-proxy -> API Server
kubelet -> container runtime
kubelet -> API Server


3. Kubernetes advantages

- all containers can communicate with all other containers without NAT
-all nodes can communicate with all containers (and vice versa) without NAT
-The IP that a container sees itself as is the IP that others see it as


4. Kubernetes Local installation

-Kubernetes is a series of containers, CLI's, and configurations
-many ways to install; let's focus on the easiest for learning - the choice can depend on:
  *RAM utilization
  *Easily shutdown
  *Easily reset?
  *Easily used on local system
-Docker Desktop: Enabled in settings; Sets up everything inside Docker's existing Linux VM
-Docker Toolbox on Windows: MiniKube; Uses VirtualBox to make Linux VM
-Your Own Linux Host or VM: MicroK8s; Installs Kubernetes right on the OS
- In the Browser: http://play-with-k8s.com (you can set up and play) & katacoda.com (already set up and ready to play with) -> doesn't keep the environment


Docker Desktop
-Runs/configures the Kubernetes master containers
-Manages kubectl install and certs
-Easily install, disable, and remove from Docker GUI


MiniKube
-is a tool that makes it easy to run Kubernetes locally
-Minikube runs a single-node Kubernetes cluster inside a Linux VM
-just for test and learning purposes
-it cannot spin up a production cluster; it's a one-node machine with no high availability
-It works on Windows, Linux, and MacOS
-you will need Virtualization Software installed to run minikube:
  *VirtualBox
-Download Windows Installer from Github https://github.com/kubernetes/minikube
-minikube-installer.exe
-minikube start # command to start a cluster
-Much like the docker-machine experience
-Creates a VirtualBox VM with Kubernetes master setup
-Doesn't install kubectl


MicroK8s
-Install Kubernetes (without Docker Engine) on localhost (Linux)
-Uses snap (rather than apt or yum) for install

5. Kubernetes Setup

-Kubernetes should be able to run anywhere
-there are more integrations for certain Cloud Providers, like AWS & GCE
  *things like Volumes and External Load Balancers work only with supported Cloud Providers
-there are multiple tools to install a kubernetes cluster
-for production cluster you need:
  *minikube and docker client are great for local setups but not for real clusters
  *Kops and kubeadm are tools to spin up production clusters
  * you do not need both, just one of them

-On AWS, the best tool is kops
  *At some point AWS EKS (hosted Kubernetes) will be available, at that point this will probably be the preferred option

-For other installs, or if you can not get kops to work, you can use kubeadm
  *Kubeadm is an alternative approach, kops is still recommended (on AWS) - you also have AWS integrations with kops automatically



6. Cloud/ On-premise setup


- there is also a legacy tool called kube-up.sh - this was a simple tool to bring up a cluster, but it is now deprecated; it does not create a production-ready environment


Kops
-To set up Kubernetes on AWS, you can use a tool called kops
  *kops stands for Kubernetes Operations
-the tool allows you to do production-grade Kubernetes installations, upgrades, and management
-Kops only works on Mac/Linux
- on windows it is necessary to boot a virtual machine (VirtualBox, Vagrant)

Kops installation

wget https://storage.googleapis.com/kubernetes-release/release/v1.4.3/bin/linux/amd64/kubectl

sudo mv kubectl /usr/local/bin/

sudo chmod +x /usr/local/bin/kubectl

kubectl

ssh-keygen -f .ssh/id_rsa

sudo mv /usr/local/bin/kops-linux-amd64 /usr/local/bin/kops

kops create cluster --name=kubernetes.demo.aka --state=s3://kopstate --zones=eu-north-1a --node-count=2 --node-size=t2.micro --dns-zone=kubernetes.demo.aka

kops edit cluster kubernetes.demo.aka --state=s3://kopstate

kops update cluster kubernetes.demo.aka --yes --state=s3://kopstate

cat .kube/config

kubectl get node

kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080


kubectl expose deployment hello-minikube --type=NodePort

kubectl get services

kops delete cluster kubernetes.demo.aka --state=s3://kopstate


7. Cluster Setup

a) Install on master for CentOS 7

1) The first thing that we are going to do is use SSH to log in to all machines. Once we have logged in, we need to elevate privileges using sudo.

sudo su

2) Disable SELinux.

setenforce 0

sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

3) Enable the br_netfilter module for cluster communication.

modprobe br_netfilter

echo '1'  > /proc/sys/net/bridge/bridge-nf-call-iptables

4) Disable swap to prevent memory allocation issues.

swapoff -a

vim /etc/fstab # comment out the swap line

5) Install the Docker prerequisites.

yum install -y yum-utils device-mapper-persistent-data lvm2

6) Add the Docker repo and install Docker.

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum install -y docker-ce

7) Configure the Docker Cgroup Driver to systemd, enable and start Docker.

sed -i '/^ExecStart/ s/$/ --exec-opt native.cgroupdriver=systemd/' /usr/lib/systemd/system/docker.service

systemctl daemon-reload

systemctl enable docker --now

systemctl status docker

docker info | grep -i cgroup

8)  Add the Kubernetes repo.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


9) Install Kubernetes.

yum install -y kubelet kubeadm kubectl

10) Enable Kubernetes. The kubelet service will not start until you run kubeadm init.

systemctl enable kubelet

11) Initialize the cluster using the IP range for Flannel.

kubeadm init --pod-network-cidr=10.244.0.0/16

12) Copy the kubeadm join command.

13) Exit sudo and run the following.

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

14) Deploy Flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

15) Check the cluster state.

kubectl get pods --all-namespaces

16) Run the join command that you copied earlier (this command needs to be run as sudo), then check your nodes from the master.

kubectl get nodes




8. Building Containers

9. Running your first app

- a file pod-demo.yml with the pod definition:

apiVersion: v1
kind: Pod
metadata:
  name: nodedemo.example.com
  labels:
    app: demo
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - containerPort: 3000

-use kubectl to create the pod on the Kubernetes cluster:

kubectl create -f pod-demo.yml

kubectl port-forward nodedemo.example.com 8081:3000

kubectl expose pod nodedemo.example.com --type=NodePort --name nodedemo-service

minikube service nodedemo-service --url # returns an IP address and port

-to be able to access the app from outside the cluster: on AWS you can easily add an external Load Balancer, which will route the traffic to the correct pod in Kubernetes; there are other solutions for cloud providers that don't have a Load Balancer:
  *your own haproxy/nginx load balancer in front of your cluster
  *expose ports directly




First app with Load Balancer 

pod: demo.yml

apiVersion: v1
kind: Pod
metadata:
  name: nodedemo.example.com
  labels:
    app: demo
spec:
  containers:
  - name: k8s-demo
    image: wardviaene/k8s-demo
    ports:
    - name: nodejs-port
      containerPort: 3000

service: demo-service.yml

a) Load Balancer

apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  ports:
  - port: 80
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: demo
  type: LoadBalancer

b) Node Port


apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: demo
  type: NodePort

-now you can point a hostname like www.example.com to the ELB to reach your pod from the internet



10. Building Container Images


II. Kubernetes Basics

1. Kubernetes Container Abstraction


a) namespace 

-filters/groups objects in the cluster
-enables organizing resources into non-overlapping groups (for example, per tenant) - see the example below
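
A minimal example (the namespace name "team-a" is made up):

kubectl create namespace team-a

kubectl get pods -n team-a           # list pods only in that namespace

kubectl get pods --all-namespaces    # list pods across all namespaces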


b) Selector
 - a technique to grab multiple pods based on a label that they all share. The nice thing here is that when you create a Deployment with kubectl, it automatically adds a label with the name of the Deployment to all of these pods - see the example below.
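
For example, assuming pods labeled app=demo as used elsewhere in these notes:

kubectl get pods -l app=demo        # select pods by label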

Deploying workloads

a) pod
-one or more containers running together on one Node
-Basic unit of deployment. Containers are always in pods
-are the smallest deployable unit of Kubernetes, containing one or more processes in co-located containers. All containers in a pod have the same IP address, and all pods in the cluster have unique IPs in the same IP space.


b)Controller
-for creating/updating pods and other objects
-many types of Controllers incl. Deployment, ReplicaSet, StatefulSet, DaemonSet, Job, CronJob, etc.
-ReplicaSets - keeps one or more pod replicas running
-ReplicationController - the older, less-powerful equivalent of a ReplicaSet
-Job - Runs pods that perform a completable task
-CronJob - Runs a job once or periodically on a schedule
-DaemonSet - Runs one pod replica per node (on all nodes or only on those matching a node selector)
-StatefulSet - Runs stateful pods with a stable identity
-Deployment - Declarative deployment and updates of pods





Services

a) Service
-network endpoint to connect to a pod
-Exposes one or more pods at a single and stable IP address and port pair

b) Endpoints
-Defines which pods (or other servers) are exposed through a service

c) Ingress
-Exposes one or more services to external clients through a single externally reachable IP address


Config

a) Secrets
-Like a ConfigMap, but for sensitive data

b) ConfigMaps
A key-value map for storing non-sensitive config options for apps and exposing them to those apps


Storage

a) PersistentVolume
-Points to persistent storage that can be mounted into a pod through a PersistentVolumeClaim

b) PersistentVolumeClaim
-A request for and claim to a PersistentVolume

c) StorageClass 
-Defines the type of dynamically-provisioned storage claimable in a PersistentVolumeClaim


2. Kubectl Run, Create, Apply

-We get three ways to create pods from the kubectl CLI
  * kubectl run (changing to be only for pod creation)
  * kubectl create (create some resources via CLI or YAML)
  * kubectl apply (create/update anything via YAML)
-For now we will just use the run or create CLI

-two ways to deploy pods (containers): via commands or via YAML - a quick sketch of the three forms follows below
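
A quick sketch of the three forms (the nginx image and the file name deployment.yml are just placeholders):

kubectl run nginx --image=nginx                    # create a single pod

kubectl create deployment nginx --image=nginx      # create a Deployment from the CLI

kubectl apply -f deployment.yml                    # create/update anything from YAML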

3. Node Architecture


-containers run inside pods on the nodes
- containers in the same pod can communicate with each other using just localhost and the port number.
-pods within a cluster can also communicate with each other, but that traffic needs to go over the network.
-the kubelet and kube-proxy are installed on every node
-the kubelet is responsible for launching the pods; it connects to the master node to get this information.
- the kube-proxy feeds information about which pods run on the node into "iptables".
- "iptables" is the firewall in Linux and it can also route traffic.
-Whenever a new pod is launched, kube-proxy changes the "iptables" rules to make sure that the pod is routable within the cluster - see the command below
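
To peek at these rules on a node (a hedged sketch; assumes kube-proxy runs in its default iptables mode and that you have root access on the node):

sudo iptables -t nat -L KUBE-SERVICES -n | head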

4. Cluster Architecture


5. Pods. Scaling Pods



Pods
- All containers can communicate with all other containers without NAT
- All nodes can communicate with all containers (and vice versa) without NAT
- The IP that a container sees itself as is the IP that others see it as

example:
Pod IP Address: 10.20.0.3
Container IP Address outside the pod: 10.20.0.3
Container IP inside the pod: 127.0.0.1
the program is exposed in the container on port 80
Applications in a pod have access to shared volumes


- Required fields in a pod-definition YAML:
   *apiVersion
   *kind
   *metadata
   *spec




KIND            VERSION
Pod             v1
Service         v1
ReplicaSet      apps/v1
Deployment      apps/v1


apiVersion: v1
kind: Pod
metadata:
  name: examplepod
  namespace: pod-example
spec:
  volumes:
  - name: html
    emptyDir: {}
  containers:
  - name: webcontainer
    image: nginx
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  - name: filecontainer
    image: debian
    volumeMounts:
    - name: html
      mountPath: /html
    command: ["/bin/sh", "-c"]
    args:
      - while true; do
         date >> /html/index.html;
         sleep 1;
        done

Scaling Pods
- If your application is stateless you can horizontally scale it
  * Stateless = your application doesn't have a state, it doesn't write any local files / keeps local sessions
  * All traditional databases (MySQL, Postgres) are stateful, they have database files that can't be split over multiple instances

-Most web applications can be made stateless:
  * Session managment needs to be done outside the container
  * Any files  that need  to be saved can't be saved locally on the container

a) Replication Controller

-Scaling in Kubernetes can be done using the Replication Controller
-The replication controller will ensure a specified number of pod replicas run at all times
-Pods created with the replication controller will automatically be replaced if they fail, get deleted, or are terminated
-Using the replication controller is also recommended if you just want to make sure 1 pod is always running, even after reboots
  *you can then run a replication controller with 1 replica
  *this makes sure that the pod is always running


apiVersion: v1
kind: ReplicationController
metadata: 
  name: demo-controller
spec:
  replicas: 2
  selector:
    app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: k8s-demo
        image: wardviaene/k8s-demo
        ports:
        - containerPort: 3000

https://github.com/wardviaene/kubernetes-course/tree/master/replication-controller


apiVersion: v1
kind: ReplicationController
metadata: 
  name: demo-controller
  labels:
    app: demo
    type: front-end
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: demo
        type: front-end
    spec:
      containers:
      - name: k8s-demo
        image: wardviaene/k8s-demo
        ports:
        - containerPort: 3000


kubectl create -f helloworld-repl-controller.yml

kubectl get rc

kubectl get pods

kubectl describe pod pod_name

kubectl delete pod pod_name

kubectl get pods

kubectl scale --replicas=4 -f helloworld-repl-controller.yml

kubectl scale --replicas=1 rc/helloworld-controller

kubectl get rc

This only works if the pods are stateless, and it only scales horizontally. If you have a stateful pod, you will not be able to scale it this way.

kubectl delete rc/helloworld-controller

b) ReplicaSet

-ReplicaSet is the next-generation Replication Controller
-It supports a new selector that can do selection based on filtering according to a set of values
  *e.g. "environment" either "dev" or "qa"
  *not only based on equality, like the Replication Controller
     ** e.g. "environment" == "dev"
-The ReplicaSet, rather than the Replication Controller, is used by the Deployment object
-A ReplicaSet can be added to already existing pods/deployments

apiVersion: apps/v1
kind: ReplicaSet
metadata: 
  name: demo-controller
  labels:
    app: demo
    type: front-end
spec:
  replicas: 3
  selector:
    matchLabels:
      type: front-end
  template:
    metadata:
      labels:
        app: demo
        type: front-end
    spec:
      containers:
      - name: k8s-demo
        image: wardviaene/k8s-demo
        ports:
        - containerPort: 3000



kubectl replace -f replicaset-definition.yml

kubectl scale --replicas=6 -f replicaset-definition.yml

kubectl scale --replicas=6 replicaset demo-controller

#DESCRIPTOR
apiVersion: apps/v1
kind: ReplicaSet
metadata: 
  name: frontend
  labels:
    app: guestbook
    tier: frontend
#REPLICASET DEFINITION
spec: 
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
    matchExpressions:
    - {key: tier, operator: In, values: [frontend]}
  template: 
    metadata:
      labels:
        app: guestbook
        tier: frontend
#CONTAINER DEFINITION
    spec:
      containers:
      - name: php-redis
        image: php-redis
#RESOURCE LIMITS & ENVIRONMENT DEFINITION
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
        ports:
        - containerPort: 80


6. Deployments

- A deployment declaration in Kubernetes allows you to do app deployments and updates
- When using the deployment object, you define the state of your application
  * Kubernetes will then make sure the cluster matches your desired state
- Just using the replication controller or ReplicaSet might be cumbersome for deploying apps
  * the Deployment object is easier to use and gives you more possibilities
-With a deployment object you can:
  * Create a deployment (e.g. deploying an app)
  * Update a deployment (e.g. deploying a new version)
  * Do rolling updates (zero downtime deployments)
  * Roll back to a previous version
  * Pause / Resume a deployment (e.g. to roll-out to only a certain percentage)

//// deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.3
        ports:
        - containerPort: 80





apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:    
      app: nginx
      type: front-end
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        type: front-end
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.3


kubectl create -f deployment/helloworld.yml --record

kubectl get deployments

kubectl get rs

kubectl get pods

kubectl get pods --show-labels

kubectl rollout status deployment/helloworld-deployment

kubectl expose deployment helloworld-deployment --type=NodePort

kubectl get service

kubectl describe service helloworld-deployment

minikube service helloworld-deployment --url

kubectl set image deployment/helloworld-deployment k8s-demo=wardviaene/k8s-demo:2 # change the image of a deployment

Useful commands

kubectl get deployments # get information on current deployments

kubectl get rs # get information about the replica sets

kubectl get pods --show-labels # get pods, and also show labels attached to those pods

kubectl rollout status deployment/helloworld-deployment # get deployment status

kubectl set image deployment/helloworld-deployment k8s-demo=k8s-demo:2 # run k8s-demo with the image label version 2

kubectl edit deployment/helloworld-deployment # edit the deployment object

kubectl rollout status deployment/helloworld-deployment # get the status of the rollout

kubectl rollout history deployment/helloworld-deployment # get the rollout history

kubectl rollout undo deployment/helloworld-deployment # rollback to previous version

kubectl rollout undo deployment/helloworld-deployment --to-revision=n # roll back to a specific revision

kubectl edit deployment/helloworld-deployment # edit the deployment object

add line under replicas:
revisionHistoryLimit: 100


Name: nginx-deployment
Namespace: default
CreationTimestamp: Thu, 30 Nov 2017 10:56:25 +0000
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision=2
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
###
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
### Deployments define an update strategy and allow for rolling updates:
### In this case, the pods will be replaced in increments of 25% of the total number  of pods.
###
PodTemplate:
  Labels: app=nginx
  Containers:
    nginx:
      Image: nginx:1.9.1
      Port: 80/TCP
      Environment: <none>
      Mounts: <none>
    Volumes: <none>
### Deployments contain a PodSpec.
### The PodSpec can be updated to increment the container version or change the structure of the pods that are deployed.
### If the PodSpec is updated and differs from the current spec, this triggers the rollout process.






7. Services

-Pods are very dynamic, they come and go on the Kubernetes cluster
  * When using a Replication Controller, pods are terminated and created during scaling operations
  * When using a Deployment, pods are terminated and new pods take the place of older pods when the image version is updated
-That's why Pods should never be accessed directly, but always through a Service
-A service is the logical bridge between the "mortal" pods and other services or end-users
-When using "kubectl expose" you create a new service for your pod, so it can be accessed externally
-Creating a service will create an endpoint for your pod(s):
  * a ClusterIP: a virtual IP address only reachable from within the cluster (this is the default)
  * a NodePort: a port that is the same on each node that is also reachable externally
  * a LoadBalancer: a LoadBalancer created by the cloud provider that will route  external traffic to every node on the NodePort (ELB on AWS)
-The options shown only allow you to create Virtual IPs or ports
-there is also a possibility to use DNS names
  * ExternalName can provide a DNS name for the service
  * e.g. for service discovery using DNS
  * This only works when the DNS add-on is enabled
- this is an example of a Service definition (also created using kubectl expose)

apiVersion: v1
kind: Service
metadata: 
  name: demo-service
spec:
  ports:
  - port: 31001
    nodePort: 31001
    targetPort: nodejs-port
    protocol: TCP
  selector:
    app: demo
  type: NodePort

-Note: by default, NodePort services can only use ports in the range 30000-32767; you can change this behaviour by adding the --service-node-port-range= argument to the kube-apiserver (in the init script)


kubectl describe svc demo-service

kubectl get svc

kubectl delete svc demo-service

kubectl exec -it testbox /bin/bash

nslookup my-awesome-service

curl my-awesome-service:32768




Exposing Containers

kubectl expose creates a service for existing pods
-A service is a stable address for pod(s)
-If we want to connect to pod(s), we need a service
-CoreDNS allows us to resolve services by name
-There are different types of services
  *ClusterIP
  *NodePort
  *LoadBalancer
  *ExternalName

a) ClusterIP (default)

- Single, internal virtual IP allocated
- Only reachable from within cluster (nodes and pods)
- Pods can reach the service on the app's port number
- these services are always available in Kubernetes

Creating a ClusterIP Service

kubectl get pods -w

kubectl create deployment httpenv --image=bretfisher/httpenv # lets start a simple http server using sample code

kubectl scale deployment/httpenv --replicas=5

kubectl expose deployment/httpenv --port 8888

Inspecting ClusterIP Service

kubectl get services # look up what IP was allocated

-remember this IP is cluster-internal only; how do we curl it?

-if you are on Docker Desktop (the host OS is not the container OS)

kubectl run --generator=run-pod/v1 tmp-shell --rm -it --image bretfisher/netshoot -- bash

curl httpenv:8888

kubectl get endpoints httpenv

curl [ip of service]:8888


apiVersion: v1
kind: Service
metadata: 
  name: demo-service
spec:
  type: ClusterIP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: demo



b) NodePort

- a high port is allocated on each node
- the port is open on every node's IP
- anyone can connect (if they can reach the node)
-other pods need to be updated to use this port
- these services are always available in Kubernetes

Create NodePort Service

- Let's expose a NodePort so we can access it via the host IP (including localhost on Windows/Linux/macOS)

kubectl expose deployment/httpenv --port 8888 --name httpenv-np --type NodePort


kubectl get services
NAME         TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
httpenv-np   NodePort   10.96.118.173   <none>        8888:32334/TCP   14s


<port1>:<port2>
- the order is the opposite of what you would see in Docker and Swarm
- <port1> - the port inside the cluster, the one the container itself is listening on
- <port2> - the port exposed on your nodes to the outside world; the port range is 30000-32767

 - NodePort service also creates a ClusterIP

-these service types are additive, each one creates the ones above it:
   *ClusterIP
   *NodePort
   *LoadBalancer

apiVersion: v1
kind: Service
metadata: 
  name: demo-service
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 31001
    targetPort: 80
  selector:
    app: demo





c) LoadBalancer

- Controls a LB endpoint external to the cluster
- Only available when the infrastructure provider gives you a LB (AWS ELB, etc.)
- Creates NodePort+ClusterIP services and tells the LB to send traffic to the NodePort

Add a LoadBalancer Service

- If you are on Docker Desktop, it provides a built-in LoadBalancer that publishes the --port  on localhost

kubectl expose deployment/httpenv --port 8888 --name httpenv-lb --type LoadBalancer

curl localhost:8888

- if you are on kubeadm, minikube, or microk8s
  *no built-in LB
  * you can still run the command, it will just stay at "pending" (but its NodePort works)

Cleanup

kubectl delete service/httpenv service/httpenv-np
kubectl delete service/httpenv-lb deployment/httpenv




d) ExternalName

- Adds a CNAME DNS record to CoreDNS only
- Not used for Pods, but for giving pods a DNS name to use for something outside Kubernetes - see the sketch below
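
A minimal sketch of an ExternalName service (db.example.com is a made-up external hostname):

apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: db.example.com

Pods can then resolve "external-db" inside the cluster and get a CNAME pointing to db.example.com.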

e) Ingress

- http traffic


8. Labels

- Labels are key/value pairs that can be attached to objects
  * Labels are like tags in AWS or other cloud providers, used to tag resources
-You can label your objects, for instance your pod, following an organizational structure
  * Key: environment - Value: dev/staging/qa/prod
  * Key: department - Value: engineering/finance/marketing
-Labels are not unique and multiple labels can be added to one object, you can use filters to narrow down results
  *this is called Label Selectors
-Using Label Selectors, you can use matching expressions to match labels - see the examples below
  * For instance, a particular pod can only run on a node labeled with "environment" equals "development"
  * More complex matching: "environment" in "development" or "qa"
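
For example, on the command line (the label keys and values follow the examples above):

kubectl get pods -l environment=development                 # equality-based selector

kubectl get pods -l 'environment in (development,qa)'       # set-based selector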

Node Labels
-You can also use labels to tag nodes
- Once nodes are tagged, you can use label selectors to let pods only run on specific nodes
- There are 2 steps required to run a pod on a specific set of nodes:
  * First you tag the node
  * Then you add a nodeSelector to your pod configuration
- first, Add a label or multiple labels to your nodes:

kubectl label nodes node1 hardware=high-spec

kubectl label nodes node2 hardware=low-spec

-secondly, add a pod that uses those labels

apiVersion: v1
kind: Pod
metadata: 
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: k8s-demo
    image: nginx
    ports:
    - containerPort: 3000
  nodeSelector:
    hardware: high-spec

kubectl create -f pod.yaml

kubectl get nodes --show-labels

kubectl label nodes minikube hardware=high-spec

kubectl get nodes --show-labels


9. Healthchecks

- If your application malfunctions, the pod and container can still be running, but the application might not work anymore.
-To detect and resolve problems with your application, you can run health checks
-you can run 2 different types of health checks
  *running a command in the container periodically (an exec example is sketched after the HTTP example below)
  *periodic checks on a URL (HTTP)
-the typical production application behind a load balancer should always have health checks implemented in some way to ensure availability and resiliency of the app
-this is what a health check looks like on our example container:

apiVersion: v1
kind: Pod
metadata: 
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: k8s-demo
    image: nginx
    ports:
    - containerPort: 3000
    livenessProbe:
      httpGet:
        path: /
        port: 3000
      initialDelaySeconds: 15
      timeoutSeconds: 30
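
The command-based check mentioned above looks like this (a sketch only; the /tmp/healthy file is an assumption about what the app writes):

    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 10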

10. ReadinessProbe

-Besides livenessProbes, you can also use readinessProbes on a container within a Pod
-livenessProbes: indicate whether a container is running
  *if the check fails, the container will be restarted
-readinessProbes: indicate whether the container is ready to serve requests
  *if the check fails, the container will not be restarted, but the Pod's IP address will be removed from the Service, so it'll not serve any requests anymore
-The readiness test makes sure that at startup the pod only receives traffic once the test succeeds - traffic is not routed to the pod until this test is successful
-You can use these probes in conjunction, and you can configure different tests for them - a readinessProbe snippet is sketched below
-If your container always exits when something goes wrong, you don't need a livenessProbe
-In general, you configure both the livenessProbe and the readinessProbe
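
A sketch of a readinessProbe on the same example container (the path and port follow the liveness example above; the delay values are assumptions):

    readinessProbe:
      httpGet:
        path: /
        port: 3000
      initialDelaySeconds: 5
      periodSeconds: 10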

kubectl create -f demo-healthcheck.yml && watch -n1 kubectl get pods

11. Pod State & LifeCycle / Application Lifecycle Managment

Pod State
-different statuses and states of a Pod and Container:
  * Pod Status field: high level status
  * Pod Condition: the condition of the pod
  * Container State: state of the container(s) itself
-Pods have a status field, which you see when you do kubectl get pods
-In this scenario all pods are in the running status
  *This means that the pod has been bound to a node
  *all containers have been created
  *at least one container is still running, or is starting/restarting
-other valid statuses are:
  *Pending: Pod has been accepted but is not running
     **Happens when the container image is still downloading
     **If the pod cannot be scheduled because of resource constraints, it'll also be in this status
  *Succeeded: All containers within this pod have been terminated successfully and will not be restarted
  * Failed: All containers within this pod have been Terminated, and at least one container returned a failure code
     ** The failure code is the exit code of the process when a container terminates
  * Unknown: the state of the pod couldn't be determined
     ** A network error might have occurred (for example, the node where the pod is running is down)
-You can get the pod conditions using kubectl describe pod PODNAME

kubectl describe pod demo

- These are conditions which the pod has passed
  * In this example: Initialized, Ready, and PodScheduled
-There are 5 different PodConditions (example commands to query them follow the list):
  * PodScheduled: the pod has been scheduled to a node
  * Ready: Pod can serve requests and is going to be added to matching Services
  * Initialized: initialization containers have been started successfully
  * Unschedulable: the Pod can't be scheduled (for example due to resource constraints)
  * ContainersReady: all containers in the pod are ready
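
You can also pull these fields out directly (assuming the pod is called demo, as above):

kubectl get pod demo -o jsonpath='{.status.phase}'                 # high-level status

kubectl get pod demo -o jsonpath='{.status.conditions[*].type}'    # pod conditions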



init container
-
Type            Status
Initialized     False
Ready           False
PodScheduled    True

main container
-
Type            Status
Initialized     True
Ready           False
PodScheduled    True

initialDelaySeconds = the delay between the postStart hook / container start and the first readinessProbe (and livenessProbe) run

readiness probe
-
Type            Status
Initialized     True
Ready           True
PodScheduled    True


kind: Deployment
apiVersion: apps/v1
metadata:
  name: lifecycle
spec:
  replicas: 1
  selector:
    matchLabels:
      app: lifecycle
  template:
    metadata:
      labels:
        app: lifecycle
    spec:
      initContainers:
      - name: init
        image: busybox
        command: ['sh', '-c', 'sleep 10']
      containers:
      - name: lifecycle-container
        image: busybox
        command: ['sh', '-c', 'echo $(date +%s): Running >> /timing && echo "The app is running!" && /bin/sleep 120']
        readinessProbe:
          exec:
            command: ['sh', '-c', 'echo $(date +%s): readinessProbe >> /timing']
          initialDelaySeconds: 35
        livenessProbe:
          exec:
            command: ['sh', '-c', 'echo $(date +%s): livenessProbe >> /timing']
          initialDelaySeconds: 35
          timeoutSeconds: 30
        lifecycle:
          postStart:
            exec:
              command: ['sh', '-c', 'echo $(date +%s): postStart >> /timing && sleep 10 && echo $(date +%s): end postStart >> /timing']
          preStop:
            exec:
              command: ['sh', '-c', 'echo $(date +%s): preStop >> /timing && sleep 10']

Annotations 
 - While labels and selectors are used to group and select objects, annotations are used to record other details for informational purposes.
-For example: tool details like name, version, and build information, or contact details like phone numbers and email IDs, that may be used for some kind of integration.

apiVersion: apps/v1
kind: ReplicaSet
metadata: 
  name: demo-pod
  labels:
    app: demo
  annotations:
      buildVersion: "1.34"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: simple-webapp
        image: simple-webapp


a) Rolling Updates and Rollbacks in Deploy

b) Configure Applications

c) Scale Applications

d) Self-Healing Application

12. Secrets

-Secret provides a way in Kubernetes to distribute credentials, keys, passwords or "secret" data to the pods
-Kubernetes itself uses this Secrets mechanism to provide the credentials to access the internal API
-You can also use the same mechanism to provide secrets to your application
-Secrets is one way to provide secrets, native to Kubernetes
  *There are still other ways your container can get its secrets if you don't want to use Secrets (e.g. using an external vault services in your app)
-Secrets can be used in the following ways:
  *Use secrets as environment variables 
  *Use secrets as a file in a pod
     **This setup uses volumes to be mounted in a container
     **In this volume you have files
     **Can be used for instance for dotenv files or your app can just read this file
  *Use an external image to pull secrets (from a private image registry)
-To generate secrets using files:

echo -n 'root' > ./username.txt

echo -n 'password' > ./password.txt

kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt

-A secret can also be an SSH key or an SSL certificate

kubectl create secret generic ssl-certificate --from-file=ssh-privatekey=~/.ssh/id_rsa --from-file=ssl-cert=mysslcert.crt

- to generate secrets using yaml definitions:

apiVersion: v1
kind: Secret
metadata: 
  name: db-secret
type: Opaque
data:
  password: cmkjdsn=
  username: kjnfdkj==

echo -n "root" | based64

kubectl create -f secrets-db-secret.yml

-you can create a pod that exposes the secrets as environment variables

apiVersion: v1
kind: Pod
metadata: 
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: k8s-demo
    image: nginx
    ports:
    - containerPort: 3000
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: username
    - name: SECRET_PASSWORD
    [...]

With the file/volume approach shown below, the secrets will be stored in:
/etc/creds/db-secret/username
/etc/creds/db-secret/password

- Alternatively, you can provide the secrets in a file:

apiVersion: v1
kind: Pod
metadata: 
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: k8s-demo
    image: nginx
    ports:
    - containerPort: 3000
    volumeMounts:
    - name: credvolume
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: credvolume
    secret:
      secretName: db-secret

13. WebUI

-Kubernetes comes with a Web UI you can use instead of the kubectl commands
-You can use it to:
  *Get an overview of running applications on your cluster
  *Creating and modifying individual Kubernetes resources and workloads (like kubectl create and delete)
  *Retrieve information on the state of resources (like kubectl describe pod)
-In general, you can access the Kubernetes Web UI at https://<kubernetes-master>/ui
- If you cannot access it (for instance if it is not enabled on your deploy type), you can install it manually using:

kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yml


kubectl create -f sample-user.yaml


apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system


kubectl -n kube-system get secret | grep admin-user

kubectl -n kube-system describe secret admin-user-token-<id displayed by previous command>

Login: admin. Password: the password that is listed in ~/.kube/config (open the file in an editor and look for "password: ...")
Choose token login and enter the login token from the previous step

-If a password is asked for, you can retrieve it by entering:

kubectl config view

- If you are using minikube, you can use the following command to launch the dashboard:

minikube dashboard

-Or if you just want to know the url:

minikube dashboard --url


14. API Primitives

15. Services & Other Network Primitives




III. Advanced Topics

1. Service auto-discovery

2. ConfigMap

-A ConfigMap can also contain full configuration files
  *e.g. a webserver config file
-This file can then be mounted using volumes where the application expects its config file
-This way you can "inject" configuration settings into containers without changing the container itself
-To generate configmap using files:

cat <<EOF > app.properties
driver=jdbc
database=postgres
lookandfeel=1
otherparams=xyz
param.with.hierarchy=xyz
EOF

kubectl create configmap app-config --from-file=app.properties
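
You can also create a ConfigMap directly from literal key/value pairs and inspect the result (the name app-config-literal is made up):

kubectl create configmap app-config-literal --from-literal=driver=jdbc --from-literal=database=postgres

kubectl get configmap app-config-literal -o yaml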


-You can create a pod that exposes the ConfigMap using a volume

apiVersion: v1
kind: Pod
metadata: 
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: k8s-demo
    image: nginx
    ports:
    - containerPort: 3000
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
       name: app-config

The config values will be stored in files:
/etc/config/driver
/etc/config/param/with/hierarchy


-You can create a pod that exposes the ConfigMap as environment variables

apiVersion: v1
kind: Pod
metadata: 
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: k8s-demo
    image: nginx
    ports:
    - containerPort: 3000
    env:
    - name: DRIVER
      valueFrom:
        configMapKeyRef:
          name: app-config
          key: driver
    - name: DATABASE
    [...]



3. Ingress

-Ingress is a solution available since Kubernetes 1.1 that allows inbound connections to the cluster
-It's an alternative to the external LoadBalancer and nodePorts
  * Ingress allows you to easily expose services that need to be accessible from outside the cluster
-With ingress you can run your own ingress controller (basically a loadbalancer) within the Kubernetes cluster
-There are default ingress controllers available, or you can write your own ingress controller
-You can create ingress rules using the ingress object

apiVersion: extensions/v1beta1
kind: Ingress
metadata: 
  name: demo-rules
spec:
  rules:
  - host: demo-v1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-v1
          servicePort: 80
  - host: demo-v2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-v2
          servicePort: 80

4. External DNS

-On public cloud providers, you can use the ingress controller to reduce the cost of your Load Balancers
  *You can use 1 Load Balancer that captures all the external traffic and sends it to the ingress controller
  *The ingress controller can be configured to route the different traffic to all your apps based on HTTP rules (host and prefixes)
  * this only works for HTTP(s)-based applications
-One great tool to enable such an approach is External DNS
-This tool will automatically create the necessary DNS records in your external DNS server (like Route 53)
-For every hostname that you use in ingress, it'll create a new record to send traffic to your loadbalancer
-The major DNS providers are supported: Google CloudDNS, Route53, Azure DNS, CloudFlare, DigitalOcean, etc
-Other setups are also possible without ingress controllers (for example directly on hostPort - node Port is still WIP , but will be out soon)

5. Volumes

-Volumes in Kubernetes allow you to store data outside the container
-When a container stops, all data on the container itself is lost
  *That is why up until now I've been using stateless apps: apps that don't keep a local state, but store their state in an external service
     ** External services like a database or caching server (e.g. MySQL, AWS S3)
-Persistent Volumes in Kubernetes allow you to attach a volume to a container that will exist even when the container stops
-Volumes can be attached using different volume plugins:
  * on the node: Local Volume
  * on AWS Cloud : EBS Storage
  * on Google Cloud: Google Disk
  * on Network storage: NFS, Cephfs
  * on Microsoft Cloud: Azure Disk
- Using volumes, you could deploy applications with state on your cluster
  *those applications need to read/write to files on the local filesystem that need to be persistent in time
-You could run a MySQL database using persistent volumes
  *Although this might not be ready for production (yet)
  *Volumes are new since the June 2016 release in Kubernetes
-If your node stops working, the pod can be rescheduled on another node, and the volume can be attached to the new node
-To use volumes , you need to create the volume first, example for AWS:

aws ec2 create-volume --size 10 --region us-east-1 --availability-zone us-east-1a --volume-type gp2

- to use volumes, you need to create a pod with a volume definition

[...]
spec:
  containers:
  - name: k8s-demo
    image: nginx
    volumeMounts:
    - mountPath: /myvol
      name: myvolume
    ports:
    - containerPort: 3000
  volumes: 
  - name: myvolume
    awsElasticBlockStore:
      volumeID: ...
 
 

-Creating and connecting: 2 types
-Volumes
  *tied to lifecycle of a Pod
  *all containers in a single Pod can share them
-PersistentVolumes
  *Created at the cluster level, outlives a Pod
  *Separates storage config from the Pod using it
  *Multiple Pods can share them
-CSI plugins are the new way to connect to storage

-The kubernetes plugins have the capability to provision storage for you
-The AWS Plugin can for instance provision storage for you by creating the volumes in AWS before attaching them to a node
-This is done using the StorageClass object
   * https://kubernetes.io/docs/concepts/storage/persistent-volumes/

-To use auto provisioned volumes, you can create the following yaml file:

kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  zone: us-east-1

-This allows you to create volume claims using the aws-ebs provisioner
-Kubernetes will provision volumes of the type gp2 for you (General Purpose - SSD)
- Creation of a volume claim, specifying the size:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
  annotations:
    volume.beta.kubernetes.io/storage-class: "standard"
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi


- Finally, you can launch a pod using  a volume:


apiVersion: v1

kind: Pod
metadata: 
  name: demo-pod
spec:
  containers:
  - name: k8s-demo
    image: nginx
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim



- persistent volume claim for a postgres db

---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: postgres
spec:
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - image: "postgres:9.6.2"
          name: postgres
          env:
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: mealplan-config
                  key: postgres_user
            - name: POSTGRES_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: mealplan-config
                  key: postgres_password
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/db-data
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pv-claim
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pv-claim
spec:
  accessModes:
    - ReadWriteOnce # only one pod at a time can reach it
  resources:
    requests:
      storage: 5Gi


apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /srv/demo
    server: master.server

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - image: master.server:5000/fedora:24-demo
    name: fedora
    imagePullPolicy: Always
    volumeMounts:
    - mountPath: "/mnt"
      name: demo
  volumes:
    - name: demo
      persistentVolumeClaim:
        claimName: demo



6. Pod Presets

-Pod presets can inject information into pods at runtime
  *Pod Presets are used to inject Kubernetes resources like Secrets, ConfigMaps, Volumes and Environment variables
-Imagine you have 20 applications you want to deploy, and they all need to get a specific credential
  *You can edit the 20 specifications and add the credential, or
  *You can use presets to create 1 PodPreset object, which will inject an environment variable or config file into all matching pods
-When injecting Environment variables and VolumeMounts, the Pod Preset will apply the changes to all containers within the pod
-This is an example of a Pod Preset

apiVersion: settings.k8s.io/v1alpha1 # alpha API; it might change when it becomes stable
kind: PodPreset
metadata: 
  name: share-credential
spec:
  selector:
    matchLabels:
      app: myapp
  env:
    - name: MY_SECRET
      value: "123456"
  volumeMounts:
    - mountPath: /share
      name: share-volume
  volumes:
    - name: share-volume
      emptyDir: {}
  

- You can use more than one PodPreset, they'll all be applied to matching Pods
- If there's a conflict, the PodPreset will not be applied to the pod
- PodPreset can match zero or more Pods
  *It's possible that no pods are currently matching, but that matching pods will be launched at a later time
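
A pod only needs the matching label to receive the injected values; a minimal sketch of a pod matched by the preset above (assuming the PodPreset admission controller and the settings.k8s.io API are enabled in the cluster; name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp       # matches the preset's selector
spec:
  containers:
  - name: myapp
    image: nginx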


7. StatefulSets

-PetSets was a new feature starting from Kubernetes 1.3 and got renamed to StatefulSets, which is stable since Kubernetes 1.9
-It is introduced to be able to run stateful applications:
  *That need a stable pod hostname (instead of podname-randomstring)
    **Your podname will have a sticky identity, using an index, e.g. podname-0 podname-1 and podname-2 (and when a pod gets rescheduled, it'll keep that identity)
  * StatefulSets give stateful apps stable storage, with volumes based on their ordinal number (podname-x)
     ** Deleting and/or scaling a StatefulSet down will not delete the volumes associated with the StatefulSet (preserving data)
-A StatefulSet will allow your stateful app to use DNS to find other peers
  *Cassandra clusters, ElasticSearch clusters, use DNS to find other members of the cluster
    ** for example: cassandra-0.cassandra for all pods to reach the first node in the cassandra cluster
  *Using StatefulSet you can run for instance 3 cassandra nodes on Kubernetes named cassandra-0 until cassandra-2
  * If you wouldn't use StatefulSet, you would get a dynamic hostname, which you wouldn't be able to use in your configuration files, as the name can always change
-A StatefulSet will also allow your stateful app to order the startup and teardown:
  * Instead of randomly terminating one pod (one instance of your app), you'll know which one will go
     ** When scaling up it goes from 0 to n-1 (n = replication factor)
     ** When scaling down it starts with the highest number (n-1) and goes down to 0
  * This is useful if you first need to drain the data from a node before it can be shut down (see the sketch after this list)
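
As a minimal sketch (not from the course; the image, port, and sizes are illustrative), a StatefulSet is paired with a headless Service that provides the stable DNS names described above:

apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  clusterIP: None          # headless service: gives each pod a stable DNS name
  selector:
    app: cassandra
  ports:
  - port: 9042
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra   # must reference the headless service
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
      - name: cassandra
        image: cassandra:3.11
        ports:
        - containerPort: 9042
        volumeMounts:
        - name: data
          mountPath: /var/lib/cassandra
  volumeClaimTemplates:    # one PersistentVolumeClaim per pod, kept across rescheduling
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi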



8. Daemon Sets

-  ensures that one copy of the pod is always present on every node in the cluster
- use case: say you would like to deploy a monitoring agent or log collector on each of your nodes in the cluster so you can monitor your cluster better
-It used to work by setting the nodeName property in each pod's specification before it was created, so the pods automatically landed on the respective nodes; that was the mechanism until Kubernetes v1.12. From v1.12 onwards the DaemonSet uses the default scheduler and node affinity rules (see the Node Affinity section) to schedule pods on nodes.
-ensures that every single node in the Kubernetes cluster runs the same pod resource
  *This is useful if you want to ensure that a certain pod is running on every single kubernetes node
-When a node is added to the cluster, a new pod will be started on it automatically
-When a node is removed, its pod is removed as well and will not be rescheduled on another node
-Typical use case:
  *Logging aggregators
  *Monitoring
  *Load Balancers / Reverse Proxies / API Gateways
  *Running a daemon that only needs one instance per physical host


apiVersion: apps/v1
kind: ReplicaSet
metadata: 
  name: monitoring-daemon
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent



apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: monitoring-daemon
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent



apiVersion: apps/v1
kind: DaemonSet
metadata: 
  name: monitoring-daemon
  labels: 
    app: monitoring-agent
spec:
  selector:
    matchLabels:
      app: monitoring-agent
  template:
    metadata:
      labels:
        app: monitoring-agent
    spec:
      containers:
      - name: monitoring-agent
        image: monitoring-agent
        ports:
        - name: nodejs-port
          containerPort: 3000


kubectl get daemonsets

kubectl describe daemonsets monitoring-daemon




9. Monitoring and Logging



a) Monitor Cluster Components


-Kubernetes for now does not come with a full-featured built-in monitoring solution
-There are a number of open source solutions: Metrics Server, Prometheus, Elastic Stack, and proprietary solutions like Datadog and Dynatrace
-Heapster was one of the original projects that enabled monitoring and analysis for Kubernetes; however, Heapster is now deprecated and a slimmed-down version was formed, known as the Metrics Server


Metrics Server
-you can have one Metrics Server per Kubernetes cluster; the Metrics Server retrieves metrics from each of the Kubernetes nodes and pods, aggregates them, and stores them in memory
-Note that the Metrics Server is only an in-memory monitoring solution and does not store the metrics on disk, so you cannot see historical performance data

So how are the metrics generated for the pods on these nodes?
Kubernetes runs an agent on each node known as the kubelet, which is responsible for receiving instructions from the Kubernetes API server and running pods on the node.
The kubelet also contains a subcomponent known as cAdvisor (Container Advisor). cAdvisor is responsible for retrieving performance metrics from pods and exposing them through the kubelet API to make the metrics available for the Metrics Server.

-for minikube

minikube addons enable metrics-server

-for others

git clone https://github.com/kubernetes-incubator/metrics-server.git

kubectl create -f deploy/1.8+/

-https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/

b) Monitor Cluster Component Logs


docker run kodekloud/event-simulator

docker run -d kodekloud/event-simulator

event-simulator.yaml
apiVersion: v1
kind: Pod
metadata:
  name: event-simulator-pod
spec:
  containers:
  - name: event-simulator
    image: kodekloud/event-simulator

docker logs -f ecf # using the container id

kubectl logs -f name_of_the_pod

event-simulator.yaml
apiVersion: v1
kind: Pod
metadata:
  name: event-simulator-pod
spec:
  containers:
  - name: event-simulator
    image: kodekloud/event-simulator
  - name: image-processor
    image: some-image-procesor

kubectl logs -f event-simulator-pod event-simulator

c) Monitor Applications

d) Application Logs

e) Resource Usage Monitoring

-Heapster enables Container Cluster Monitoring and Performance Analysis (https://github.com/kubernetes-retired/heapster)
-It is providing a monitoring platform for Kubernetes
-It is a prerequisite if you want to do pod auto-scaling in Kubernetes
-Heapster exports cluster metrics via REST endpoints
-You can use different backends with Heapster
  *You can use InfluxDB, but others like Google Cloud Monitoring/Logging and Kafka are also possible
-Visualizations (graphs) can be shown using Grafana
  * The Kubernetes dashboard will also show graphs once monitoring is enabled
-All these technologies (Heapster, InfluxDB, and Grafana) can be started in pods
- The yaml files can be found on the GitHub repository of Heapster
  * https://github.com/kubernetes/heapster/tree/master/deploy/kube-config/influxdb
  * after downloading the repository, the whole platform can be deployed using the addon system or by using kubectl create -f directory-with-yaml-files/




-https://github.com/kubernetes-sigs/metrics-server

kubectl create -f .

kubectl top node

kubectl top pod

https://github.com/kubernetes-retired/heapster


10. Autoscaling

-Kubernetes has the possibility to automatically scale pods based on metrics
-Kubernetes can automatically scale a Deployment, Replication Controller or ReplicaSet
-In Kubernetes 1.3 scaling based on CPU usage is possible out of the box
  *With alpha support, application-based metrics are also available (like queries per second or average request latency)
  ** to enable this, the cluster has to be started with the env var ENABLE_CUSTOM_METRICS set to true
-Autoscaling will periodically query the utilization of the targeted pods
  * By default every 30 sec; this can be changed using "--horizontal-pod-autoscaler-sync-period" when launching the controller-manager
-Autoscaling will use heapster, the monitoring tool, to gather its metrics and make scaling decisions
  * Heapster must be installed and running before autoscaling will work
-An example 
  *you run a deployment with a pod with a CPU resource request of 200m
  * 200m = 200 millicpu (or also 200 millicores)
  * 200m = 0.2, which is 20% of a CPU core of the running node
    ** if the node has 2 cores, it's still 20% of a single core
  * You introduce auto-scaling at 50% of the CPU request (which is 100m)
  * Horizontal Pod Autoscaling will increase/decrease pods to maintain a target CPU utilization of 50% (or 100m, 10% of a core, within this pod)
-This is a deployment that you can use to test autoscaling:



apiVersion: extensions/v1beta1
kind: Deployment
metadata: 
  name: hpa-example
  labels: 
    app: hpa-example
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hpa-example
    spec:
      containers:
      - name: hpa-example
        image: hpa-example
        ports:
        - name: http-port
          containerPort: 80
        resources:
          requests:
            cpu: 200m



apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata: 
  name: hpa-example-autoscaler
  labels: 
    app: hpa-example
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: hpa-example
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
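
The same autoscaler can also be created imperatively; a sketch with kubectl (assuming the hpa-example deployment above exists):

kubectl autoscale deployment hpa-example --min=1 --max=10 --cpu-percent=50 # creates a HorizontalPodAutoscaler targeting 50% CPU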
  


11. Node Affinity

-The affinity/anti-affinity feature allows you to do more complex scheduling than the nodeSelector and also works on Pods
  *The language is more expressive
  * You can create rules that are not hard requirements, but rather a preferred rule, meaning that the scheduler will still be able to schedule your pod, even if the rules cannot be met
  *You can create rules that take other pod labels into account
    ** for example, a rule that makes sure 2 different pods will never be on the same node
-Kubernetes can do node affinity and pod affinity/anti-affinity
  * Node affinity is similar to the nodeSelector
  * Pod affinity/anti-affinity allows you to create rules that take the labels of other running pods into account
  * The affinity/anti-affinity mechanism is only relevant during scheduling; once a pod is running, it'll need to be recreated to apply the rules again
-I'll first cover node affinity and will then cover pod affinity/anti-affinity
-Node Affinity Types:
 * Available
    requiredDuringSchedulingIgnoredDuringExecution
    preferredDuringSchedulingIgnoredDuringExecution
 * Planned
    requiredDuringSchedulingRequiredDuringExecution


-There are two states in the lifecycle of a pod to consider for node affinity:

  *DuringScheduling - the state where a pod does not exist yet and is created for the first time. When a pod is first created, the affinity rules specified are considered to place the pod on the right node.
Now what if nodes with matching labels are not available, for example because we forgot to label the node as large? That is where the type of node affinity used comes into play.

    **required - If you select the required type, the scheduler will mandate that the pod be placed on a node with the given affinity rules; if it cannot find one, the pod will not be scheduled. This type is used in cases where the placement of the pod is crucial: if a matching node does not exist, the pod is not scheduled.

    **preferred - If the pod placement is less important than running the workload itself, you can set the type to preferred. In cases where a matching node is not found, the scheduler will simply ignore the node affinity rules and place the pod on any available node. This is a way of telling the scheduler: try your best to place the pod on a matching node, but if you really cannot find one, just place it anywhere.

  *DuringExecution - the state where a pod has been running and a change is made in the environment that affects node affinity, such as a change in the label of a node.
For example, say an administrator removed the label we set earlier, size=large, from the node.
     **ignored - pods will continue to run, and any changes in node affinity will not impact them once they are scheduled
     **required - (planned) will evict any pods that are running on nodes that do not meet the affinity rules; in the earlier example, a pod running on the large node will be evicted or terminated if the label large is removed from the node


apiVersion: v1
kind: Pod
metadata: 
  name: demo-pod
spec:
  containers:
  - name: data-processor
    image: data-processor
  nodeSelector:
    size: Large




apiVersion: v1
kind: Pod
metadata: 
  name: demo-pod
spec:
  containers:
  - name: data-processor
    image: data-processor
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: size
            operator: In
            values:
            - Large
            - Medium


https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

12. InterPod (Anti-)Affinity
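
A minimal sketch of pod anti-affinity (labels and topology key are illustrative), implementing the earlier example of making sure two pods with the same label never end up on the same node:

apiVersion: v1
kind: Pod
metadata: 
  name: demo-pod-2
  labels:
    app: demo
spec:
  containers:
  - name: data-processor
    image: data-processor
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - demo
        topologyKey: kubernetes.io/hostname   # never co-locate two app=demo pods on one node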

13. Taints and Tolerations

Taints
-are set on nodes
-possible taint effects values: NoSchedule | PreferNoSchedule | NoExecute

kubectl taint nodes node-name key=value:taint-effect

kubectl taint nodes node1 app=blue:NoSchedule

- a taint is set on master nodes automatically that prevents any pods from being scheduled on them. This behavior can be modified, but a best practice is to not deploy application workloads on a master node. To see this taint, run:

kubectl describe node kubemaster | grep Taint
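
To remove a taint again, the same kubectl taint command is used with a trailing dash (a generic sketch, not tied to a specific cluster):

kubectl taint nodes node1 app=blue:NoSchedule- # the trailing dash removes the taint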



Tolerations
- are set on pods

apiVersion: v1
kind: Pod
metadata: 
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: nginx-container
    image: nginx
  tolerations:
  - key: "app"
    operator: "Equal"
    value: "blue"
    effect: "NoSchedule"

14. Operators

15. Scheduling

apiVersion: v1
kind: Pod
metadata: 
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080
  nodeName:

-Which pods does the scheduler consider? Pods that do not have the nodeName field set are candidates for scheduling
-Which node to schedule them on? That is the scheduler's decision; without a scheduler you have to set nodeName yourself




No Scheduler

apiVersion: v1
kind: Pod
metadata: 
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8080
  nodeName: node02


apiVersion: v1
kind: Binding
metadata: 
  name: nginx
target:
  apiVersion: v1
  kind: Node
  name: node02

curl --header "Content-Type:application/json" --request POST --data '{"apiVersion":"v1", "kind":"Binding", ...}' http://$SERVER/api/v1/namespaces/default/pods/$PODNAME/binding/ # send a POST request to the pod's binding API with the data set to the Binding object in JSON format
  


a) Labels & Selectors

kubectl get pods --selector app=App1

Use cases for Labels and Selectors:
- Kubernetes objects use labels and selectors internally to connect different objects together. For example, to create a ReplicaSet consisting of pods, we first label the pod definition and use a selector in the ReplicaSet to group the pods. In the ReplicaSet definition file you will see labels defined in two places. Note that this is an area where beginners tend to make a mistake: the labels defined under the template section are the labels configured on the pods, while the labels at the top are the labels of the ReplicaSet itself.

apiVersion: apps/v1
kind: ReplicaSet
metadata: 
  name: demo-controller
  labels:
    app: demo
    type: front-end
spec:
  replicas: 3
  selector:
    matchLabels:
      type: front-end
  template:
    metadata:
      labels:
        app: demo
        type: front-end
    spec:
      containers:
      - name: k8s-demo
        image: wardviaene/k8s-demo
        ports:
        - containerPort: 3000

b) Resource Limits


- resources which should be monitored: CPU, Memory, Disk
- default resource limits per container: 0.5 CPU, 256Mi memory
- CPU cannot be set lower than 1m of CPU
- in the case of CPU, Kubernetes throttles the CPU so that it does not go beyond its specified limit; a container cannot use more CPU resources than its limit
- A container can use more memory resources than its limit, but if a pod tries to consume more memory than its limit constantly, the pod will be terminated

Resource Request

apiVersion: v1
kind: Pod
metadata: 
  name: simple-webapp-color
  labels:
    app: simple-webapp-color
spec:
  containers:
  - name: simple-webapp-color
    image: simple-webapp-color
    ports:
    - containerPort: 8080
    resources:
      requests:
        memory: "1Gi"
        cpu: 1



c) Manual Scheduling

d) Daemon Sets

e) Multiple Schedulers

wget https://storage.googleapis.com/kubernetes-release/release/v1.12.0/bin/linux/amd64/kube-scheduler

-Deploy addtional scheduler


kube-scheduler.service
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --scheduler-name=default-scheduler

my-custom-scheduler.service
ExecStart=/usr/local/bin/kube-scheduler \\
  --config=/etc/kubernetes/config/kube-scheduler.yaml \\
  --scheduler-name=my-custom-scheduler


/etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: k8s.gcr.io/kube-scheduler-amd64:v1.11.3
    name: kube-scheduler

my-custom-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --scheduler-name=my-custom-scheduler
    - --lock-object-name=my-custom-scheduler
    image: k8s.gcr.io/kube-scheduler-amd64:v1.11.3
    name: kube-scheduler

-The leader-elect option is used when you have multiple copies of the scheduler running on different master nodes, in a High Availability setup where the kube-scheduler process runs on each of them. If multiple copies of the same scheduler are running on different nodes, only one can be active at a time; that's where the leader-elect option helps in choosing a leader who will lead scheduling activities. To get multiple schedulers working you must either set the leader-elect option to false, in case you don't have multiple masters, or, in case you do have multiple masters, pass in an additional parameter to set a lock object name. This is to differentiate the new custom scheduler from the default during the leader election process. Once done, create the pod using the kubectl create command. Run the get pods command in the kube-system namespace and look for the new custom scheduler; make sure it's in a Running state. The next step is to configure a new pod or a deployment to use the new scheduler: in the pod specification file, add a new field called schedulerName and specify the name of the new scheduler. This way, when the pod is created, the right scheduler picks it up to schedule it. Create the pod using the kubectl create command. If the scheduler was not configured correctly, the pod will remain in a Pending state; if everything is good, the pod will be in a Running state.
-So how do you know which scheduler picked it up? View the events using the kubectl get events command, which lists all the events in the current namespace, or view the logs of the scheduler pod.

pod-definition.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
  schedulerName: my-custom-scheduler

kubectl get events

kubectl logs my-custom-scheduler --namespace=kube-system



f) Scheduler Events

kubectl get events

g) Configure Kubernetes Scheduler

https://github.com/kubernetes/community/blob/master/contributors/devel/scheduler.md
https://kubernetes.io/blog/2017/03/advanced-scheduling-in-kubernetes/
https://jvns.ca/blog/2017/07/27/how-does-the-kubernetes-scheduler-work/
https://stackoverflow.com/questions/28857993/how-does-kubernetes-scheduler-work


16. Node Selectors

On the Pod

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec: 
  containers:
  - name: data-processor
    image: data-processor
  nodeSelector:
    size: Large

On the Node

kubectl label nodes <node-name> <label-key>=<label-value>

kubectl label nodes node-1 size=Large

IV. Administration

1. Master Services

2. Quotas and Limits

Resource Quota

apiVersion: v1
kind: ResourceQuota
metadata: 
  name: compute-quota
  namespace: dev

spec:
  hard:
    pods: "10"
    requests.cpu: "4"
    requests.memory: 5Gi
    limits.cpu: "10"
    limits.memory: 10Gi
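
A ResourceQuota caps a whole namespace; per-container defaults (like the 0.5 CPU / 256Mi mentioned under Resource Limits) come from a LimitRange object. A minimal sketch with illustrative values:

apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: dev
spec:
  limits:
  - type: Container
    default:            # default limits for containers that don't set their own
      cpu: 500m
      memory: 256Mi
    defaultRequest:     # default requests
      cpu: 250m
      memory: 128Mi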
    

3. Namespaces

a) Default namespaces (created with the Kubernetes cluster)

-kube-system
-default
-kube-public

b) Namespaces 

-isolation
-can be used to separate dev and production environments
  * namespace: dev
  * namespace: prod
- every namespace can have different policies
- every namespace can have separate resource limits
- resources within the same namespace can reach each other simply by their names, e.g. mysql.connect("db-service"); to connect to a service in another namespace, use the full DNS name, e.g. mysql.connect("db-service.dev.svc.cluster.local")

DNS

mysql.connect("db-service.dev.svc.cluster.local")

db-service - Service Name
dev - Namespace
svc - Service
cluster.local - Domain


apiVersion: v1

kind: Pod
metadata: 
  name: demo-pod
  namespace: dev
  labels:
    app: demo
spec:
  containers:
  - name: k8s-demo
    image: nginx



apiVersion: v1
kind: Namespace
metadata: 
  name: dev


kubectl create namespace newnamespace

kubectl get pods --namespace=kube-system

kubectl create -f pod-definition.yml --namespace=dev

kubectl config set-context $(kubectl config current-context) --namespace=dev # change default namespace

4. User managment

5. RBAC

6. Networking

a) Network Plugin

-Networking and IP address management is provided by a network plugin. This can be a CNI plugin or the kubenet plugin.

-Implementation varies according to the network plugin that is used. However, all Kubernetes networking must follow these three rules:
  * All containers can communicate with all other containers without NAT.
  * All nodes can communicate with all containers (and vice versa) without NAT.
  * The IP address that a container sees itself as is the same IP that others see it as.

b) DNS

-As of Kubernetes 1.3, DNS is a built-in service launched automatically using the addon manager
  *The addons are in the /etc/kubernetes/addons directory on master node
-The DNS service can be used within pods to find other services running on the same cluster
-Multiple containers within 1 pod don't need this service, as they can contact each other directly
  * A container in the same pod can connect to the port of the other container directly using localhost:port
-To make DNS work, a pod will need a Service definition


-All services that are defined in the cluster get a DNS record. This is true for the DNS service as well.
-Pods search DNS relative to their own namespaces
-The DNS server schedules a DNS pod on the cluster and configures the kubelets to set the containers to use the cluster's DNS service.

kubectl exec -it nginx_pod -- cat /etc/resolv.conf

-PodSpec DNS policies determine the way that a container uses DNS. Options include Default, ClusterFirst, or None.
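
A sketch of setting one of these policies explicitly in a PodSpec (pod name and image are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: dns-demo
spec:
  dnsPolicy: ClusterFirst   # resolve against the cluster DNS first, then fall back upstream
  containers:
  - name: app
    image: nginx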


Namespace: web_data
  *Database
  *Webserver

Namespace: web_log
  *Logprocessor


DNS
A Record: Database.web_data.svc.cluster.local
A Record: Webserver.web_data.svc.cluster.local
A Record: Logprocessor.web_log.svc.cluster.local

-Once there is a source for IP addresses in the cluster, DNS can start.

kubectl get pods --all-namespaces -o wide


c) Etcd

-Etcd is updated with the IP information.

Flannel agent - on nodes

d) IPTables - on nodes

-The network plugin configures IPTables on the  nodes to set up routing that allows communication between pods and nodes as well as with pods on other nodes within the cluster

e) Pre-Requisites - Network, Switching , Routing, Tools

f) Pre-Requisites - Network Namespaces

g) Pre-Requisites - DNS and CoreDNS

 

h) Pre-Requisites - Networking in Docker

i) Networking Configuration on Cluster Nodes

j) Service Networking

k) POD Networking Concepts

l) network LoadBalancer

m) Ingress

n) flannel

-network overlay
-network plugin
-there is not much post-configuration needed
-it runs an agent on each host
-uses a preconfigured address space
-uses an etcd directory to store the network configuration
-packets are forwarded between hosts over the overlay

ip route

ps -ax | grep [f]lannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml # installation of flannel

7. Node maintenance / Cluster Maintenance

a) Cluster Upgrade Process

b) Operating System Upgrades

c) Backup and Restore Methodologies



8. High Availability

9. TLS on ELB

10. Security

a) Authentication & Authorization

Roles

kubectl get roles --all-namespaces

Service account

kubectl get serviceaccount --all-namespaces

b) Kubernetes Security

Kubernetes Attack Surfaces

- in Master Node
  * Access to Nodes/Virtual Machines
- in etcd
  * Access to etcd API or keyvalue store
- in API Server
   *Access to Kube API Server Daemon
- in Kubernetes Control Plane
   * Intercept/modify/inject control plane traffic
- in Kubelet
  * Access via Kubelet API
- in Container
  *Compromise Container Runtime
  * Escape container to host through vulnerability or volume mount
- in Application
  *  Intercept/modify/inject application traffic

c) Network Policies

d) TLS Certificates for Cluster Component

e) Image Security

f) Security Context

g) Secure Persistent Key-Value Store

11. Storage

a) Persistent Volumes

b) Access Modes for Volumes

c) Persistent Volume Claims

d) Kubernetes Storage Object

e) Configure Applications with Persistent Storage



12. Installation, Configuration & Validation

a) Design a Kubernetes Cluster

b) Install Kubernetes Master and Nodes

c) Secure Cluster Communication

d) HA Kubernetes Cluster

e) Kub

f) Provision Infrastructure

g) Choose a Network Solution

h) Ku  fig

i) Run & Analyze end-to-end test

j) Node end-to-end test

13. Troubleshooting

a) Application Failure

b) Worker Node Failure

c) Control Plane Failure

d) Networking

14. Static Pods

-You need to get pod definition files to the kubelet without the kube-apiserver. How to do that? You can place pod definition files in a directory on the server that is designated to store information about pods. The kubelet periodically checks this directory, reads the files, and creates the pods on the host. Not only does it create a pod, it also ensures that the pod stays alive: if the application crashes, the kubelet attempts to restart it. If you make a change to any of the files within this directory, the kubelet recreates the pod for those changes to take effect. If you remove a file from this directory, the pod is deleted automatically. -> This mechanism can only be used to create pods

kubelet.service -> file

Exec=/usr/local/bin/kubelet \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --pod-manifest-path=/etc/kubernetes/manifests \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2

kubelet.service -> file

Exec=/usr/local/bin/kubelet \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --config=kubeconfig.yaml \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --network-plugin=cni \\
  --register-node=true \\
  --v=2

kubeconfig.yaml -> file

staticPodPath: /etc/kubernetes/manifest...
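
Any plain Pod manifest dropped into that directory becomes a static pod; a minimal sketch (the file name, pod name, and image are illustrative), e.g. /etc/kubernetes/manifests/static-nginx.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: static-nginx
spec:
  containers:
  - name: nginx
    image: nginx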


docker ps

kubectl get pods

- Can the kubelet create both kinds of pods at the same time? The way the kubelet works is that it can take in requests for creating pods from different inputs. The first is through the pod definition files from the static pods folder. The second is through an HTTP API endpoint, and that is how the kube-apiserver provides input to the kubelet. The kubelet can create both kinds of pods - the static pods and the ones from the API server - at the same time.

Is the API server aware of the static pods created by the kubelet? If you run the kubectl get pods command on the master node, the static pods will be listed like any other pod. When the kubelet creates a static pod, if it is part of a cluster, it also creates a mirror object in the kube-apiserver. What you see from the kube-apiserver is just a read-only mirror of the pod: you can view details about the pod, but you cannot edit or delete it like the usual pods. You can only delete static pods by modifying the files in the node's manifest folder. Note that the name of the pod is automatically appended with the node name.

-Why would you want to use static pods? Since static pods are not dependent on the Kubernetes control plane, you can use static pods to deploy the control plane components themselves as pods on a node. Start by installing the kubelet on all the master nodes. Then create pod definition files that use the Docker images of the various control plane components, such as the API server, controller manager, etcd, etc. Place the definition files in the designated manifests folder, and the kubelet takes care of deploying the control plane components as pods on the cluster. This way you don't have to download binaries, configure services, or worry about the services crashing: if any of these services were to crash, since it's a static pod it will automatically be restarted by the kubelet. That's how the kubeadm tool sets up a Kubernetes cluster, which is why, when you list the pods in the kube-system namespace of a cluster set up by kubeadm, you see the control plane components as pods.



Static Pods vs DaemonSets

- Static Pods
  * Created by the kubelet
  *Deploy Control Plane components as Static Pods
  * Ignored by the Kube-Scheduler
-DaemonSets
  *Created by Kube-API server (DaemonSet Controller)
  *Deploy Monitoring Agents, Logging Agents on nodes
  *Ignored by the Kube-Scheduler

V. Packaging

1. Introduction to Helm

2. Creating Helm Charts

3. Helm Repository

4. Building & Deploying

VI. Extras

1. kubeadm

2. TLS Certificates with cert-manager

8. Scaling ReplicaSets

-Start a new deployment with one replica/pod
kubectl run my-apache --image httpd

-let's scale it up with another pod
kubectl scale deploy/my-apache --replicas 2
kubectl scale deployment my-apache --replicas 2
  *those are the same command
  *deploy = deployment = deployments
  * what happens during the scale:
    **deployment updated to 2 replicas
    **ReplicaSet Controller sets pod count to 2
    **Control Plane assigns node to pod
    **Kubelet sees pod is needed, starts container



9. Inspecting Objects / Logging

kubectl logs deployment/my-apache # get logs from one pod of the deployment (kubectl picks a pod for you)

kubectl logs deployment/my-apache --follow --tail 1 # get container logs


kubectl logs -l run=my-apache # get logs from all pods matching the label run=my-apache

-Lookup the Stern tool for better log tailing - https://github.com/wercker/stern

kubectl describe pod/my-apache-xxxx-yyyy # get a bunch of details about an object, including events

kubectl get pods -w # watch a command (without needing watch)

kubectl delete pod/my-apache-xxxx-yyyy # watch the pod get re-created






10. Kubectl commands

kubectl version

kubectl run my-nginx --image nginx # What is created during this run:

Deployment -> ReplicaSet -> Pod (run creates a Deployment, which creates a ReplicaSet, which creates the Pod)

kubectl delete deployment my-nginx

kubectl get pods # list the pods, get information about all running pods

kubectl get pods --all-namespaces
 # list pods in all namespaces

kubectl get pods --all-namespaces -o wide
 # list pods in all namespaces in the cluster in detail

kubectl get namespaces # show all the namespace names

kubectl describe pod <pod> # describe one pod

kubectl expose pod <pod> --port=444 --name=frontend # expose the port of a pod (creates a new service)

kubectl port-forward <pod> 8080 # port forward the exposed pod port to your local machine

kubectl attach <podname> -i # attach to the pod

kubectl exec <pod> -- command # execute a command in the pod (changes made this way are not persistent)

kubectl label pods <pod> mylabel=awesome # add a new label to a pod

kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
 # Run a shell in a pod - very useful for debugging

kubectl get all # to see all created objects


As you might have seen already, it is a bit difficult to create and edit YAML files. Especially in the CLI. During the exam, you might find it difficult to copy and paste YAML files from browser to terminal. Using the kubectl run command can help in generating a YAML template. And sometimes, you can even get away with just the kubectl run command without having to create a YAML file at all. For example, if you were asked to create a pod or deployment with specific name and image you can simply run the kubectl run command.
Use the below set of commands and try the previous practice tests again, but this time try to use the below commands instead of YAML files. Try to use these as much as you can going forward in all exercises
Reference (Bookmark this page for exam. It will be very handy):


kubectl run --generator=run-pod/v1 nginx --image=nginx # Create an NGINX Pod 
kubectl run --generator=run-pod/v1 nginx --image=nginx --dry-run -o yaml # Generate POD Manifest YAML file (-o yaml). Don't create it(--dry-run)
kubectl run --generator=deployment/v1beta1 nginx --image=nginx # Create a deployment
kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run -o yaml # Generate Deployment YAML file (-o yaml). Don't create it (--dry-run)
kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run --replicas=4 -o yaml # Generate Deployment YAML file (-o yaml). Don't create it (--dry-run), with 4 Replicas (--replicas=4)
kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run --replicas=4 -o yaml > nginx.deployment.yaml # Save it to a file (if you need to modify or add some other details before actually creating it)
Before we begin, familiarize yourself with the two options that can come in handy while working with the below commands:
--dry-run : By default, as soon as the command is run, the resource will be created. If you simply want to test your command, use the --dry-run option. This will not create the resource; instead, it will tell you whether the resource can be created and if your command is right.
 -o yaml: This will output the resource definition in YAML format on screen.

Use the above two in combination to generate a resource definition file quickly, that you can then modify and create resources as required, instead of creating the files from scratch.

POD
kubectl run --generator=run-pod/v1 nginx --image=nginx # Create an NGINX Pod
kubectl run --generator=run-pod/v1 nginx --image=nginx --dry-run -o yaml #  Generate POD Manifest YAML file (-o yaml). Don't create it(--dry-run)
Deployment
kubectl create deployment nginx --image=nginx # Create a deployment
kubectl create deployment nginx --image=nginx --dry-run -o yaml # Generate Deployment YAML file (-o yaml). Don't create it(--dry-run)
kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run --replicas=4 -o yaml # Generate Deployment YAML file (-o yaml). Don't create it(--dry-run) with 4 Replicas (--replicas=4)
The usage --generator=deployment/v1beta1 is deprecated as of Kubernetes 1.16. The recommended way is to use the kubectl create option instead.
IMPORTANT:
kubectl create deployment does not have a --replicas option. You could first create it and then scale it using the kubectl scale command.

kubectl run --generator=deployment/v1beta1 nginx --image=nginx --dry-run --replicas=4 -o yaml > nginx-deployment.yaml # Save it to a file - (If you need to modify or add some other details)
OR
kubectl create deployment nginx --image=nginx --dry-run -o yaml > nginx-deployment.yaml 
You can then update the YAML file with the replicas or any other field before creating the deployment. 
Service
kubectl expose pod redis --port=6379 --name redis-service --dry-run -o yaml # Create a Service named redis-service of type ClusterIP to expose pod redis on port 6379
(This will automatically use the pod's labels as selectors)
Or
kubectl create service clusterip redis --tcp=6379:6379 --dry-run -o yaml # (This will not use the pod's labels as selectors; instead it will assume the selector is app=redis. You cannot pass in selectors as an option, so it does not work very well if your pod has a different label set. Generate the file and modify the selectors before creating the service)

kubectl expose pod nginx --port=80 --name nginx-service --dry-run -o yaml # Create a Service named nginx of type NodePort to expose pod nginx's port 80 on port 30080 on the nodes: (This will automatically use the pod's labels as selectors, but you cannot specify the node port. You have to generate a definition file and then add the node port in manually before creating the service with the pod.)
Or
kubectl create service nodeport nginx --tcp=80:80 --node-port=30080 --dry-run -o yaml
(This will not use the pods labels as selectors)
Both the above commands have their own challenges. While one of them cannot accept a selector, the other cannot accept a node port. I would recommend going with the `kubectl expose` command. If you need to specify a node port, generate a definition file using the same command and manually input the node port before creating the service.

Reference:


11. Services



12. Kubernetes Service DNS

- Starting with 1.11, internal DNS is provided by CoreDNS
- Like Swarm, this is DNS-Based Service Discovery
- So far we have been using hostnames to access services

curl <hostname>

-but that only works for services in the same namespace

kubectl get namespaces

-services also have a FQDN

curl <hostname>.<namespace>.svc.cluster.local





13. Kubernetes Management Techniques

a) Kubectl Generators


Run, Create, and Expose Generators

-These commands use helper templates called "generators"
-Every resource in Kubernetes has a specification or "spec"

kubectl create deployment sample --image nginx --dry-run -o yaml

kubectl create job test --image nginx --dry-run -o yaml

kubectl expose deployment/test --port 80 --dry-run -o yaml


-you can output those templates with --dry-run -o yaml
-you can use those YAML defaults as a starting point
-generators are "opinionated defaults"


b) The Future of Kubectl Run

- Right now (1.12-1.15) run is in a state of flux
-The goal is to reduce its features to only create Pods
  *Right now defaults to creating Deployments (with the warning)
  *It has lots of generators but they are all deprecated
  *the idea is to make it easy like docker run for one-off tasks
-It is not recommended for production
-use for simple dev/test or troubleshooting pods


Old Run Confusion

-The generators activate different Controllers based on options
-using dry-run we can see which generators are used

kubectl run test --image nginx --dry-run

kubectl run test --image nginx --port 80 --expose --dry-run

kubectl run test --image nginx --restart OnFailure --dry-run

kubectl run test --image nginx --restart Never --dry-run

kubectl run test --image nginx --schedule "*/1 * * * *" --dry-run



c) Imperative vs. Declarative

-Imperative:
  *Focus on how a program operates
  *example "I'd like a cup of coffee" - I boil water, scoop out 42 grams of medium-fine grounds, pour over 700 grams of water, etc.
-Declarative:
  *Focus on what a program should accomplish
  *example "I'd like a cup of coffee" - "Barista, I'd like a cup of coffee" (the barista is the engine that works through the steps, including retrying to make a cup, and is only finished when I have a cup)


Kubernetes Imperative

-Examples: kubectl run, kubectl create deployment, kubectl update
  *we start with a state we know (no deployment exists)
  *we ask kubectl run to create a deployment
-Different commands are required to change that deployment
-Different commands are required per object
-Imperative is easier when you know the state
-Imperative is easier for humans at the CLI
-Imperative is NOT easy to automate


Kubernetes Declarative

- Example: kubectl apply -f my-resources.yaml
  *We do not know the current state
  *We only know what we want the end results to be (yaml contents)
-Same command each time (tiny exception for delete)
-Resources can be all in a file, or many files (apply a whole dir)
-Requires understanding the YAML keys and values
-more work than kubectl run for just starting a pod
-the easiest way to automate
-the eventual path to GitOps happiness



d) Three Management Approaches

Imperative commands : run, expose, scale, edit, create deployment
-best for dev/learning/personal projects
-easy to learn, hardest to manage over time


Imperative objects: create -f file.yml, replace -f file.yml, delete -f file.yml
-good for prod in small environments, single file per command
-store your changes in git-based yaml files
-hard to automate



Declarative objects: apply -f file.yml or dir/, diff
-best for prod, easier to automate
-harder to understand and predict changes





-Do not mix the three approaches
-Learn the Imperative CLI for easy control of local and test setups
-Move to apply -f file.yml and apply -f directory/ for prod
-Store yaml in git, git commit each change before you apply
-This trains you for later doing GitOps (where git commits are automatically applied to clusters)

14. Declarative Kubernetes YAML

a) Kubectl Apply

kubectl apply -f filename.yml # create/update resources in a file

kubectl apply -f myyaml/ 
# create/update a whole directory of yaml files

kubectl apply -f https://repo/pod.yml 
# create/update from a URL
Be careful, let's look at it first (browser or curl)!
curl -L https://repo/pod.yml







b) K8s Configuration YAML

- Kubernetes configuration file (YAML or JSON)
- Each file contains one or more manifests
-Each manifest describes an API object (deployment, job, secret)
-Each manifest needs four parts (root key:values in the file)

apiVersion:
kind:
metadata:
spec:

////pod.yml

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.17.3
    ports:
    - containerPort: 80


//// deployment.yml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.3
        ports:
        - containerPort: 80


////app.yml

apiVersion: v1
kind: Service
metadata:
  name: app-nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: app-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app-nginx
  template:
    metadata:
      labels:
        app: app-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.3
        ports:
        - containerPort: 80



c) Building Your YAML Files

- kind: We can get a list of resources the cluster supports

kubectl api-resources

- notice some resources have multiple API's (old vs new)

- apiVersion: We can get the API versions the cluster supports

kubectl api-versions

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
spec:

->

apiVersion: apps/v1
kind: Deployment
metadata:
spec:


- kind: + apiVersion:  = resources

- metadata: only name is required

- spec: where all the action is at


d) Building Your YAML spec

-We can get all the keys each kind supports

kubectl explain services --recursive

kubectl explain services.spec

kubectl explain services.spec.type

-

kubectl explain deployment.spec.template.spec.volumes.nfs.server

apiVersion: apps/v1
kind: Deployment
metadata:
spec:
  template:
    spec:
       volumes:
         nfs:
            server:


kubernetes.io/docs/reference/#api-reference



e) Dry Runs and Diffs

-new stuff, not out of beta yet (1.15)
-dry-run a create (client side only)

kubectl apply -f app.yml --dry-run

-dry-run a create/update on server

kubectl apply -f app.yml --server-dry-run

-see a diff visually

kubectl diff -f app.yml


-

f) Labels and Annotations

-Labels go under metadata: in your YAML
-Simple list of key: value for identifying your resource later by selecting, grouping, or filtering for it
-common examples include tier: frontend, app: api, env: prod, customer: acme.co
-not meant to hold complex, large, or non-identifying info, which is what annotations are for
-filter a get command

kubectl get pods -l app=nginx

-apply only matching labels

kubectl apply -f myfile.yml -l app=nginx
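
Annotations (mentioned above for non-identifying info) sit next to labels under metadata:; a small illustrative fragment (the keys and values are made up):

metadata:
  name: app-nginx-deployment
  labels:
    app: app-nginx
  annotations:
    example.com/git-commit: "abc1234"   # free-form metadata, not used for selection
    example.com/owner: "platform-team"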

Label Selectors

-The "glue" telling Services and Deployments which pods are theirs
-Many resources use Label Selector to "link" resource dependencies
-You will see these match up in the Service and Deployment YAML
-Use Labels and Selectors to control which pods go to which nodes
-Taints and Tolerations also control node placement

....
kind: Service
...
spec:
  ...
  selector:
    app: app-nginx
....

---
kind: Deployment
...
spec:
  ...
  selector:
    matchLabels:
      app: app-nginx
  ...
  template:
    metadata:
      labels:
        app: app-nginx
...


15. Future of Kubernetes

a) Storage

-Storage and stateful workloads are harder in all systems
-Containers make it both harder and easier than before
-StatefulSets is a new resource type, making Pods more sticky
-avoid stateful workloads for first few deployments until you are good at the basics
  *use db-as-a-service whenever you can




b) Ingress

-None of our Service types work at OSI Layer 7 (HTTP)
-How do we route outside connections based on hostname or URL?
-Ingress Controllers (optional) do this with 3rd party proxies
-Nginx is popular, but Traefik, HAProxy, F5, Envoy, Istio, etc. also exist
-Note this is still beta (in 1.15) and becoming popular
-Implementation is specific to the Controller chosen



c) CRD's and The Operators Pattern

-You can add 3rd party Resources and Controllers
-This extends the Kubernetes API and CLI
-A pattern is starting to emerge of using these together
-Operator: automate deployment and management of complex apps
-e.g. Databases, monitoring tools, backups, and custom ingresses

d) Higher Deployment Abstractions 

- All our kubectl commands just talk to the Kubernetes API
-Kubernetes has limited built-in templating, versioning, tracking, and management of your apps
-There are now over 60 3rd party tools to do that, but many are defunct
-Helm is the most popular
-"Compose on Kubernetes" comes with Docker Desktop
-Remember these are optional, and your distro may have a preference
-Most distros support Helm

Helm
-helm charts

Templating YAML
-Many of the deployment tools have templating options
-You will need a solution as the number of environments/apps grows
-Helm was the first "winner" in this space, but can be complex
-Official Kustomize feature works out-of-the-box (as of 1.14)

docker app COMMAND

- docker app and compose-on-kubernetes are Docker's way

CNAB Standard

-


e) Kubernetes Dashboard

-Default GUI for "upstream" Kubernetes
  *github.com/kubernetes/dashboard
-Some distributions have their own GUI (Rancher, Docker Ent, OpenShift)
-Clouds don't have it by default
-Lets you view resources and upload YAML
-Safety first - a few companies were hacked via the GUI




f) Kubectl Namespaces and Context

- Namespaces limit scope, aka "virtual clusters"
- Not related to docker/Linux namespaces
- Won't need them in small clusters
- There are some built-in, to hide system stuff from kubectl "users"

kubectl get namespaces

- default kubectl namespaces:
   * default
   * kube-node-lease
   * kube-public
   * kube-system


kubectl get all --all-namespaces

-context changes the kubectl cluster and namespace
- context
   *Cluster
   *Authentication/User
   *Namespaces

-see ~/.kube/config file


kubectl config get-contexts

kubectl config set*




g) Future of K8s

-  More focus on stability and security
  * 1.14, 1.15, largely dull releases (a good thing)
  * recent security audit has created backlog
- Clearing away deprecated features like kubectl run generators
-improving features like server-side dry-run
-More and improved Operators
-Helm 3.0 (easier deployment, chart repos, libs)
-More declarative-style features
-Better Windows Server support
-more edge cases, kubeadm HA clusters

Related Projects

-kubernetes has become the "differencing and scheduling engine backbone" for so many new projects
-Knative - Serverless workloads on Kubernetes
-k3s - mini, simple Kubernetes
-k3OS - Minimal Linux OS for k3s
-Service Mesh - new layer in distributed app traffic for better control, security, and monitoring


16. Pods debugging / troubleshooting

kubectl attach nodedemo.example.com -i

kubectl exec -it nodedemo.example.com -- bash

kubectl logs nodedemo.example.com



17. AWS Load Balancer - how does the AWS LoadBalancer route traffic to the correct pod

-The LoadBalancer uses a NodePort that is exposed on all non-master nodes
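
A minimal Service of type LoadBalancer (a sketch; names and ports are illustrative). On AWS this provisions an ELB that forwards to the NodePort opened on the worker nodes:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer    # also allocates a NodePort on every non-master node
  selector:
    app: my-app
  ports:
  - port: 80            # load balancer listener port
    targetPort: 8080    # container port the traffic is forwarded to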

18.

19.

20.

21.

22.

23.

24.

25.

26.

27.

28.

29.

30.

31.

32.

33.

34.

35. shpod tool - alpine image + a few kubernetes tools (e.g. helm, kubectl)

https://github.com/BretFisher/shpod





100. Sources:
a) https://github.com/wardviaene/kubernetes-course
b) https://github.com/luksa/kubernetes-in-action
c) https://www.manning.com/books/kubernetes-in-action
d) https://forums.manning.com/forums/kubernetes-in-action
e) https://github.com/kubernetes/kubernetes/wiki/Special-Interest-Groups-(SIGs)
f) https://github.com/kubernetes/kubernetes
g) The Kubernetes website at https://kubernetes.io
h) The Kubernetes Blog, which regularly posts interesting info (https://blog.kubernetes.io)
i) The Kubernetes community's Slack channel at http://slack.k8s.io
j) The Kubernetes and Cloud Native Computing Foundation's YouTube channels:
https://www.youtube.com/channel/UCZ2bu0qutTOM0tHYa_jkIwg
https://www.youtube.com/channel/UCvqbFHwN-nwalWPjPUKpvTA



/////////////////////////////////////////////////////////////////


4. Kubernetes installation

a)


This lesson covers how to install Kubernetes on a CentOS 7 server in our Cloud Playground. Below, you will find a list of the commands used in this lesson.
*Note in this lesson we are using 3 unit servers as this meets the minimum requirements for the Kubernetes installation. Use of a smaller size server (less than 2 cpus) will result in errors during installation.
  1. The first thing that we are going to do is use SSH to log in to all machines. Once we have logged in, we need to elevate privileges using sudo.
    sudo su  
  2. Disable SELinux.
    setenforce 0
    sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
  3. Enable the br_netfilter module for cluster communication.
    modprobe br_netfilter
    echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
  4. Disable swap to prevent memory allocation issues.
     swapoff -a
     vim /etc/fstab  ->  Comment out the swap line
  5. Install the Docker prerequisites.
     yum install -y yum-utils device-mapper-persistent-data lvm2
  6. Add the Docker repo and install Docker.
     yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
     yum install -y docker-ce
  7. Configure the Docker Cgroup Driver to systemd, then enable and start Docker
     sed -i '/^ExecStart/ s/$/ --exec-opt native.cgroupdriver=systemd/' /usr/lib/systemd/system/docker.service 
     systemctl daemon-reload
     systemctl enable docker --now
  8. Add the Kubernetes repo.
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
    EOF
  9. Install Kubernetes.
     yum install -y kubelet kubeadm kubectl
  10. Enable Kubernetes. The kubelet service will not start until you run kubeadm init.
    systemctl enable kubelet
*Note: Complete the following section on the MASTER ONLY!
  1. Initialize the cluster using the IP range for Flannel.
    kubeadm init --pod-network-cidr=10.244.0.0/16
  2. Copy the kubeadm join command.
  3. Exit sudo and run the following:
    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  4. Deploy Flannel.
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  5. Check the cluster state.
    kubectl get pods --all-namespaces
*Note: Complete the following steps on the NODES ONLY!
  1. Run the join command that you copied earlier (this command needs to be run as sudo), then check your nodes from the master.
    kubectl get nodes


b)


Prerequisite:


-Ubuntu 18

