Kubernetes, dépassionné et pour les ultra débutants

A presentation at Riviera Dev in July 2023 in Sophia Antipolis, France by Horacio Gonzalez

Slide 1


Kubernetes, dépassionné et pour les ultra débutants Sébastien Blanc, Sun Tan & Horacio Gonzalez 2023-07-10

Slide 2


Who are we?
● Sébastien Blanc, DevRel at Aiven, @sebi2706
● Horacio Gonzalez, DevRel at OVHcloud, @LostInBrittany
● Sun Tan, Senior Software Engineer at Red Hat, @_sunix

Slide 3


Agenda
Introduction
● Why Kubernetes?
● Containers
● What is Kubernetes?
1 - Diving into K8s building blocks
● Playing with kubectl
● YAML
2 - Being a good cloud native citizen
● Requests and limits
● Health probes
● ConfigMaps and Secrets
3 - Advanced K8s
● Persistent Volumes
● Taints and tolerations
● Operators

Slide 4


Introduction Why Kubernetes? Containers What is Kubernetes?

Slide 5


Why K8s? A typical Java application, based on real-life experiences


Slide 9


Pain point #1 Manual deployments

Slide 10


Pain point #2 Scaling

Slide 11


Pain point #3 Developer Environment

Slide 12


Kubernetes To the rescue!

Slide 13


Kubernetes seems too difficult

Slide 14


Think big, start small, learn fast
"Think Big, Start Small, Scale Fast" (Jim Carroll)

Slide 15


Start small with containers: containers are used in Kubernetes, but containers can also be used without Kubernetes.

Slide 16


Introduction Why Kubernetes? Containers What is Kubernetes?

Slide 17


Container evolution

Slide 18


Container tools: Docker (the most popular) and Podman (daemonless, with pods/containers support)

Slide 19


Containers have been around for a while
1979: Unix V7 (chroot)
2000: FreeBSD Jails
2001: Linux-VServer
2004: Solaris Containers
2005: OpenVZ (Open Virtuozzo)
2006: Process Containers (cgroups)
2008: LXC
2011: Warden
2013: LMCTFY
2013: Docker

Slide 20


Container image
● Source code: basically a Dockerfile
● Build: using Docker or Podman
● Push/Pull: optionally to a container image registry like Docker Hub or quay.io
● Run anywhere: any Linux host that supports container technology should be able to run it
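As an illustration, a minimal Dockerfile for the kind of Tomcat application used in the demos might look like this (the base image tag and WAR path are assumptions, not taken from the talk):

```dockerfile
# Start from an official Tomcat base image (tag is an assumption)
FROM tomcat:9.0-jdk11

# Copy the application WAR into Tomcat's deployment directory
# (the path target/hello.war is hypothetical)
COPY target/hello.war /usr/local/tomcat/webapps/ROOT.war

# Tomcat listens on 8080 by default
EXPOSE 8080
```

Building and pushing would then follow the Source code → Build → Push/Pull flow above, e.g. `docker build` then `docker push` to a registry.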

Slide 21


vs a Java application
● Source code: basically Java files
● Build: using Maven or Gradle
● Push/Pull: optionally to a Maven repository like Nexus or Artifactory
● Run anywhere: any OS host that supports JVM technology should be able to run it

Slide 22


Containers are easy… For developers

Slide 23


Less simple if you must operate them, like in a production context

Slide 24


And what about microservices? Are you sure you want to operate them by hand?


Slide 26


Kubernetes: a full orchestrator

Slide 27


Not the only orchestrator, but the most popular one…

Slide 28


Introduction Why Kubernetes? Containers What is Kubernetes?

Slide 29


An open-source container orchestration system: a cluster of instances

Slide 30


Kubernetes cluster: more details

Slide 31


Desired State Management Declarative infrastructure

Slide 32


Desired State Management Let’s begin with 5 objects

Slide 33


Kubernetes Cluster - Nodes: worker Nodes each run a kubelet; the Primary (control plane) runs the API server, etcd, the scheduler, and the controllers.

Slide 34


Kubernetes Cluster - Declarative API: the user submits the desired state to the API server (image: repo/mytomcat:v1, replicas: 4).

Slide 35


Kubernetes Cluster - 4 Tomcat instances: the scheduler places the 4 replicas across the Nodes.

Slide 36


Kubernetes Cluster - Pod failure: one of the Pods dies on its Node.

Slide 37


Kubernetes Cluster - Recovery: the controllers recreate the failed Pod to match the desired state.

Slide 38


Kubernetes Cluster - Node failure: an entire Node goes down.

Slide 39


Kubernetes Cluster - Pods replaced: the lost Pods are rescheduled onto the remaining Nodes.

Slide 40


Kubernetes Cluster - New node: a replacement Node joins the cluster.

Slide 41


1 - Diving into K8s building blocks Playing with kubectl YAML

Slide 42


Kubectl > pronunciation fight Pronounce kubectl as you want 😅

Slide 43


Kubectl > the Kubernetes CLI tool

Slide 44


Demo: kubectl

Slide 45


1 - Diving into K8s building blocks Playing with kubectl YAML

Slide 46


Kubernetes is a distributed and structured YAML database
● CRUD on structured and typed objects: Resources
● Resources live in Namespaces
https://asciinema.org/a/lfxttSBoSoVH9hkS4lOxzuGdk

Slide 47


Create a Resource object

speaker.yaml:

apiVersion: "stable.world.com/v1"
kind: Speaker
metadata:
  name: horacio
spec:
  name: "Horacio"
  title: "DevRel at OVH Cloud"
  action: "speak"

Execute:

$ kubectl apply -f speaker.yaml
$ kubectl get Speaker

Slide 48


Kubernetes is a distributed and structured YAML database, with Controllers that do the job:
● Listening to Resource Create/Update/Delete events: the user requirements
● Performing actions to match the user requirements

Slide 49


Kubernetes controller https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/

Slide 50


Kubernetes is a distributed and structured YAML database
● By default, it ships with a set of Resources and Controllers to manage a cluster of machines

Slide 51


Pod
Container(s) sharing network addressing, volumes, etc.

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: hellotomcat-container
    image: quay.io/sunix/hello-tomcat
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 8080

https://asciinema.org/a/EeeNkoQ2eJ76Twx2S0sCybTzz

Slide 52


Deployment
Deploy and manage identical pods

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hellotomcat
  labels:
    app: hellotomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hellotomcat
  template:
    metadata:
      labels:
        app: hellotomcat
    spec:
      containers:
      - name: hellotomcat-container
        image: quay.io/sunix/hello-tomcat
        ports:
        - containerPort: 8080
        imagePullPolicy: IfNotPresent

https://asciinema.org/a/EsaRue6eDKWyvRCHmRKxIfydI

Slide 53


Service
Lets the pods communicate, within the cluster or outside

apiVersion: v1
kind: Service
metadata:
  name: hellotomcat-service
spec:
  type: NodePort
  selector:
    app: hellotomcat
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080

Slide 54


Ingress
Manages the paths and domain name redirections

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hellotomcat-ingress
  labels:
    app: hellotomcat
spec:
  rules:
  - host: 192.168.49.2.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hellotomcat-service
            port:
              number: 8080

https://asciinema.org/a/PpW6P3EftEUWb13UOvoBK6wOW

Slide 55


2 - Being a good cloud native citizen Requests and limits Health probes ConfigMap and Secrets

Slide 56


Resource management

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

Slide 57


What if a pod uses too many resources? CPU is compressible (excess usage gets throttled); memory is incompressible (the container gets OOM-killed).

Slide 58


Resource quota
Limits the total sum of compute resources that can be requested in a given namespace

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.nvidia.com/gpu: 4

Slide 59


Limit range
Default, minimum and maximum resource usage per container in a namespace

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-constraint
spec:
  limits:
  - default: # this section defines default limits
      cpu: 500m
    defaultRequest: # this section defines default requests
      cpu: 500m
    max: # max and min define the limit range
      cpu: "1"
    min:
      cpu: 100m
    type: Container

Slide 60


Verifying resource usage

% kubectl top pods
NAME                                    CPU(cores)   MEMORY(bytes)
hello-world-deployment-bc4fd6b9-dgspd   3m           2Mi
hello-world-deployment-bc4fd6b9-f85mf   3m           2Mi
hello-world-deployment-bc4fd6b9-hh7xs   4m           2Mi
hello-world-deployment-bc4fd6b9-lz494   5m           2Mi

% kubectl top pods --containers
POD                                     NAME          CPU(cores)   MEMORY(bytes)
hello-world-deployment-bc4fd6b9-dgspd   hello-world   0m           2Mi
hello-world-deployment-bc4fd6b9-f85mf   hello-world   1m           2Mi
hello-world-deployment-bc4fd6b9-hh7xs   hello-world   1m           2Mi
hello-world-deployment-bc4fd6b9-lz494   hello-world   0m           2Mi

% kubectl top nodes
NAME                                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
nodepool-ce18c6cd-1291-4a6e-83-node-5c283f   110m         5%     1214Mi          23%
nodepool-ce18c6cd-1291-4a6e-83-node-85b011   104m         5%     1576Mi          30%
nodepool-ce18c6cd-1291-4a6e-83-node-c3cfcf   121m         6%     1142Mi          22%

Slide 61


Demo: Requests & Limits

Slide 62


2 - Being a good cloud native citizen Requests and limits Health probes ConfigMap and Secrets

Slide 63


Readiness probe Tell people you’re ready

Slide 64


Readiness probe

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness
  name: readiness-exec
spec:
  containers:
  - name: readiness
    image: organisation/readiness
    readinessProbe:
      httpGet:
        path: /ready
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 3

Slide 65


Liveness probe Tell people you're alive

Slide 66


Liveness probe

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: organisation/liveness
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/alive
      initialDelaySeconds: 10
      periodSeconds: 5
      timeoutSeconds: 2

Slide 67


Startup probe Tell people you have started

Slide 68


Startup probe

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: starting
  name: starting-exec
spec:
  containers:
  - name: starting
    image: organisation/starting
    startupProbe:
      httpGet:
        path: /ready
        port: 80
      periodSeconds: 3
      failureThreshold: 24

Slide 69


Demo: Health probes

Slide 70


2 - Being a good cloud native citizen Requests and limits Health probes ConfigMap and Secrets

Slide 71


Hard-coded config files are a bad practice

Slide 72


Config maps
Storing configuration for other objects to use

Slide 73


Describing a ConfigMap

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
  # file-like keys
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true

Slide 74


Using a ConfigMap in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    env:
    # Define the environment variable
    - name: PLAYER_INITIAL_LIVES # Notice that the case is different here
                                 # from the key name in the ConfigMap.
      valueFrom:
        configMapKeyRef:
          name: game-demo # The ConfigMap this value comes from.
          key: player_initial_lives # The key to fetch.
    - name: UI_PROPERTIES_FILE_NAME
      valueFrom:
        configMapKeyRef:
          name: game-demo
          key: ui_properties_file_name

Slide 75


Using a ConfigMap in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  # You set volumes at the Pod level, then mount them into containers inside that Pod
  - name: config
    configMap:
      # Provide the name of the ConfigMap you want to mount.
      name: game-demo
      # An array of keys from the ConfigMap to create as files
      items:
      - key: "game.properties"
        path: "game.properties"
      - key: "user-interface.properties"
        path: "user-interface.properties"

Slide 76


Kubernetes secrets
Secret: a piece of information that is only known by one person or a few people and should not be told to others.

Slide 77


Kubernetes secrets Object that contains a small amount of sensitive data. Injected as volume or environment variable.

Slide 78


A warning on Kubernetes Secrets: no full encryption, it's all YAML and base64

Slide 79


Kubernetes secrets Encryption Configuration

Slide 80


Vaults provide full encryption

Slide 81


Creating a Secret

# Create a new Secret named db-user-pass with username=admin and password='S!B*d$zDsb='
$ kubectl create secret generic db-user-pass \
    --from-literal=username=admin \
    --from-literal=password='S!B*d$zDsb='

# Or store the credentials in files:
$ echo -n 'admin' > ./username.txt
$ echo -n 'S!B*d$zDsb=' > ./password.txt

# And pass the file paths in the kubectl command:
$ kubectl create secret generic db-user-pass \
    --from-file=username=./username.txt \
    --from-file=password=./password.txt

Slide 82


Verifying a Secret

# Verify the Secret
$ kubectl get secrets
NAME           TYPE     DATA   AGE
db-user-pass   Opaque   2      3m34s

$ kubectl describe secret db-user-pass
Name:         db-user-pass
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  12 bytes
username:  5 bytes

Slide 83


Decoding a Secret

# View the contents of the Secret you created:
$ kubectl get secret db-user-pass -o jsonpath='{.data}'
{"password":"UyFCXCpkJHpEc2I9","username":"YWRtaW4="}

# Decode the password data:
$ echo 'UyFCXCpkJHpEc2I9' | base64 --decode
S!B*d$zDsb=

# In one step:
$ kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode
S!B*d$zDsb=

Slide 84


Using a Secret in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      optional: true

Slide 85


Using a Secret in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: secret-demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    env:
    # Define the environment variable
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: game-secret # The Secret this value comes from.
          key: game-password # The key to fetch.

Slide 86


3 - Advanced K8s Persistent Volumes Taints and tolerations Operators

Slide 87


Local storage is a bad idea

Slide 88


Persistent Volumes

Slide 89


The storage dilemma
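To make the idea concrete, here is a minimal sketch of claiming persistent storage and mounting it in a Pod, in the spirit of the MySQL demo that follows (the claim name, size, Secret name, and mount details are assumptions, not taken from the talk):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pvc          # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce          # one node at a time, typical for a database
  resources:
    requests:
      storage: 1Gi         # assumed size
---
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:8
    env:
    - name: MYSQL_ROOT_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysql-secret   # hypothetical Secret holding the root password
          key: password
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql   # MySQL's data directory
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: mysql-pvc
```

With this, the data survives Pod restarts and rescheduling: the cluster binds the claim to a PersistentVolume and reattaches it wherever the Pod lands.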

Slide 90


Demo MySQL

Slide 91


3 - Advanced K8s Persistent Volumes Taints and tolerations Operators

Slide 92


Taints & Tolerations
● Taint: applied to a Kubernetes Node; signals the scheduler to avoid or not schedule certain Pods
● Toleration: applied to a Pod definition; provides an exception to the taint

Slide 93


Using Taints & Tolerations

# No pod will be able to schedule onto node-5c283f unless it has a matching toleration.
$ kubectl taint nodes node-5c283f type=high-cpu:NoSchedule
node/node-5c283f tainted

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "type" # matches the key of the taint above
    operator: "Equal"
    value: "high-cpu"
    effect: "NoSchedule"

Slide 94


Example use cases for Taints Dedicated nodes

Slide 95


Affinity & Anti-affinity
● Node Affinity: rules that force the pod to be deployed, either exclusively or in priority, on certain nodes
● Pod Affinity: indicates that a group of pods should always be deployed together on the same node (because of network communication, shared storage, etc.)
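A minimal sketch of a node affinity rule (the node label key and value are assumptions for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      # "required..." = exclusive: only schedule on matching nodes.
      # Use preferredDuringSchedulingIgnoredDuringExecution for "in priority".
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype      # hypothetical node label
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
```

Unlike a taint, which repels pods from a node, an affinity rule attracts a pod to nodes matching the label selector.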

Slide 96


3 - Advanced K8s Persistent Volumes Taints and tolerations Operators

Slide 97


Taming microservices with Kubernetes

Slide 98


What about complex deployments?

Slide 99


Especially at scale: lots of clusters with lots and lots of deployments

Slide 100


We need to tame the complexity Making it easier to operate

Slide 101


Taming the complexity

Slide 102


Helm Charts are configuration. Operating is more than installs & upgrades.

Slide 103


Kubernetes is about automation How about automating human operators?

Slide 104


Kubernetes Operators A Kubernetes version of the human operator

Slide 105


Kubernetes Controllers Keeping an eye on the resources

Slide 106


Building operators Basic K8s elements: Controllers and Custom Resources

Slide 107


Kubernetes Controllers: control loops They watch the state of the cluster, and make or request changes where needed

Slide 108


A reconcile loop Strives to reconcile current state and desired state

Slide 109


Custom Resource Definitions Extending Kubernetes API

Slide 110


Extending Kubernetes API By defining new types of resources
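As a sketch, a CustomResourceDefinition that would register the Speaker resource used earlier in the kubectl examples might look like this (the schema fields are assumptions based on that example):

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: speakers.stable.world.com
spec:
  group: stable.world.com
  scope: Namespaced
  names:
    plural: speakers
    singular: speaker
    kind: Speaker
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              name:
                type: string
              title:
                type: string
              action:
                type: string
```

Once applied, `kubectl get speakers` works like any built-in resource type; an operator's custom controller then watches these objects and reconciles them.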

Slide 111


Kubernetes Operator Automating operations

Slide 112


What’s a Kubernetes Operator?

Slide 113


Example: databases Things like adding an instance to a pool, doing a backup, sharding…

Slide 114


Knowledge encoded in CRDs and Controllers

Slide 115


Custom Controllers for Custom Resources Operators implement and manage Custom Resources using custom reconciliation logic

Slide 116


Operator Capability Model Gauging the operator maturity

Slide 117


Operator SDK

Slide 118


But I’m a Java developer! Can I code Kubernetes Operators in Java? Easily?

Slide 119


Operators in Java

Slide 120


Demo: Operator

Slide 121


Kahoot quiz. Aurélie's book.

Slide 122


That’s all, folks! Thank you all!