A presentation at OVHcloud Spain Tech Lab in Madrid, Spain by Horacio Gonzalez
OVHcloud Kubernetes Initiation Tech Lab Horacio Gonzalez 2023-06-05 - Madrid
Who are we? Introducing myself and introducing OVHcloud
Horacio Gonzalez @LostInBrittany A Spaniard lost in Brittany
OVHcloud Web Cloud & Telecom · Private Cloud · Public Cloud · Storage · Network & Security
● 30 data centers in 12 locations
● 34 points of presence on a 20 Tbps bandwidth network
● 1 million+ servers produced since 1999
● 1.5 million customers across 132 countries
● 2,200 employees worldwide
● 3.8 million websites hosted
● 115K Private Cloud VMs running
● 300K Public Cloud instances running
● 380K physical servers running in our data centers
● 1.5 billion euros invested since 2016
● P.U.E. 1.09 energy efficiency indicator
● 20+ years in business, disrupting since 1999
Why do we need Kubernetes? Taming the complexity of operating containers
From bare metal to containers
Dockerfiles, images and containers
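To make the three notions concrete, a minimal sketch of the workflow (the Dockerfile content, image name, and ports are illustrative assumptions, not from the deck): the Dockerfile describes the image, the build produces it, and running it gives a container.

# A two-line Dockerfile describing the image
$ cat Dockerfile
FROM nginx:alpine
COPY index.html /usr/share/nginx/html/index.html

# Build the image from the Dockerfile, then run it as a container
$ docker build -t hello-web:1.0 .
$ docker run -d -p 8080:80 hello-web:1.0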
Containers are easy… For developers
Less simple if you must operate them Like in a production context
And what about microservices? Are you sure you want to operate them by hand?
Kubernetes: a full orchestrator
Not the only orchestrator But the most popular one…
Kubernetes cluster: masters and nodes
Kubernetes cluster: more details
Desired State Management Declarative infrastructure
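To make "declarative" concrete, a minimal sketch of a desired-state declaration (names and image are illustrative, loosely following the hello-world demo linked below): you declare three replicas, and Kubernetes continuously reconciles the cluster toward that state.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  replicas: 3                      # desired state: three identical pods
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: ovhplatform/hello   # illustrative image
        ports:
        - containerPort: 80

If a pod dies, the controller starts a replacement; if you change the replica count, Kubernetes converges to the new one.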
Let’s deploy an application
Demo: Hello Kubernetes World https://docs.ovh.com/gb/en/kubernetes/deploying-hello-world/
Needed tools: kubectl https://kubernetes.io/docs/tasks/tools/
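Once kubectl is installed, a quick sanity check against the cluster (a minimal sketch; the kubeconfig path is illustrative):

# Point kubectl at your cluster's kubeconfig file
$ export KUBECONFIG=./kubeconfig.yml

# Verify the connection and list the worker nodes
$ kubectl cluster-info
$ kubectl get nodes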
Putting Kubernetes in production A journey not for the faint of heart
Kubernetes can be wonderful For both developers and devops
The journey from dev to production
It’s a complex technology Lots of abstraction layers
Kubernetes networking is complex…
The storage dilemma
The ETCD vulnerability
Kubernetes is insecure by design* It’s a feature, not a bug. It’s up to the K8s admin to secure it according to their needs
Not everybody has the same security needs
Kubernetes allows you to enforce security practices as needed
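One example among many: RBAC lets you grant only the permissions each user or workload needs. A minimal sketch following the standard Kubernetes RBAC pattern (names are illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]   # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane        # illustrative user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io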
Always keep up to date Both Kubernetes and plugins
And remember, even the best can get hacked Remain attentive, don’t get too confident
A managed Kubernetes Because your company’s job is to use Kubernetes, not to operate it!
Kubernetes is powerful It can make Developers’ and DevOps’ lives easier
But there is a price: operating it Lots of things to think about
We have seen some of them
Different roles Each role asks for very different knowledge and skill sets
Operating a Kubernetes cluster is hard But we have good news…
Most companies don’t need to do it! Just as they don’t build and rack their own servers!
If you don’t need to build it, choose a certified managed solution You get the cluster, the operator gets the problems
Demo: A complete app - WordPress https://docs.ovh.com/gb/en/kubernetes/installing-wordpress/
Needed tools: helm https://helm.sh/
Helm: a package manager for K8s
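To give an idea of the workflow (a minimal sketch; the Bitnami repository and chart name are assumptions, see the WordPress guide linked above for the exact steps used in the demo):

# Add a chart repository and refresh its index
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update

# Install WordPress in one command (release name is illustrative)
$ helm install my-wordpress bitnami/wordpress

# List the releases deployed in the current namespace
$ helm list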
WordPress is easy… Two pods and a persistent volume
Yet it is a complete app Especially when deployed in a production context
Namespaces Logical isolation
Initial namespaces
Working with namespaces

$ kubectl create namespace my-namespace
namespace/my-namespace created

$ kubectl get namespaces
NAME              STATUS   AGE
default           Active   45d
kube-node-lease   Active   45d
kube-public       Active   45d
kube-system       Active   45d
my-namespace      Active   7s

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6b5885747b-m79ng   1/1     Running   0          6m58s
kube-system   canal-22dj9                                2/2     Running   0          7m
kube-system   canal-4l4mv                                2/2     Running   0          6m39s
kube-system   canal-6rdxv                                2/2     Running   0          7m19s
kube-system   coredns-9f744c589-64spf                    1/1     Running   0          42s
kube-system   coredns-9f744c589-tl26z                    1/1     Running   0          6m25s
[…]
Working with namespaces

$ kubectl apply -f hello.yml -n my-namespace
service/hello-world-service created
deployment.apps/hello-world-deployment created

$ kubectl get pods --all-namespaces
NAMESPACE      NAME                                       READY   STATUS    RESTARTS   AGE
kube-system    calico-kube-controllers-6b5885747b-m79ng   1/1     Running   0          6m58s
kube-system    canal-22dj9                                2/2     Running   0          7m
kube-system    canal-4l4mv                                2/2     Running   0          6m39s
kube-system    canal-6rdxv                                2/2     Running   0          7m19s
kube-system    coredns-9f744c589-64spf                    1/1     Running   0          42s
kube-system    coredns-9f744c589-tl26z                    1/1     Running   0          6m25s
[…]
kube-system    wormhole-vx6sn                             1/1     Running   0          9m53s
my-namespace   hello-world-deployment-bc4fd6b9-5mtk4      1/1     Running   0          37s

$ kubectl delete namespace my-namespace
namespace "my-namespace" deleted
Executing commands kubectl exec
Pods are black boxes How can we debug them?
Interactively execute commands

Execute commands in a container inside a pod:

$ kubectl exec hello-world-deployment-bc4fd6b9-5sgls -c hello-world -it -- sh
/ # ls
bin    dev    etc    home   lib    mnt    proc   root   run    sbin   srv    sys    tmp    usr    var
/ #
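Besides an interactive shell, two other kubectl commands do most of the day-to-day pod debugging (a minimal sketch; the pod name is the one from the example above):

# Print the logs of a container in the pod
$ kubectl logs hello-world-deployment-bc4fd6b9-5sgls -c hello-world

# Show the pod's full configuration and recent events (scheduling issues, crash loops…)
$ kubectl describe pod hello-world-deployment-bc4fd6b9-5sgls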
Persistent Volumes How to store persistent data in K8s
Local storage is a bad idea
Persistent Volumes
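In practice you rarely create PersistentVolumes by hand: a workload claims storage through a PersistentVolumeClaim and the cluster provisions a matching volume. A minimal sketch (the storage class name is an assumption; list the ones your cluster offers with kubectl get storageclass):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
  - ReadWriteOnce                           # mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi
  storageClassName: csi-cinder-high-speed   # assumed OVHcloud class name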
The storage dilemma
Resource management Request and limits
Resource management

apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: app
    image: images.my-company.example/app
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
What if a pod uses too many resources? CPU is compressible (excess usage is throttled), memory is incompressible (the container gets OOM-killed)
Resource quota

Limit the total sum of compute resources that can be requested in a given namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    requests.nvidia.com/gpu: 4
Limit range

Default, minimum and maximum resource usage per container in a namespace:

apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-resource-constraint
spec:
  limits:
  - default: # this section defines default limits
      cpu: 500m
    defaultRequest: # this section defines default requests
      cpu: 500m
    max: # max and min define the limit range
      cpu: "1"
    min:
      cpu: 100m
    type: Container
Verifying resource usage

% kubectl top pods
NAME                                    CPU(cores)   MEMORY(bytes)
hello-world-deployment-bc4fd6b9-dgspd   3m           2Mi
hello-world-deployment-bc4fd6b9-f85mf   3m           2Mi
hello-world-deployment-bc4fd6b9-hh7xs   4m           2Mi
hello-world-deployment-bc4fd6b9-lz494   5m           2Mi

% kubectl top pods --containers
POD                                     NAME          CPU(cores)   MEMORY(bytes)
hello-world-deployment-bc4fd6b9-dgspd   hello-world   0m           2Mi
hello-world-deployment-bc4fd6b9-f85mf   hello-world   1m           2Mi
hello-world-deployment-bc4fd6b9-hh7xs   hello-world   1m           2Mi
hello-world-deployment-bc4fd6b9-lz494   hello-world   0m           2Mi

% kubectl top nodes
NAME                                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
nodepool-ce18c6cd-1291-4a6e-83-node-5c283f   110m         5%     1214Mi          23%
nodepool-ce18c6cd-1291-4a6e-83-node-85b011   104m         5%     1576Mi          30%
nodepool-ce18c6cd-1291-4a6e-83-node-c3cfcf   121m         6%     1142Mi          22%
Health probes Telling Kubernetes that the pod is alive and healthy
Liveness probe

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Readiness probe

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
Startup probe

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.k8s.io/busybox
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
    startupProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      periodSeconds: 5
      failureThreshold: 24
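The three examples above use exec probes; probes can also query an HTTP endpoint or open a TCP socket. A minimal sketch of an HTTP liveness probe (path and port are illustrative), placed in the container spec exactly like the exec variants:

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3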
Defining configuration Config maps & secrets
Config files baked into images are a bad practice
Config maps Storing configuration for other objects to use
Creating a Config Map

# Create a new configmap named my-config-1 with keys for each file in folder bar
$ kubectl create configmap my-config-1 --from-file=./config/bar
configmap/my-config-1 created

# Create a new configmap named my-config-2 with specified keys instead of file names on disk
$ kubectl create configmap my-config-2 --from-file=ssh-privatekey=~/.ssh/id_rsa --from-file=ssh-publickey=~/.ssh/id_rsa.pub
configmap/my-config-2 created

# Create a new configmap named my-config-3 with key1=config1 and key2=config2
$ kubectl create configmap my-config-3 --from-literal=key1=config1 --from-literal=key2=config2
configmap/my-config-3 created
Describing a Config Map

apiVersion: v1
kind: ConfigMap
metadata:
  name: game-demo
data:
  # property-like keys; each key maps to a simple value
  player_initial_lives: "3"
  ui_properties_file_name: "user-interface.properties"
  # file-like keys
  game.properties: |
    enemy.types=aliens,monsters
    player.maximum-lives=5
  user-interface.properties: |
    color.good=purple
    color.bad=yellow
    allow.textmode=true
Using a Config Map in a Pod
Using a Config Map in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    env:
    # Define the environment variable
    - name: PLAYER_INITIAL_LIVES    # Notice that the case is different here
                                    # from the key name in the ConfigMap.
      valueFrom:
        configMapKeyRef:
          name: game-demo           # The ConfigMap this value comes from.
          key: player_initial_lives # The key to fetch.
    - name: UI_PROPERTIES_FILE_NAME
      valueFrom:
        configMapKeyRef:
          name: game-demo
          key: ui_properties_file_name
Using a Config Map in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    volumeMounts:
    - name: config
      mountPath: "/config"
      readOnly: true
  volumes:
  # You set volumes at the Pod level, then mount them into containers inside that Pod
  - name: config
    configMap:
      # Provide the name of the ConfigMap you want to mount.
      name: game-demo
      # An array of keys from the ConfigMap to create as files
      items:
      - key: "game.properties"
        path: "game.properties"
      - key: "user-interface.properties"
        path: "user-interface.properties"
Kubernetes secrets Storing sensitive information inside the cluster Encoded in Base64, decoded when attached to a pod
A warning on Kubernetes Secrets No full encryption by default: it’s all YAML and Base64
Creating a Secret

# Create a new Secret named db-user-pass with username=admin and password='S!B*d$zDsb='
$ kubectl create secret generic db-user-pass \
    --from-literal=username=admin \
    --from-literal=password='S!B*d$zDsb='

# Or store the credentials in files:
$ echo -n 'admin' > ./username.txt
$ echo -n 'S!B*d$zDsb=' > ./password.txt

# And pass the file paths in the kubectl command:
$ kubectl create secret generic db-user-pass \
    --from-file=username=./username.txt \
    --from-file=password=./password.txt
Verifying a Secret

# Verify the Secret
$ kubectl get secrets
NAME           TYPE     DATA   AGE
db-user-pass   Opaque   2      3m34s

$ kubectl describe secret db-user-pass
Name:         db-user-pass
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
password:  12 bytes
username:  5 bytes
Decoding a Secret

# View the contents of the Secret you created:
$ kubectl get secret db-user-pass -o jsonpath='{.data}'
{"password":"UyFCXCpkJHpEc2I9","username":"YWRtaW4="}

# Decode the password data:
$ echo 'UyFCXCpkJHpEc2I9' | base64 --decode
S!B*d$zDsb=

# In one step:
$ kubectl get secret db-user-pass -o jsonpath='{.data.password}' | base64 --decode
S!B*d$zDsb=
Using a Secret in a Pod
Using a Secret in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: redis
    volumeMounts:
    - name: foo
      mountPath: "/etc/foo"
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: mysecret
      optional: true
Using a Secret in a Pod

apiVersion: v1
kind: Pod
metadata:
  name: secret-demo-pod
spec:
  containers:
  - name: demo
    image: alpine
    command: ["sleep", "3600"]
    env:
    # Define the environment variable
    - name: PASSWORD
      valueFrom:
        secretKeyRef:
          name: game-secret  # The Secret this value comes from.
          key: game-password # The key to fetch.
Taints & Tolerations And Affinity & Anti-affinity
Taints & Tolerations Taint applied to a Kubernetes Node that signals the scheduler to avoid or not schedule certain Pods Toleration applied to a Pod definition and provides an exception to the taint
Using Taints & Tolerations

# No pod will be able to schedule onto node-5c283f unless it has a matching toleration.
$ kubectl taint nodes node-5c283f type=high-cpu:NoSchedule
node/node-5c283f tainted

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "type"            # matches the taint key applied above
    operator: "Equal"
    value: "high-cpu"
    effect: "NoSchedule"
Example use cases for Taints Dedicated nodes
Example use cases for Taints Nodes with Special Hardware
Affinity & Anti-affinity Node Affinity rules force pods to be scheduled, either exclusively or preferentially, on certain nodes Pod Affinity rules indicate that a group of pods should always be deployed together on the same node (because of network communication, shared storage, etc.)
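A minimal sketch of a node affinity rule (the node label key and value are hypothetical; on OVHcloud you can put labels on node pools, see the link below):

apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution: # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-type      # hypothetical node label
            operator: In
            values:
            - high-cpu
  containers:
  - name: app
    image: nginx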
Deploy applications to specific Nodes https://help.ovhcloud.com/csm/fr-public-cloud-kubernetes-label-nodeaffinity-node-pools
OVHcloud Managed Kubernetes Why would you choose ours?
Certified Kubernetes platform
OVHcloud Managed Private Registry
Node Pools Users can define node pools controlled from inside Kubernetes
Autoscaling Based on node pools New instances are spawned or released based on load
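On OVHcloud Managed Kubernetes, node pools are exposed inside the cluster as custom resources, so autoscaling bounds can be inspected and adjusted with kubectl. A hedged sketch (the resource name, pool name, and field names are assumptions; check the OVHcloud node pool documentation):

# List the cluster's node pools
$ kubectl get nodepools

# Hypothetical patch enabling autoscaling between 1 and 5 nodes on a pool
$ kubectl patch nodepool my-pool --type merge \
    -p '{"spec": {"autoscale": true, "minNodes": 1, "maxNodes": 5}}'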
Kubernetes in a private network
Other features
● Healthcare HDS conformity
● ISO 27001/27701/27017/27018 conformity
● Terraform provider
● Control plane audit logs
● API server IP restrictions
● …
https://github.com/ovh/public-cloud-roadmap/projects/1
https://discord.com/invite/ovhcloud
Demo: Working with OVHcloud API https://docs.ovh.com/gb/en/kubernetes/deploying-hello-world-ovh-api/
Infrastructure as Code The perfect companion to a cloud
Infrastructure as Code (IaC)
IaC tools
HashiCorp Terraform
Modular architecture: providers
Configuration packages: modules
Terraform registry
OVHcloud Terraform Provider https://registry.terraform.io/providers/ovh/ovh/latest/docs
OVHcloud Terraform Provider https://github.com/ovh/terraform-provider-ovh
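For a taste of what the provider looks like, a minimal sketch of creating a managed cluster (resource and attribute names follow the provider documentation linked above; the region, names, and variable are illustrative):

terraform {
  required_providers {
    ovh = {
      source = "ovh/ovh"
    }
  }
}

provider "ovh" {
  endpoint = "ovh-eu"
}

resource "ovh_cloud_project_kube" "my_cluster" {
  service_name = var.service_name # your Public Cloud project ID
  name         = "my-cluster"
  region       = "GRA7"
}

output "kubeconfig" {
  value     = ovh_cloud_project_kube.my_cluster.kubeconfig
  sensitive = true
}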
Demo: Using Terraform https://docs.ovh.com/gb/en/kubernetes/creating-a-cluster-through-terraform/
Needed tools: terraform https://www.terraform.io/
That’s all, folks! Thank you all!
How to get the most out of Kubernetes
1- Key concepts and advantages of Kubernetes
2- How to configure your first Kubernetes project from the OVHcloud Manager (creation of clusters, remote access with kubectl, deploying a first app, basic network services and persistent volumes)
3- Practical applications (use cases and advanced configurations: how to resize volumes, load balancer configuration, …)