Question 1 | Namespaces
Solve this question on instance: ssh ckad5601

The DevOps team would like to get the list of all Namespaces in the cluster. Get the list and save it to /opt/course/1/namespaces on ckad5601.
Answer:
k get ns > /opt/course/1/namespaces
The content should then look like:
# /opt/course/1/namespaces
NAME              STATUS   AGE
default           Active   136m
earth             Active   105m
jupiter           Active   105m
kube-node-lease   Active   136m
kube-public       Active   136m
kube-system       Active   136m
mars              Active   105m
shell-intern      Active   105m
Question 2 | Pods
Solve this question on instance: ssh ckad5601
Create a single Pod of image httpd:2.4.41-alpine in Namespace default. The Pod should be named pod1 and the container should be named pod1-container.
Your manager would like to run a command manually on occasion to output the status of that exact Pod. Please write a command that does this into /opt/course/2/pod1-status-command.sh on ckad5601. The command should use kubectl.
Answer:
k run -h # help
k run pod1 --image=httpd:2.4.41-alpine --dry-run=client -oyaml > 2.yaml
vim 2.yaml
Change the container name in 2.yaml to pod1-container:
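A sketch of the changed 2.yaml; apart from the container name it matches the template generated by the dry-run above:

# 2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  containers:
  - image: httpd:2.4.41-alpine
    name: pod1-container # change
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Then create the Pod and write the requested command into the file:

k create -f 2.yaml
vim /opt/course/2/pod1-status-command.sh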
# /opt/course/2/pod1-status-command.sh
kubectl -n default get pod pod1 -o jsonpath="{.status.phase}"
To test the command:
➜ sh /opt/course/2/pod1-status-command.sh
Running
Question 3 | Job
Solve this question on instance: ssh ckad7326
Team Neptune needs a Job template located at /opt/course/3/job.yaml. This Job should run image busybox:1.31.0 and execute sleep 2 && echo done. It should be in namespace neptune, run a total of 3 times and should execute 2 runs in parallel.
Start the Job and check its history. Each pod created by the Job should have the label id: awesome-job. The job should be named neb-new-job and the container neb-new-job-container.
Answer:
k -n neptune create job -h
k -n neptune create job neb-new-job --image=busybox:1.31.0 --dry-run=client -oyaml > /opt/course/3/job.yaml -- sh -c "sleep 2 && echo done"
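Then we edit the generated file and add the completions, parallelism, Pod label and container name. A sketch of the final file; aside from the marked additions it follows the dry-run template:

vim /opt/course/3/job.yaml

# /opt/course/3/job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: null
  name: neb-new-job
  namespace: neptune              # add
spec:
  completions: 3                  # add, run a total of 3 times
  parallelism: 2                  # add, execute 2 runs in parallel
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: awesome-job           # add, label for each created pod
    spec:
      containers:
      - command:
        - sh
        - -c
        - sleep 2 && echo done
        image: busybox:1.31.0
        name: neb-new-job-container # change
        resources: {}
      restartPolicy: Never
status: {}

Create the Job with k -f /opt/course/3/job.yaml create and check its history: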
➜ k -n neptune describe job neb-new-job
...
Events:
  Type    Reason            Age    From            Message
  ----    ------            ----   ----            -------
  Normal  SuccessfulCreate  2m52s  job-controller  Created pod: neb-new-job-jhq2g
  Normal  SuccessfulCreate  2m52s  job-controller  Created pod: neb-new-job-vf6ts
  Normal  SuccessfulCreate  2m42s  job-controller  Created pod: neb-new-job-gm8sz
From the Age column we can see that two Pods ran in parallel and the third one started after them, just as the task required.
Question 4 | Helm Management
Solve this question on instance: ssh ckad7326
Team Mercury asked you to perform some operations using Helm, all in Namespace mercury:
Delete release internal-issue-report-apiv1
Upgrade release internal-issue-report-apiv2 to any newer version of chart bitnami/nginx available
Install a new release internal-issue-report-apache of chart bitnami/apache. The Deployment should have two replicas, set these via Helm-values during install
There seems to be a broken release, stuck in pending-install state. Find it and delete it
Answer:
Helm Chart: Kubernetes YAML template-files combined into a single package, Values allow customisation
Helm Release: Installed instance of a Chart
Helm Values: Allow to customise the YAML template-files in a Chart when creating a Release
Step 1
First we should delete the required release:
➜ helm -n mercury ls
NAME                         NAMESPACE ... STATUS    CHART           APP VERSION
internal-issue-report-apiv1  mercury   ... deployed  nginx-18.1.14   1.27.1
internal-issue-report-apiv2  mercury   ... deployed  nginx-18.1.14   1.27.1
internal-issue-report-app    mercury   ... deployed  nginx-18.1.14   1.27.1
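We uninstall the requested release:

helm -n mercury uninstall internal-issue-report-apiv1

And list the releases again: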
➜ helm -n mercury ls
NAME                         NAMESPACE ... STATUS    CHART           APP VERSION
internal-issue-report-apiv2  mercury   ... deployed  nginx-18.1.14   1.27.1
internal-issue-report-app    mercury   ... deployed  nginx-18.1.14   1.27.1
Step 2
Next we need to upgrade a release; for this we could first list the charts of the repo:
➜ helm repo list
NAME     URL
bitnami  http://localhost:6000
➜ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
➜ helm search repo nginx --versions
NAME           CHART VERSION   APP VERSION   DESCRIPTION
bitnami/nginx  18.2.0          1.27.1        NGINX Open Source is a web server that can be a...
bitnami/nginx  18.1.15         1.27.1        NGINX Open Source is a web server that can be a...
bitnami/nginx  18.1.14         1.27.1        NGINX Open Source is a web server that can be a...
bitnami/nginx  13.0.0          1.23.0        NGINX Open Source is a web server that can be a...
Here we see that two newer chart versions are available. But the question only requires us to upgrade to any newer chart version available, so we can simply run:
➜ helm -n mercury upgrade internal-issue-report-apiv2 bitnami/nginx
Release "internal-issue-report-apiv2" has been upgraded. Happy Helming!
NAME: internal-issue-report-apiv2
LAST DEPLOYED: Wed Oct  2 14:17:09 2024
NAMESPACE: mercury
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
CHART NAME: nginx
CHART VERSION: 18.2.0
APP VERSION: 1.27.1
...
➜ helm -n mercury ls
NAME                         NAMESPACE ... STATUS    CHART           APP VERSION
internal-issue-report-apiv2  mercury   ... deployed  nginx-18.2.0    1.27.1
internal-issue-report-app    mercury   ... deployed  nginx-18.1.14   1.27.1
Looking good!
INFO: Also check out helm rollback for undoing a helm rollout/upgrade
Step 3
Now we’re asked to install a new release with a customised values setting. For this we first list all possible value settings for the chart:
helm show values bitnami/apache # will show a long list of all possible value-settings

helm show values bitnami/apache | yq e # parse the yaml and show it with colors
It's a huge list, but if we search in it we should find the setting replicaCount: 1 at the top level. This means we can run:
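A sketch of the install command; --set is the standard way to pass Helm values on the command line:

helm -n mercury install internal-issue-report-apache bitnami/apache --set replicaCount=2

We can confirm with helm -n mercury ls and k -n mercury get deploy internal-issue-report-apache that the release exists and runs two replicas.

Step 4

Finally we look for the broken release stuck in pending-install state. Such releases are hidden by default, so we list all releases with -a and uninstall the broken one (the release name below is a placeholder):

helm -n mercury ls -a # list all releases, including pending ones
helm -n mercury uninstall <broken-release-name>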
Thank you Helm for making our lives easier! (Till something breaks)
Question 5 | ServiceAccount, Secret
Solve this question on instance: ssh ckad7326
Team Neptune has its own ServiceAccount named neptune-sa-v2 in Namespace neptune. A coworker needs the token from the Secret that belongs to that ServiceAccount. Write the base64 decoded token to file /opt/course/5/token on ckad7326.
Answer:
Secrets won’t be created automatically for ServiceAccounts, but it’s possible to create a Secret manually and attach it to a ServiceAccount by setting the correct annotation on the Secret. This was done for this task.
k -n neptune get sa # get overview
k -n neptune get secrets # shows all secrets of namespace
k -n neptune get secrets -oyaml | grep annotations -A 1 # shows secrets with first annotation
If a Secret belongs to a ServiceAccount, it’ll have the annotation kubernetes.io/service-account.name. Here the Secret we’re looking for is neptune-secret-1.
➜ k -n neptune get secret neptune-secret-1 -o yaml
apiVersion: v1
data:
  ...
  token: ZXlKaGJHY2lPaUpTVXpJMU5pSXNJbXRwWkNJNkltNWFaRmRxWkRKMmFHTnZRM0JxV0haT1IxZzFiM3BJY201SlowaEhOV3hUWmt3elFuRmFhVEZhZDJNaWZRLmV5SnBjM01pT2lKcmRXSmxjbTVsZEdWekwzTmxjblpwWTJWaFkyTnZkVzUwSWl3aWEzVmlaWEp1WlhSbGN5NXBieTl6WlhKMmFXTmxZV05qYjNWdWRDOXVZVzFsYzNCaFkyVWlPaUp1WlhCMGRXNWxJaXdpYTNWaVpYSnVaWFJsY3k1cGJ5OXpaWEoyYVdObFlXTmpiM1Z1ZEM5elpXTnlaWFF1Ym1GdFpTSTZJbTVsY0hSMWJtVXRjMkV0ZGpJdGRHOXJaVzR0Wm5FNU1tb2lMQ0pyZFdKbGNtNWxkR1Z6TG1sdkwzTmxjblpwWTJWaFkyTnZkVzUwTDNObGNuWnBZMlV0WVdOamIzVnVkQzV1WVcxbElqb2libVZ3ZEhWdVpTMXpZUzEyTWlJc0ltdDFZbVZ5Ym1WMFpYTXVhVzh2YzJWeWRtbGpaV0ZqWTI5MWJuUXZjMlZ5ZG1salpTMWhZMk52ZFc1MExuVnBaQ0k2SWpZMlltUmpOak0yTFRKbFl6TXROREpoWkMwNE9HRTFMV0ZoWXpGbFpqWmxPVFpsTlNJc0luTjFZaUk2SW5ONWMzUmxiVHB6WlhKMmFXTmxZV05qYjNWdWREcHVaWEIwZFc1bE9tNWxjSFIxYm1VdGMyRXRkaklpZlEuVllnYm9NNENUZDBwZENKNzh3alV3bXRhbGgtMnZzS2pBTnlQc2gtNmd1RXdPdFdFcTVGYnc1WkhQdHZBZHJMbFB6cE9IRWJBZTRlVU05NUJSR1diWUlkd2p1Tjk1SjBENFJORmtWVXQ0OHR3b2FrUlY3aC1hUHV3c1FYSGhaWnp5NHlpbUZIRzlVZm1zazVZcjRSVmNHNm4xMzd5LUZIMDhLOHpaaklQQXNLRHFOQlF0eGctbFp2d1ZNaTZ2aUlocnJ6QVFzME1CT1Y4Mk9KWUd5Mm8tV1FWYzBVVWFuQ2Y5NFkzZ1QwWVRpcVF2Y3pZTXM2bno5dXQtWGd3aXRyQlk2VGo5QmdQcHJBOWtfajVxRXhfTFVVWlVwUEFpRU43T3pka0pzSThjdHRoMTBseXBJMUFlRnI0M3Q2QUx5clFvQk0zOWFiRGZxM0Zrc1Itb2NfV013
kind: Secret
...
This shows the base64 encoded token. To get the decoded one we could pipe it manually through base64 -d or we simply do:
➜ k -n neptune describe secret neptune-secret-1
...
Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Im5aZFdqZDJ2aGNvQ3BqWHZOR1g1b3pIcm5JZ0hHNWxTZkwzQnFaaTFad2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJuZXB0dW5lIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Im5lcHR1bmUtc2EtdjItdG9rZW4tZnE5MmoiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoibmVwdHVuZS1zYS12MiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjY2YmRjNjM2LTJlYzMtNDJhZC04OGE1LWFhYzFlZjZlOTZlNSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpuZXB0dW5lOm5lcHR1bmUtc2EtdjIifQ.VYgboM4CTd0pdCJ78wjUwmtalh-2vsKjANyPsh-6guEwOtWEq5Fbw5ZHPtvAdrLlPzpOHEbAe4eUM95BRGWbYIdwjuN95J0D4RNFkVUt48twoakRV7h-aPuwsQXHhZZzy4yimFHG9Ufmsk5Yr4RVcG6n137y-FH08K8zZjIPAsKDqNBQtxg-lZvwVMi6viIhrrzAQs0MBOV82OJYGy2o-WQVc0UUanCf94Y3gT0YTiqQvczYMs6nz9ut-XgwitrBY6Tj9BgPprA9k_j5qEx_LUUZUpPAiEN7OzdkJsI8ctth10lypI1AeFr43t6ALyrQoBM39abDfq3FksR-oc_WMw
ca.crt:     1066 bytes
namespace:  7 bytes
Copy the token (part under token:) and paste it using vim.
vim /opt/course/5/token
File /opt/course/5/token should then contain the decoded token.
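Alternatively to the manual copy, we can decode and write the token in one command using standard kubectl jsonpath output:

k -n neptune get secret neptune-secret-1 -o jsonpath="{.data.token}" | base64 -d > /opt/course/5/token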
Question 6 | ReadinessProbe

Create a single Pod named pod6 in Namespace default of image busybox:1.31.0. The Pod should have a readiness-probe executing cat /tmp/ready. It should initially wait 5 and periodically check every 10 seconds. This will set the container ready only if the file /tmp/ready exists.
The Pod should run the command touch /tmp/ready && sleep 1d, which will create the necessary file to be ready and then idle. Create the Pod and confirm it starts.
Answer:
k run pod6 --image=busybox:1.31.0 --dry-run=client -oyaml --command -- sh -c "touch /tmp/ready && sleep 1d" > 6.yaml
vim 6.yaml
Search for a readiness-probe example on https://kubernetes.io/docs, then copy and alter the relevant section for the task:
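A sketch of 6.yaml after adding the probe; everything except the marked lines comes from the dry-run template:

# 6.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod6
  name: pod6
spec:
  containers:
  - command:
    - sh
    - -c
    - touch /tmp/ready && sleep 1d
    image: busybox:1.31.0
    name: pod6
    resources: {}
    readinessProbe:          # add
      exec:                  # add
        command:             # add
        - cat                # add
        - /tmp/ready         # add
      initialDelaySeconds: 5 # add
      periodSeconds: 10      # add
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Then create the Pod: k -f 6.yaml create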
Running k get pod pod6 we should see the Pod being created and eventually becoming ready:
➜ k get pod pod6
NAME   READY   STATUS              RESTARTS   AGE
pod6   0/1     ContainerCreating   0          2s

➜ k get pod pod6
NAME   READY   STATUS    RESTARTS   AGE
pod6   0/1     Running   0          7s

➜ k get pod pod6
NAME   READY   STATUS    RESTARTS   AGE
pod6   1/1     Running   0          15s
We see that the Pod is finally ready.
Question 7 | Pods, Namespaces
Solve this question on instance: ssh ckad7326
The board of Team Neptune decided to take over control of one e-commerce webserver from Team Saturn. The administrator who once set up this webserver is no longer part of the organisation. All the information you could get was that the e-commerce system is called my-happy-shop.
Search for the correct Pod in Namespace saturn and move it to Namespace neptune. It doesn’t matter if you shut it down and spin it up again, it probably doesn’t have any customers anyway.
Answer:
Let’s see all those Pods:
➜ k -n saturn get pod
NAME                READY   STATUS    RESTARTS   AGE
webserver-sat-001   1/1     Running   0          111m
webserver-sat-002   1/1     Running   0          111m
webserver-sat-003   1/1     Running   0          111m
webserver-sat-004   1/1     Running   0          111m
webserver-sat-005   1/1     Running   0          111m
webserver-sat-006   1/1     Running   0          111m
The Pod names don’t reveal any information. We assume the Pod we are searching for has a label or annotation containing the name my-happy-shop, so we search for it:
k -n saturn describe pod # describe all pods, then manually look for it

# or do some filtering like this
k -n saturn get pod -o yaml | grep my-happy-shop -A10
We see the webserver we’re looking for is webserver-sat-003.
k -n saturn get pod webserver-sat-003 -o yaml > 7_webserver-sat-003.yaml # export

vim 7_webserver-sat-003.yaml
Change the Namespace to neptune, and also remove the status: section, the token volume, the token volumeMount and the nodeName, otherwise the new Pod won’t start. The final file could look as clean as this:
# 7_webserver-sat-003.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    description: this is the server for the E-Commerce System my-happy-shop
  labels:
    id: webserver-sat-003
  name: webserver-sat-003
  namespace: neptune # new namespace here
spec:
  containers:
  - image: nginx:1.16.1-alpine
    imagePullPolicy: IfNotPresent
    name: webserver-sat
  restartPolicy: Always
Then we execute:
k -n neptune create -f 7_webserver-sat-003.yaml

➜ k -n neptune get pod | grep webserver
webserver-sat-003   1/1     Running   0     22s
It seems the server is running in Namespace neptune, so we can do:
k -n saturn delete pod webserver-sat-003 --force --grace-period=0
Let’s confirm only one is running:
➜ k get pod -A | grep webserver-sat-003
neptune   webserver-sat-003   1/1     Running   0     6s
This should list only one Pod called webserver-sat-003 in Namespace neptune, with status Running.
Question 8 | Deployment, Rollouts
Solve this question on instance: ssh ckad7326
There is an existing Deployment named api-new-c32 in Namespace neptune. A developer made an update to the Deployment but the updated version never came online. Check the Deployment history and find a revision that works, then roll back to it. Could you tell Team Neptune what the error was so it doesn’t happen again?
Answer:
k -n neptune get deploy # overview
k -n neptune rollout -h
k -n neptune rollout history -h
➜ k -n neptune describe pod api-new-c32-7d64747c87-zh648 | grep -i error
 ...
 Error: ImagePullBackOff

➜ k -n neptune describe pod api-new-c32-7d64747c87-zh648 | grep -i image
    Image:          ngnix:1.16.3
    Image ID:
      Reason:       ImagePullBackOff
  Warning  Failed   4m28s (x616 over 144m)  kubelet, gke-s3ef67020-28c5-45f7--default-pool-248abd4f-s010  Error: ImagePullBackOff
Someone seems to have added a new image with a spelling mistake in the name, ngnix:1.16.3 instead of nginx:1.16.3. That’s what we can tell Team Neptune!
Now let’s revert to the previous version:
k -n neptune rollout undo deploy api-new-c32
Does this one work?
➜ k -n neptune get deploy api-new-c32
NAME          READY   UP-TO-DATE   AVAILABLE   AGE
api-new-c32   3/3     3            3           146m
Yes! All up-to-date and available.
A fast way to get an overview of a Deployment’s ReplicaSets and their images is:
k -n neptune get rs -o wide | grep api-new-c32
Question 9 | Pod -> Deployment
Solve this question on instance: ssh ckad9043
In Namespace pluto there is a single Pod named holy-api. It has been working okay for a while now but Team Pluto needs it to be more reliable.
Convert the Pod into a Deployment named holy-api with 3 replicas and delete the single Pod once done. The raw Pod template file is available at /opt/course/9/holy-api-pod.yaml.
In addition, the new Deployment should set allowPrivilegeEscalation: false and privileged: false for the security context on container level.
Please create the Deployment and save its yaml under /opt/course/9/holy-api-deployment.yaml on ckad9043.
Answer
There are multiple ways to do this, one is to copy a Deployment example from https://kubernetes.io/docs and then merge it with the existing Pod yaml. That’s what we will do now:
cp /opt/course/9/holy-api-pod.yaml /opt/course/9/holy-api-deployment.yaml # make a copy!

vim /opt/course/9/holy-api-deployment.yaml
Now copy/use a Deployment example yaml and put the Pod’s metadata: and spec: into the Deployment’s template: section:
# /opt/course/9/holy-api-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: holy-api       # name stays the same
  namespace: pluto     # important
spec:
  replicas: 3          # 3 replicas
  selector:
    matchLabels:
      id: holy-api     # set the correct selector
  template:            # => from here down it's the same as the Pod's metadata: and spec: sections
    metadata:
      labels:
        id: holy-api
      name: holy-api
    spec:
      containers:
      - env:
        - name: CACHE_KEY_1
          value: b&MTCi0=[T66RXm!jO@
        - name: CACHE_KEY_2
          value: PCAILGej5Ld@Q%{Q1=#
        - name: CACHE_KEY_3
          value: 2qz-]2OJlWDSTn_;RFQ
        image: nginx:1.17.3-alpine
        name: holy-api-container
        securityContext:                   # add
          allowPrivilegeEscalation: false  # add
          privileged: false                # add
        volumeMounts:
        - mountPath: /cache1
          name: cache-volume1
        - mountPath: /cache2
          name: cache-volume2
        - mountPath: /cache3
          name: cache-volume3
      volumes:
      - emptyDir: {}
        name: cache-volume1
      - emptyDir: {}
        name: cache-volume2
      - emptyDir: {}
        name: cache-volume3
To indent multiple lines using vim you should set the shiftwidth using :set shiftwidth=2. Then mark multiple lines using Shift v and the up/down keys.
To then indent the marked lines press > or <, and to repeat the action press . (dot).
Next create the new Deployment:
k -f /opt/course/9/holy-api-deployment.yaml create
and confirm it’s running:
➜ k -n pluto get pod | grep holy
NAME                        READY   STATUS    RESTARTS   AGE
holy-api                    1/1     Running   0          19m
holy-api-5dbfdb4569-8qr5x   1/1     Running   0          30s
holy-api-5dbfdb4569-b5clh   1/1     Running   0          30s
holy-api-5dbfdb4569-rj2gz   1/1     Running   0          30s
Finally delete the single Pod:
k -n pluto delete pod holy-api --force --grace-period=0
Question 10 | Service, Logs
Solve this question on instance: ssh ckad9043

Team Pluto needs a new cluster internal Service. Create a ClusterIP Service named project-plt-6cc-svc in Namespace pluto. This Service should expose a single Pod named project-plt-6cc-api of image nginx:1.17.3-alpine, create that Pod as well. The Pod should be identified by label project: plt-6cc-api. The Service should use tcp port redirection of 3333:80.
Finally use for example curl from a temporary nginx:alpine Pod to get the response from the Service. Write the response into /opt/course/10/service_test.html on ckad9043. Also check if the logs of Pod project-plt-6cc-api show the request and write those into /opt/course/10/service_test.log on ckad9043.
Answer
k -n pluto run project-plt-6cc-api --image=nginx:1.17.3-alpine --labels project=plt-6cc-api
This will create the requested Pod with the required label. Next we create the Service in front of it, for example using expose:
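A sketch of the expose command; it derives the Service selector from the Pod's existing labels:

k -n pluto expose pod project-plt-6cc-api --name project-plt-6cc-svc --port 3333 --target-port 80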
We could also use create service but then we would need to change the yaml afterwards:
k -n pluto create service -h # help
k -n pluto create service clusterip -h # help
k -n pluto create service clusterip project-plt-6cc-svc --tcp 3333:80 --dry-run=client -oyaml
# now we would need to set the correct selector labels
Check the Service is running:
➜ k -n pluto get pod,svc | grep 6cc
pod/project-plt-6cc-api   1/1     Running   0     9m42s

➜ k -n pluto get ep
NAME                  ENDPOINTS       AGE
project-plt-6cc-svc   10.28.2.32:80   84m
Yes, endpoint there! Finally we check the connection using a temporary Pod:
➜ k run tmp --restart=Never --rm --image=nginx:alpine -i -- curl http://project-plt-6cc-svc.pluto:3333
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0  32210      0 --:--:-- --:--:-- --:--:-- 32210
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
...
Great! Notice that we use the Kubernetes Namespace DNS resolution (project-plt-6cc-svc.pluto) here. We could use just the Service name if we spun up the temporary Pod in Namespace pluto as well.
And now, really finally, copy or pipe the html content into /opt/course/10/service_test.html.
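For the second file we check that the Pod logs show the request and write them into the requested location:

k -n pluto logs project-plt-6cc-api # confirm the request shows up
k -n pluto logs project-plt-6cc-api > /opt/course/10/service_test.log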
Question 11 | Working with Containers
Solve this question on instance: ssh ckad9043

There are files to build a container image located at /opt/course/11/image on ckad9043. The container will run a Golang application which outputs information to stdout. You’re asked to perform the following tasks:
ℹ️ Run all Docker and Podman commands as user root. Use sudo docker and sudo podman or become root with sudo -i
Change the Dockerfile: set ENV variable SUN_CIPHER_ID to hardcoded value 5b9c1065-e39d-4a43-a04a-e59bcea3e03f
Build the image using sudo docker, tag it registry.killer.sh:5000/sun-cipher:v1-docker and push it to the registry
Build the image using sudo podman, tag it registry.killer.sh:5000/sun-cipher:v1-podman and push it to the registry
Run a container using sudo podman, which keeps running detached in the background, named sun-cipher using image registry.killer.sh:5000/sun-cipher:v1-podman
Write the logs your container sun-cipher produces into /opt/course/11/logs on ckad9043
Answer
Dockerfile: list of commands from which an Image can be built
Image: binary file which includes all data/requirements to be run as a Container
Container: running instance of an Image
Registry: place where we can push/pull Images to/from
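Step 1

First we change the Dockerfile; ENV with a hardcoded value is standard Dockerfile syntax:

# /opt/course/11/image/Dockerfile (relevant line)
ENV SUN_CIPHER_ID=5b9c1065-e39d-4a43-a04a-e59bcea3e03f

Step 2

Then we build the image using Docker. A sketch, run from the image directory:

cd /opt/course/11/image
sudo docker build -t registry.killer.sh:5000/sun-cipher:v1-docker .

The image should now exist locally and can be pushed: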
➜ sudo docker image ls
REPOSITORY                           TAG         IMAGE ID       CREATED          SIZE
registry.killer.sh:5000/sun-cipher   v1-docker   409fde3c5bf9   24 seconds ago   7.76MB
...

➜ sudo docker push registry.killer.sh:5000/sun-cipher:v1-docker
The push refers to repository [registry.killer.sh:5000/sun-cipher]
c947fb5eba52: Pushed
33e8713114f8: Pushed
latest: digest: sha256:d216b4136a5b232b738698e826e7d12fccba9921d163b63777be23572250f23d size: 739
There we go, built and pushed.
Step 3
Next we build the image using Podman. Here it’s only required to create one tag. The usage of Podman is very similar (for most cases even identical) to Docker:
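A sketch of the Podman commands, which mirror the Docker ones:

cd /opt/course/11/image
sudo podman build -t registry.killer.sh:5000/sun-cipher:v1-podman .
sudo podman push registry.killer.sh:5000/sun-cipher:v1-podman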
Step 4

We’ll create a container from the previously created image, using Podman, which keeps running in the background:
➜ sudo podman run -d --name sun-cipher registry.killer.sh:5000/sun-cipher:v1-podman
f8199cba792f9fd2d1bd4decc9b7a9c0acfb975d95eda35f5f583c9efbf95589
Step 5
Finally we need to collect some information into files:
➜ sudo podman logs sun-cipher
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8081
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 7887
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 1847
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 4059
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 2081
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 1318
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 4425
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 2540
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 456
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 3300
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 694
2077/03/13 06:50:34 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8511
2077/03/13 06:50:44 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 8162
2077/03/13 06:50:54 random number for 5b9c1065-e39d-4a43-a04a-e59bcea3e03f is 5089
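We write these logs into the requested file:

sudo podman logs sun-cipher > /opt/course/11/logs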
This is looking not too bad at all. Our container skills are back in town!
Question 12 | Storage, PV, PVC, Pod volume
Solve this question on instance: ssh ckad5601
Create a new PersistentVolume named earth-project-earthflower-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.
Next create a new PersistentVolumeClaim in Namespace earth named earth-project-earthflower-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.
Finally create a new Deployment project-earthflower in Namespace earth which mounts that volume at /tmp/project-data. The Pods of that Deployment should be of image httpd:2.4.41-alpine.
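Answer

A sketch of the three resources; the values follow directly from the task, while the selector label is our choice and the volume name data matches the Mounts output shown below:

# 12_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: earth-project-earthflower-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/Volumes/Data"

# 12_pvc.yaml (size and accessModes must match the PV to bind)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: earth-project-earthflower-pvc
  namespace: earth
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

# 12_dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: project-earthflower
  namespace: earth
spec:
  replicas: 1
  selector:
    matchLabels:
      app: project-earthflower
  template:
    metadata:
      labels:
        app: project-earthflower
    spec:
      volumes:
      - name: data                   # volume name, referenced by the mount
        persistentVolumeClaim:
          claimName: earth-project-earthflower-pvc
      containers:
      - image: httpd:2.4.41-alpine
        name: container
        volumeMounts:
        - name: data
          mountPath: /tmp/project-data

Create all three with k create -f and confirm the PVC is Bound via k -n earth get pv,pvc. We can then confirm the mount: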
➜ k -n earth describe pod project-earthflower-d6887f7c5-pn5wv | grep -A2 Mounts:
    Mounts:
      /tmp/project-data from data (rw) # there it is
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-n2sjj (ro)
Question 13 | Storage, StorageClass, PVC
Solve this question on instance: ssh ckad9043
Team Moonpie, which has the Namespace moon, needs more storage. Create a new PersistentVolumeClaim named moon-pvc-126 in that namespace. This claim should use a new StorageClass moon-retain with the provisioner set to moon-retainer and the reclaimPolicy set to Retain. The claim should request storage of 3Gi, an accessMode of ReadWriteOnce and should use the new StorageClass.
The provisioner moon-retainer will be created by another team, so it’s expected that the PVC will not be bound yet. Confirm this by writing the event message from the PVC into file /opt/course/13/pvc-126-reason on ckad9043.
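Answer

First we create the StorageClass; a sketch based on the example in the docs:

# 13_sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: moon-retain
provisioner: moon-retainer
reclaimPolicy: Retain

k -f 13_sc.yaml create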
Now the same for the PersistentVolumeClaim, head to the docs, copy an example and transform it into:
vim 13_pvc.yaml
# 13_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: moon-pvc-126   # name as requested
  namespace: moon      # important
spec:
  accessModes:
    - ReadWriteOnce    # RWO
  resources:
    requests:
      storage: 3Gi     # size
  storageClassName: moon-retain # uses our new storage class
k -f 13_pvc.yaml create
Next we check the status of the PVC:
➜ k -n moon get pvc
NAME           STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
moon-pvc-126   Pending                                      moon-retain    2m57s

➜ k -n moon describe pvc moon-pvc-126
Name:          moon-pvc-126
...
Status:        Pending
...
Events:
  Type    Reason                Age                  From                         Message
  ----    ------                ----                 ----                         -------
  Normal  ExternalProvisioning  4s (x19 over 4m28s)  persistentvolume-controller  Waiting for a volume to be created either by the external provisioner 'moon-retainer' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
This confirms that the PVC waits for the provisioner moon-retainer to be created. Finally we copy or write the event message into the requested location:
# /opt/course/13/pvc-126-reason
Waiting for a volume to be created either by the external provisioner 'moon-retainer' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
Question 14 | Secret, Secret-Volume, Secret-Env
Solve this question on instance: ssh ckad9043
You need to make changes on an existing Pod in Namespace moon called secret-handler. Create a new Secret secret1 which contains user=test and pass=pwd. The Secret‘s content should be available in Pod secret-handler as environment variables SECRET1_USER and SECRET1_PASS. The yaml for Pod secret-handler is available at /opt/course/14/secret-handler.yaml.
There is existing yaml for another Secret at /opt/course/14/secret2.yaml, create this Secret and mount it inside the same Pod at /tmp/secret2. Your changes should be saved under /opt/course/14/secret-handler-new.yaml on ckad9043. Both Secrets should only be available in Namespace moon.
Answer
k -n moon get pod # show pods
k -n moon create secret -h # help
k -n moon create secret generic -h # help
k -n moon create secret generic secret1 --from-literal user=test --from-literal pass=pwd
Next we create the second Secret from the given location, making sure it’ll be created in Namespace moon:
k -n moon -f /opt/course/14/secret2.yaml create
➜ k -n moon get secret
NAME                  TYPE                                  DATA   AGE
default-token-rvzcf   kubernetes.io/service-account-token   3      66m
secret1               Opaque                                2      4m3s
secret2               Opaque                                1      8s
We will now edit the Pod yaml:
cp /opt/course/14/secret-handler.yaml /opt/course/14/secret-handler-new.yaml
vim /opt/course/14/secret-handler-new.yaml
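A sketch of the relevant additions; the volume name is our choice, everything else follows from the task:

# /opt/course/14/secret-handler-new.yaml (relevant parts)
spec:
  volumes:
  - name: secret2-volume          # add
    secret:                       # add
      secretName: secret2         # add
  containers:
  - name: secret-handler
    env:
    - name: SECRET1_USER          # add
      valueFrom:                  # add
        secretKeyRef:             # add
          name: secret1           # add
          key: user               # add
    - name: SECRET1_PASS          # add
      valueFrom:                  # add
        secretKeyRef:             # add
          name: secret1           # add
          key: pass               # add
    volumeMounts:
    - name: secret2-volume        # add
      mountPath: /tmp/secret2     # add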
There is also the possibility to import all keys from a Secret as env variables at once, though the env variable names will then be the same as in the Secret, which doesn’t work for the requirements here:
containers:
- name: secret-handler
  ...
  envFrom:
  - secretRef: # also works for configMapRef
      name: secret1
Then we apply the changes:
k -f /opt/course/14/secret-handler.yaml delete --force --grace-period=0
k -f /opt/course/14/secret-handler-new.yaml create
Instead of running delete and create we can also use replace with force:
k -f /opt/course/14/secret-handler-new.yaml replace --force --grace-period=0
It was not requested directly, but you should always confirm it’s working:
➜ k -n moon exec secret-handler -- cat /tmp/secret2/key
12345678
Question 15 | ConfigMap, Configmap-Volume
Solve this question on instance: ssh ckad9043
Team Moonpie has a nginx server Deployment called web-moon in Namespace moon. Someone started configuring it but it was never completed. To complete it, please create a ConfigMap called configmap-web-moon-html containing the content of file /opt/course/15/web-moon.html under the data key-name index.html.
The Deployment web-moon is already configured to work with this ConfigMap and serve its content. Test the nginx configuration for example using curl from a temporary nginx:alpine Pod.
Answer
Let’s check the existing Pods:
➜ k -n moon get pod
NAME                        READY   STATUS              RESTARTS   AGE
secret-handler              1/1     Running             0          55m
web-moon-847496c686-2rzj4   0/1     ContainerCreating   0          33s
web-moon-847496c686-9nwwj   0/1     ContainerCreating   0          33s
web-moon-847496c686-cxdbx   0/1     ContainerCreating   0          33s
web-moon-847496c686-hvqlw   0/1     ContainerCreating   0          33s
web-moon-847496c686-tj7ct   0/1     ContainerCreating   0          33s

➜ k -n moon describe pod web-moon-847496c686-2rzj4
...
Warning  FailedMount  31s (x7 over 63s)  kubelet, gke-test-default-pool-ce83a51a-p6s4  MountVolume.SetUp failed for volume "html-volume" : configmaps "configmap-web-moon-html" not found
Good so far, now let’s create the missing ConfigMap:
xxxxxxxxxx k -n moon create configmap -h # help
k -n moon create configmap configmap-web-moon-html --from-file=index.html=/opt/course/15/web-moon.html # important to set the index.html key
This should create a ConfigMap with yaml like:
apiVersion: v1
data:
  index.html: | # notice the key index.html, this will be the filename when mounted
    <!DOCTYPE html>
    <html lang="en">
    <head>
        <meta charset="UTF-8">
        <title>Web Moon Webpage</title>
    </head>
    <body>
    This is some great content.
    </body>
    </html>
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: configmap-web-moon-html
  namespace: moon
After waiting a bit or deleting/recreating (k -n moon rollout restart deploy web-moon) the Pods we should see:
➜ k -n moon get pod
NAME                        READY   STATUS    RESTARTS   AGE
secret-handler              1/1     Running   0          59m
web-moon-847496c686-2rzj4   1/1     Running   0          4m28s
web-moon-847496c686-9nwwj   1/1     Running   0          4m28s
web-moon-847496c686-cxdbx   1/1     Running   0          4m28s
web-moon-847496c686-hvqlw   1/1     Running   0          4m28s
web-moon-847496c686-tj7ct   1/1     Running   0          4m28s
Looking much better. Finally we check if the nginx returns the correct content:
k -n moon get pod -o wide # get pod cluster IPs
Then use one IP to test the configuration:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.44.0.78
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   161  100   161    0     0  80500      0 --:--:-- --:--:-- --:--:--  157k
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>Web Moon Webpage</title>
</head>
<body>
This is some great content.
</body>
For debugging or further checks we could find out more about the Pods volume mounts:
➜ k -n moon describe pod web-moon-c77655cc-dc8v4 | grep -A2 Mounts:
    Mounts:
      /usr/share/nginx/html from html-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rvzcf (ro)
Here it was important that the mounted file is named index.html and not the original web-moon.html; this is controlled through the ConfigMap data key.
Question 16 | Logging sidecar
Solve this question on instance: ssh ckad7326
The Tech Lead of Mercury2D decided it’s time for more logging, to finally fight all these missing data incidents. There is an existing container named cleaner-con in Deployment cleaner in Namespace mercury. This container mounts a volume and writes logs into a file called cleaner.log.
The yaml for the existing Deployment is available at /opt/course/16/cleaner.yaml. Persist your changes at /opt/course/16/cleaner-new.yaml on ckad7326 but also make sure the Deployment is running.
Create a sidecar container named logger-con, image busybox:1.31.0, which mounts the same volume and writes the content of cleaner.log to stdout, you can use the tail -f command for this. This way it can be picked up by kubectl logs.
Check if the logs of the new container reveal something about the missing data incidents.
Answer
Sidecar containers in K8s are initContainers with restartPolicy: Always. Search for “Sidecar Containers” in the K8s Docs to familiarise yourself if necessary.
cp /opt/course/16/cleaner.yaml /opt/course/16/cleaner-new.yaml
vim /opt/course/16/cleaner-new.yaml
Add a sidecar container which outputs the log file to stdout:
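A sketch of the native sidecar definition, i.e. an initContainer with restartPolicy: Always; the volume name and mount path match the legacy example below:

# /opt/course/16/cleaner-new.yaml (relevant part)
spec:
  ...
  template:
    ...
    spec:
      ...
      initContainers:
      - name: logger-con                # add
        image: busybox:1.31.0           # add
        restartPolicy: Always           # add -> this makes it a native sidecar
        command: ["sh", "-c", "tail -f /var/log/cleaner/cleaner.log"] # add
        volumeMounts:                   # add
        - name: logs                    # add
          mountPath: /var/log/cleaner   # add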
In earlier K8s versions it was necessary to define sidecar containers as additional application containers under containers: like this:
# LEGACY example of defining sidecar containers in earlier K8s versions
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: cleaner
  namespace: mercury
spec:
  ...
  template:
    ...
    spec:
      ...
      initContainers:
      - name: init
        image: bash:5.0.11
        ...
      containers:
      - name: cleaner-con
        image: bash:5.0.11
        ...
      - name: logger-con               # LEGACY example
        image: busybox:1.31.0          # LEGACY example
        command: ["sh", "-c", "tail -f /var/log/cleaner/cleaner.log"] # LEGACY example
        volumeMounts:                  # LEGACY example
        - name: logs                   # LEGACY example
          mountPath: /var/log/cleaner  # LEGACY example
Then apply the changes and check the logs of the sidecar:
k -f /opt/course/16/cleaner-new.yaml apply
This will cause a deployment rollout of which we can get more details:
k -n mercury rollout history deploy cleaner
k -n mercury rollout history deploy cleaner --revision 1
k -n mercury rollout history deploy cleaner --revision 2
Check Pod statuses:
➜ k -n mercury get pod
NAME                       READY   STATUS     RESTARTS   AGE
cleaner-86b7758668-9pw6t   2/2     Running    0          6s
cleaner-86b7758668-qgh4v   0/2     Init:0/1   0          1s

➜ k -n mercury get pod
NAME                       READY   STATUS    RESTARTS   AGE
cleaner-86b7758668-9pw6t   2/2     Running   0          14s
cleaner-86b7758668-qgh4v   2/2     Running   0          9s
Finally check the logs of the logging sidecar container:
➜ k -n mercury logs cleaner-576967576c-cqtgx -c logger-con
init
Wed Sep 11 10:45:44 UTC 2099: remove random file
Wed Sep 11 10:45:45 UTC 2099: remove random file
...
Mystery solved, something is removing files at random ;) It’s important to understand how containers can communicate with each other using volumes.
Question 17 | InitContainer
Solve this question on instance: ssh ckad5601
Last lunch you told your coworker from department Mars Inc how amazing InitContainers are. Now he would like to see one in action. There is a Deployment yaml at /opt/course/17/test-init-container.yaml. This Deployment spins up a single Pod of image nginx:1.17.3-alpine and serves files from a mounted volume, which is empty right now.
Create an InitContainer named init-con which also mounts that volume and creates a file index.html with content check this out! in the root of the mounted volume. For this test we ignore that it doesn’t contain valid html.
The InitContainer should be using image busybox:1.31.0. Test your implementation for example using curl from a temporary nginx:alpine Pod.
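Answer

First we make a copy of the provided yaml and add the InitContainer. A sketch of the relevant part; the volume name and mount path are assumptions based on a typical setup, the existing volume definition in the file should be reused:

cp /opt/course/17/test-init-container.yaml 17_test-init-container.yaml
vim 17_test-init-container.yaml

# 17_test-init-container.yaml (relevant part)
spec:
  template:
    spec:
      initContainers:                 # add
      - name: init-con                # add
        image: busybox:1.31.0         # add
        command: ['sh', '-c', 'echo "check this out!" > /tmp/web-content/index.html'] # add
        volumeMounts:                 # add
        - name: web-content           # assumption: the volume name already defined in the file
          mountPath: /tmp/web-content # add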
k -f 17_test-init-container.yaml create
Finally we test the configuration:
k -n mars get pod -o wide # to get the cluster IP
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl 10.0.0.67
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
check this out!
Beautiful.
Question 18 | Service misconfiguration
Solve this question on instance: ssh ckad5601
There seems to be an issue in Namespace mars where the ClusterIP service manager-api-svc should make the Pods of Deployment manager-api-deployment available inside the cluster.
You can test this with curl manager-api-svc.mars:4444 from a temporary nginx:alpine Pod. Check for the misconfiguration and apply a fix.
Answer
First let’s get an overview:
➜ k -n mars get all
NAME                                         READY   STATUS    RESTARTS   AGE
pod/manager-api-deployment-dbcc6657d-bg2hh   1/1     Running   0          98m
pod/manager-api-deployment-dbcc6657d-f5fv4   1/1     Running   0          98m
pod/manager-api-deployment-dbcc6657d-httjv   1/1     Running   0          98m
pod/manager-api-deployment-dbcc6657d-k98xn   1/1     Running   0          98m
pod/test-init-container-5db7c99857-htx6b     1/1     Running   0          2m19s

NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/manager-api-svc   ClusterIP   10.15.241.159   <none>        4444/TCP   99m

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/manager-api-deployment   4/4     4            4           98m
deployment.apps/test-init-container      1/1     1            1           2m19s
...
Everything seems to be running, but we can’t seem to get a connection:
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
If you don't see a command prompt, try pressing enter.
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
curl: (28) Connection timed out after 1000 milliseconds
pod "tmp" deleted
pod mars/tmp terminated (Error)
Ok, let’s try to connect to one pod directly:
k -n mars get pod -o wide # get cluster IP

➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 10.0.1.14
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
The Pods itself seem to work. Let’s investigate the Service a bit:
➜ k -n mars describe service manager-api-svc
Name:              manager-api-svc
Namespace:         mars
Labels:            app=manager-api-svc
...
Endpoints:         <none>
...
Endpoint inspection is also possible using:
k -n mars get ep
No endpoints - No good. We check the Service yaml:
k -n mars edit service manager-api-svc
# k -n mars edit service manager-api-svc
apiVersion: v1
kind: Service
metadata:
  ...
  labels:
    app: manager-api-svc
  name: manager-api-svc
  namespace: mars
  ...
spec:
  clusterIP: 10.3.244.121
  ports:
  - name: 4444-80
    port: 4444
    protocol: TCP
    targetPort: 80
  selector:
    #id: manager-api-deployment # wrong selector, needs to point to pod!
    id: manager-api-pod
  sessionAffinity: None
  type: ClusterIP
Though Pods are usually never created without a Deployment or ReplicaSet, Services always select for Pods directly. This gives great flexibility because Pods could be created through various customized ways. After saving the new selector we check the Service again for endpoints:
➜ k -n mars get ep
NAME              ENDPOINTS                                             AGE
manager-api-svc   10.0.0.30:80,10.0.1.30:80,10.0.1.31:80 + 1 more...    41m
Endpoints - Good! Now we try connecting again:
➜ k -n mars run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0    99k      0 --:--:-- --:--:-- --:--:--   99k
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
And we fixed it. It’s good to know how to use Kubernetes DNS resolution from a different Namespace. It wasn’t necessary here, but we could also spin up the temporary Pod in the default Namespace:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc:4444
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
curl: (6) Could not resolve host: manager-api-svc
pod "tmp" deleted
pod default/tmp terminated (Error)

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 manager-api-svc.mars:4444
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0  68000      0 --:--:-- --:--:-- --:--:-- 68000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
Both the short name manager-api-svc.mars and the long name manager-api-svc.mars.svc.cluster.local work.
Question 19 | Service ClusterIP->NodePort
Solve this question on instance: ssh ckad5601
In Namespace jupiter you’ll find an apache Deployment (with one replica) named jupiter-crew-deploy and a ClusterIP Service called jupiter-crew-svc which exposes it. Change this service to a NodePort one to make it available on all nodes on port 30100.
Test the NodePort Service using the internal IP of all available nodes and the port 30100 using curl, you can reach the internal node IPs directly from your main terminal. On which nodes is the Service reachable? On which node is the Pod running?
Answer
First we get an overview:
➜ k -n jupiter get all
NAME                                      READY   STATUS    RESTARTS   AGE
pod/jupiter-crew-deploy-8cdf99bc9-klwqt   1/1     Running   0          34m

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/jupiter-crew-svc   ClusterIP   10.100.254.66   <none>        8080/TCP   34m
...
(Optional) Next we check if the ClusterIP Service actually works:
➜ k -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    45  100    45    0     0   5000      0 --:--:-- --:--:-- --:--:--  5000
<html><body><h1>It works!</h1></body></html>
The Service is working great. Next we change the Service type to NodePort and set the port:
k -n jupiter edit service jupiter-crew-svc
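A sketch of the edited Service; the port name is an assumption based on the generated default:

# k -n jupiter edit service jupiter-crew-svc
apiVersion: v1
kind: Service
metadata:
  name: jupiter-crew-svc
  namespace: jupiter
spec:
  ports:
  - name: 8080-80
    nodePort: 30100 # add the nodePort
    port: 8080
    protocol: TCP
    targetPort: 80
  type: NodePort    # change type from ClusterIP to NodePort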
➜ k -n jupiter get svc
NAME               TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
jupiter-crew-svc   NodePort   10.3.245.70   <none>        8080:30100/TCP   3m52s
(Optional) And we confirm that the service is still reachable internally:
➜ k -n jupiter run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 jupiter-crew-svc:8080
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
<html><body><h1>It works!</h1></body></html>
Nice. A NodePort Service kind of lies on top of a ClusterIP one, making the ClusterIP Service reachable on the Node IPs (internal and external). Next we get the internal IPs of all nodes to check the connectivity:
➜ k get nodes -o wide
NAME                     STATUS   ROLES           AGE   VERSION   INTERNAL-IP      ...
cluster1-controlplane1   Ready    control-plane   18h   v1.32.0   192.168.100.11   ...
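From the main terminal we can now curl the node's internal IP on the NodePort directly; it should return the same apache page:

curl 192.168.100.11:30100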
Here we only have one node in the cluster, but the Service would be reachable on all of them. Even if the Pod is just running on one specific node, the Service makes it available through port 30100 on the internal and external IP addresses of all nodes. This is at least the common/default behaviour but can depend on cluster configuration.
Question 20 | NetworkPolicy
Solve this question on instance: ssh ckad7326
In Namespace venus you’ll find two Deployments named api and frontend. Both Deployments are exposed inside the cluster using Services. Create a NetworkPolicy named np1 which restricts outgoing tcp connections from Deployment frontend and only allows those going to Deployment api. Make sure the NetworkPolicy still allows outgoing traffic on UDP/TCP ports 53 for DNS resolution.
Test using: wget www.google.com and wget api:2222 from a Pod of Deployment frontend.
Answer
INFO: For learning NetworkPolicies check out https://editor.cilium.io. But you’re not allowed to use it during the exam.
First we get an overview:
➜ k -n venus get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/api-5979b95578-gktxp        1/1     Running   0          57s
pod/api-5979b95578-lhcl5        1/1     Running   0          57s
pod/frontend-789cbdc677-c9v8h   1/1     Running   0          57s
pod/frontend-789cbdc677-npk2m   1/1     Running   0          57s
pod/frontend-789cbdc677-pl67g   1/1     Running   0          57s
pod/frontend-789cbdc677-rjt5r   1/1     Running   0          57s
pod/frontend-789cbdc677-xgf5n   1/1     Running   0          57s

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/api        ClusterIP   10.3.255.137   <none>        2222/TCP   37s
service/frontend   ClusterIP   10.3.255.135   <none>        80/TCP     57s
...
(Optional) This is not necessary but we could check if the Services are working inside the cluster:
➜ k -n venus run tmp --restart=Never --rm -i --image=busybox -i -- wget -O- frontend:80
Connecting to frontend:80 (10.3.245.9:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

➜ k -n venus run tmp --restart=Never --rm --image=busybox -i -- wget -O- api:2222
Connecting to api:2222 (10.3.250.233:2222)
<html><body><h1>It works!</h1></body></html>
Then we use any frontend Pod and check if it can reach external names and the api Service:
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- www.google.com
Connecting to www.google.com (216.58.205.227:80)
-                    100% |********************************| 12955  0:00:00 ETA
<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage" lang="en"><head>
...

➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
-                    100% |********************************|    45  0:00:00 ETA
...
We see Pods of frontend can reach the api and external names.
vim 20_np1.yaml
Now we head to https://kubernetes.io/docs, search for NetworkPolicy, copy the example code and adjust it to:
# 20_np1.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np1
  namespace: venus
spec:
  podSelector:
    matchLabels:
      id: frontend   # label of the pods this policy should be applied on
  policyTypes:
  - Egress           # we only want to control egress
  egress:
  - to:              # 1st egress rule
    - podSelector:   # allow egress only to pods with api label
        matchLabels:
          id: api
  - ports:           # 2nd egress rule
    - port: 53       # allow DNS UDP
      protocol: UDP
    - port: 53       # allow DNS TCP
      protocol: TCP
Notice that we specify two egress rules in the yaml above. If we specify multiple egress rules then these are connected using a logical OR. So in the example above we do:
allow outgoing traffic if (destination pod has label id:api) OR ((port is 53 UDP) OR (port is 53 TCP))
Let’s have a look at example code which wouldn’t work in our case:
# this example does not work in our case
...
  egress:
  - to:              # 1st AND ONLY egress rule
    - podSelector:   # allow egress only to pods with api label
        matchLabels:
          id: api
    ports:           # STILL THE SAME RULE but just an additional selector
    - port: 53       # allow DNS UDP
      protocol: UDP
    - port: 53       # allow DNS TCP
      protocol: TCP
In the yaml above we only specify one egress rule with two selectors. It can be translated into:
allow outgoing traffic if (destination pod has label id:api) AND ((port is 53 UDP) OR (port is 53 TCP))
Apply the correct policy:
k -f 20_np1.yaml create
And when we try again, external access is no longer working:
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- www.google.de
Connecting to www.google.de:2222 (216.58.207.67:80)
^C

➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- -T 5 www.google.de:80
Connecting to www.google.com (172.217.203.104:80)
wget: download timed out
command terminated with exit code 1
Internal connections to api work as before:
➜ k -n venus exec frontend-789cbdc677-c9v8h -- wget -O- api:2222
<html><body><h1>It works!</h1></body></html>
Connecting to api:2222 (10.3.255.137:2222)
-                    100% |********************************|    45  0:00:00 ETA
Question 21 | Requests and Limits, ServiceAccount
Solve this question on instance: ssh ckad7326
Team Neptune needs 3 Pods of image httpd:2.4-alpine; create a Deployment named neptune-10ab for this. The containers should be named neptune-pod-10ab. Each container should have a memory request of 20Mi and a memory limit of 50Mi.
Team Neptune has its own ServiceAccount neptune-sa-v2 under which the Pods should run. The Deployment should be in Namespace neptune.
Answer:
k -n neptune create deployment -h # help
k -n neptune create deploy -h # deploy is short for deployment
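A sketch of the workflow and the final yaml, generated via dry-run and then edited:

k -n neptune create deploy neptune-10ab --image=httpd:2.4-alpine --replicas=3 --dry-run=client -oyaml > 21.yaml
vim 21.yaml

# 21.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: neptune-10ab
  name: neptune-10ab
  namespace: neptune                  # add
spec:
  replicas: 3
  selector:
    matchLabels:
      app: neptune-10ab
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: neptune-10ab
    spec:
      serviceAccountName: neptune-sa-v2 # add
      containers:
      - image: httpd:2.4-alpine
        name: neptune-pod-10ab          # change
        resources:
          requests:                     # add
            memory: 20Mi                # add
          limits:                       # add
            memory: 50Mi                # add
status: {}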
k create -f 21.yaml # namespace already set in yaml
To verify all Pods are running we do:
➜ k -n neptune get pod | grep neptune-10ab
neptune-10ab-7d4b8d45b-4nzj5   1/1     Running   0          57s
neptune-10ab-7d4b8d45b-lzwrf   1/1     Running   0          17s
neptune-10ab-7d4b8d45b-z5hcc   1/1     Running   0          17s
Question 22 | Labels, Annotations
Solve this question on instance: ssh ckad9043
Team Sunny needs to identify some of their Pods in namespace sun. They ask you to add a new label protected: true to all Pods with an existing label type: worker or type: runner. Also add an annotation protected: do not delete this pod to all Pods having the new label protected: true.
Answer:
If we only want to get Pods with certain labels we can run:
k -n sun get pod -l type=runner # only pods with label runner
We can use this label filtering also when using other commands, like setting new labels:
k label -h # help
k -n sun label pod -l type=runner protected=true # run for label runner
k -n sun label pod -l type=worker protected=true # run for label worker
Or we could run:
xxxxxxxxxx k -n sun label pod -l "type in (worker,runner)" protected=true
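Then we add the requested annotation to all Pods carrying the new label:

k annotate -h # help
k -n sun annotate pod -l protected=true protected="do not delete this pod"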
This is a preview of the CKAD Simulator content. The full CKAD Simulator contains 22 different questions. These preview questions are in addition to the provided ones and can also be solved in the interactive environment.
Preview Question 1
Solve this question on instance: ssh ckad9043
In Namespace pluto there is a Deployment named project-23-api. It has been working okay for a while but Team Pluto needs it to be more reliable. Implement a liveness-probe which checks the container to be reachable on port 80. Initially the probe should wait 10 seconds, then run periodically every 15 seconds.
The original Deployment yaml is available at /opt/course/p1/project-23-api.yaml. Save your changes at /opt/course/p1/project-23-api-new.yaml and apply the changes.
Answer
First we get an overview:
➜ k -n pluto get all -o wide
NAME                                  READY   STATUS    ...   IP           ...
pod/holy-api                          1/1     Running   ...   10.12.0.26   ...
pod/project-23-api-784857f54c-dx6h6   1/1     Running   ...   10.12.2.15   ...
pod/project-23-api-784857f54c-sj8df   1/1     Running   ...   10.12.1.18   ...
pod/project-23-api-784857f54c-t4xmh   1/1     Running   ...   10.12.0.23   ...

NAME                             READY   UP-TO-DATE   AVAILABLE   ...
deployment.apps/project-23-api   3/3     3            3           ...
To note: we see another Pod here called holy-api which is part of another section. This is often the case in the provided scenarios, so be careful to only manipulate the resources you need to. Just like in the real world and in the exam.
Next we use nginx:alpine and curl to check if one Pod is accessible on port 80:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 10.12.2.15
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
We could also use busybox and wget for this:
➜ k run tmp --restart=Never --rm --image=busybox -i -- wget -O- 10.12.2.15
Connecting to 10.12.2.15 (10.12.2.15:80)
writing to stdout
-                    100% |********************************|   612  0:00:00 ETA
written to stdout
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
Now that we’re sure the Deployment works we can continue with altering the provided yaml:
cp /opt/course/p1/project-23-api.yaml /opt/course/p1/project-23-api-new.yaml
vim /opt/course/p1/project-23-api-new.yaml
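A sketch of the relevant addition; copy a livenessProbe example from the docs and adjust the values, leaving the existing container definition untouched:

# /opt/course/p1/project-23-api-new.yaml (relevant part)
    spec:
      containers:
      - image: ...                 # existing container definition stays as-is
        livenessProbe:             # add
          tcpSocket:               # add
            port: 80               # add
          initialDelaySeconds: 10  # add
          periodSeconds: 15        # add

Then apply the changes:

k -f /opt/course/p1/project-23-api-new.yaml apply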
Preview Question 2
Solve this question on instance: ssh ckad9043

Team Sun needs a new Deployment named sunny with 4 replicas of image nginx:1.17.3-alpine in Namespace sun. The Deployment and its Pods should use the existing ServiceAccount sa-sun-deploy.
Expose the Deployment internally using a ClusterIP Service named sun-srv on port 9999. The nginx containers should run as default on port 80. The management of Team Sun would like to execute a command to check that all Pods are running on occasion. Write that command into file /opt/course/p2/sunny_status_command.sh. The command should use kubectl.
Answer
k -n sun create deployment -h # help

k -n sun create deployment sunny --image=nginx:1.17.3-alpine --dry-run=client -oyaml > p2_sunny.yaml
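Next we edit the generated yaml and set the ServiceAccount and the replicas. A sketch of the relevant part:

vim p2_sunny.yaml

# p2_sunny.yaml (relevant part)
spec:
  replicas: 4 # change
  ...
  template:
    ...
    spec:
      serviceAccountName: sa-sun-deploy # add
      containers:
      - image: nginx:1.17.3-alpine
        name: nginx
        resources: {}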
➜ k create -f p2_sunny.yaml
deployment.apps/sunny created

➜ k -n sun get pod
NAME                     READY   STATUS    RESTARTS   AGE
0509649a                 1/1     Running   0          149m
0509649b                 1/1     Running   0          149m
1428721e                 1/1     Running   0          149m
...
sunny-64df8dbdbb-9mxbw   1/1     Running   0          10s
sunny-64df8dbdbb-mp5cf   1/1     Running   0          10s
sunny-64df8dbdbb-pggdf   1/1     Running   0          6s
sunny-64df8dbdbb-zvqth   1/1     Running   0          7s
Confirmed, the AGE column is always important information to check whether changes were applied. Next we expose the Pods by creating the Service:
k -n sun expose -h # help
k -n sun expose deployment sunny --name sun-srv --port 9999 --target-port 80
Using expose instead of kubectl create service clusterip is faster because it already sets the correct selector-labels. The previous command would produce this yaml:
# k -n sun expose deployment sunny --name sun-srv --port 9999 --target-port 80
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: sunny
  name: sun-srv # required by task
spec:
  ports:
  - port: 9999     # service port
    protocol: TCP
    targetPort: 80 # target port
  selector:
    app: sunny     # selector is important
status:
  loadBalancer: {}
Let’s test the Service using curl from a temporary Pod:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 sun-srv.sun:9999
Connecting to sun-srv.sun:9999 (10.23.253.120:9999)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Because the Service is in a different Namespace than our temporary Pod, it is reachable using the name sun-srv.sun or the full name sun-srv.sun.svc.cluster.local.
Finally we need a command which can be executed to check whether all Pods are running; this can be done with:
vim /opt/course/p2/sunny_status_command.sh

# /opt/course/p2/sunny_status_command.sh
kubectl -n sun get deployment sunny
To run the command:
➜ sh /opt/course/p2/sunny_status_command.sh
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
sunny   4/4     4            4           13m
Preview Question 3
Solve this question on instance: ssh ckad5601
Management of EarthAG recorded that one of their Services stopped working. Dirk, the administrator, left already for the long weekend. All the information they could give you is that it was located in Namespace earth and that it stopped working after the latest rollout. All Services of EarthAG should be reachable from inside the cluster.
Find the Service, fix any issues and confirm it’s working again. Write the reason of the error into file /opt/course/p3/ticket-654.txt so Dirk knows what the issue was.
Answer
First we get an overview of the resources in Namespace earth with k -n earth get all.
First impression could be that all Pods are in status RUNNING. But looking closely we see that some of the Pods are not ready, which also confirms what we see about one Deployment and one ReplicaSet. This could be the error we need to investigate further.
Another approach could be to check the Services for missing endpoints:
➜ k -n earth get ep
NAME                ENDPOINTS                                           AGE
earth-2x3-api-svc   10.0.0.10:80,10.0.1.5:80,10.0.2.4:80                116m
earth-2x3-web-svc   10.0.0.11:80,10.0.0.12:80,10.0.1.6:80 + 3 more...   116m
earth-3cc-web

Service earth-3cc-web doesn’t have endpoints. This could be a selector/label misconfiguration, or the endpoints are actually not available/ready.
Checking all Services for connectivity should show the same (this step is optional and just for demonstration):
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-2x3-api-svc.earth:4546
...
<html><body><h1>It works!</h1></body></html>

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-2x3-web-svc.earth:4545
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    45  100    45    0     0   5000      0 --:--:-- --:--:-- --:--:--  5000
<html><body><h1>It works!</h1></body></html>

➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-3cc-web.earth:6363
If you don't see a command prompt, try pressing enter.
  0     0    0     0    0     0      0      0 --:--:--  0:00:05 --:--:--     0
curl: (28) Connection timed out after 5000 milliseconds
pod "tmp" deleted
pod default/tmp terminated (Error)
Notice that we use here for example earth-2x3-api-svc.earth. We could also spin up a temporary Pod in Namespace earth and connect directly to earth-2x3-api-svc.
We get no connection to earth-3cc-web.earth:6363. Let’s look at the Deployment earth-3cc-web. Here we see that the requested amount of replicas is not available/ready:
➜ k -n earth get deploy earth-3cc-web
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
earth-3cc-web   0/4     4            0           7m18s
To continue we check the Deployment yaml for some misconfiguration:
k -n earth edit deploy earth-3cc-web
# k -n earth edit deploy earth-3cc-web
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  ...
  generation: 3 # there have been rollouts
  name: earth-3cc-web
  namespace: earth
  ...
spec:
  ...
  template:
    metadata:
      creationTimestamp: null
      labels:
        id: earth-3cc-web
    spec:
      containers:
      - image: nginx:1.16.1-alpine
        imagePullPolicy: IfNotPresent
        name: nginx
        readinessProbe:
          failureThreshold: 3
          initialDelaySeconds: 10
          periodSeconds: 20
          successThreshold: 1
          tcpSocket:
            port: 82 # this port doesn't seem to be right, should be 80
          timeoutSeconds: 1
        ...
We change the readiness-probe port, save and check the Pods:
➜ k -n earth get pod -l id=earth-3cc-web
NAME                            READY   STATUS    RESTARTS   AGE
earth-3cc-web-d49645966-52vb9   0/1     Running   0          6s
earth-3cc-web-d49645966-5tts6   0/1     Running   0          6s
earth-3cc-web-d49645966-db5gp   0/1     Running   0          6s
earth-3cc-web-d49645966-mk7gr   0/1     Running   0          6s
Running, but still not in ready state. Wait 10 seconds (initialDelaySeconds of readinessProbe) and check again:
➜ k -n earth get pod -l id=earth-3cc-web
NAME                            READY   STATUS    RESTARTS   AGE
earth-3cc-web-d49645966-52vb9   1/1     Running   0          32s
earth-3cc-web-d49645966-5tts6   1/1     Running   0          32s
earth-3cc-web-d49645966-db5gp   1/1     Running   0          32s
earth-3cc-web-d49645966-mk7gr   1/1     Running   0          32s
Let’s check the service again:
➜ k run tmp --restart=Never --rm -i --image=nginx:alpine -- curl -m 5 earth-3cc-web.earth:6363
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   612  100   612    0     0  55636      0 --:--:-- --:--:-- --:--:-- 55636
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
...
We did it! Finally we write the reason into the requested location:
vim /opt/course/p3/ticket-654.txt

# /opt/course/p3/ticket-654.txt
yo Dirk, wrong port for readinessProbe defined!
CKAD Tips Kubernetes 1.32
In this section we’ll provide some tips on how to handle the CKAD exam and browser terminal.
Knowledge
Study all topics as proposed in the curriculum until you feel comfortable with all of them.
You’ll be provided with a browser terminal which uses Ubuntu/Debian. The standard shells included with a minimal install will be available, including bash.
Lagging
There could be some lagging; definitely make sure you are using a good internet connection because your webcam and screen are uploading all the time.
Kubectl autocompletion and commands
Autocompletion is configured by default, as well as the k alias and others:
kubectl with k alias and Bash autocompletion
yq and jq for YAML/JSON processing
tmux for terminal multiplexing
curl and wget for testing web services
man and man pages for further documentation
Copy & Paste
Copy and pasting will work like normal in a Linux Environment:
What always works: copy+paste using right mouse context menu
What works in Terminal: Ctrl+Shift+c and Ctrl+Shift+v
What works in other apps like Firefox: Ctrl+c and Ctrl+v
Score
There are 15-20 questions in the exam. Your results will be automatically checked according to the handbook. If you don’t agree with the results you can request a review by contacting the Linux Foundation Support.
Notepad & Skipping Questions
You have access to a simple notepad in the browser which can be used for storing any kind of plain text. It might make sense to use this for saving skipped question numbers. This way it’s possible to move some questions to the end.
Servers
Each question needs to be solved on a specific instance other than your main terminal. You’ll need to connect to the correct instance via ssh, the command is provided before each question.
The exam will now be taken using the PSI Secure Browser, which can be downloaded using the newest versions of Microsoft Edge, Safari, Chrome, or Firefox
Multiple monitors will no longer be permitted
Use of personal bookmarks will no longer be permitted
The new ExamUI includes improved features such as:
A remote desktop configured with the tools and software needed to complete the tasks
A timer that displays the actual time remaining (in minutes) and provides an alert with 30, 15, or 5 minutes remaining
The content panel remains the same (presented on the Left Hand Side of the ExamUI)
In the real exam, each question has to be solved on a different instance to which you connect via ssh. This means it’s not advised to configure bash aliases because they wouldn’t be available on the instances accessed by ssh.
Be fast
Use the history command to reuse already entered commands, or use even faster history search through Ctrl+r.
If a command takes some time to execute, like sometimes kubectl delete pod x, you can put the task in the background using Ctrl+z and pull it back into the foreground by running fg.
You can delete pods fast with:
k delete pod x --grace-period 0 --force
Vim
Be great with vim.
Settings
In case you face a situation where vim is not configured properly, and you for example have issues with pasting copied content, you should be able to configure it via ~/.vimrc or by entering the settings manually in vim:
set tabstop=2
set expandtab
set shiftwidth=2
The expandtab setting makes sure to use spaces for tabs.
Note that changes in ~/.vimrc will not be transferred when connecting to other instances via ssh.
Mark lines: Esc+V (then arrow keys)
Copy marked lines: y
Cut marked lines: d
Paste lines: p or P
Indent multiple lines
To indent multiple lines press Esc and type :set shiftwidth=2. First mark multiple lines using Shift v and the up/down keys. Then to indent the marked lines press > or <. You can then press . to repeat the action.