Kubernetes CKA Exam Study Notes

This article contains tips for tackling the Kubernetes CKA exam along with practice questions.

Contents

  1. Exam Tips
  2. Practice Questions

1. Exam Tips

  1. Set up a kubectl alias and command completion. These lines do not need to be memorized: save them in the notepad at the top right of the exam interface and run them again whenever you reconnect.
source <(kubectl completion bash) 
alias k=kubectl
complete -F __start_kubectl k

Where to find them: https://kubernetes.io/docs => Reference => kubectl => kubectl Cheat Sheet.


  2. Store a commonly used kubectl flag snippet in an environment variable. Save this line in the notepad at the top right of the exam interface as well and run it again whenever you reconnect.
export dr="--dry-run=client -o yaml"
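The variable can then be appended to imperative commands to generate manifests without creating anything; for example (the nginx Pod here is just an illustration):

k run nginx --image=nginx $dr > pod.yaml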
  3. Configure vim to indent with 2 spaces instead of tabs. This one does need to be memorized.
vi ~/.vimrc

Append:

set tabstop=2     " a tab is displayed as 2 spaces
set expandtab     " insert spaces instead of tab characters
set shiftwidth=2  " indent by 2 spaces

Then save and quit:

:wq

2. Practice Questions

1. Task weight: 2%

Use context: kubectl config use-context k8s-c1-H

You’re asked to find out the following information about the cluster k8s-c1-H:

1. How many master nodes are available?
2. How many worker nodes are available?
3. What is the Pod CIDR of cluster1-worker1?
4. What is the Service CIDR?
5. Which Networking (or CNI Plugin) is configured and where is its config file?
6. Which suffix will static pods have that run on cluster1-worker1?
Write your answers into file /opt/course/14/cluster-info, structured like this:

# /opt/course/14/cluster-info
1: [ANSWER]
2: [ANSWER]
3: [ANSWER]
4: [ANSWER]
5: [ANSWER]
6: [ANSWER]

Answer:
How many master and worker nodes are available?

➜ k get node
NAME               STATUS   ROLES    AGE   VERSION
cluster1-master1   Ready    master   27h   v1.21.0
cluster1-worker1   Ready    <none>   27h   v1.21.0
cluster1-worker2   Ready    <none>   27h   v1.21.0

We see one master and two workers.

What is the Pod CIDR of cluster1-worker1?
The fastest way might just be to describe all nodes and look manually for the PodCIDR entry:

k describe node

k describe node | less -p PodCIDR
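If only one node is of interest, grepping its describe output is another quick option (using the worker node from the question):

k describe node cluster1-worker1 | grep PodCIDR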

Or we can use jsonpath for this, though it's better to be fast than tidy:

➜ k get node -o jsonpath="{range .items[*]}{.metadata.name} {.spec.podCIDR}{'\n'}{end}"
cluster1-master1 10.244.0.0/24
cluster1-worker1 10.244.1.0/24
cluster1-worker2 10.244.2.0/24
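If the jsonpath syntax is hard to recall under time pressure, custom-columns gives the same information (the column names are arbitrary):

k get node -o custom-columns=NAME:.metadata.name,CIDR:.spec.podCIDR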

What is the Service CIDR?

➜ ssh cluster1-master1

➜ root@cluster1-master1:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep range
    - --service-cluster-ip-range=10.96.0.0/12

Which Networking (or CNI Plugin) is configured and where is its config file?

➜ root@cluster1-master1:~# find /etc/cni/net.d/
/etc/cni/net.d/
/etc/cni/net.d/10-weave.conflist

➜ root@cluster1-master1:~# cat /etc/cni/net.d/10-weave.conflist
{"cniVersion": "0.3.0","name": "weave",
...

By default the kubelet looks into /etc/cni/net.d to discover the CNI plugins. This will be the same on every master and worker node.
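The same directory can be checked on the worker nodes to confirm this (assuming they are reachable via ssh just like the control plane node):

➜ ssh cluster1-worker1
➜ root@cluster1-worker1:~# find /etc/cni/net.d/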

Which suffix will static pods have that run on cluster1-worker1?
The suffix is the node hostname with a leading hyphen. It used to be -static in earlier Kubernetes versions.
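One quick way to see this naming pattern is to list the Pods of the control plane node, whose static Pods all end in the node's hostname (the pod names in the comment are illustrative):

k -n kube-system get pod | grep cluster1-master1   # e.g. kube-apiserver-cluster1-master1, etcd-cluster1-master1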

Result
The resulting /opt/course/14/cluster-info could look like:

# /opt/course/14/cluster-info

# How many master nodes are available?
1: 1

# How many worker nodes are available?
2: 2

# What is the Pod CIDR of cluster1-worker1?
3: 10.244.1.0/24

# What is the Service CIDR?
4: 10.96.0.0/12

# Which Networking (or CNI Plugin) is configured and where is its config file?
5: Weave, /etc/cni/net.d/10-weave.conflist

# Which suffix will static pods have that run on cluster1-worker1?
6: -cluster1-worker1

2. Task weight: 3%

Use context: kubectl config use-context k8s-c2-AC

Write a command into /opt/course/15/cluster_events.sh which shows the latest events in the whole cluster, ordered by time. Use kubectl for it.

Now kill the kube-proxy Pod running on node cluster2-worker1 and write the events this caused into /opt/course/15/pod_kill.log.

Finally kill the main docker container of the kube-proxy Pod on node cluster2-worker1 and write the events into /opt/course/15/container_kill.log.

Do you notice differences in the events both actions caused?

Answer:

# /opt/course/15/cluster_events.sh
kubectl get events -A --sort-by=.metadata.creationTimestamp
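Sorting by .lastTimestamp is an equally valid way to order the events by time, in case the most recently observed events should come last:

kubectl get events -A --sort-by=.lastTimestamp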

Now we kill the kube-proxy Pod:

k -n kube-system get pod -o wide | grep proxy # find pod running on cluster2-worker1

k -n kube-system delete pod kube-proxy-z64cg

Now check the events:

sh /opt/course/15/cluster_events.sh

Write the events the killing caused into /opt/course/15/pod_kill.log:

# /opt/course/15/pod_kill.log
kube-system   9s          Normal    Killing           pod/kube-proxy-jsv7t   ...
kube-system   3s          Normal    SuccessfulCreate  daemonset/kube-proxy   ...
kube-system   <unknown>   Normal    Scheduled         pod/kube-proxy-m52sx   ...
default       2s          Normal    Starting          node/cluster2-worker1  ...
kube-system   2s          Normal    Created           pod/kube-proxy-m52sx   ...
kube-system   2s          Normal    Pulled            pod/kube-proxy-m52sx   ...
kube-system   2s          Normal    Started           pod/kube-proxy-m52sx   ...

Finally we will try to provoke events by killing the docker container belonging to the main container of the kube-proxy Pod:

➜ ssh cluster2-worker1

➜ root@cluster2-worker1:~# docker ps | grep kube-proxy
5d4958901f3a        9b65a0f78b09           "/usr/local/bin/kube…"  5 minutes ago  ...
f8c56804a9c7        k8s.gcr.io/pause:3.1   "/pause"                5 minutes ago  ...

➜ root@cluster2-worker1:~# docker container rm 5d4958901f3a --force

➜ root@cluster2-worker1:~# docker ps | grep kube-proxy
52095b7d8107        9b65a0f78b09           "/usr/local/bin/kube…"   5 seconds ago  ...
f8c56804a9c7        k8s.gcr.io/pause:3.1   "/pause"                 6 minutes ago  ...

We killed the main container (5d4958901f3a), but also noticed that a new container (52095b7d8107) was directly created. Thanks Kubernetes!
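On clusters where the container runtime is containerd rather than Docker, the same experiment could be done with crictl instead (a sketch, assuming crictl is installed on the node; <container-id> is a placeholder):

crictl ps | grep kube-proxy    # find the container ID, similar to docker ps
crictl stop <container-id>     # stop it; the kubelet will recreate it right away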

Now we check whether this caused events again, using the same script, and write those into /opt/course/15/container_kill.log.

