kube-controller-manager and kube-scheduler keep restarting for no obvious reason, even though the cluster otherwise appears healthy.
The pod listing below shows the restart counts:
# kubectl get po -n kube-system
NAME                             READY   STATUS    RESTARTS       AGE
coredns-6d8c4cb4d-8xghq          1/1     Running   0              38m
coredns-6d8c4cb4d-q65vq          1/1     Running   0              38m
etcd-host-10-19-83-151           1/1     Running   4              23h
kube-apiserver-master            1/1     Running   1              23h
kube-controller-manager-master   1/1     Running   31 (25m ago)   23h
kube-flannel-ds-amd64-2pwps      1/1     Running   0              61m
kube-flannel-ds-amd64-svfg6      1/1     Running   0              61m
kube-flannel-ds-amd64-xmppt      1/1     Running   1              61m
kube-proxy-d4bb2                 1/1     Running   0              23h
kube-proxy-k2skv                 1/1     Running   1              23h
kube-proxy-x9k76                 1/1     Running   1 (23h ago)    23h
kube-scheduler-master            1/1     Running   32 (25m ago)   23h
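Before changing anything, it helps to see why the containers keep dying. The logs of the previous (crashed) container instance are usually the most informative; --previous is a standard kubectl flag:

# kubectl logs -n kube-system kube-controller-manager-master --previous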
Describing the pod shows that the liveness probe has been failing repeatedly:
# kubectl describe po kube-controller-manager-master -n kube-system
Name: kube-controller-manager-master
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
......
Events:
  Type     Reason     Age                  From     Message
  ----     ------     ----                 ----     -------
  Normal   Pulled     84m (x6 over 5h36m)  kubelet  Container image "registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4" already present on machine
  Normal   Created    84m (x6 over 5h36m)  kubelet  Created container kube-controller-manager
  Normal   Started    84m (x6 over 5h36m)  kubelet  Started container kube-controller-manager
  Warning  Unhealthy  33m                  kubelet  Liveness probe failed: Get "https://127.0.0.1:10257/healthz": read tcp 127.0.0.1:59840->127.0.0.1:10257: read: connection reset by peer
  Normal   Pulled     33m (x4 over 44m)    kubelet  Container image "registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4" already present on machine
  Normal   Created    32m (x4 over 44m)    kubelet  Created container kube-controller-manager
  Normal   Started    32m (x4 over 44m)    kubelet  Started container kube-controller-manager
  Warning  BackOff    30m (x11 over 43m)   kubelet  Back-off restarting failed container
  Warning  Unhealthy  28m (x4 over 43m)    kubelet  Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused
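The failing probe is the one defined in the component's static pod manifest. For reference, the livenessProbe in a kubeadm-generated kube-controller-manager manifest typically looks like the following (values are the usual kubeadm v1.23 defaults, not copied from this cluster):

livenessProbe:
  failureThreshold: 8
  httpGet:
    host: 127.0.0.1
    path: /healthz
    port: 10257
    scheme: HTTPS
  initialDelaySeconds: 10
  periodSeconds: 10
  timeoutSeconds: 15

So the kubelet polls https://127.0.0.1:10257/healthz and restarts the container after enough consecutive failures, which matches the events above.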
Checking the component statuses (cs) one by one turns up nothing wrong:
# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
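Since v1 ComponentStatus is deprecated, a more direct check is to query the same /healthz endpoints the liveness probes use. On the master (10257 is kube-controller-manager and 10259 is kube-scheduler per kubeadm defaults; -k skips verification of the self-signed certificate):

# curl -k https://127.0.0.1:10257/healthz
ok
# curl -k https://127.0.0.1:10259/healthz
ok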
Edit the kube-controller-manager.yaml file:
containers:
- command:
  - kube-controller-manager
  - --allocate-node-cidrs=true
  - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
  - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
  - --bind-address=127.0.0.1
  - --client-ca-file=/etc/kubernetes/pki/ca.crt
  - --cluster-cidr=10.244.0.0/16
  - --cluster-name=kubernetes
  - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
  - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
  - --controllers=*,bootstrapsigner,tokencleaner
  - --kubeconfig=/etc/kubernetes/controller-manager.conf
  - --leader-elect=true
  - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
  - --root-ca-file=/etc/kubernetes/pki/ca.crt
  - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
  - --service-cluster-ip-range=10.96.0.0/12
  - --use-service-account-credentials=true
  - --address=127.0.0.1    # add this line
  image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4
  imagePullPolicy: IfNotPresent
Edit the kube-scheduler.yaml file the same way:
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --address=127.0.0.1    # add this line
    image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.4
    imagePullPolicy: IfNotPresent
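On a kubeadm cluster these two files are the static pod manifests under /etc/kubernetes/manifests/ on the master. The kubelet watches that directory, so saving the edited file is already enough for it to recreate the pod:

# ls /etc/kubernetes/manifests/
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml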
After the edits, restart the pods:
# kubectl delete -f kube-controller-manager.yaml
Error from server (NotFound): error when deleting "kube-controller-manager.yaml": pods "kube-controller-manager" not found
# kubectl delete -f kube-scheduler.yaml
Error from server (NotFound): error when deleting "kube-scheduler.yaml": pods "kube-scheduler" not found
# kubectl apply -f kube-scheduler.yaml
pod/kube-scheduler created
# kubectl apply -f kube-controller-manager.yaml
pod/kube-controller-manager created
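The NotFound errors are expected: static pods are managed by the kubelet rather than the API server, and their mirror pods carry the node-name suffix (kube-controller-manager-master), so the names in the manifests match no API object. The apply therefore creates new, ordinary pods alongside the static ones. A more conventional way to restart a static pod is to move its manifest out of the watched directory and back:

# mv /etc/kubernetes/manifests/kube-scheduler.yaml /tmp/
  (wait for the mirror pod to disappear)
# mv /tmp/kube-scheduler.yaml /etc/kubernetes/manifests/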
Looking at the pod logs, the --address flag is deprecated and apparently has no effect, yet the restarts did in fact stop. The "no such file or directory" error below appears because there is only one master: the pod created by kubectl apply was scheduled onto a worker node, whose /etc/kubernetes/pki directory does not contain the referenced file. Deleting that pod, or pinning it to run on the master host, clears the error.
# kubectl logs kube-controller-manager -n kube-system
Flag --address has been deprecated, This flag has no effect now and will be removed in v1.24.
I0310 09:01:13.528382 1 serving.go:348] Generated self-signed cert in-memory
unable to create request header authentication config: open /etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory
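If the manually applied pod is kept rather than deleted, one minimal way to pin it to the master (a sketch; "master" is the node name from kubectl get nodes, and the rest of the pod spec is unchanged) is to set nodeName:

spec:
  nodeName: master    # run on the master, where /etc/kubernetes/pki has the cert files
  ...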
Checking again, neither component has restarted so far; we will keep observing to see whether the restarts recur:
# kubectl get pod -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS      AGE   IP             NODE     NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-8xghq          1/1     Running   0             73m   10.244.2.186   node2
coredns-6d8c4cb4d-q65vq          1/1     Running   0             73m   10.244.1.49    node1
etcd-master                      1/1     Running   4 (77m ago)   24h   10.19.83.151   master
kube-apiserver-master            1/1     Running   1 (77m ago)   24h   10.19.83.151   master
kube-controller-manager-master   1/1     Running   0             13m   10.19.83.151   master
kube-flannel-ds-amd64-2pwps      1/1     Running   0             96m   10.19.83.154   node2
kube-flannel-ds-amd64-svfg6      1/1     Running   0             96m   10.19.83.153   node1
kube-flannel-ds-amd64-xmppt      1/1     Running   1 (77m ago)   96m   10.19.83.151   master
kube-proxy-d4bb2                 1/1     Running   0             24h   10.19.83.154   node2
kube-proxy-k2skv                 1/1     Running   1             24h   10.19.83.151   master
kube-proxy-x9k76                 1/1     Running   1 (24h ago)   24h   10.19.83.153   node1
kube-scheduler                   1/1     Running   0             10m   10.19.83.153   node1
kube-scheduler-master            1/1     Running   0             11m   10.19.83.151   master
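To keep watching for further restarts, the standard watch flag works:

# kubectl get po -n kube-system -w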
