
Note k8s issues

A record of the tricky k8s problems I have personally run into.

1. The k8s master cannot connect to the nodes

1.1 Problem description

The master node cannot connect to the worker nodes; resetting the cluster with kubeadm reset -f and resetting Calico had no effect.

1.2 Relevant logs

Cluster status

  ~ kubectl get nodes
NAME     STATUS     ROLES           AGE   VERSION
master   NotReady   control-plane   52m   v1.28.15
node1    Ready      <none>          35m   v1.28.15
node2    Ready      <none>          34m   v1.28.15

ipvsadm status

  ~ sudo ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

node1 network interfaces

ip a      
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 52:54:00:a0:2d:57 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.181/24 brd 192.168.1.255 scope global enp1s0
       valid_lft forever preferred_lft forever
    inet6 240e:3a5:1eee:4000:5054:ff:fea0:2d57/64 scope global dynamic mngtmpaddr noprefixroute 
       valid_lft 199448sec preferred_lft 113048sec
    inet6 fe80::5054:ff:fea0:2d57/64 scope link 
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:6d:3d:48:81 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
8: vxlan.calico: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ether 66:f9:37:c3:7e:94 brd ff:ff:ff:ff:ff:ff
    inet 10.244.166.132/32 scope global vxlan.calico
       valid_lft forever preferred_lft forever
    inet6 fe80::64f9:37ff:fec3:7e94/64 scope link 
       valid_lft forever preferred_lft forever

As you can see, node1 still has a leftover Calico virtual interface (vxlan.calico), and its presence is what kept Calico from coming up again later.

1.3 Solution

Delete the Calico virtual interface on node1:

sudo ip link delete vxlan.calico

Then reset the cluster and reset Calico, re-initialize the cluster, have the nodes join again, and deploy Calico. Check the IPVS state with sudo ipvsadm -Ln; if the output is empty, the cluster needs to be reconfigured to use IPVS mode — run the commands in the section below.
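
For reference, on a kubeadm cluster the switch to IPVS mode is commonly done roughly as follows (a sketch, not necessarily identical to the exact procedure referenced above):

# Edit the kube-proxy ConfigMap and set mode: "ipvs" under config.conf
kubectl -n kube-system edit configmap kube-proxy

# Restart the kube-proxy pods so they pick up the new mode
kubectl -n kube-system rollout restart daemonset kube-proxy

# Verify that IPVS rules are now populated
sudo ipvsadm -Ln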

2. Broken k8s cluster

2.1 Problem description

An unexpected shutdown of the virtual machine took down both etcd and the kube-apiserver; the ports were not occupied by anything else. sudo service kubelet status showed the service running, and its logs likewise complained that it could not reach the API server.
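
A quick way to confirm that nothing else is holding the relevant ports is something like the following (a sketch; 2379 for etcd, 6443 for the API server):

sudo ss -lntp | grep -E '2379|6443'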

2.2 Relevant logs

kubectl error (screenshot of the kubectl error output omitted)

kube-apiserver error log

docker logs k8s_kube-apiserver_kube-apiserver-master_kube-system_2719279a5a17d38293e23dfff00a6f0f_7919 
I0424 17:56:58.895864       1 options.go:220] external host was not specified, using 192.168.1.180
I0424 17:56:58.898966       1 server.go:148] Version: v1.28.15
I0424 17:56:58.899035       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0424 17:56:59.343596       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
W0424 17:56:59.344278       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:56:59.344281       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
I0424 17:56:59.351284       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0424 17:56:59.351316       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
I0424 17:56:59.352363       1 instance.go:298] Using reconciler: lease
W0424 17:56:59.353076       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:00.345122       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:00.345633       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:00.354183       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:01.714975       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:01.833037       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:01.835258       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:04.114336       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:04.327173       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:04.799148       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:08.645039       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:09.211916       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:09.324247       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:14.636899       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:14.851881       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
W0424 17:57:15.281520       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
F0424 17:57:19.356094       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded

2.3 Solution

# List the most recent usable etcd backups; my backups are stored in /var/lib/etcd-restore
sudo ls -lt /var/lib/etcd-restore
sudo systemctl stop docker kubelet
sudo rm -rf /var/lib/etcd

# Restore etcd; substitute your own backup file
sudo etcdutl snapshot restore /var/lib/etcd-restore/snapshot-2025 --data-dir=/var/lib/etcd
sudo systemctl start docker kubelet
# Wait a few seconds, then check whether etcd has come up
sudo ETCDCTL_API=3 etcdctl --endpoints=https://192.168.1.180:2379 \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  endpoint health
https://192.168.1.180:2379 is healthy: successfully committed proposal: took = 17.206514ms

# Check whether the Docker containers (k8s pods) have started
docker ps -a
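
For reference, snapshots like the ones under /var/lib/etcd-restore can be produced with etcdctl snapshot save; a minimal sketch, reusing the endpoint and certificate paths from above (the target filename is only an example):

sudo ETCDCTL_API=3 etcdctl --endpoints=https://192.168.1.180:2379 \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  snapshot save /var/lib/etcd-restore/snapshot-$(date +%F)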

3. etcd deployed repeatedly on the same node, causing etcd to fail to start

3.1 Problem description

In a highly available k8s cluster, the deployment script had to be re-run on one control-plane host because the first run went wrong for some reason. During the etcd deployment step, etcd failed to start; the etcd log is shown in section 3.2.

The telling line in the log: discovery failed: member 108e0bce57184dd7 has already been bootstrapped

The cause is deploying the etcd service more than once: this node (master, member ID 108e0bce57184dd7) had already been initialized, so trying to start it again in "bootstrap" mode fails.
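
One quick way to confirm this state (a sketch, assuming the default data dir /var/lib/etcd shown in the log): if the member directory already exists, the node has been bootstrapped before.

ls /var/lib/etcd/member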

3.2 Relevant logs

etcd log

Jun 05 22:35:57 master etcd[4260]: {"level":"warn","ts":"2025-06-05T22:35:57.219241+0800","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.219644+0800","caller":"etcdmain/config.go:353","msg":"loaded server configuration, other configuration command line flags and environment variables will be ignored if provided","path":"/etc/etcd/etcd.config.yml"}
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.219670+0800","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["/usr/local/bin/etcd","--config-file=/etc/etcd/etcd.config.yml"]}
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.219737+0800","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"}
Jun 05 22:35:57 master etcd[4260]: {"level":"warn","ts":"2025-06-05T22:35:57.219766+0800","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.219793+0800","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.1.180:2380"]}
Jun 05 22:35:57 master etcd[4260]: {"level":"warn","ts":"2025-06-05T22:35:57.219806+0800","caller":"embed/config.go:936","msg":"ignoring peer auto TLS since certs given"}
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.219836+0800","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/etcd.pem, key = /etc/kubernetes/pki/etcd/etcd-key.pem, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/et>
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.220623+0800","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["http://127.0.0.1:2379","https://192.168.1.180:2379"]}
Jun 05 22:35:57 master etcd[4260]: {"level":"warn","ts":"2025-06-05T22:35:57.220663+0800","caller":"embed/config.go:914","msg":"ignoring client auto TLS since certs given"}
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.220925+0800","caller":"embed/etcd.go:620","msg":"pprof is enabled","path":"/debug/pprof"}
Jun 05 22:35:57 master etcd[4260]: {"level":"warn","ts":"2025-06-05T22:35:57.220974+0800","caller":"embed/etcd.go:627","msg":"scheme is http or unix while key and cert files are present; ignoring key and cert files","client-url":"http://127.0.0.1:2379"}
Jun 05 22:35:57 master etcd[4260]: {"level":"warn","ts":"2025-06-05T22:35:57.220994+0800","caller":"embed/etcd.go:630","msg":"scheme is http or unix while --client-cert-auth is enabled; ignoring client cert auth for this URL","client-url":"http://127.0.0.1:2379"}
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.221119+0800","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"amd64","max-cpu-set":4,"max-cpu-available":4,"member-initialized":f>
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.222563+0800","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"1.232933ms"}
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.239787+0800","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"master","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.180:2380"],"advertise-client-urls":["https://192.168.1.180:2379"]}
Jun 05 22:35:57 master etcd[4260]: {"level":"warn","ts":"2025-06-05T22:35:57.240179+0800","caller":"embed/config_logging.go:170","msg":"rejected connection on peer endpoint","remote-addr":"192.168.1.172:48532","server-name":"","error":"set tcp 192.168.1.180:2380: use of closed network connection"}
Jun 05 22:35:57 master etcd[4260]: {"level":"info","ts":"2025-06-05T22:35:57.240285+0800","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"master","data-dir":"/var/lib/etcd","advertise-peer-urls":["https://192.168.1.180:2380"],"advertise-client-urls":["https://192.168.1.180:2379"]}
Jun 05 22:35:57 master etcd[4260]: {"level":"fatal","ts":"2025-06-05T22:35:57.240319+0800","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"member 108e0bce57184dd7 has already been bootstrapped","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmai>
Jun 05 22:35:57 master systemd[1]: etcd.service: Main process exited, code=exited, status=1/FAILURE

3.3 Solution

Clear everything under /var/lib/etcd, change initial-cluster-state in /etc/etcd/etcd.config.yml to existing, then restart etcd:

sudo systemctl stop etcd
sudo rm -rf /var/lib/etcd

# In /etc/etcd/etcd.config.yml, change the line below from new to existing
initial-cluster-state: new  # new is only used when etcd is first deployed, letting the node bootstrap and register itself with the etcd cluster.

sudo systemctl start etcd

Note: if the whole cluster is down and only a single etcd node is left, you can change the following parameters in the config file to bring up a new cluster from the old cluster's data:

initial-cluster-state: new      # re-bootstrap the cluster from this single node
force-new-cluster: true         # force a new cluster even if old cluster metadata already exists (e.g. under /var/lib/etcd/member)
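
Roughly, the recovery on the surviving node looks like this (a sketch, assuming the systemd-managed etcd and /etc/etcd/etcd.config.yml used in this section; the client listener on http://127.0.0.1:2379 comes from the log above):

sudo systemctl stop etcd

# In /etc/etcd/etcd.config.yml set:
#   initial-cluster-state: new
#   force-new-cluster: true
# and make sure initial-cluster only lists this node.

sudo systemctl start etcd

# Verify the single-member cluster, then remove force-new-cluster again
# before adding the other members back.
sudo ETCDCTL_API=3 etcdctl --endpoints=http://127.0.0.1:2379 member list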
