Troubleshooting k8s errors
1. etcd fails to start: "the server is already initialized as member before"

Reference: https://www.cnblogs.com/ericnie/p/6886016.html

[root@lab3 k8s]# systemctl start etcd
Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details
[root@lab3 k8s]# journalctl -xe
Jul 18 02:25:58 lab3 etcd[5649]: the server is already initialized as member before, starting as etcd member...

The key line is:
raft save state and entries error: open /var/lib/etcd/default.etcd/member/wal/0.tmp: is a directory

Fix: go into that directory, delete 0.tmp, and etcd starts again. After deleting it, I also removed all of the configured directories on node3 and reconfigured it from scratch.

2. kube-scheduler fails to start, complaining about deprecated flags:

[root@lab1 ~]# systemctl status kube-scheduler -l
● kube-scheduler.service - Kubernetes Scheduler Plugin
   Loaded: loaded (/etc/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Thu 2018-07-19 01:49:06 EDT; 13min ago
     Docs: https://github.com/kubernetes/kubernetes
  Process: 13107 ExecStart=/usr/local/kubernetes/bin/kube-scheduler $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBECONFIG $KUBE_SCHEDULER_ARGS (code=exited, status=1/FAILURE)
 Main PID: 13107 (code=exited, status=1/FAILURE)

Jul 19 01:49:06 lab1 systemd[1]: kube-scheduler.service: main process exited, code=exited, status=1/FAILURE
Jul 19 01:49:06 lab1 kube-scheduler[13107]: W0719 01:49:06.562968 13107 options.go:148] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.

Cause: I did not follow the install doc carefully. The "generate kubeconfig" step is actually a series of small sub-steps, but I pasted the whole block in at once.
Fix: reinstall and run every step individually instead of cutting corners (a rough sketch of those kubeconfig sub-steps follows below).
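For reference, the kube-scheduler kubeconfig in a binary install like this one is normally built up with a series of kubectl config commands along the following lines. This is only a sketch: the certificate paths, cluster name, and apiserver address are assumptions for illustration (they are not taken from the original doc), so substitute your own values.

cd /etc/kubernetes

# Describe the cluster: CA cert path and apiserver address are assumed values
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://10.1.1.111:6443 \
  --kubeconfig=kube-scheduler.kubeconfig

# Add the scheduler's client certificate and key (assumed paths)
kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/kube-scheduler.pem \
  --client-key=/etc/kubernetes/pki/kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

# Tie user and cluster together in a context and make it the default
kubectl config set-context default \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig

The point of issue 2 is that each of these runs as its own command; pasting them all in as one blob leaves the generated kubeconfig incomplete, which is the kind of shortcut that caused the failure above.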
3. kubelet fails to start on the node:

[root@lab1 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Thu 2018-07-19 21:38:57 EDT; 3s ago
     Docs: https://github.com/kubernetes/kubernetes
  Process: 3243 ExecStart=/usr/local/kubernetes/bin/kubelet $KUBE_LOGTOSTDERR $KUBE_LOG_LEVEL $KUBELET_CONFIG $KUBELET_HOSTNAME $KUBELET_POD_INFRA_CONTAINER $KUBELET_ARGS (code=exited, status=255)
 Main PID: 3243 (code=exited, status=255)

Fix: the k8s binaries have to be installed on the node as well; I had not installed them there, hence the error. Install them:

cd /server/software/k8s
wget https://dl.k8s.io/v1.11.0/kubernetes-server-linux-amd64.tar.gz
tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
mkdir -pv /usr/local/kubernetes-v1.11.0/bin
cp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubectl /usr/local/kubernetes-v1.11.0/bin
ln -sv /usr/local/kubernetes-v1.11.0 /usr/local/kubernetes
cp /usr/local/kubernetes/bin/kubectl /usr/local/bin/kubectl
kubectl version
cd $HOME

4. kubectl cannot reach the apiserver:

[root@lab2 k8s]# kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix:
Option 1: point kubectl at the admin kubeconfig:
[root@lab2 kubernetes]# export KUBECONFIG=/etc/kubernetes/admin.conf   # this gives kubectl the admin credentials
Option 2: copy admin.conf from the master into /etc/kubernetes/ on the node, then run:
rm -rf $HOME/.kube
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get no

5. flannel pods stuck in ErrImagePull:

[root@lab1 flannel]# kubectl get pods -n kube-system
NAME                    READY   STATUS         RESTARTS   AGE
kube-flannel-ds-4hdsh   0/1     ErrImagePull   0          1m
kube-flannel-ds-7gmwt   0/1     ErrImagePull   0          1m
kube-flannel-ds-cbk5z   0/1     ErrImagePull   0          1m

Fix: just wait, this startup is slow. The first time it did not come up it gave me a scare, but after a few minutes everything was Running:

[root@lab1 flannel]# kubectl get pods -n kube-system
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-4hdsh   1/1     Running   0          6m
kube-flannel-ds-7gmwt   1/1     Running   0          6m
kube-flannel-ds-cbk5z   1/1     Running   0          6m

6. coredns will not start:

[root@lab1 coredns]# kubectl get pods -n kube-system
NAME                       READY   STATUS              RESTARTS   AGE
coredns-6975654877-d6q9z   0/1     ContainerCreating   0          21s
coredns-6975654877-k48wq   0/1     ContainerCreating   0          21s
kube-flannel-ds-d2tff      1/1     Running             0          3m
kube-flannel-ds-qnnpg      1/1     Running             0          3m
kube-flannel-ds-t2pxx      1/1     Running             0          3m

Fix: in the "configure the flannel network" step (kube-flannel.yml), the NIC has to match the host: change "- --iface=eth1" in kube-flannel.yml to the interface your machines actually use (a sketch of the relevant part of the manifest follows below).
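For issue 6, this is roughly the part of kube-flannel.yml that needs editing. The image tag and the surrounding fields are the usual upstream values and may differ from the copy used here; the only line that matters is the --iface argument, and the interface name is just an example.

      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64   # version is an assumption
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1          # change eth1 to the host's real NIC, e.g. ens33 or eth0

After editing, re-apply the manifest (kubectl delete -f kube-flannel.yml; kubectl create -f kube-flannel.yml) so the DaemonSet pods pick up the new argument.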
7. etcd cannot reach its peer:

[root@lab1 ~]# systemctl status etcd
Aug 16 17:01:07 lab1 etcd[9526]: failed to dial d35b4e3738b04cd7 on stream MsgApp v2 (dial tcp 10.1.1.111:2380: getsockopt:...efused)

Fix: the firewall on the master had not been turned off; turning it off solved it (see the reachability sketch at the end of this section).

8. kubectl is refused outright:

[root@lab1 ~]# kubectl get no
Unable to connect to the server: Forbidden

Fix: I could not find the cause; rebooting all three machines made it go away.

9. After creating flannel, the pods flap: Running one moment, crashed the next.

Fix: when installing kube-proxy, do not choose ipvs mode. On CentOS 7, ipvs mode does not work with 1.11.0; from 1.11.1 on it is fine.

10. The errors below appeared, and at the same time the test database started having MySQL connection problems.

[root@lab2 ~]# kubectl get no
E0828 11:06:56.233812 2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.235504 2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.235505 2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.236281 2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.236765 2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.236772 2504 round_trippers.go:169] CancelRequest not implemented
E0828 11:06:56.237298 2504 round_trippers.go:169] CancelRequest not implemented

Fix: nothing helped. Either the cloud hosts cannot reach each other, or the machines were compromised; the first case is the more likely one (see the reachability sketch at the end of this section).

11. Pods stuck in ContainerCreating:

[root@node2 coredns]# kubectl get po -n kube-system
NAME                       READY   STATUS              RESTARTS   AGE
coredns-55f86bf584-4rzwj   0/1     ContainerCreating   0          8s
coredns-55f86bf584-dp8gp   0/1     ContainerCreating   0          8s

Fix (see http://www.mamicode.com/info-detail-2310522.html):
/etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt is a symlink, but the /etc/rhsm target it points to does not actually exist, so install it with yum:

yum install *rhsm* -y
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

These commands produce the /etc/rhsm/ca/redhat-uep.pem file. Restart docker:

systemctl restart docker

[root@node2 coredns]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
latest: Pulling from rhel7/pod-infrastructure
26e5ed6899db: Pull complete
66dbe984a319: Pull complete
9138e7863e08: Pull complete
Digest: sha256:92d43c37297da3ab187fc2b9e9ebfb243c1110d446c783ae1b989088495db931
Status: Image is up to date for registry.access.redhat.com/rhel7/pod-infrastructure:latest

[root@node2 coredns]# kubectl delete -f .
[root@node2 coredns]# kubectl create -f .
[root@node2 coredns]# kubectl get po -n kube-system
NAME                       READY   STATUS    RESTARTS   AGE
coredns-55f86bf584-4rzwj   1/1     Running   0          5m
coredns-55f86bf584-dp8gp   1/1     Running   0          5m

12. Pods stuck in Terminating.

Force delete them:
kubectl delete pods --all --grace-period=0 --force
Then restart kube-apiserver, and restart kubelet and docker. If that still does not help, reboot the system. If the pods are still there after the reboot, force delete them again (a more targeted variant of these steps is sketched below).
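For issue 12, a slightly more targeted variant of the same steps. The pod name and namespace are just examples, and the systemd unit names assume this binary install (kube-apiserver, kubelet, and docker running as systemd services).

# Force-delete only the stuck pod instead of every pod (example name/namespace)
kubectl delete pod coredns-55f86bf584-4rzwj -n kube-system --grace-period=0 --force

# Restart the daemons, then re-check
systemctl restart kube-apiserver        # on the master
systemctl restart kubelet docker        # on the node that ran the stuck pod
kubectl get po -n kube-system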
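The reachability sketch referenced in issues 7 and 10: before digging deeper, make sure the hosts can actually reach each other on the k8s ports. The IP and ports below are assumptions based on a default binary install (etcd clients on 2379, etcd peers on 2380, apiserver on 6443); substitute your own.

# On the master: stop and disable the firewall if it is still running (issue 7)
systemctl stop firewalld && systemctl disable firewalld

# From each node: check the etcd and apiserver ports on the master
nc -vz 10.1.1.111 2379
nc -vz 10.1.1.111 2380
nc -vz 10.1.1.111 6443

# The apiserver health endpoint; it may answer "ok" or 403 depending on RBAC,
# but any HTTP response at all proves the port is reachable
curl -k https://10.1.1.111:6443/healthz

If these fail between cloud hosts, fix the security group / firewall rules first; that matches the "cloud hosts cannot reach each other" case in issue 10.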