CentOS 7.6 environment, Kubernetes 1.23.2.

# Matching Kubernetes and Docker versions: list the available candidates first.
yum list docker-ce --showduplicates | sort -r
yum list kubelet --showduplicates

# Which Docker version? kubeadm preflight warns:
#   this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 20.10

# Downgrade (or install) a validated Docker release:
yum downgrade --setopt=obsoletes=0 -y docker-ce-18.06.1.ce-3.el7 docker-ce-cli-18.06.1.ce-3.el7 containerd.io
yum install --setopt=obsoletes=0 -y docker-ce-18.06.1.ce-3.el7 docker-ce-cli-18.06.1.ce-3.el7 containerd.io
yum downgrade --setopt=obsoletes=0 -y docker-ce-20.10.12-3.el7 docker-ce-cli-20.10.12-3.el7 containerd.io

# --ignore-preflight-errors=… makes kubeadm skip the preflight check on the docker-ce version.

# Watch kubelet logs:
journalctl -xefu kubelet

kubeadm init --kubernetes-version=v1.23.2 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=172.16.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=192.168.1.198 --ignore-preflight-errors=Swap --ignore-preflight-errors=all --v=5

[root@qbkj-k8s-node01 ~]# docker version
Client: Docker Engine - Community
 Version:           23.0.1
 API version:       1.42
 Go version:        go1.19.5
 Git commit:        a5ee5b1
 Built:             Thu Feb 9 19:51:00 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          23.0.1
  API version:      1.42 (minimum version 1.12)
  Go version:       go1.19.5
  Git commit:       bc3805a
  Built:            Thu Feb 9 19:48:42 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.18
  GitCommit:        2456e983eb9e37e47538f59ea18f2043c9a73640
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

[root@qbkj-k8s-node01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.2", GitCommit:"9d142434e3af351a628bffee3939e64c681afa4d", GitTreeState:"clean", BuildDate:"2022-01-19T17:35:46Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
# The localhost:8080 refusal means kubectl has no kubeconfig yet; copy admin.conf as shown in the init output.
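The version warning above comes from kubeadm's SystemVerification preflight check, which compares the installed Docker version against its validated list. The ordering logic can be sketched in plain shell with `sort -V` (the version strings below are examples taken from the warning, not read from the system):

```shell
#!/bin/sh
# Sketch of the version comparison behind the preflight warning above.
# "latest_validated" and "installed" are example values, not detected from the host.
latest_validated="20.10"
installed="23.0.1"

# sort -V orders version strings numerically; the last line is the newest.
newest=$(printf '%s\n%s\n' "$latest_validated" "$installed" | sort -V | tail -n 1)

if [ "$newest" != "$latest_validated" ]; then
  echo "WARNING: Docker $installed is newer than the latest validated version $latest_validated"
else
  echo "OK: Docker $installed is within the validated range"
fi
```

With `installed="18.06.1"` the same check passes, which is why the downgrade commands above silence the warning.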
[root@qbkj-k8s-node01 ~]# kubeadm init \
  --kubernetes-version=v1.23.2 \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=172.16.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-advertise-address=192.168.1.198 \
  --ignore-preflight-errors=Swap

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.198:6443 --token ysiq1e.sey7vmxp6jmey5t2 \
        --discovery-token-ca-cert-hash sha256:6e92ec003089ff087f2b0ccf1e094fb43c636c7a8436dce378ec050425fec1c3

systemctl status kubelet.service
# Before `kubeadm init`, it is normal for kubelet on the master node to fail to start;
# once init completes successfully, kubelet comes up and stays running automatically.

# Control-plane processes and their flags after init:

kube-apiserver --advertise-address=192.168.1.198 --allow-privileged=true --authorization-mode=Node,RBAC --client-ca-file=/etc/kubernetes/pki/ca.crt --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-account-signing-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key

kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 --client-ca-file=/etc/kubernetes/pki/ca.crt --cluster-cidr=172.16.0.0/16 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt --cluster-signing-key-file=/etc/kubernetes/pki/ca.key --controllers=*,bootstrapsigner,tokencleaner --kubeconfig=/etc/kubernetes/controller-manager.conf --leader-elect=true --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --root-ca-file=/etc/kubernetes/pki/ca.crt --service-account-private-key-file=/etc/kubernetes/pki/sa.key --service-cluster-ip-range=10.96.0.0/12 --use-service-account-credentials=true

etcd --advertise-client-urls=https://192.168.1.198:2379 --cert-file=/etc/kubernetes/pki/etcd/server.crt --client-cert-auth=true --data-dir=/var/lib/etcd --initial-advertise-peer-urls=https://192.168.1.198:2380 --initial-cluster=qbkj-k8s-master01=https://192.168.1.198:2380 --key-file=/etc/kubernetes/pki/etcd/server.key --listen-client-urls=https://127.0.0.1:2379,https://192.168.1.198:2379 --listen-metrics-urls=http://127.0.0.1:2381 --listen-peer-urls=https://192.168.1.198:2380 --name=qbkj-k8s-master01 --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt --peer-client-cert-auth=true --peer-key-file=/etc/kubernetes/pki/etcd/peer.key --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt --snapshot-count=10000 --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt

/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=qbkj-k8s-master01

kube-scheduler --authentication-kubeconfig=/etc/kubernetes/scheduler.conf --authorization-kubeconfig=/etc/kubernetes/scheduler.conf --bind-address=127.0.0.1 --kubeconfig=/etc/kubernetes/scheduler.conf --leader-elect=true

# Joining a worker node:
kubeadm join 192.168.1.198:6443 --token ysiq1e.sey7vmxp6jmey5t2 \
        --discovery-token-ca-cert-hash sha256:6e92ec003089ff087f2b0ccf1e094fb43c636c7a8436dce378ec050425fec1c3
[preflight] Running pre-flight checks
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 20.10
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
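The `--discovery-token-ca-cert-hash` used in the join command above can be recomputed on the control plane when the original init output has been lost. This is the standard openssl pipeline from the kubeadm documentation; the `ca.crt` path matches the apiserver flags listed earlier:

```shell
# Recompute the sha256 discovery hash of the cluster CA's public key.
# /etc/kubernetes/pki/ca.crt is where kubeadm writes the cluster CA on the control plane.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //'
```

Prefix the output with `sha256:` when passing it to `kubeadm join`.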
# Install the Kubernetes dashboard:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
Warning: spec.template.metadata.annotations[seccomp.security.alpha.kubernetes.io/pod]: deprecated since v1.19, non-functional in v1.25+; use the "seccompProfile" field instead
deployment.apps/dashboard-metrics-scraper created

journalctl -u kubelet --no-pager

# Everything was fine until the last two pods, which stayed stuck in ContainerCreating.
# Deleted them manually:
kubectl delete pods <podname> -n <namespace>
# The old pods then hung in Terminating indefinitely, so they had to be force-deleted:
kubectl delete pods <podname> -n <namespace> --grace-period=0 --force

# Expose the dashboard via kubectl proxy:
kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' --port=8001
http://192.168.1.198:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
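Force-deleting stuck pods one by one gets tedious. A small loop (a hypothetical helper, not part of the original session) can sweep every pod a namespace reports as Terminating:

```shell
#!/bin/sh
# Hypothetical sweep: force-delete every pod stuck in Terminating.
# The namespace is an example; STATUS is the third column of `kubectl get pods`.
ns="kubernetes-dashboard"

kubectl get pods -n "$ns" --no-headers 2>/dev/null \
  | awk '$3 == "Terminating" {print $1}' \
  | while read -r pod; do
      kubectl delete pod "$pod" -n "$ns" --grace-period=0 --force
    done
```

Note that `--grace-period=0 --force` only removes the pod object from the API server; if the underlying container is wedged, it may keep running on the node.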
# Company test environment web UI:
https://192.168.106.130:31592/#/login

kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.106.130 --service-cidr=192.168.200.0/21 --pod-network-cidr=10.10.0.0/16 --ignore-preflight-errors=all --v=5
kubeadm join 192.168.106.130:6443 --token g3iu6f.v54vkceghrtktuxb --discovery-token-ca-cert-hash sha256:53f12ff7a46e0ec8dcfe4f53f7f49b6b84302eb4d138c57dea1c7a913ebd2166 --ignore-preflight-errors=all --v=5

# Install the flannel pod network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl proxy --address='0.0.0.0' --accept-hosts='^*$' --port=8009

# Label the master node
kubectl label node k8s-master type=master

# Delete the previously created resources and reapply
kubectl delete all --all -n kubernetes-dashboard
kubectl apply -f recommended.yaml
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

# Inspect cluster state:
kubectl get pods --all-namespaces -o wide
kubectl cluster-info
kubectl get svc,pods -n kubernetes-dashboard
kubectl get nodes
kubectl get pods -n kube-system
kubectl get pods -A

# Namespaces (get/create/delete; `ns` is the short form):
kubectl get namespaces
kubectl get ns
kubectl get pod --all-namespaces
kubectl get pod -A
kubectl create namespace dream21th-one
kubectl create ns dream21th-two
kubectl delete namespace dream21th-one

# Run a test Tomcat workload:
docker pull tomcat:9.0.20-jre8-alpine
kubectl run tomcat9-test --image=tomcat:9.0.20-jre8-alpine --port=8080
# Scale out to 3 replicas
kubectl scale --replicas=3 deployment/tomcat9-test

kubectl get pod -o wide
kubectl get deployment
kubectl get deployment -o wide
kubectl get svc
kubectl cluster-info
kubectl get cs
kubectl get nodes
kubectl get rc,services
kubectl describe nodes k8s-master
kubectl describe pods tomcat9-test-569b5bf455-9bvzs

# Delete a pod using the type and name specified in pod.yaml.
kubectl delete -f pod.yaml
# Delete all pods and services with label name=<label-name>.
kubectl delete pods,services -l name=<label-name>
# Delete all pods and services with label name=<label-name>, including uninitialized ones.
kubectl delete pods,services -l name=<label-name> --include-uninitialized
# Delete all pods, including uninitialized ones.
kubectl delete pods --all

# Run a command in a pod:
kubectl exec <pod-name> -- date

# Return a snapshot of the pod's logs.
kubectl logs <pod-name>
# Stream logs from pod <pod-name>; similar to the Linux 'tail -f' command.
kubectl logs -f <pod-name>

kubectl describe pods -n kube-system coredns-6d8c4cb4d-78kn2

# Print a fresh join command (bootstrap tokens expire):
kubeadm token create --print-join-command

kubectl get pods --all-namespaces -o wide
kubectl get services --all-namespaces
http://192.168.106.130:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login

# Change the dashboard service type (e.g. to NodePort):
kubectl -n kubernetes-dashboard edit service kubernetes-dashboard

# Fetch the dashboard login token for the admin-user ServiceAccount:
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"

# Pinning kubelet/kubeadm/kubectl versions on Debian/Ubuntu nodes:
apt list kubelet -a
apt-get install -y kubelet=1.23.15-00 kubeadm=1.23.15-00 kubectl=1.23.15-00
kubectl -n kube-system get cm kubeadm-config

# Re-joining a node:
kubeadm reset
rm -rf $HOME/.kube/config
kubeadm join 192.168.106.130:6443 --token paxkgp.9occrpqh93wj6f7q --discovery-token-ca-cert-hash sha256:53f12ff7a46e0ec8dcfe4f53f7f49b6b84302eb4d138c57dea1c7a913ebd2166 --ignore-preflight-errors=all --v=5

# Drop the page cache, dentries, and inodes (the sysctl file is drop_caches):
sync && sync && sleep 10 && echo 3 > /proc/sys/vm/drop_caches

# Hold/unhold kubelet during upgrades:
apt-mark hold kubelet
apt-mark unhold kubelet
apt dist-upgrade package_name

# In vim, disable auto-indent/auto-wrap when pasting:
:set paste
:set nopaste

# Error: no matches for kind "Deployment" in version "v1"
# Fix: change apiVersion in the YAML to "apps/v1".

kubectl scale -n default deployment tomcat-deploy --replicas=1

# Tail the logs of pod pod1 in namespace nsA
kubectl logs -f pod1 -n nsA
# Tail the logs of container container1 in pod pod1 in namespace nsA
kubectl logs -f pod1 -c container1 -n nsA
# View the logs of all containers in pod nginx
kubectl logs nginx --all-containers=true
# View the logs of all containers in all pods labeled app=nginx
kubectl logs -lapp=nginx --all-containers=true
# View the last 20 lines of pod nginx's logs
kubectl logs --tail=20 nginx
# View pod nginx's logs from the past hour
kubectl logs --since=1h nginx

-----------------------------------
# Viewing pod logs in K8s
kubectl get pod -o wide

# Copy a file into a container and open a shell in it:
docker cp rui 816aabb3e318:/
docker exec -it 816aabb3e318 bash
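The token lookup above assumes an `admin-user` ServiceAccount already exists in the `kubernetes-dashboard` namespace; the dashboard's recommended.yaml does not create one. A minimal sketch of the manifests that create it and grant it cluster-admin, following the dashboard project's sample user (apply with `kubectl apply -f`):

```yaml
# ServiceAccount the dashboard token is issued for.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
# Bind the account to the built-in cluster-admin ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```

cluster-admin gives this token full access to the cluster, so treat it as a test-environment convenience rather than a production setup.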