For convenience, these notes copy and merge the articles below into one place to reduce the number of pitfalls.
References:
1. Linux部署Kubernetes流程 (ormissia.github.io)
2. linux安装部署k8s(kubernetes)和解决遇到的坑 - 简书 (jianshu.com)
3. 部署k8s的时候kube-flannel.yml下载不下来解决_chen_haoren的博客-CSDN博客
Initialize the environment
Firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
SELinux
vi /etc/selinux/config
SELINUX=disabled
Swap
vi /etc/fstab
Comment out the swap line:
/.swapfile none swap sw,comment=cloudconfig 0 0
After rebooting, check that swap is off:
free -m
Output like the following means swap was disabled successfully:
total used free shared buff/cache available
Mem: 23114 402 22299 32 411 20597
Swap: 0 0 0
ulimit
echo "ulimit -n 65535" >> /etc/profile
echo "* hard nofile 65535" >> /etc/security/limits.conf
After rebooting, check that the limit took effect:
ulimit -n
Passwordless SSH (optional)
Run the following and press Enter through every prompt to generate this node's key pair, then print the public key:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub
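The commands above only generate the key pair on this node. To actually log in to the other nodes without a password, the public key has to be appended to each target's authorized_keys; a minimal sketch, assuming the other nodes' IPs are 192.168.0.6 and 192.168.0.7 (placeholders for your own nodes):
# Copy this node's public key to each of the other nodes (IPs are examples)
ssh-copy-id root@192.168.0.6
ssh-copy-id root@192.168.0.7
# Verify: this should print the remote hostname without asking for a password
ssh root@192.168.0.6 hostname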
iptables
Check whether the br_netfilter module is loaded:
lsmod | grep br_netfilter
If there is no output, load it with:
modprobe br_netfilter
For iptables on a Linux node to see bridged traffic correctly, make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
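A quick check that the settings above actually took effect (both sysctl values should print 1, and the module should show up in lsmod):
# Should both print "... = 1"
sysctl net.bridge.bridge-nf-call-iptables
sysctl net.bridge.bridge-nf-call-ip6tables
# Should list the br_netfilter module
lsmod | grep br_netfilter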
Install Docker
yum install -y yum-utils
yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce docker-ce-cli containerd.io -y
systemctl start docker
systemctl enable docker
Change the Docker Cgroup Driver to systemd (the original author calls this "pitfall #1"; I have not changed it and hit no problems so far, and in my case Docker actually failed to start after changing it)
vi /usr/lib/systemd/system/docker.service
# Modify the ExecStart line in the unit file:
#ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
Restart Docker
systemctl daemon-reload
systemctl restart docker
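Whether or not you change the driver, it is worth checking which cgroup driver Docker is actually using, since the kubelet has to be configured to match (see the --cgroup-driver fix further below); a quick check:
# Prints "Cgroup Driver: systemd" or "Cgroup Driver: cgroupfs"
docker info | grep -i "cgroup driver"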
If Docker image pulls fail, configure a registry mirror (accelerator).
Open /etc/docker/daemon.json (create an empty file if it does not exist, e.g. in the FinalShell file manager) and add:
{
"registry-mirrors": ["https://******.mirror.aliyuncs.com"]
}
The mirror address can be looked up in the Aliyun container registry mirror console.
systemctl daemon-reload
systemctl restart docker
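To confirm the mirror configuration was picked up and that pulls now work, a quick sanity check (hello-world is just a tiny test image):
# The configured mirror should appear under "Registry Mirrors"
docker info | grep -A 1 -i "registry mirrors"
# A small test pull
docker pull hello-world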
Install Kubernetes
kubeadm: the command that bootstraps the cluster.
kubelet: the component that runs on every machine in the cluster and performs operations such as starting Pods and containers.
kubectl: the command-line utility for talking to the cluster.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Clear the yum cache
yum clean all
# Download the repos' package metadata and cache it locally (makecache builds the cache)
yum makecache
# List the available kubectl versions
yum list kubectl --showduplicates | sort -r
# The output looks like this:
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Loaded plugins: fastestmirror, langpacks
Installed Packages
Available Packages
Install kubelet, kubeadm, kubectl
# Install the latest version (or install a pinned version instead)
yum install -y kubelet kubeadm kubectl
# Install a specific version of kubelet, kubeadm, kubectl
yum install -y kubelet-1.19.3-0 kubeadm-1.19.3-0 kubectl-1.19.3-0
# Check the kubelet version
kubelet --version
# Output:
Kubernetes v1.19.3
# Check the kubeadm version
kubeadm version
# Output:
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
Start kubelet and enable it at boot
# Reload the systemd unit files
systemctl daemon-reload
# Start kubelet
systemctl start kubelet
# Check kubelet's status
systemctl status kubelet
# It will not start successfully yet; ignore the error, kubeadm init will bring it up later
# Enable kubelet at boot
systemctl enable kubelet
# Check whether kubelet is enabled at boot (enabled: on, disabled: off)
systemctl is-enabled kubelet
# View the logs
journalctl -xefu kubelet
Up to this point the steps are identical on every node in the cluster.
Initialize the master
kubeadm is used as the tool to initialize the cluster.
--apiserver-advertise-address=192.168.0.5 is the master's IP (use your own master's address).
--image-repository registry.aliyuncs.com/google_containers sets the image registry; the default is k8s.gcr.io, which cannot be reached from mainland China without a proxy.
# Run the init command
kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.0.5 --kubernetes-version=v1.19.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
# Error 1: [ERROR Swap]: running with swap on is not supported. Please disable swap
# If swap was not turned off, either disable it or pass --ignore-preflight-errors=swap. The error looks like this:
W0525 15:17:52.768575 19864 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.2. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
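A quick way to clear this error without rebooting, equivalent to the /etc/fstab edit at the top of these notes (the sed pattern is a generic sketch; double-check /etc/fstab afterwards):
# Turn swap off for the running system
swapoff -a
# Comment out any uncommented swap entry so it stays off after a reboot
sed -i.bak '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab
# The Swap line should now show 0
free -m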
# Error 2: The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused
The error looks like this:
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
# Fix: edit the kubelet drop-in below; the key setting is --cgroup-driver=systemd
vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
#Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/var/lib/kubelet/pki"
#Environment="KUBELET_EXTRA_ARGS=--v=2 --fail-swap-on=false --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.1"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
#EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
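After saving the drop-in, reload systemd and restart the kubelet; if the earlier kubeadm init attempt got partway through, reset it before running init again (a sketch; the init flags are the same ones used above):
systemctl daemon-reload
systemctl restart kubelet
# Clean up the partially-initialized state from the failed attempt
kubeadm reset -f
# Re-run the same init command as before
kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.0.5 --kubernetes-version=v1.19.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16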
# Success: output like the following means the init worked:
W0511 11:11:24.998096 15272 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.6. Latest validated version: 19.03
[WARNING Hostname]: hostname "k8s-master" could not be reached
[WARNING Hostname]: hostname "k8s-master": lookup k8s-master on 100.125.1.250:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.1.0.1 192.168.0.147]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 16.501683 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rt0fpo.4axz6cd6eqpm1ihf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.5:6443 --token rt0fpo.4axz6.....m1ihf \
--discovery-token-ca-cert-hash sha256:ac20e89e8bf43b56......516a41305c1c1fd5c7
Be sure to save the last command in the output: kubeadm join ...
### Remember this command; it is needed later when adding nodes
###kubeadm join 192.168.0.5:6443 --token rt0fpo.4axz6....
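If the join command is lost, or the token expires (the default token lifetime is 24 hours), a fresh one can be generated on the master at any time:
# Prints a complete "kubeadm join ..." command with a new token
kubeadm token create --print-join-command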
# As prompted, run the following:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
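A quick sanity check that kubectl can now reach the new control plane:
# Should print the control plane and CoreDNS endpoints
kubectl cluster-info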
Check the cluster nodes
# List the nodes
kubectl get node
# Output:
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 4m13s v1.19.3
# The status is NotReady because no network plugin has been installed yet
# Check the kubelet logs
journalctl -xef -u kubelet -n 20
# Output: it reports that no CNI network plugin is installed
May 11 11:15:26 k8s-master kubelet[16678]: W0511 11:15:26.356793 16678 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d
May 11 11:15:28 k8s-master kubelet[16678]: E0511 11:15:28.237122 16678 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Install the flannel network plugin (CNI) (note: pulling the image from inside mainland China almost never succeeds)
# Create a working directory
mkdir flannel && cd flannel
# Download the manifest
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kube-flannel.yml references an image; pull it in advance here
docker pull quay.io/coreos/flannel:v0.14.0-rc1
# Apply the flannel manifest
kubectl apply -f kube-flannel.yml
# After a short while the node shows as Ready
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 9m39s v1.19.3
If the download fails, try adding the line 199.232.68.133 raw.githubusercontent.com to /etc/hosts. This is not guaranteed to keep working; if it does not, you will have to look up a mirror or proxy yourself.
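One way to add that entry from the shell (the IP comes from these notes and may go stale, so remove it from /etc/hosts if it stops resolving correctly):
echo "199.232.68.133 raw.githubusercontent.com" >> /etc/hosts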
Once the download succeeds, check the pod status:
kubectl get pod --all-namespaces
# Output:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-78fcd69978-2vcqt 1/1 Running 1 (126m ago) 12h
kube-system coredns-78fcd69978-xg98g 1/1 Running 1 (126m ago) 12h
kube-system etcd-arm-node-1 1/1 Running 2 (126m ago) 12h
kube-system kube-apiserver-arm-node-1 1/1 Running 2 (126m ago) 12h
kube-system kube-controller-manager-arm-node-1 1/1 Running 2 (126m ago) 12h
kube-system kube-flannel-ds-kdpgp 1/1 Running 1 (126m ago) 3h32m
kube-system kube-proxy-q5m5k 1/1 Running 1 (126m ago) 12h
kube-system kube-scheduler-arm-node-1 1/1 Running 2 (126m ago) 12h
kubectl get nodes
# Output:
NAME STATUS ROLES AGE VERSION
arm-node-1 Ready control-plane,master 13h v1.22.2
For the other nodes, after installing the three components kubeadm, kubelet, and kubectl, simply run the last command from the master node's kubeadm init output. It must be entered on a single line (the --control-plane flag seen in some copies of this command is only needed when joining an additional control-plane node, not a worker):
kubeadm join 10.0.0.105:6443 --token axqrzz.ouonxxxxxxgdvgjz --discovery-token-ca-cert-hash sha256:3a57ff0d78b1f85xxxxxxxxxxxxxxxxxxxxe0f90f47037a31a33
kubectl get nodes
# Output:
NAME STATUS ROLES AGE VERSION
arm-node-1 Ready control-plane,master 13h v1.22.2
arm-node-2 Ready <none> 170m v1.22.2
This post is from 博客园 (cnblogs), author: Sleepy-Person. Please cite the original link when reposting: https://www.cnblogs.com/Sleepy-Person/p/16775000.html