Deploying a Kubernetes Cluster with Docker (November 2020, k8s version 1.19.4)

The guides online are all over the place, covering assorted versions, and many of the older ones simply don't work anymore. I pieced together a working setup from several sources. Some small details of the installation may be missing, but all the important steps are recorded here as a summary.

Date: November 2020

k8s version: 1.19.4

Update: for a simpler deployment method, see this post:

https://www.cnblogs.com/codenoob/p/14138333.html

Switch the package sources

Before anything else, it is best to switch the yum source to the Aliyun mirror and run a yum update. The Aliyun documentation below walks through that part.
https://developer.aliyun.com/article/704987

Next we need to install Docker and the Kubernetes packages, so both of those repositories have to be set up.

Under the /etc/yum.repos.d directory, add a kubernetes.repo file with the following content:

[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
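
After adding the file, an optional sanity check that yum can actually see the new repo:

# rebuild the yum cache and confirm the kubernetes repo is listed
yum makecache
yum repolist | grep -i kubernetes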

Install Docker (required on both the master and the worker nodes)

# Remove any old Docker packages
sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine

# Install dependencies
sudo yum update -y && sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
  
# Add the official Docker yum repo
sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
    
# Install Docker
sudo yum install docker-ce docker-ce-cli containerd.io

# Check the Docker version
docker --version

# Enable and start Docker at boot
systemctl enable --now docker
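
An optional smoke test to confirm the daemon works:

# run a trivial container and remove it on exit
docker run --rm hello-world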

Change the Docker cgroup driver to systemd, matching what k8s expects

# Set the Docker cgroup driver: native.cgroupdriver=systemd
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

systemctl restart docker  # restart Docker so the new config takes effect
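
To confirm the driver change took effect:

docker info | grep -i 'cgroup driver'
# should print: Cgroup Driver: systemd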

Install kubelet, kubeadm, and kubectl

These packages are required on both the master and the worker nodes.
# With the Kubernetes repo added at the beginning, these should install directly
# Disable SELinux (set it to permissive)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Install kubelet, kubeadm, kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet  # start kubelet and enable it at boot

# CentOS 7 users also need to make bridged traffic visible to iptables:
yum install -y bridge-utils.x86_64
modprobe br_netfilter  # load the br_netfilter module; use lsmod to list loaded modules
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # reload all sysctl configuration files

systemctl disable --now firewalld  # stop and disable the firewall

# k8s requires swap to be disabled
swapoff -a && sysctl -w vm.swappiness=0  # turn off swap immediately
sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab  # comment out swap so it is not remounted at boot

All of the steps above must be run on every node; alternatively, finish one machine and clone it.
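
A quick optional check on each node that the prerequisites took effect:

lsmod | grep br_netfilter                  # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables  # should print 1
free -h                                    # the Swap line should read 0B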

Preparing to create the cluster

# On the master:
kubeadm config images pull # pull all the images the cluster needs; this requires access to k8s.gcr.io, which is blocked in mainland China

# --- If you cannot reach k8s.gcr.io, try the following instead ---
kubeadm config images list # list the required images
k8s.gcr.io/kube-apiserver:v1.19.4
k8s.gcr.io/kube-controller-manager:v1.19.4
k8s.gcr.io/kube-scheduler:v1.19.4
k8s.gcr.io/kube-proxy:v1.19.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

# Pull the equivalents from a domestic (Aliyun) mirror, based on the image names above
# Your images will not necessarily match mine; go by the list and versions the previous command printed
# (the commands below are what I ran; usually only the versions differ)
# Pull the images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2

# Retag the images to the k8s.gcr.io names kubeadm expects
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.4  k8s.gcr.io/kube-proxy:v1.19.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.4  k8s.gcr.io/kube-controller-manager:v1.19.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.4  k8s.gcr.io/kube-apiserver:v1.19.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.4  k8s.gcr.io/kube-scheduler:v1.19.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0  k8s.gcr.io/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0  k8s.gcr.io/coredns:1.7.0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2  k8s.gcr.io/pause:3.2

# Remove the mirror-tagged images
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.19.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.19.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.19.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
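
As a more compact alternative, the same pull/retag/cleanup can be written as a loop. This is just a sketch, assuming the Aliyun google_containers mirror carries every image on your list:

MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
for img in kube-apiserver:v1.19.4 kube-controller-manager:v1.19.4 \
           kube-scheduler:v1.19.4 kube-proxy:v1.19.4 \
           pause:3.2 etcd:3.4.13-0 coredns:1.7.0; do
  docker pull "$MIRROR/$img"                    # pull from the mirror
  docker tag  "$MIRROR/$img" "k8s.gcr.io/$img"  # retag to the name kubeadm expects
  docker rmi  "$MIRROR/$img"                    # drop the mirror tag
done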

# On the worker nodes: pulling just these two images is enough
# Pull them from the domestic (Aliyun) mirror
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.4
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2

# Retag the images
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.4  k8s.gcr.io/kube-proxy:v1.19.4
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2  k8s.gcr.io/pause:3.2

# Remove the mirror-tagged images
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.19.4
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
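
Either way, verify that the k8s.gcr.io images kubeadm expects are now present:

docker images | grep k8s.gcr.io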

Create the cluster with kubeadm

Before this step, make sure kubelet is healthy. Note that until kubeadm init has written its configuration, kubelet will keep restarting because /var/lib/kubelet/config.yaml does not exist yet; that is normal. It sometimes misbehaves in other ways too, and stopping and restarting it a few times usually clears it, though I never figured out why.

systemctl status kubelet
# Initialize the master (the master needs at least 2 CPU cores). All sorts of errors can show up here; success or failure hinges on this step
kubeadm init --apiserver-advertise-address 192.168.200.25 --pod-network-cidr 10.244.0.0/16 # --kubernetes-version 1.19.4
# --apiserver-advertise-address: the address the API server advertises to other nodes; your machine's own IP is fine
# --pod-network-cidr: the pod network subnet; the flannel network requires this exact CIDR
# --kubernetes-version: pin the k8s version; if omitted, kubeadm looks up the latest release online, which can time out, so specifying a version avoids that

If the step above fails with all sorts of errors, run

kubeadm reset

and then re-run the init. To see detailed error information, append --v=6.
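
kubelet's own logs are often more informative than the kubeadm output; when in doubt, inspect them with:

journalctl -xeu kubelet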

If kubeadm fails with the kubelet "node not found" bug, it is mostly caused by the --apiserver-advertise-address flag; try removing that parameter. Re-running with verbose logging:

kubeadm init --apiserver-advertise-address 192.168.200.25 --pod-network-cidr 10.244.0.0/16 --v=6

A successful init prints output similar to the following (this sample was captured from an older v1.14 run, so the version strings differ from 1.19.4):

# Init output:
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.503375 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: w2i0mh.5fxxz8vk5k8db0wq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# The join command below is different for every master you create; save your own
kubeadm join 192.168.200.25:6443 --token our9a0.zl490imi6t81tn5u \
    --discovery-token-ca-cert-hash sha256:b93f710eb9b389a69f0cd0d6dcf7c82e389a68f009eb6b2028f69d54b099de16 

Be sure to save that final kubeadm join line: it is the command, token and CA cert hash included, that other nodes will use to join this cluster.
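
If you lose the command, or the token expires (the default TTL is 24 hours), you can generate a fresh one on the master at any time:

kubeadm token create --print-join-command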

Set up kubectl access for a regular user

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
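
If you are working as root, exporting the admin kubeconfig directly also works:

export KUBECONFIG=/etc/kubernetes/admin.conf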

Apply the flannel network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
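
Before joining nodes it is worth watching the flannel and CoreDNS pods come up; with the manifest of this era flannel lands in the kube-system namespace (pod name suffixes will differ per cluster):

kubectl get pods -n kube-system -w
# wait until the kube-flannel-ds-* and coredns-* pods are Running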

Join worker nodes to the cluster

# Copy and run the final line printed by your master; yours will certainly differ from mine
# node1:
kubeadm join 192.168.20.5:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
    --discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252 
# node2:
kubeadm join 192.168.20.5:6443 --token w2i0mh.5fxxz8vk5k8db0wq \
    --discovery-token-ca-cert-hash sha256:65e82e987f50908f3640df7e05c7a91f390a02726c9142808faa739d4dc24252 

Sample output (again captured from the older run):

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Then check from the master that the nodes look healthy:

kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
lijun-k8s-1.novalocal   Ready    master   15h   v1.19.4
lijun-k8s-2.novalocal   Ready    <none>   15h   v1.19.4

Notes:

kubeadm init: bootstrap a Kubernetes master node
kubeadm join: bootstrap a Kubernetes worker node and join it to the cluster
kubeadm upgrade: upgrade a Kubernetes cluster to a newer version
kubeadm config: if you initialized your cluster with kubeadm v1.7.x or lower, reconfigure it so that kubeadm upgrade can be used
kubeadm token: manage the tokens used by kubeadm join
kubeadm reset: revert any changes kubeadm init or kubeadm join made to the host