Installing Kubernetes with kubeadm

1. Preparation

Disable the firewall

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux

setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config 

Disable swap

swapoff -a          # disable swap temporarily
free                # check whether swap is now off
vim /etc/fstab      # comment out the swap line for a permanent change
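
Instead of editing /etc/fstab by hand, a one-liner like the following should also work (it comments out every line mentioning swap; double-check the file afterwards):

sed -ri 's/.*swap.*/#&/' /etc/fstab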

Time synchronization

systemctl restart chronyd
systemctl enable chronyd
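
To verify that the node is actually syncing time:

chronyc sources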

Enable bridge-nf-call-iptables

Why does Kubernetes require bridge-nf-call-iptables to be enabled? kube-proxy implements Services with iptables (or IPVS) rules; with this setting on, traffic crossing the Linux bridge (e.g. pod-to-pod on the same node) also passes through iptables, so the Service rules apply to bridged traffic as well.

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Load the bridge netfilter module
modprobe br_netfilter
# Verify the module is loaded
lsmod | grep br_netfilter
# Apply the sysctl settings
sysctl -p /etc/sysctl.d/k8s.conf
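
modprobe does not survive a reboot. To load br_netfilter at boot as well (using systemd's modules-load mechanism), something like this works:

cat > /etc/modules-load.d/br_netfilter.conf << EOF
br_netfilter
EOF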

Install and configure IPVS

Kubernetes Services have two proxy modes, one based on iptables and one based on IPVS. Of the two, IPVS performs noticeably better, but to use it the IPVS kernel modules must be loaded manually.

# Install commonly used system tools
yum install -y telnet nc lrzsz lsof bash-completion.noarch vim wget net-tools epel-release
# 1. Install ipset and ipvsadm
yum install ipset ipvsadm -y
# 2. Write the modules to load into a script file
# Note: on newer kernels nf_conntrack_ipv4 was merged into nf_conntrack, so modprobe may fail
# with "FATAL: Module nf_conntrack_ipv4 not found"; load nf_conntrack instead in that case, see
# https://blog.csdn.net/weibo1230123/article/details/121698332
cat <<EOF> /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# 3. Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# 4. Run the script
sh /etc/sysconfig/modules/ipvs.modules
# 5. Verify the modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
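
Note that loading the modules only makes IPVS available; it does not switch kube-proxy to IPVS mode. After the cluster is up, set the mode in the kube-proxy ConfigMap and recreate its pods, roughly as follows:

kubectl edit configmap kube-proxy -n kube-system
# set:  mode: "ipvs"
# then recreate the kube-proxy pods (the DaemonSet restarts them):
kubectl -n kube-system delete pod -l k8s-app=kube-proxy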

Add hostname-to-IP mappings

[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   	localhost
192.168.18.170 	harbor.wzs.net
192.168.18.98	k8s-master
192.168.18.99	k8s-node01
192.168.18.100	k8s-node02
192.168.18.103	k8s-node03
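
These names assume each machine's hostname has been set to match; if not, set it on each node first, e.g. on the master:

hostnamectl set-hostname k8s-master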

2. Install the base environment

Install Docker

yum -y install yum-utils
yum-config-manager --add-repo  https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y --setopt=obsoletes=0 docker-ce-19.03.9-3.el7
systemctl start docker

Configure a registry mirror and change Docker's data-root directory

mkdir -p /data/docker /etc/docker
cat<< EOF > /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://gpkhi0nk.mirror.aliyuncs.com"],
  "data-root": "/data/docker",
  "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
  "log-opts": {
   "max-size": "100m",
   "max-file": "4"
    }
}
EOF

systemctl daemon-reload
systemctl enable docker
systemctl restart docker
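
To confirm the cgroup driver and data root took effect:

docker info | grep -E 'Cgroup Driver|Docker Root Dir'
# expect "Cgroup Driver: systemd" and "Docker Root Dir: /data/docker"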

3. Install and configure Kubernetes

Add the Aliyun Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

List the Kubernetes versions available for install

yum list kubelet kubeadm kubectl  --showduplicates|sort -r

Install pinned versions of kubeadm, kubelet, and kubectl

yum install -y kubelet-1.20.12-0 kubectl-1.20.12-0 kubeadm-1.20.12-0
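
kubeadm also expects the kubelet service to be enabled; until kubeadm init generates its configuration the kubelet will crash-loop, which is expected:

systemctl enable kubelet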

Prepare the images

# Before installing the cluster, the required images must be available locally; list them with:
# kubeadm config images list

# Pull the images
# They are hosted on k8s.gcr.io, which is often unreachable due to network restrictions;
# the loop below pulls them from an Aliyun mirror and retags them instead
images=(
	kube-apiserver:v1.20.12
	kube-controller-manager:v1.20.12
	kube-scheduler:v1.20.12
	kube-proxy:v1.20.12
	pause:3.2
	etcd:3.4.13-0
	coredns:1.7.0
)

for imageName in ${images[@]};do
	docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
	docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
	docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName 
done
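
Verify the retagged images are present:

docker images | grep k8s.gcr.io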

Initialize the cluster (remember to change the apiserver address)

kubeadm config file

# kubeadm config print init-defaults > kubeadm-config.yaml
# ls
kubeadm-config.yaml
# vim kubeadm-config.yaml 
# cat kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta2  # kubeadm 1.20.x uses v1beta2; v1beta3 requires kubeadm >= 1.22
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.18.98  # change to this master's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: 
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers 
kind: ClusterConfiguration
kubernetesVersion: v1.20.12  # match the kubeadm/kubelet version installed above
networking:
  dnsDomain: cluster.local
  podSubnet: 172.244.0.0/16
  serviceSubnet: 172.100.0.0/16
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd  # must match Docker's native.cgroupdriver configured earlier
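
To initialize the cluster from this file rather than with command-line flags:

kubeadm init --config kubeadm-config.yaml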

Alternatively, initialize directly with flags, using a domestic image registry:

kubeadm init \
--apiserver-advertise-address=`hostname -I|awk '{print $1}'` \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.20.12 \
--service-cidr=172.100.0.0/16 \
--pod-network-cidr=172.244.0.0/16

Check the initialization output

................
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

kubeadm join 192.168.18.98:6443 --token zxgoub.q7pcbx3s2dfrwywd \
    --discovery-token-ca-cert-hash sha256:639f9093b1250954edff0f11544860ed4b491c04678e70b343a0d378f8330117

Save the join token; it is needed when adding new nodes.

Run the following commands

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

By default the master node does not run workload pods. To allow pods to be scheduled on the master as well, remove the taint:

kubectl taint nodes --all node-role.kubernetes.io/master-
# To disallow scheduling on the master again:
# kubectl taint nodes master1 node-role.kubernetes.io/master=:NoSchedule
# Taint effects:
#   NoSchedule: pods will never be scheduled onto the node
#   PreferNoSchedule: the scheduler avoids the node when possible
#   NoExecute: new pods are not scheduled, and existing pods on the node are evicted
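
To check which taints a node currently carries:

kubectl describe node k8s-master | grep -i taint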

Install a network plugin

mkdir -p /data/k8s/yaml/kube-system
cd /data/k8s/yaml/kube-system

flannel

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl create -f kube-flannel.yml

calico

# curl -O overwrites any existing local file
# If the yaml URL below stops working, refer to the official docs: https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart
curl https://docs.projectcalico.org/manifests/calico.yaml -O

1. Set the pod CIDR
The default CIDR in this yaml is 192.168.0.0/16; it must match the podSubnet configured at init time (kubeadm-config.yaml). If it differs, edit the downloaded yaml before applying it. Locate the commented-out setting with:
grep '# value' calico.yaml

For a kubeadm-deployed cluster, the value must match what was passed as
--pod-network-cidr (172.244.0.0/16 in this guide)
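
Uncommented, the relevant block in calico.yaml should look roughly like this (CALICO_IPV4POOL_CIDR is the setting the grep above finds; the value is taken from the init command in this guide):

- name: CALICO_IPV4POOL_CIDR
  value: "172.244.0.0/16"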

2. Specify the network interface
# Cluster type to identify the deployment type
  - name: CLUSTER_TYPE
    value: "k8s,bgp"
# Add the following below it:
  - name: IP_AUTODETECTION_METHOD
    value: "interface=eth0"
    # eth0 is the name of the local NIC
Calico auto-detects the interface used for node-to-node traffic. If the host has multiple NICs, you can pin the interface with a regular expression as above (eth0 here; change it to match your server's interface name).

# After editing, apply it
kubectl apply -f calico.yaml

weave

kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')

Log in to the master and check the result

kubectl get no
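
Once the network plugin's pods are running, all nodes should eventually report Ready; the output will look roughly like this (node names and version taken from this guide):

NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   10m   v1.20.12
k8s-node01   Ready    <none>                 5m    v1.20.12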

kubectl command auto-completion

yum install -y bash-completion 
echo "source /usr/share/bash-completion/bash_completion" >> /etc/profile 
echo "source <(kubectl completion bash)" >> /etc/profile

After the cluster starts, Kubernetes creates several namespaces by default

# kubectl get namespace
NAME              STATUS   AGE
default           Active   45h     # objects created without an explicit namespace land here
kube-node-lease   Active   45h     # node heartbeat leases, introduced in v1.13
kube-public       Active   45h     # resources here are readable by everyone, including unauthenticated users
kube-system       Active   45h     # resources created by the Kubernetes system itself

# kubectl describe ns default 
Name:         default
Labels:       <none>
Annotations:  <none>
Status:       Active # Active: the namespace is in use; Terminating: it is being deleted

# ResourceQuota limits total resource usage for the namespace
# LimitRange limits resources for each individual object within the namespace

No resource quota.
No LimitRange resource.

4. Problems during installation and use

If you forget the token: list the existing tokens; if the token has expired, generate a new one

kubeadm token list
# Generate a new token (prints the full join command)
kubeadm token create --print-join-command

Get the sha256 hash of the CA certificate

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Join a node to the cluster

kubeadm join 192.168.18.98:6443 --token zxgoub.q7pcbx3s2dfrwywd \
    --discovery-token-ca-cert-hash sha256:639f9093b1250954edff0f11544860ed4b491c04678e70b343a0d378f8330117

If the join hangs at "Running pre-flight checks", the most likely cause is an unsynchronized clock or an expired token.

If you hit other errors such as "Port 10250 is in use", run the commands below.

Reset a cluster node

# Run on nodes other than the master
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
## Restart kubelet
systemctl restart kubelet
## Restart docker
systemctl restart docker

Then re-run the kubeadm join ... command.
