k8s 1.2: Installing k8s and Setting Up a Cluster
1. Lab: build a k8s cluster with one master host and two worker hosts
ssh-copy-id master
The /root/.ssh directory now contains an authorized_keys file; that file is what enables passwordless login.
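A minimal sketch of the complete passwordless-login setup, assuming the hostnames master, work1, and work2 resolve (e.g. via /etc/hosts); run this on each host:

ssh-keygen -t rsa                 # generate a key pair (press Enter through the prompts)
for host in master work1 work2; do
  ssh-copy-id root@$host          # append the public key to /root/.ssh/authorized_keys on $host
done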
3. Disable the swap partition to improve performance
# Disable temporarily
swapoff -a
# Disable permanently: edit /etc/fstab with vim and comment out the swap line
/dev/mapper/centos-swap swap swap defaults 0 0
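A short sketch of the same change done non-interactively, with a check that swap is really off (the sed pattern is an assumption about a typical fstab layout):

swapoff -a                                          # takes effect immediately
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab    # comment out any active swap entry
free -m                                             # the Swap line should now read 0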
4. Why disable the swap partition? By default the kubelet refuses to start while swap is enabled, and swapping makes pod memory limits and the scheduler's resource accounting unreliable.
[root@master sysctl.d]# cat /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
[root@master sysctl.d]# sysctl -p
vm.max_map_count = 262144
net.ipv4.ip_forward = 1
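Note that a bare sysctl -p only reads /etc/sysctl.conf; to apply the new file explicitly:

sysctl -p /etc/sysctl.d/kubernetes.conf    # load just this file
sysctl --system                            # or load every file under /etc/sysctl.d/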
7. Load the bridge netfilter module
[root@master sysctl.d]# modprobe br_netfilter
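modprobe only loads the module until the next reboot; a common way to make it persistent (assuming a systemd-based distribution such as CentOS 7) is a modules-load.d entry:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf    # loaded automatically at boot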
8. Verify that the bridge filter module was loaded successfully
[root@master sysctl.d]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
9. Configure IPVS support: install ipset and ipvsadm
yum install -y ipvsadm ipset
10. Write the modules that need to be loaded into a script file
[root@master yum.repos.d]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_sh
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- nf_conntrack_ipv4
Make the script executable:
[root@master ]# chmod +x /etc/sysconfig/modules/ipvs.modules
Run the script (/etc/sysconfig/modules/ipvs.modules) and verify the modules are loaded:
[root@master yum.repos.d]# lsmod | grep ip_vs
ip_vs_wrr 12697 0
ip_vs_rr 12600 0
ip_vs_sh 12688 0
ip_vs 145458 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 139264 7 ip_vs,nf_nat,nf_nat_ipv4,xt_conntrack,nf_nat_masquerade_ipv4,nf_conntrack_netlink,nf_conntrack_ipv4
libcrc32c 12644 4 xfs,ip_vs,nf_nat,nf_conntrack
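Modules loaded with modprobe are also lost at reboot. A sketch of making the IPVS set persistent via systemd's modules-load.d; note that on kernel 4.19 and newer the conntrack module is named nf_conntrack rather than nf_conntrack_ipv4:

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_sh
ip_vs_rr
ip_vs_wrr
nf_conntrack_ipv4
EOF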
12. Deploying the Kubernetes cluster
Install docker-ce, kubelet, kubeadm, and kubectl on all three machines.
(1) Set up the docker-ce repository
To install Docker, fetch the docker-ce repository file from the Aliyun mirror, then install docker-ce with yum.
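A minimal sketch of that repo setup on CentOS 7, using the Aliyun mirror:

yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable --now docker     # start Docker and enable it at boot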
For kubelet, kubeadm, and kubectl, you can search the installable packages with yum list kubelet:
[root@master ~]# yum list kubelet
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Installed Packages
kubelet.x86_64    1.24.2-0    @kubernetes
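The @kubernetes repo in that output has to be added by hand; a minimal sketch of the repo file, again using the Aliyun mirror (gpgcheck disabled for brevity):

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF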
(2) Install kubeadm, kubelet, and kubectl, but do not start kubelet yet
yum install -y kubeadm kubelet kubectl
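It is typical to enable kubelet now without starting it; kubeadm init (and kubeadm join) generate its configuration and start it for you:

systemctl enable kubelet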
(3) Check the Docker version
[root@master ~]# docker --version
Docker version 20.10.15, build fd82621
(4) Initialize the k8s cluster with kubeadm
First, list the images the cluster deployment requires; everything in the output below is an image:
[root@master1 ~]# kubeadm config images list --kubernetes-version=v1.21.0
k8s.gcr.io/kube-apiserver:v1.21.0
k8s.gcr.io/kube-controller-manager:v1.21.0
k8s.gcr.io/kube-scheduler:v1.21.0
k8s.gcr.io/kube-proxy:v1.21.0
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0
You can download these images manually, handling them one at a time. Note: kubeadm pulls from k8s.gcr.io by default, but k8s.gcr.io is not reachable (from mainland China), so the images must be pulled from the registry.aliyuncs.com/google_containers mirror instead.
[root@master1 ~]# images=(
> kube-apiserver:v1.21.0
> kube-controller-manager:v1.21.0
> kube-scheduler:v1.21.0
> kube-proxy:v1.21.0
> pause:3.4.1
> etcd:3.4.13-0
> coredns/coredns:v1.8.0
> )
[root@work1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.3    # pull the image from Aliyun
[root@work1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.3 k8s.gcr.io/kube-scheduler:v1.24.3    # re-tag the image as k8s.gcr.io/kube-scheduler:v1.24.3
[root@work1 ~]# docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.24.3    # remove the no-longer-needed Aliyun-tagged image
Alternatively, use a for loop to pull all of these images, tag them, and delete the originals:
[root@master ~]# for imageName in ${images[@]}; do
>   docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
>   docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
>   docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
> done
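An alternative to pulling and re-tagging by hand: kubeadm can pre-pull everything itself when pointed at the mirror repository:

kubeadm config images pull --kubernetes-version=v1.21.0 --image-repository registry.aliyuncs.com/google_containers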
(5) Check whether the images have been downloaded
[root@work2 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-proxy v1.17.4 6dec7cfde1e5 2 years ago 116MB
k8s.gcr.io/kube-apiserver v1.17.4 2e1ba57fe95a 2 years ago 171MB
k8s.gcr.io/kube-controller-manager v1.17.4 7f997fcf3e94 2 years ago 161MB
k8s.gcr.io/kube-scheduler v1.17.4 5db16c1c7aff 2 years ago 94.4MB
k8s.gcr.io/coredns 1.6.5 70f311871ae1 2 years ago 41.6MB
k8s.gcr.io/etcd 3.4.3-0 303ce5db0e90 2 years ago 288MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 4 years ago 742kB
Create the cluster on the master node (perform this step on the master host only)
You can view the help with kubeadm --help.
[root@master ~]# kubeadm init --kubernetes-version=v1.24.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=192.168.213.4 --image-repository registry.aliyuncs.com/google_containers
Flag notes:
--kubernetes-version=v1.24.3                  # the Kubernetes version to deploy
--pod-network-cidr=10.244.0.0/16              # the address range k8s assigns to pods
--service-cidr=10.96.0.0/12                   # the virtual address range for Services that clients access
--apiserver-advertise-address=192.168.213.4   # the master's IP
--image-repository registry.aliyuncs.com/google_containers   # pull control-plane images from the Aliyun mirror
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.213.8:6443 --token 1bl657.59pad6tz14nvhp3j \
    --discovery-token-ca-cert-hash sha256:61fb5e8ca294bea610601f26535cc0f5c991185c665b4d842adcffc909c5d417
Follow the instructions in the output:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
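To confirm kubectl can now talk to the API server:

kubectl cluster-info
kubectl get pods -n kube-system    # control-plane pods should be Running (coredns stays Pending until a network plugin is installed)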
You can download the network add-on .yaml file via https://kubernetes.io/docs/concepts/cluster-administration/addons/, or fetch the flannel manifest directly:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@master ~]# kubectl apply -f /root/kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Run the join command on the worker nodes:
[root@work1 ~]# kubeadm join 192.168.213.3:6443 --token eafk7y.fe1nzff9ptjs3tuk \
> --discovery-token-ca-cert-hash sha256:363741efccddbabf7f93d50bd0914dfd8d059909306a8542b0e06a4172264d8f
W0720 09:10:26.480050    2844 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
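Join tokens expire after 24 hours by default; if you add a node later, print a fresh join command on the master:

kubeadm token create --print-join-command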
Check the cluster node status: the nodes show NotReady until the network manifest has been applied.
Kubernetes supports multiple network plugins, such as flannel, calico, and canal (a common interview question: what is the difference between flannel and calico?).
The flannel plugin only needs to be applied on the master node; it runs as a DaemonSet, a controller that schedules one pod onto every node.
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 3m16s v1.17.4
work1 NotReady <none> 17s v1.17.4
work2 NotReady <none> 32s v1.17.4
work3 NotReady <none> 32s v1.17.4
[root@master ~]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel unchanged
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.apps/kube-flannel-ds unchanged
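To confirm the DaemonSet put a flannel pod on every node:

kubectl get daemonset -n kube-flannel
kubectl get pods -n kube-flannel -o wide    # one kube-flannel-ds pod per node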
[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE     VERSION
master   Ready    master   7m22s   v1.17.4
work1    Ready    worker   4m23s   v1.17.4
work2    Ready    worker   4m38s   v1.17.4
work3    Ready    worker   4m38s   v1.17.4
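A quick smoke test of the finished cluster, using a hypothetical nginx deployment:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc    # the pod should reach Running and the Service gets a NodePort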