Kubernetes Installation
Nodes:
- master
- node1
- node2
1. Set the hostnames
# On the master node
hostnamectl set-hostname master   # set the master node's hostname
# On node1
hostnamectl set-hostname node1    # set node1's hostname
# On node2
hostnamectl set-hostname node2    # set node2's hostname
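The three per-node commands can also be folded into one script that works out the right name from the node's IP; a minimal sketch assuming the IP plan used in this guide (`pick_hostname` is a hypothetical helper, not part of the original steps):

```shell
#!/bin/sh
# Sketch: map this machine's IP to its cluster hostname so the same
# script runs unchanged on all three nodes. pick_hostname is a
# hypothetical helper; adjust the IP-to-name table to your network.
pick_hostname() {
  case "$1" in
    192.168.10.11) echo master ;;
    192.168.10.13) echo node1  ;;
    192.168.10.15) echo node2  ;;
    *) echo "unknown IP: $1" >&2; return 1 ;;
  esac
}

# On a real node you would then run:
# hostnamectl set-hostname "$(pick_hostname "$(hostname -I | awk '{print $1}')")"
pick_hostname 192.168.10.13   # prints: node1
```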
2. /etc/hosts # identical on all three nodes
192.168.10.11 master
192.168.10.13 node1
192.168.10.15 node2
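If this step may be re-run, the entries can be appended idempotently instead of pasted blindly; a sketch (here `HOSTS_FILE` defaults to a scratch file so it is safe to try anywhere; point it at /etc/hosts on the real nodes):

```shell
#!/bin/sh
# Sketch: append each cluster entry to the hosts file only if it is
# not already present. HOSTS_FILE defaults to a throwaway temp file
# for a dry run; set HOSTS_FILE=/etc/hosts on the actual nodes.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"

add_host() {  # $1 = IP, $2 = hostname
  grep -qE "^$1[[:space:]]+$2\$" "$HOSTS_FILE" || echo "$1 $2" >> "$HOSTS_FILE"
}

add_host 192.168.10.11 master
add_host 192.168.10.13 node1
add_host 192.168.10.15 node2
add_host 192.168.10.13 node1     # re-running adds nothing

cat "$HOSTS_FILE"
```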
3. Basic settings, applied on all three nodes
# Disable firewalld and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's#^\(SELINUX=\).*#\1disabled#' /etc/selinux/config

# Disable swap
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab   # stop swap being mounted at boot

# Kernel parameters: let iptables see bridged traffic
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
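The swap-commenting sed above can be dry-run against a copy before touching the real /etc/fstab; a sketch using a throwaway file (the sample fstab lines are illustrative):

```shell
#!/bin/sh
# Sketch: rehearse the swap-commenting sed on a throwaway copy of
# fstab before running it for real. The sample content below is
# illustrative, not taken from any actual node.
TMP_FSTAB=$(mktemp)
cat > "$TMP_FSTAB" <<'EOF'
/dev/mapper/centos-root /       xfs  defaults 0 0
/dev/mapper/centos-swap swap    swap defaults 0 0
EOF

# Same sed as in the step above: comment out any swap mount line
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$TMP_FSTAB"
grep swap "$TMP_FSTAB"   # the swap line is now prefixed with '#'
```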
# yum repos (Aliyun mirrors)
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all && yum makecache
Install Docker
yum install docker-ce
systemctl enable docker && systemctl start docker   # Docker must be running before kubeadm init
Install kubeadm, kubelet and kubectl
yum install kubeadm kubelet kubectl
systemctl enable kubelet # start at boot
Initialize with kubeadm
kubeadm config print init-defaults > kubeadm.yaml
cat kubeadm.yaml
###
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Edit the following settings in the yaml:
#advertiseAddress: 1.2.3.4
advertiseAddress: 192.168.10.11 # the master's address
# imageRepository: k8s.gcr.io
imageRepository: registry.aliyuncs.com/google_containers # use the Aliyun mirror
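The two edits can also be applied non-interactively with sed; a sketch that falls back to a stand-in file so it can be rehearsed away from the master (the IP and mirror are the values used in this guide):

```shell
#!/bin/sh
# Sketch: apply the two kubeadm.yaml edits with sed. CFG defaults to
# kubeadm.yaml; when that file is absent (e.g. during a rehearsal), a
# stand-in with just the two default lines is created instead.
CFG="${CFG:-kubeadm.yaml}"
[ -f "$CFG" ] || { CFG=$(mktemp); printf '  advertiseAddress: 1.2.3.4\nimageRepository: k8s.gcr.io\n' > "$CFG"; }

sed -i 's#advertiseAddress: 1.2.3.4#advertiseAddress: 192.168.10.11#' "$CFG"
sed -i 's#imageRepository: k8s.gcr.io#imageRepository: registry.aliyuncs.com/google_containers#' "$CFG"
grep -E 'advertiseAddress|imageRepository' "$CFG"
```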
Pull the images
# List the images
# kubeadm config images list --config kubeadm.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0

kubeadm config images pull --config kubeadm.yaml   # pull the images
Initialize
# If the pull above was skipped, kubeadm init pulls the images itself
kubeadm init --config kubeadm.yaml
On success it prints:
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.10.11:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1c4305f032f4bf534f628c32f5039084f4b103c922ff71b12a5f0f98d1ca9a4f
Next, follow the instructions above to set up the kubectl client credentials
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Join the cluster: run the following on node1 and node2
kubeadm join 192.168.10.11:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:1c4305f032f4bf534f628c32f5039084f4b103c922ff71b12a5f0f98d1ca9a4f
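If the hash printed by kubeadm init is lost, it can be recomputed on the master from the cluster CA certificate (this openssl pipeline is the recipe from the kubeadm join documentation). Sketched here with a fallback to a generated self-signed certificate so the pipeline can be tried off-cluster:

```shell
#!/bin/sh
# Sketch: recompute the discovery-token-ca-cert-hash from a CA cert.
# On the master the cert is /etc/kubernetes/pki/ca.crt; when it is
# absent, a throwaway self-signed cert is generated so the pipeline
# itself can be exercised anywhere.
CA_CRT="${CA_CRT:-/etc/kubernetes/pki/ca.crt}"
if [ ! -f "$CA_CRT" ]; then
  CA_CRT=$(mktemp)
  openssl req -x509 -newkey rsa:2048 -nodes -keyout /dev/null \
    -subj "/CN=demo-ca" -days 1 -out "$CA_CRT" 2>/dev/null
fi

# SHA-256 over the DER-encoded public key of the CA certificate
HASH=$(openssl x509 -pubkey -in "$CA_CRT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$HASH"
```

The resulting `sha256:<hash>` value is what `--discovery-token-ca-cert-hash` expects.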
Check the status
kubectl get nodes
kubectl get all
……
If the installation fails, reset before reinstalling
kubeadm reset