Setting Up a Single-Master, Multi-Node Kubernetes Cluster with kubeadm
1. Environment Planning
1.1 Lab Environment Plan
K8s Cluster Role | IP | Hostname | Installed Components |
---|---|---|---|
Control node | 192.168.40.180 | k8s-master1 | apiserver, controller-manager, scheduler, etcd, docker, calico, kube-proxy |
Worker node | 192.168.40.181 | k8s-node1 | kubelet, kube-proxy, docker, calico, coredns |
Worker node | 192.168.40.182 | k8s-node2 | kubelet, kube-proxy, docker, calico, coredns |
Lab environment:
- Operating system: CentOS 7.6
- Specs: 4 GiB RAM / 4 vCPUs / 100 GB disk
- Network: VMware NAT mode
Kubernetes network plan:
- Kubernetes version: v1.20.6
- Pod CIDR: 10.244.0.0/16
- Service CIDR: 10.10.0.0/16
1.2 Node Initialization
1) Configure a static IP address
# Give each VM or physical machine a static IP so the address survives a reboot. The example below sets the static IP on master1
~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.40.180 # adjust to match the plan above
NETMASK=255.255.255.0
GATEWAY=192.168.40.2
DNS1=223.5.5.5
# Restart the network
~]# systemctl restart network
# Test network connectivity
~]# ping baidu.com
PING baidu.com (39.156.69.79) 56(84) bytes of data.
64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=1 ttl=128 time=63.2 ms
64 bytes from 39.156.69.79 (39.156.69.79): icmp_seq=2 ttl=128 time=47.3 ms
2) Set the hostname
~]# hostnamectl set-hostname <hostname> && bash
3) Configure the hosts file
# On all machines
cat >> /etc/hosts << EOF
192.168.40.180 k8s-master1
192.168.40.181 k8s-node1
192.168.40.182 k8s-node2
EOF
# Test
~]# ping k8s-master1
PING k8s-master1 (192.168.40.180) 56(84) bytes of data.
64 bytes from k8s-master1 (192.168.40.180): icmp_seq=1 ttl=64 time=0.015 ms
64 bytes from k8s-master1 (192.168.40.180): icmp_seq=2 ttl=64 time=0.047 ms
4) Set up passwordless SSH between the hosts
# Generate an SSH key pair; press Enter at every prompt and leave the passphrase empty
ssh-keygen -t rsa
# Install the local public key into the matching account on each remote host
ssh-copy-id -i .ssh/id_rsa.pub k8s-master1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node1
ssh-copy-id -i .ssh/id_rsa.pub k8s-node2
5) Stop and disable firewalld
systemctl stop firewalld && systemctl disable firewalld
6) Disable SELinux
# Disable temporarily
setenforce 0
# Disable permanently
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Check
getenforce
7) Disable the swap partition
# Disable temporarily
swapoff -a
# Disable permanently: comment out the swap entry in /etc/fstab
sed -ri 's/.*swap.*/#&/' /etc/fstab
# Note: on a cloned VM, also remove the UUID line
8) Tune kernel parameters
# 1. Load the br_netfilter module
modprobe br_netfilter
# 2. Verify the module loaded
lsmod |grep br_netfilter
# 3. Set the kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# 4. Apply the new kernel parameters
sysctl -p /etc/sysctl.d/k8s.conf
9) Configure the Aliyun yum repo
# Back up the existing repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# Download the new CentOS-Base.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Rebuild the cache
yum clean all && yum makecache
10) Configure time synchronization
# Install the ntpdate command
yum install ntpdate -y
# Sync against a public NTP pool
ntpdate cn.pool.ntp.org
# Turn the sync into a cron job (run once an hour)
crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
# Restart the crond service
service crond restart
11) Install iptables
# Install iptables
yum install iptables-services -y
# Stop and disable iptables
service iptables stop && systemctl disable iptables
# Flush the firewall rules
iptables -F
12) Enable IPVS
Without IPVS, kube-proxy falls back to iptables for packet forwarding, which is less efficient, so upstream recommends enabling IPVS.
# Create the ipvs.modules file
~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done
# Run the script
~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
13) Install base packages
~]# yum install -y yum-utils device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel wget vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate telnet rsync
14) Install docker-ce
~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
~]# yum install docker-ce docker-ce-cli containerd.io -y
~]# systemctl start docker && systemctl enable docker.service && systemctl status docker
15) Configure Docker registry mirrors
# Note: switch the Docker cgroup driver to systemd (the default is cgroupfs); kubelet defaults to systemd and the two must match
~]# tee /etc/docker/daemon.json << 'EOF'
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com","https://rncxm540.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
~]# systemctl daemon-reload && systemctl restart docker && systemctl status docker
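Before moving on, it is worth confirming that Docker actually picked up the systemd cgroup driver; `docker info` accepts a Go-template `--format` flag for exactly this kind of check:

```shell
# Should print "systemd"; if it prints "cgroupfs", recheck /etc/docker/daemon.json
docker info --format '{{.CgroupDriver}}'
```

If the driver does not match what kubelet expects, the kubelet will fail to start later during `kubeadm init`, so catching it here saves debugging time.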
2. Deploying the Cluster with kubeadm
2.1 Configure the Kubernetes yum repo
[root@k8s-master1 ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
# Copy the Kubernetes repo from k8s-master1 to k8s-node1 and k8s-node2
[root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node1:/etc/yum.repos.d/
[root@k8s-master1 ~]# scp /etc/yum.repos.d/kubernetes.repo k8s-node2:/etc/yum.repos.d/
2.2 Install the packages needed for initialization
[root@k8s-master1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-master1 ~]# systemctl enable kubelet && systemctl start kubelet
[root@k8s-master1 ~]# systemctl status kubelet
[root@k8s-node1 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-node1 ~]# systemctl enable kubelet && systemctl start kubelet
[root@k8s-node1 ~]# systemctl status kubelet
[root@k8s-node2 ~]# yum install -y kubelet-1.20.6 kubeadm-1.20.6 kubectl-1.20.6
[root@k8s-node2 ~]# systemctl enable kubelet && systemctl start kubelet
[root@k8s-node2 ~]# systemctl status kubelet
2.3 Initialize the cluster with kubeadm
1) Run kubeadm init
[root@k8s-master1 ~]# kubeadm init --kubernetes-version=1.20.6 --apiserver-advertise-address=192.168.40.180 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=SystemVerification
[init] Using Kubernetes version: v1.20.6
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 92.005918 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: jybm37.w3g3mx8qc73hypm3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.40.180:6443 --token jybm37.w3g3mx8qc73hypm3 \
--discovery-token-ca-cert-hash sha256:c8e2661a2099c73475f0dcfb0679de5746f53a93d230b25e45c6ea3ce3f0d7c1
2) Set up the kubectl config file
[root@k8s-master1 ~]# mkdir -p $HOME/.kube
[root@k8s-master1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane,master 2m11s v1.20.6
# The node is still NotReady because no network plugin has been installed yet.
2.4 Scale the cluster: add the first worker node
# 1. On k8s-master1, print the join command:
[root@k8s-master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.40.180:6443 --token mwk781.dqzihv2yt97f4v6v --discovery-token-ca-cert-hash sha256:c8e2661a2099c73475f0dcfb0679de5746f53a93d230b25e45c6ea3ce3f0d7c1
# 2. Join k8s-node1 to the cluster:
[root@k8s-node1 ~]# kubeadm join 192.168.40.180:6443 --token mwk781.dqzihv2yt97f4v6v --discovery-token-ca-cert-hash sha256:c8e2661a2099c73475f0dcfb0679de5746f53a93d230b25e45c6ea3ce3f0d7c1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# 3. On k8s-master1, check the node status
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane,master 11m v1.20.6
k8s-node1 NotReady <none> 58s v1.20.6
2.5 Scale the cluster: add the second worker node
# 1. On k8s-master1, print the join command:
[root@k8s-master1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.40.180:6443 --token lz5xqh.b9u5o7o0ndn25gn1 --discovery-token-ca-cert-hash sha256:c8e2661a2099c73475f0dcfb0679de5746f53a93d230b25e45c6ea3ce3f0d7c1
# 2. Join k8s-node2 to the cluster:
[root@k8s-node2 ~]# kubeadm join 192.168.40.180:6443 --token lz5xqh.b9u5o7o0ndn25gn1 --discovery-token-ca-cert-hash sha256:c8e2661a2099c73475f0dcfb0679de5746f53a93d230b25e45c6ea3ce3f0d7c1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
# 3. On k8s-master1, check the node status
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane,master 13m v1.20.6
k8s-node1 NotReady <none> 3m6s v1.20.6
k8s-node2 NotReady <none> 22s v1.20.6
# 4. Label the worker nodes
[root@k8s-master1 ~]# kubectl label node k8s-node1 node-role.kubernetes.io/worker=worker
node/k8s-node1 labeled
[root@k8s-master1 ~]# kubectl label node k8s-node2 node-role.kubernetes.io/worker=worker
node/k8s-node2 labeled
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady control-plane,master 14m v1.20.6
k8s-node1 NotReady worker 3m48s v1.20.6
k8s-node2 NotReady worker 64s v1.20.6
2.6 Deploy Calico
Manifest: https://docs.projectcalico.org/manifests/calico.yaml
# Upload calico.yaml to k8s-master1 and install the Calico network plugin from it.
[root@k8s-master1 ~]# kubectl apply -f calico.yaml
[root@k8s-master1 ~]# kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6949477b58-9t9k8 1/1 Running 0 34s 10.244.159.129 k8s-master1 <none> <none>
calico-node-66b47 1/1 Running 0 35s 192.168.40.180 k8s-master1 <none> <none>
calico-node-6svrr 1/1 Running 0 35s 192.168.40.182 k8s-node2 <none> <none>
calico-node-zgnkl 1/1 Running 0 35s 192.168.40.181 k8s-node1 <none> <none>
coredns-7f89b7bc75-4jvmv 1/1 Running 0 28m 10.244.36.65 k8s-node1 <none> <none>
coredns-7f89b7bc75-zr5mf 1/1 Running 0 28m 10.244.169.129 k8s-node2 <none> <none>
etcd-k8s-master1 1/1 Running 0 28m 192.168.40.180 k8s-master1 <none> <none>
kube-apiserver-k8s-master1 1/1 Running 0 28m 192.168.40.180 k8s-master1 <none> <none>
kube-controller-manager-k8s-master1 1/1 Running 0 28m 192.168.40.180 k8s-master1 <none> <none>
kube-proxy-8fzc4 1/1 Running 0 15m 192.168.40.182 k8s-node2 <none> <none>
kube-proxy-n2v4j 1/1 Running 0 28m 192.168.40.180 k8s-master1 <none> <none>
kube-proxy-r9ccp 1/1 Running 0 17m 192.168.40.181 k8s-node1 <none> <none>
kube-scheduler-k8s-master1 1/1 Running 0 28m 192.168.40.180 k8s-master1 <none> <none>
[root@k8s-master1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane,master 28m v1.20.6
k8s-node1 Ready worker 18m v1.20.6
k8s-node2 Ready worker 15m v1.20.6
# Test network connectivity
[root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # ping baidu.com
PING baidu.com (39.156.69.79): 56 data bytes
64 bytes from 39.156.69.79: seq=0 ttl=127 time=43.188 ms
64 bytes from 39.156.69.79: seq=1 ttl=127 time=38.878 ms
2.7 Deploy Tomcat as a test
[root@k8s-master1 work]# cat tomcat.yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: default
  labels:
    app: myapp
    env: dev
spec:
  containers:
  - name: tomcat-pod-java
    ports:
    - containerPort: 8080
    image: tomcat:8.5-jre8-alpine
    imagePullPolicy: IfNotPresent
  - name: busybox
    image: busybox:latest
    command:
    - "/bin/sh"
    - "-c"
    - "sleep 3600"
[root@k8s-master1 work]# cat tomcat-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30080
  selector:
    app: myapp
    env: dev
[root@k8s-master1 work]# kubectl apply -f tomcat.yaml
pod/demo-pod created
[root@k8s-master1 work]# kubectl apply -f tomcat-service.yaml
service/tomcat created
[root@k8s-master1 work]# kubectl get pods
NAME READY STATUS RESTARTS AGE
demo-pod 2/2 Running 0 102s
[root@k8s-master1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 35m
tomcat NodePort 10.106.85.230 <none> 8080:30080/TCP 21s
Test access from a browser:
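If no browser is handy, the NodePort mapping can also be checked with curl from any machine that can reach the nodes (192.168.40.181 is used here as an example; any node IP works):

```shell
# Tomcat should answer with an HTTP status line on the NodePort
curl -sI http://192.168.40.181:30080 | head -n 1
```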
2.8 Test the CoreDNS service
# Use busybox 1.28 specifically, not the latest tag; with the latest busybox image, nslookup fails to resolve the DNS name and IP
[root@k8s-master1 ~]# kubectl run busybox --image busybox:1.28 --restart=Never --rm -it busybox -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes.default.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default.svc.cluster.local
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
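The same lookup works for any Service; for example, the tomcat Service created in section 2.7 (assuming it is still running in the default namespace):

```shell
# Resolve a user-created Service through CoreDNS
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup tomcat.default.svc.cluster.local
```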
3. Deploying the Dashboard
3.1 Install the dashboard
1) Apply the YAML and check the pods
[root@k8s-master1 ~]# kubectl apply -f kubernetes-dashboard.yaml
[root@k8s-master1 ~]# kubectl get pods -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-7445d59dfd-rks7c 1/1 Running 0 115s
kubernetes-dashboard-54f5b6dc4b-mnnd2 1/1 Running 0 115s
# Check the dashboard front-end service
[root@k8s-master1 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.111.106.98 <none> 8000/TCP 3m2s
kubernetes-dashboard ClusterIP 10.98.164.1 <none> 443/TCP 3m2s
2) Change the service type to NodePort
# Change type: ClusterIP to type: NodePort, then save and quit
[root@k8s-master1 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
[root@k8s-master1 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.111.106.98 <none> 8000/TCP 6m1s
kubernetes-dashboard NodePort 10.98.164.1 <none> 443:30379/TCP 6m1s
3) Access from a browser
The service type above is now NodePort, so the dashboard is reachable at port 30379 on any worker node IP. Open the following address in a browser (Firefox works well):
https://192.168.40.180:30379
3.2 Access the dashboard with a token
# 1. Create an admin binding: the token can then view every namespace and manage all resource objects
[root@k8s-master1 ~]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
# 2. List the secrets in the kubernetes-dashboard namespace
[root@k8s-master1 ~]# kubectl get secret -n kubernetes-dashboard
NAME TYPE DATA AGE
default-token-fppc9 kubernetes.io/service-account-token 3 19m
kubernetes-dashboard-certs Opaque 0 19m
kubernetes-dashboard-csrf Opaque 1 19m
kubernetes-dashboard-key-holder Opaque 2 19m
kubernetes-dashboard-token-bzx6g kubernetes.io/service-account-token 3 19m
# 3. Find the token-bearing secret, kubernetes-dashboard-token-bzx6g
[root@k8s-master1 ~]# kubectl describe secret kubernetes-dashboard-token-bzx6g -n kubernetes-dashboard
...
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImRTYUlhaUZXeFBzeHpjcmNXS1p6WENybDRsVXkyVGN3ZUJWRjZnNWVNYjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1ieng2ZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI3MjFkYzkxLWI0M2YtNDc5YS1hMjJmLTZlYjhjNTE0ZTllNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.ZndeWZWYY7c-vFir6uVaTxR-EZ5MIZByGgLIoBAtxYQebhYVtCxNIPhnrNBLcmcmdbfmuqWEU9M5T-zpSEX5aAPKhuJNo-zpKW9N-COhuLXPDjcesct5XmBFeL6Duc322TRm-4aQto6ZUJ4dkT-KRwhS1EzGZ5VZoz_m4pi-f_dFWNLEnrd25qPswAdIHVkAPe28WtJkLIjfoGmTd0hGfu9_uz0rOzQn5MoV-hRPtvVd4ziIeC9ETwKKVp14RlakV3r2Y0ZDxOqlNhI4PAlwbBOoqbpa3WHLTuuh0Fm0jAdZdKVGhS1T6N1kcC0_BTWsq0caK21FVyyjGka60YvKIg
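The token secret name carries a random suffix, so it differs on every cluster; a sketch that looks the secret up and decodes the token in one pass:

```shell
# Find the dashboard ServiceAccount token secret (name suffix is random)
SECRET=$(kubectl -n kubernetes-dashboard get secret \
  | awk '/^kubernetes-dashboard-token/ {print $1}')
# Decode and print the token
kubectl -n kubernetes-dashboard get secret "$SECRET" \
  -o jsonpath='{.data.token}' | base64 -d; echo
```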
# 4. Log in to the dashboard with the token
3.3 Access the dashboard with a kubeconfig file
# 1. Create the cluster entry
[root@k8s-master1 ~]# cd /etc/kubernetes/pki
[root@k8s-master1 pki]# kubectl config set-cluster kubernetes --certificate-authority=./ca.crt --server="https://192.168.40.180:6443" --embed-certs=true --kubeconfig=/root/dashboard-admin.conf
# 2. Create the credentials, using the token from the kubernetes-dashboard-token-bzx6g secret above
[root@k8s-master1 pki]# DEF_NS_ADMIN_TOKEN=$(kubectl get secret kubernetes-dashboard-token-bzx6g -n kubernetes-dashboard -o jsonpath={.data.token}|base64 -d)
[root@k8s-master1 pki]# echo $DEF_NS_ADMIN_TOKEN
eyJhbGciOiJSUzI1NiIsImtpZCI6ImRTYUlhaUZXeFBzeHpjcmNXS1p6WENybDRsVXkyVGN3ZUJWRjZnNWVNYjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1ieng2ZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI3MjFkYzkxLWI0M2YtNDc5YS1hMjJmLTZlYjhjNTE0ZTllNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.ZndeWZWYY7c-vFir6uVaTxR-EZ5MIZByGgLIoBAtxYQebhYVtCxNIPhnrNBLcmcmdbfmuqWEU9M5T-zpSEX5aAPKhuJNo-zpKW9N-COhuLXPDjcesct5XmBFeL6Duc322TRm-4aQto6ZUJ4dkT-KRwhS1EzGZ5VZoz_m4pi-f_dFWNLEnrd25qPswAdIHVkAPe28WtJkLIjfoGmTd0hGfu9_uz0rOzQn5MoV-hRPtvVd4ziIeC9ETwKKVp14RlakV3r2Y0ZDxOqlNhI4PAlwbBOoqbpa3WHLTuuh0Fm0jAdZdKVGhS1T6N1kcC0_BTWsq0caK21FVyyjGka60YvKIg
[root@k8s-master1 pki]# kubectl config set-credentials dashboard-admin --token=$DEF_NS_ADMIN_TOKEN --kubeconfig=/root/dashboard-admin.conf
# 3. Create the context
[root@k8s-master1 pki]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/dashboard-admin.conf
# 4. Switch current-context to dashboard-admin@kubernetes
[root@k8s-master1 pki]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/dashboard-admin.conf
# 5. Inspect the generated dashboard-admin.conf
[root@k8s-master1 pki]# cat /root/dashboard-admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EY3dPREV5TWpJeE5Gb1hEVE14TURjd05qRXlNakl4TkZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTmU3CnRYdTBaRk1RUnZRcGtUVExxN1dEdnFBeDIwblkxSUR1WHlGWmR6VElsREtYWGFpTjdUNFp4dnVKdWRETFJjdk8KcFFHTjlQR0d5bTM2b05GRWo2RDVhek9xWGJJTHp4N2IrODRQV1VnTFhSd1IvYzRReG8vYzNYNmZLWFJucnVaeApVN1BJMDViVzlzeUVrVk1kM3ZpT25iQnVYTDBpNDViRGlzVHlZNUdRZGZTK3c3eGVxTWVoclV6N04vMUtlV2JLCkF0ZnZkUXJWUTlDT3hFVGcwRWRjbUt5R0RDc0JrVUhLY3BQZ1RidXVuUGZ2bm1yWWRsNWtlZmFHMWkzR1ZPY1oKdWhVQVpCck4xaWNocUsrV0Q2a3NTLzQwLzg3Nlg3WlQyeFNPbVJxTUQyVGlHQzJvWlRhclRQOE9VUVFqc0ZkdwplNUNlaEFXKzZHS3BCOTd6KzJFQ0F3RUFBYU5DTUVBd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZCN2JxeGV1WFdYUVg5UDhJenRCbS9sWVRHS3hNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFDUGN1dUdLa1QwemdRa0x4S2JrTU9pbGQ2akYvNENyYklEaG5SeDk4dEkya1EvNzVXbQpaNURoeldnKytrcUdoQUVSZXFoMVd4MXNHV0RTaG41elJScmNNT1BOOVBmdVpJcmVUUUllL0tuZDdTMXZyNUxGCk80NlE5QXEwVlZYSU5kMEdZcmJPNURpaTdBc2Ewc0FwSk16RzZoRHZPYlFCRGh3RURxa3VkM2tlZ0xuNUZXTUwKdUZoU2Voa1F4VWxUOVJoRkhzemZxVnBsTGVpN05uT1dxR0xIOHhTSFdacTV3aFI1a1laYUpJblM0L1gwZVdnKwpGNXM0WWpVWWZHOHRNQTZLNTR6eFVJSnM0Nnd2ek9yOEVwUWlKLzh1SnhnM052aFpBZG1oTVMvRTNLTmF5TCtoClU0a2NNcUlxWUYyYzBqY1BJK0wxeHU4WkVHMCtXaWY0N2tYSAotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.40.180:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: dashboard-admin
  name: dashboard-admin@kubernetes
current-context: dashboard-admin@kubernetes
kind: Config
preferences: {}
users:
- name: dashboard-admin
  user:
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImRTYUlhaUZXeFBzeHpjcmNXS1p6WENybDRsVXkyVGN3ZUJWRjZnNWVNYjgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1ieng2ZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImI3MjFkYzkxLWI0M2YtNDc5YS1hMjJmLTZlYjhjNTE0ZTllNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.ZndeWZWYY7c-vFir6uVaTxR-EZ5MIZByGgLIoBAtxYQebhYVtCxNIPhnrNBLcmcmdbfmuqWEU9M5T-zpSEX5aAPKhuJNo-zpKW9N-COhuLXPDjcesct5XmBFeL6Duc322TRm-4aQto6ZUJ4dkT-KRwhS1EzGZ5VZoz_m4pi-f_dFWNLEnrd25qPswAdIHVkAPe28WtJkLIjfoGmTd0hGfu9_uz0rOzQn5MoV-hRPtvVd4ziIeC9ETwKKVp14RlakV3r2Y0ZDxOqlNhI4PAlwbBOoqbpa3WHLTuuh0Fm0jAdZdKVGhS1T6N1kcC0_BTWsq0caK21FVyyjGka60YvKIg
# 6. Copy dashboard-admin.conf to your workstation; on the dashboard login page choose kubeconfig authentication, import dashboard-admin.conf, and log in
3.4 Create a workload from the dashboard
1) Click the "+" button in the upper-right corner
2) Fill in the configuration on the page that appears
3) Select Services in the dashboard's left-hand menu
4) The nginx service just created is mapped to host port 30094; open 192.168.40.180:30094 in a browser
4. Deploying metrics-server
metrics-server is a cluster-wide resource usage aggregator. It only exposes metrics (it does not store them) and focuses on implementing the resource metrics API: CPU, file descriptors, memory, request latency, and so on. The collected data is consumed inside the cluster by kubectl, the HPA, the scheduler, and other components.
4.1 Install metrics-server
1) Modify the apiserver configuration in /etc/kubernetes/manifests
Note: this flag is needed from Kubernetes 1.17 on (1.16 and earlier can skip it). It enables API Aggregation, which lets the Kubernetes API be extended without modifying core Kubernetes code.
[root@k8s-master1~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --enable-aggregator-routing=true # line to add
2) Reapply the apiserver configuration
[root@k8s-master1 ~]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
[root@k8s-master1 ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6949477b58-9t9k8 1/1 Running 0 91m
calico-node-66b47 1/1 Running 0 91m
calico-node-6svrr 1/1 Running 0 91m
calico-node-zgnkl 1/1 Running 0 91m
coredns-7f89b7bc75-4jvmv 1/1 Running 0 119m
coredns-7f89b7bc75-zr5mf 1/1 Running 0 119m
etcd-k8s-master1 1/1 Running 0 119m
kube-apiserver 0/1 CrashLoopBackOff 1 24s # delete this pod
kube-apiserver-k8s-master1 1/1 Running 0 24s
kube-controller-manager-k8s-master1 1/1 Running 1 119m
kube-proxy-8fzc4 1/1 Running 0 106m
kube-proxy-n2v4j 1/1 Running 0 119m
kube-proxy-r9ccp 1/1 Running 0 108m
kube-scheduler-k8s-master1 1/1 Running 1 119m
# Delete the pod stuck in CrashLoopBackOff
[root@k8s-master1 ~]# kubectl delete pods kube-apiserver -n kube-system
3) Deploy metrics-server
[root@k8s-master1 ~]# cat metrics.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - deployments
  verbs:
  - get
  - list
  - update
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  NannyConfiguration: |-
    apiVersion: nannyconfig/v1alpha1
    kind: NannyConfiguration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    version: v0.3.6
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
      version: v0.3.6
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
        version: v0.3.6
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server-amd64:v0.3.6
        imagePullPolicy: IfNotPresent
        command:
        - /metrics-server
        - --metric-resolution=30s
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-insecure-tls
        ports:
        - containerPort: 443
          name: https
          protocol: TCP
      - name: metrics-server-nanny
        image: k8s.gcr.io/addon-resizer:1.8.4
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 100m
            memory: 300Mi
          requests:
            cpu: 5m
            memory: 50Mi
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: metrics-server-config-volume
          mountPath: /etc/config
        command:
        - /pod_nanny
        - --config-dir=/etc/config
        - --cpu=300m
        - --extra-cpu=20m
        - --memory=200Mi
        - --extra-memory=10Mi
        - --threshold=5
        - --deployment=metrics-server
        - --container=metrics-server
        - --poll-period=300000
        - --estimator=exponential
        - --minClusterSize=2
      volumes:
      - name: metrics-server-config-volume
        configMap:
          name: metrics-server-config
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "Metrics-server"
spec:
  selector:
    k8s-app: metrics-server
  ports:
  - port: 443
    protocol: TCP
    targetPort: https
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
[root@k8s-master1 ~]# kubectl apply -f metrics.yaml
[root@k8s-master1 ~]# kubectl get pods -n kube-system | grep metrics
metrics-server-6595f875d6-dx8w6 2/2 Running 0 8s
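To confirm the aggregated API is really serving, check the APIService registration and query the metrics endpoint directly:

```shell
# The AVAILABLE column should show True once metrics-server is up
kubectl get apiservice v1beta1.metrics.k8s.io
# Raw query against the resource metrics API (returns JSON)
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```

If AVAILABLE stays False, `kubectl describe apiservice v1beta1.metrics.k8s.io` usually shows why (certificate or connectivity problems between the apiserver and the metrics-server pod).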
4.2 The kubectl top command
[root@k8s-master1 ~]# kubectl top pods -n kube-system
NAME CPU(cores) MEMORY(bytes)
calico-kube-controllers-6949477b58-9t9k8 4m 26Mi
calico-node-66b47 74m 82Mi
calico-node-6svrr 77m 98Mi
calico-node-zgnkl 83m 97Mi
coredns-7f89b7bc75-4jvmv 6m 50Mi
coredns-7f89b7bc75-zr5mf 7m 46Mi
etcd-k8s-master1 35m 54Mi
kube-apiserver-k8s-master1 118m 390Mi
kube-controller-manager-k8s-master1 37m 50Mi
kube-proxy-8fzc4 1m 14Mi
kube-proxy-n2v4j 1m 23Mi
kube-proxy-r9ccp 1m 15Mi
kube-scheduler-k8s-master1 7m 20Mi
metrics-server-6595f875d6-dx8w6 2m 16Mi
[root@k8s-master1 ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master1 417m 20% 1282Mi 68%
k8s-node1 233m 5% 1612Mi 42%
k8s-node2 262m 6% 1575Mi 41%
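Because `kubectl top` prints plain text, standard tools can post-process it. As a sketch, the snippet below sorts a few rows taken from the sample output above by CPU; on a live cluster you would pipe `kubectl top pods -n kube-system --no-headers` instead (recent kubectl releases may also support `--sort-by=cpu` directly):

```shell
# Find the heaviest CPU consumers by sorting column 2 numerically, descending.
# On a live cluster, feed real data instead:
#   kubectl top pods -n kube-system --no-headers | sort -k2 -nr | head -3
sort -k2 -nr <<'EOF' | head -3
calico-kube-controllers-6949477b58-9t9k8 4m 26Mi
kube-apiserver-k8s-master1 118m 390Mi
kube-scheduler-k8s-master1 7m 20Mi
etcd-k8s-master1 35m 54Mi
EOF
# prints kube-apiserver-k8s-master1 (118m) first
```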
五、Other issues
5.1、Rebinding the scheduler and controller-manager ports to the host IP
1) The problem
[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
Since v1.19, kubeadm binds the scheduler's and controller-manager's ports 10251 and 10252 to 127.0.0.1 and disables the insecure ports with --port=0, which is why `kubectl get cs` reports them as Unhealthy. It also means Prometheus cannot scrape these components, so we rebind the ports to the host IP.
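For reference, the relevant lines in the kubeadm-generated manifest look roughly like this before the change (a sketch based on a v1.20 kubeadm setup; your file may differ slightly):

```yaml
# /etc/kubernetes/manifests/kube-scheduler.yaml (excerpt, kubeadm defaults)
spec:
  containers:
  - command:
    - kube-scheduler
    - --bind-address=127.0.0.1   # health endpoint served on loopback only
    - --port=0                   # insecure port 10251 disabled entirely
```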
2) Modify the kube-scheduler configuration
[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-scheduler
tier: control-plane
name: kube-scheduler
namespace: kube-system
spec:
containers:
- command:
- kube-scheduler
- --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
- --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
- --bind-address=192.168.40.180
- --kubeconfig=/etc/kubernetes/scheduler.conf
- --leader-elect=true
image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.6
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 192.168.40.180
path: /healthz
port: 10259
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-scheduler
resources:
requests:
cpu: 100m
startupProbe:
failureThreshold: 24
httpGet:
host: 192.168.40.180
path: /healthz
port: 10259
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/kubernetes/scheduler.conf
name: kubeconfig
readOnly: true
hostNetwork: true
priorityClassName: system-node-critical
volumes:
- hostPath:
path: /etc/kubernetes/scheduler.conf
type: FileOrCreate
name: kubeconfig
status: {}
Make the following changes:
1) Change --bind-address=127.0.0.1 to --bind-address=192.168.40.180
2) Under the httpGet: fields, change host from 127.0.0.1 to 192.168.40.180
3) Delete the --port=0 line
# Note: 192.168.40.180 is the IP of the control node k8s-master1
3) Modify the kube-controller-manager configuration
[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
apiVersion: v1
kind: Pod
metadata:
creationTimestamp: null
labels:
component: kube-controller-manager
tier: control-plane
name: kube-controller-manager
namespace: kube-system
spec:
containers:
- command:
- kube-controller-manager
- --allocate-node-cidrs=true
- --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
- --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
- --bind-address=192.168.40.180
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --cluster-cidr=10.244.0.0/16
- --cluster-name=kubernetes
- --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
- --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
- --controllers=*,bootstrapsigner,tokencleaner
- --kubeconfig=/etc/kubernetes/controller-manager.conf
- --leader-elect=true
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --root-ca-file=/etc/kubernetes/pki/ca.crt
- --service-account-private-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.96.0.0/12
- --use-service-account-credentials=true
image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.6
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 192.168.40.180
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-controller-manager
resources:
requests:
cpu: 200m
startupProbe:
failureThreshold: 24
httpGet:
host: 192.168.40.180
path: /healthz
port: 10257
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
name: flexvolume-dir
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/kubernetes/controller-manager.conf
name: kubeconfig
readOnly: true
hostNetwork: true
priorityClassName: system-node-critical
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
type: DirectoryOrCreate
name: flexvolume-dir
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/kubernetes/controller-manager.conf
type: FileOrCreate
name: kubeconfig
status: {}
Make the following changes:
1) Change --bind-address=127.0.0.1 to --bind-address=192.168.40.180
2) Under the httpGet: fields, change host from 127.0.0.1 to 192.168.40.180
3) Delete the --port=0 line
# Note: 192.168.40.180 is the IP of the control node k8s-master1
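The three edits above can also be scripted instead of made by hand. Below is a minimal sketch assuming the default kubeadm manifest paths; `fix_manifest` is our own hypothetical helper, not part of any tool. Because these are static pods, the kubelet recreates them once the manifest files change.

```shell
# Sketch: apply the three edits described above to a static-pod manifest.
# fix_manifest is a hypothetical helper; sed -i.bak keeps a .bak backup.
fix_manifest() {
    manifest="$1"   # e.g. /etc/kubernetes/manifests/kube-scheduler.yaml
    ip="$2"         # control-plane node IP, e.g. 192.168.40.180
    sed -i.bak \
        -e "s/--bind-address=127\.0\.0\.1/--bind-address=${ip}/" \
        -e "s/host: 127\.0\.0\.1/host: ${ip}/" \
        -e '/--port=0/d' \
        "$manifest"
}

# Usage (on the control-plane node):
# fix_manifest /etc/kubernetes/manifests/kube-scheduler.yaml 192.168.40.180
# fix_manifest /etc/kubernetes/manifests/kube-controller-manager.yaml 192.168.40.180
```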
4) Restart kubelet
[root@k8s-master1 ~]# systemctl restart kubelet
[root@k8s-master1 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@k8s-master1 ~]# ss -antulp | grep :10251
tcp LISTEN 0 128 :::10251 :::* users:(("kube-scheduler",pid=122787,fd=7))
[root@k8s-master1 ~]# ss -antulp | grep :10252
tcp LISTEN 0 128 :::10252 :::* users:(("kube-controller",pid=125280,fd=7))