Kubernetes
Kubernetes base environment preparation
----------------------
Kubernetes workflow
Master
kube-apiserver: entry point for the cluster; handles authentication and authorization
kube-scheduler: container scheduler; places workloads on different nodes
kube-controller-manager: control center; maintains the desired number of Pod replicas
Node
kubelet: node agent
kube-proxy: proxy service; implements traffic forwarding
Core components:
apiserver: the single entry point for resource operations; provides authentication, authorization, access control, and API registration and discovery
controller manager: maintains cluster state, e.g. failure detection, auto-scaling, rolling updates
scheduler: schedules Pods onto the appropriate machines according to the configured scheduling policy
kubelet: maintains the container lifecycle and manages volumes (CVI) and networking (CNI)
container runtime: containerd or docker; manages images and actually runs Pods and containers (CRI)
kube-proxy: provides in-cluster service discovery and load balancing for Services
etcd: stores the state of the entire cluster
# Optional components:
kube-dns: provides DNS for the whole cluster
Ingress controller: provides an external entry point for services
Heapster: resource monitoring
Dashboard: web GUI
Federation: clusters spanning availability zones
Fluentd-elasticsearch: cluster log collection, storage and querying
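On a running kubeadm cluster, the components listed above can be inspected directly; a minimal sketch, assuming kubectl and a working kubeconfig on a master:

```shell
# The control-plane components run as static pods in the kube-system namespace.
kubectl get pods -n kube-system -o wide
# Per-component health (the componentstatuses API is deprecated but still informative).
kubectl get componentstatuses
# The apiserver, scheduler, controller-manager and etcd manifests live on the master under:
ls /etc/kubernetes/manifests/
```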
2.1: Installation methods
2.1.1: Deployment tools
Install with batch-deployment tools (ansible, saltstack), manual binaries, kubeadm, or apt-get/yum. The components run as daemons on the host and are started with service scripts, much like Nginx.
2.1.2: kubeadm
https://v1-18.docs.kubernetes.io/zh/docs/setup/independent/create-cluster-kubeadm/
# Check the kubeadm project's maturity and maintenance cycle.
Use kubeadm, the official deployment tool, for automated installation: install docker and the other components on the master and node hosts, then initialize; the control-plane services and the node services all run as pods.
2.1.3: Installation notes
Disable swap
Disable selinux
Disable iptables
Tune kernel parameters and resource limits:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1 # packets forwarded by a layer-2 bridge are matched by the host's iptables FORWARD rules
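The prerequisites above can be applied in one pass; a sketch, assuming an Ubuntu host and root privileges (the bridge settings need the br_netfilter module loaded):

```shell
# Disable swap now and across reboots (kubelet refuses to start with swap on).
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Make bridged traffic visible to iptables.
modprobe br_netfilter
cat > /etc/sysctl.d/99-kubernetes.conf <<'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```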
master :
kube-scheduler
kube-controller-manager
kube-apiserver
HA masters
172.31.7.201 master1
172.31.7.202 master2
172.31.7.203 master3
Single-node master
Two haproxy nodes
172.31.7.204 haproxy1
172.31.7.205 haproxy2
One harbor node
172.31.7.206 harbor
Three worker nodes
172.31.7.207 node1
172.31.7.208 node2
172.31.7.209 node3
Kubernetes installation - command overview
172.31.7.204 haproxy1 与 172.31.7.205 haproxy2
===================================================
Option 1: install with apt
# apt install keepalived haproxy -y
Option 2: install from the offline script packages
haproxy install
# cd /usr/local/src
Upload D:\和彩云同步文件夹\scripte file\SERVER\Haproxy一键安装脚本\haproxy-2.0.15-onekeyinstall.tar.gz
# tar xvf haproxy-2.0.15-onekeyinstall.tar.gz
# bash haproxy-install.sh
keepalived install
D:\和彩云同步文件夹\scripte file\SERVER\Keeplived\Ubuntu1804-不联网安装\keepalived-2.0.20.tar.gz
172.31.7.204 keepalived1
---------------------
Install keepalived and haproxy
# apt install keepalived haproxy -y
Find the sample config file
# find / -name 'keepalived*'
# cp /usr/share/doc/keepalived/samples/keepalived.conf.vrrp /etc/keepalived/keepalived.conf
Create the VIP
# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_2 {
    interface eth0
    smtp_alert
    virtual_router_id 50
    priority 50
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.7.188 dev eth0 label eth0:1 # set the VIP to 172.31.7.188
    }
}
Restart the service
# systemctl restart keepalived
# scp /etc/keepalived/keepalived.conf 172.31.7.205:/etc/keepalived/
172.31.7.205 keepalived2
------------------------
# apt install keepalived haproxy -y
# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_2 { # instance name and virtual_router_id must match keepalived1 for failover
    state MASTER
    interface eth0
    garp_master_delay 10
    smtp_alert
    virtual_router_id 50
    priority 99 # adjust the priority for this node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.31.7.188 dev eth0 label eth0:1
    }
}
Restart the service
# systemctl restart keepalived
Test the service
--------
keepalived failover
172.31.7.204
# systemctl stop keepalived
172.31.7.205
# ifconfig # the VIP 172.31.7.188 should now appear on eth0:1
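The failover check can be scripted from a third host; a sketch, assuming ssh access to both keepalived nodes and the VIP 172.31.7.188:

```shell
# Stop keepalived on the primary, then confirm the VIP moved to the backup.
ssh 172.31.7.204 'systemctl stop keepalived'
sleep 3
ssh 172.31.7.205 'ip addr show eth0' | grep -q 172.31.7.188 \
  && echo 'VIP is now on keepalived2'
# Restore the primary afterwards.
ssh 172.31.7.204 'systemctl start keepalived'
```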
172.31.7.204 haproxy1
----------------------
Configure ha1
# vim /etc/haproxy/haproxy.cfg
listen stats # add a status page
    mode http
    bind 0.0.0.0:9999
    stats enable
    log global
    stats uri /haproxy-status
    stats auth admin:1
listen k8s-apiserver-6443
    bind 172.31.7.188:6443
    mode tcp
    balance source
    server 172.31.7.201 172.31.7.201:6443 check inter 3s fall 3 rise 5
    # server 172.31.7.202 172.31.7.202:6443 check inter 3s fall 3 rise 5
    # server 172.31.7.203 172.31.7.203:6443 check inter 3s fall 3 rise 5
Restart
# systemctl restart haproxy
# systemctl enable haproxy keepalived
# scp /etc/haproxy/haproxy.cfg 172.31.7.205:/etc/haproxy/
# scp /etc/sysctl.conf 172.31.7.205:/etc/
Open the status page
http://172.31.7.188:9999/haproxy-status
user: admin
password: 1
172.31.7.205 haproxy2
----------------------
# vim /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind = 1 # let haproxy bind the VIP even when it is not local
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.ipv4.tcp_tw_reuse = 0
net.core.somaxconn = 32768
net.netfilter.nf_conntrack_max = 1000000
vm.swappiness = 0
vm.max_map_count = 655360
fs.file-max = 6553600
# sysctl -p
# systemctl enable haproxy keepalived
172.31.7.206 Harbor image registry installation
====================================
Install docker
D:\和彩云同步文件夹\scripte file\SERVER\Docker-通用安装\docker-不联网二进制部署\docker-19.09.15-binary-install.tar.gz
# tar xvf docker-19.09.15-binary-install.tar.gz && cd zhgedu
# bash docker-install.sh
Install Harbor
D:\和彩云同步文件夹\Service optimization file\Harbor\Harbor-高可用\172.31.7.14-Harbor-master宿主机\harbor-安装\harbor-offline-installer-v-2.2.1.tgz
# mkdir /apps && mv harbor-offline-installer-v-2.2.1.tgz /apps/ && cd /apps/
# tar xvf harbor-offline-installer-v-2.2.1.tgz && cd harbor
# vim harbor.yml
hostname: harbor.jackie.com
# ./install.sh --with-trivy
Add a local hosts entry
C:\Windows\System32\drivers\etc\hosts
172.31.7.206 harbor.jackie.com
Open:
harbor.jackie.com
user: admin
password: 1
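From a docker host, the new registry can be smoke-tested with a login, tag and push; a sketch, assuming the host trusts harbor.jackie.com (e.g. via insecure-registries or a trusted certificate) and a project named zhgedu already exists:

```shell
# Log in with the admin account configured above.
docker login harbor.jackie.com -u admin -p 1
# Tag any local image into the zhgedu project and push it.
docker pull alpine:latest
docker tag alpine:latest harbor.jackie.com/zhgedu/alpine:latest
docker push harbor.jackie.com/zhgedu/alpine:latest
```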
Kubernetes installation - HA environment
172.31.7.201 master1
=======================
Wipe the previous single-node environment
# km reset # km is this document's shell alias for kubeadm
Option 1: initialize directly
# kubeadm init --apiserver-advertise-address=172.31.7.201 --control-plane-endpoint=172.31.7.188 --apiserver-bind-port=6443 --kubernetes-version=v1.20.5 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=jackie.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
Option 2: initialize from a config file
# kubeadm config print init-defaults > kubeadm-init.yaml
# vim kubeadm-init.yaml
# kubeadm init --config kubeadm-init.yaml # initialize the master from a config file
The init output includes the join tokens:
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm join 172.31.7.188:6443 --token 9be810.vg59dzqr7bv5y5uf \
--discovery-token-ca-cert-hash sha256:4ba4c5292492b2153eeb7e4faaf6ffb6f2aa67d9311bdc48cb327f2eea77672e \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.31.7.188:6443 --token 9be810.vg59dzqr7bv5y5uf \
--discovery-token-ca-cert-hash sha256:4ba4c5292492b2153eeb7e4faaf6ffb6f2aa67d9311bdc48cb327f2eea77672e
Run the commands
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the cluster
# kt get node # kt is this document's shell alias for kubectl
# kt get pod -A
Deploy the flannel network add-on
# mkdir kube-flannel && cd kube-flannel
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kt apply -f kube-flannel.yml
Generate a certificate key on the current master for adding new control-plane nodes:
# kubeadm init phase upload-certs --upload-certs
Certificate key used when joining additional masters
[upload-certs] Using certificate key:
1ccc573830e47fb000db859de303f2ee5460b948ca26cb076172ef2c8c460c6f
Join a master
kubeadm join 172.31.7.188:6443 --token 9be810.vg59dzqr7bv5y5uf \
--discovery-token-ca-cert-hash sha256:4ba4c5292492b2153eeb7e4faaf6ffb6f2aa67d9311bdc48cb327f2eea77672e \
--control-plane --certificate-key 1ccc573830e47fb000db859de303f2ee5460b948ca26cb076172ef2c8c460c6f
172.31.7.202 master2 172.31.7.203 master3
===========================================
2 GB RAM, 4 CPU cores
Run on both nodes at the same time to join as HA control-plane members:
kubeadm join 172.31.7.188:6443 --token 9be810.vg59dzqr7bv5y5uf \
--discovery-token-ca-cert-hash sha256:4ba4c5292492b2153eeb7e4faaf6ffb6f2aa67d9311bdc48cb327f2eea77672e \
--control-plane --certificate-key 1ccc573830e47fb000db859de303f2ee5460b948ca26cb076172ef2c8c460c6f
If the image pull fails, load the image directly:
# dr load -i quay.io_coreos_flannel%3Av0.14.0-rc1.tar.gz # dr is this document's shell alias for docker
Check the result
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster.
List the issued certificate signing requests
# kt get csr
Show node details
# kt get node -o wide
172.31.7.204 haproxy1: enable load balancing across all masters
======================================
# vim /etc/haproxy/haproxy.cfg
listen k8s-apiserver-6443
    bind 172.31.7.188:6443
    mode tcp
    balance source
    server 172.31.7.201 172.31.7.201:6443 check inter 3s fall 3 rise 5 # uncomment all three servers
    server 172.31.7.202 172.31.7.202:6443 check inter 3s fall 3 rise 5
    server 172.31.7.203 172.31.7.203:6443 check inter 3s fall 3 rise 5
172.31.7.207 node1 172.31.7.208 node2 172.31.7.209 node3 # join as workers scheduled by the masters
=================================================================================
Run on all three nodes at the same time to join as workers:
kubeadm join 172.31.7.188:6443 --token 9be810.vg59dzqr7bv5y5uf \
--discovery-token-ca-cert-hash sha256:4ba4c5292492b2153eeb7e4faaf6ffb6f2aa67d9311bdc48cb327f2eea77672e
172.31.7.201 master1
=====================
Create test containers
# kt run net-test1 --image=alpine sleep 360000
# kt run net-test2 --image=alpine sleep 360000
Check the status of all pods
# kt get pod -A
Show more detail about the pods
# kt get pod -o wide
net-test1 1/1 Running 0 4m23s 10.100.5.2 node3 <none>
net-test2 1/1 Running 0 4m1s 10.100.4.2 node2 <none>
Enter a pod
# kt exec -it net-test1 sh
/ # ping 10.100.4.2 # can the two pods talk across hosts?
Install the dashboard add-on
# cd /root/kube-flannel
# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
# mv recommended.yaml dashboard-2.2.0.yaml
# dr pull kubernetesui/dashboard:v2.2.0
# dr pull kubernetesui/metrics-scraper:v1.0.6 # shows container resource usage
Push the images to harbor
# dr tag kubernetesui/dashboard:v2.2.0 harbor.jackie.com/zhgedu/dashboard:v2.2.0
# dr push harbor.jackie.com/zhgedu/dashboard:v2.2.0
# dr tag kubernetesui/metrics-scraper:v1.0.6 harbor.jackie.com/zhgedu/metrics-scraper:v1.0.6
# dr push harbor.jackie.com/zhgedu/metrics-scraper:v1.0.6
Edit dashboard-2.2.0.yaml
# vim dashboard-2.2.0.yaml
        - name: kubernetes-dashboard
          image: harbor.jackie.com/zhgedu/dashboard:v2.2.0
        - name: dashboard-metrics-scraper
          image: harbor.jackie.com/zhgedu/metrics-scraper:v1.0.6 # switch to the local registry images
Check the field docs, then expose the dashboard service on a node port
# kt explain service.spec
# vim dashboard-2.2.0.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # add the service type
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32002 # expose node port 32002
  selector:
    k8s-app: kubernetes-dashboard
Apply the manifest
# kt apply -f dashboard-2.2.0.yaml
Check the result
# kt get pod -A
kubernetes-dashboard dashboard-metrics-scraper-866b49f95-d2mvm 1/1 Running
kubernetes-dashboard kubernetes-dashboard-c6698b989-qj74t 1/1 Running
https://172.31.7.209:32002
Create a dashboard user, then generate its token
# vim admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: default # use the default namespace so the later secret lookups succeed
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: default
Create the user
# kt apply -f admin-user.yaml
Get the token
# kt get secret -A | grep admin
# kt describe secret admin-user-token-v565z -n default
Paste the token into the dashboard login page:
eyJhbGciOiJSUzI1NiIsImtpZCI6ImVydWc0aTVUT0I4bVRCWTE2NDRMb1dyZFdQZndkTmVIbmRneV9UWGNQeUEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJha3MtZGFzaGJvYXJkLWFkbWluLXRva2VuLXFtbHpiIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFrcy1kYXNoYm9hcmQtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI4NTE1MDA4NC0yMTcxLTRhNTAtYjI0NC1jZTJkNjdhMGZhZjAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWtzLWRhc2hib2FyZC1hZG1pbiJ9.X2rHkH61HaFFdVY4_hiq6zJYyN0NC3bR1jRUwcmgPQrz1FpigBzoC5jPygUdZlL4ZXa4puu_V3JscdZ8NOS--PeJeVlvhiUuLm9dgE_Di4z3s3UM72s99UC1Lo7tzpL3DrhA8IidzGnj3AP3Aw0wJ8GLLCR9KdQu32CS0u9xqh2ukT3I5xQ6TH0ThomeV1ChXO3Ke3XExxd6USzWHHDHfHbR-gz71UxpT2zn56zHYT5rhqtuWFsn-xXqXk6fBQ3D0H12XiwRphe5aLL6S3v-01Bw_S6PaflXTOEjrMkAW8ufb9GXI97ZJpakvtkg2J3JFyHaXeCq4949Cv6o8u8a_A
Log in to the dashboard
https://172.31.7.207:32002
Running nginx and tomcat services on Kubernetes
Running nginx
Test-run an nginx service
# pwd
# vim nginx/nginx.yaml
apiVersion: apps/v1 # standard Deployment manifest
kind: Deployment
metadata:
  namespace: default
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0 # if the pull fails, pre-pull with: dr pull nginx:1.18.0
        ports:
        - containerPort: 80
---
kind: Service # a NodePort Service in front of the Deployment
apiVersion: v1
metadata:
  labels:
    app: test-nginx-service-label
  name: test-nginx-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30004
  selector:
    app: nginx
Apply the manifest
# kt apply -f nginx/nginx.yaml
Check the nginx pod
# kt get pod
# kt get pod -o wide
Open:
http://172.31.7.207:30004/
http://172.31.7.208:30004/
http://172.31.7.209:30004/
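The NodePort exposed above can be smoke-tested from any machine that reaches the nodes; a sketch:

```shell
# Expect HTTP 200 from each node; kube-proxy serves the NodePort on every node.
for n in 172.31.7.207 172.31.7.208 172.31.7.209; do
  curl -s -o /dev/null -w "%{http_code} $n\n" "http://$n:30004/"
done
```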
Enter the container
Method 1
# kt get pod -A | grep nginx
# kt exec -it nginx-deployment-67dfd6c8f9-hrfh8 bash
Run inside the container
# apt-get update
# apt install -y wget
# cd /usr/share/nginx/html
# cat /etc/nginx/conf.d/default.conf
# echo jackie welcome to beijing > /usr/share/nginx/html/index.html
# wget https://www.magedu.com/wp-content/uploads/2021/03/2021032406054665.jpg
# mv 2021032406054665.jpg 1.jpg
Open:
172.31.7.207:30004
172.31.7.207:30004/1.jpg
Running tomcat
# vim tomcat/tomcat.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: default
  name: tomcat-deployment
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: tomcat # pin the tag you want and pull it from your own registry if preferred
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: test-tomcat-service-label
  name: test-tomcat-service
  namespace: default
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30005
  selector:
    app: tomcat
Start the tomcat service
# kt apply -f tomcat/tomcat.yaml
Check the tomcat pod
# kt get pod
Show the pod's details
# kt describe pod tomcat-deployment-6c44f58b47-5nmbh
Check how many Services the master has
# kt get svc
kubernetes ClusterIP 10.200.0.1 <none> 443/TCP 43h
test-nginx-service NodePort 10.200.25.159 <none> 80:30004/TCP 82m
test-tomcat-service NodePort 10.200.68.190 <none> 80:30005/TCP 19m
Show the endpoint IPs behind each Service
# kt get endpoints
# kt get ep
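Inside the cluster, a Service is also reachable by its DNS name (the kubeadm init above set --service-dns-domain=jackie.local); a sketch run against one of the alpine test pods:

```shell
# Resolve and fetch the nginx Service by its cluster DNS name from inside a pod.
kt exec -it net-test1 -- sh -c \
  'wget -qO- http://test-nginx-service.default.svc.jackie.local | head -n 3'
```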
Enter the nginx pod
# kt exec -it nginx-deployment-67dfd6c8f9-hrfh8 bash
# cd /usr/share/nginx/html
Enter the tomcat pod
# kt exec -it tomcat-deployment-6c44f58b47-5nmbh bash
# cd /usr/local/tomcat/webapps/ && mkdir zhgedu && cd zhgedu
# echo jackie welcome to beijing > index.jsp
Enter the replica tomcat pod
# kt get pod
# kt exec -it tomcat-deployment-6c44f58b47-pnbc6 bash
# cd webapps && mkdir zhgedu
# echo `hostname` > /usr/local/tomcat/webapps/zhgedu/index.jsp
Test the result
tomcat requests are forwarded automatically
172.31.7.207:30005/zhgedu
Route dynamic requests from nginx to tomcat
# kt get ep # look up the tomcat Service name
# kt exec -it nginx-deployment-67dfd6c8f9-hrfh8 bash
# vim /etc/nginx/conf.d/default.conf
upstream zhgedu-host {
    server test-tomcat-service; # the tomcat Service name
}
location /zhgedu { # add a rule that routes /zhgedu to tomcat
    proxy_pass http://zhgedu-host;
}
# error_page 404 /404.html;
Reload nginx inside the container
# nginx -s reload
Access tomcat through the nginx address
http://172.31.7.207:30004/zhgedu/
nginx static/dynamic separation
172.31.7.204 haproxy1
========================
# vim /etc/haproxy/haproxy.cfg
listen k8s-zhgedu-80
    bind 172.31.7.188:80
    mode tcp
    balance source
    server 172.31.7.201 172.31.7.201:30004 check inter 3s fall 3 rise 5
    server 172.31.7.202 172.31.7.202:30004 check inter 3s fall 3 rise 5
    server 172.31.7.203 172.31.7.203:30004 check inter 3s fall 3 rise 5
Restart the service
# systemctl restart haproxy
Open:
http://172.31.7.188/
Add a local hosts entry
C:\Windows\System32\drivers\etc\hosts
172.31.7.188 www.jackie.com
Token management
# kubeadm token --help
create # create a token (valid for 24 hours by default)
delete # delete a token
generate # generate and print a token without creating it on the server, for use in other operations
list # list all tokens on the server
reset command
# km reset # undo kubeadm's changes
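When the original join token has expired, a fresh token plus the full join command can be produced in one step; a sketch run on a master:

```shell
# Prints a ready-to-run "kubeadm join <endpoint> --token ... --discovery-token-ca-cert-hash ..." line.
kubeadm token create --print-join-command
# Confirm the token exists and check its TTL.
kubeadm token list
```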
Check certificate expiration:
# km alpha certs check-expiration
Renew certificates on the master nodes:
# km alpha certs renew --help
# km alpha certs renew all
Restart the services after renewing
172.31.7.204 haproxy1
----------------------
vim /etc/haproxy/haproxy.cfg
#server 172.31.7.203 172.31.7.203:30004 check inter 3s fall 3 rise 5 # comment the server out while its node reboots
172.31.7.201 master1
----------------------
reboot
172.31.7.201 master1
----------------------
# scp /root/.kube/* 172.31.7.203:/root/.kube/