(Repost) Building a Binary Kubernetes Cluster Environment

Original: https://www.cnblogs.com/k8sinaction/p/17352116.html#top

https://www.cnblogs.com/wxyyy/articles/17411340.html

1.1 High-Availability Cluster Environment Planning

1.1.1 Server Address Planning

| Type | Server Addresses | Notes |
| --- | --- | --- |
| Ansible (2) | 192.168.1.129/130 | K8s cluster deployment servers; when resources are tight they can be co-located with the HAProxy hosts |
| K8s Master (3) | 192.168.1.101/102/103 | K8s control plane; active/standby high availability via a single VIP |
| Harbor (2) | 192.168.1.104/105 | Highly available image registry servers |
| Etcd (at least 3) | 192.168.1.106/107/108 | Servers that store the K8s cluster data |
| Haproxy (2) | 192.168.1.109/110 | HA reverse proxy for api-server and Harbor; in production, split the api-server and Harbor proxies if they become a performance bottleneck |
| Node (3 or more) | 192.168.1.111/112/113/xxx | Servers that actually run containers; at least two for high availability, add more in production as needed |

1.2 Server Preparation

The servers can be virtual machines or physical machines in a private cloud, or virtual machines in a public cloud. In a company-hosted IDC environment, the Harbor and node machines can run directly on physical servers, while the masters, etcd, and load balancers can be virtual machines.

| Type | IP | Hostname | VIP | Spec |
| --- | --- | --- | --- | --- |
| K8s Master1 | 192.168.1.101 | k8s-master1.idockerinaction.info | 192.168.1.188/189/190/191 | 4C/8G/20G |
| K8s Master2 | 192.168.1.102 | k8s-master2.idockerinaction.info | 192.168.1.188/189/190/191 | 4C/8G/20G |
| K8s Master3 | 192.168.1.103 | k8s-master3.idockerinaction.info | 192.168.1.188/189/190/191 | 4C/8G/20G |
| Harbor1 | 192.168.1.104 | k8s-harbor1.idockerinaction.info | 192.168.1.192 (Harbor HA virtual address) | 2C/4G/100G |
| Harbor2 | 192.168.1.105 | k8s-harbor2.idockerinaction.info | 192.168.1.192 (Harbor HA virtual address) | 2C/4G/100G |
| etcd node 1 | 192.168.1.106 | k8s-etcd1.idockerinaction.info | | 2C/4G/60G |
| etcd node 2 | 192.168.1.107 | k8s-etcd2.idockerinaction.info | | 2C/4G/60G |
| etcd node 3 | 192.168.1.108 | k8s-etcd3.idockerinaction.info | | 2C/4G/60G |
| Haproxy1 | 192.168.1.109 | k8s-ha1.idockerinaction.info | | 2C/4G/20G |
| Haproxy2 | 192.168.1.110 | k8s-ha2.idockerinaction.info | | 2C/4G/20G |
| Node 1 | 192.168.1.111 | k8s-node1.idockerinaction.info | | 2C/8G/60G |
| Node 2 | 192.168.1.112 | k8s-node2.idockerinaction.info | | 2C/8G/60G |
| Node 3 | 192.168.1.113 | k8s-node3.idockerinaction.info | | 2C/8G/60G |
| K8s deploy node | 192.168.1.129 | k8s-deploy.idockerinaction.info | ansible deploy | 2C/4G/100G |

1.3 K8s Software List

API port: 192.168.1.188:6443 #configured on the load balancer as a reverse proxy
OS: Ubuntu Server 20.04.x / Ubuntu Server 22.04.x / CentOS 7.x (with an upgraded kernel) #this test environment uses Ubuntu 22.04
K8s version: 1.26.x #this test environment uses 1.26.2 and 1.26.4
calico: 3.24.5

1.4 Basic Environment Preparation

Hostname and IP configuration, system parameter tuning, time synchronization across nodes, plus deployment of the load balancers and Harbor that the cluster depends on.

1.4.1 System Configuration

vi /etc/sysctl.conf

vi /etc/modules-load.d/modules.conf

vi /etc/security/limits.conf

rm -f /etc/machine-id && dbus-uuidgen --ensure=/etc/machine-id #regenerate a unique machine-id (needed when VMs are cloned from a template)

#systemctl disable firewalld && systemctl stop firewalld #the Ubuntu minimal install does not ship firewalld

#systemctl disable iptables && systemctl stop iptables #the Ubuntu minimal install does not ship an iptables service

#For the contents of sysctl.conf, modules.conf and limits.conf, refer to the kubeadm deployment [post](云原生学习笔记-DAY1 - jack_028 - 博客园)
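
For orientation, a minimal sketch of the kind of settings those three files usually carry on a K8s node (common defaults, not taken from the referenced post; adjust to your own baseline):

# /etc/modules-load.d/modules.conf -- kernel modules needed by kube-proxy and the CNI
overlay
br_netfilter

# /etc/sysctl.conf -- let bridged traffic pass through iptables and enable forwarding
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1

# /etc/security/limits.conf -- raise file descriptor limits
*    soft    nofile    1000000
*    hard    nofile    1000000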

1.4.2 High-Availability Load Balancing

1.4.2.1 Install the Load Balancer

HAProxy is used directly as the reverse proxy for api-server and Harbor. Testing showed that load balancing with LVS is problematic and requires many configuration changes, because the api-server listen address on the master nodes is fixed.

root@k8s-ha1:~# apt install keepalived haproxy
root@k8s-ha2:~# apt install keepalived haproxy

1.4.2.2 Configure keepalived

The configuration must be changed on both k8s-ha1 and k8s-ha2.

#k8s-ha1 configuration
vi /etc/keepalived/keepalived.conf #add the following configuration

! Configuration File for keepalived

global_defs {
    router_id LVS_MASTER
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 643975
    }
    virtual_ipaddress {
        192.168.1.188
        192.168.1.189
        192.168.1.190
        192.168.1.191
        192.168.1.192
    }
}

#k8s-ha2 configuration
vi /etc/keepalived/keepalived.conf #add the following configuration

! Configuration File for keepalived

global_defs {
    router_id LVS_SLAVE
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    #nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 643975
    }
    virtual_ipaddress {
        192.168.1.188
        192.168.1.189
        192.168.1.190
        192.168.1.191
        192.168.1.192
    }
}

1.4.2.3 Configure haproxy

The configuration must be changed on both k8s-ha1 and k8s-ha2.

vi /etc/haproxy/haproxy.cfg #add the following configuration

listen k8s-6443
  bind 192.168.1.188:6443
  mode tcp
  server 192.168.1.101 192.168.1.101:6443 check inter 3s fall 3 rise 3
  server 192.168.1.102 192.168.1.102:6443 check inter 3s fall 3 rise 3
  server 192.168.1.103 192.168.1.103:6443 check inter 3s fall 3 rise 3

listen k8s-harbor-80
  bind 192.168.1.192:80
  mode tcp
  server 192.168.1.104 192.168.1.104:80 check inter 3s fall 3 rise 3
  server 192.168.1.105 192.168.1.105:80 check inter 3s fall 3 rise 3

listen k8s-harbor-443
  bind 192.168.1.192:443
  mode tcp
  server 192.168.1.104 192.168.1.104:443 check inter 3s fall 3 rise 3
  server 192.168.1.105 192.168.1.105:443 check inter 3s fall 3 rise 3

vi /etc/sysctl.conf #add the following line (allows haproxy to bind to VIPs not currently held by this node)
net.ipv4.ip_nonlocal_bind = 1
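
After editing, apply the kernel parameter and bring both services up so the VIPs and the HAProxy listeners take effect (standard sysctl/systemctl usage):

sysctl -p
systemctl restart keepalived haproxy
systemctl enable keepalived haproxy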

1.4.3 Install the Docker Runtime on the Harbor Nodes

The Kubernetes master and node machines use containerd; the Harbor nodes install Docker (the current Harbor install script requires Docker and docker-compose to be present) to run Harbor. containerd can be installed in bulk via apt or from binaries.

apt-get update
apt-get install  ca-certificates  curl  gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg |  gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
#apt-cache madison docker-ce | awk '{ print $3 }'
VERSION_STRING=5:20.10.19~3-0~ubuntu-jammy
apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io docker-buildx-plugin docker-compose-plugin
cp /usr/libexec/docker/cli-plugins/docker-compose /usr/bin/
docker run hello-world
docker info
docker-compose version

1.4.4 Configure HTTPS Access for Harbor

Business images are uploaded to the Harbor servers and distributed from there instead of being downloaded from the public internet, which improves distribution efficiency and data security.
https://goharbor.io/docs/2.5.0/install-config/configure-https/

1.4.4.1 Request and Download a Certificate from a Commercial CA

Production: a certificate issued by a public CA is recommended (https://yundun.console.aliyun.com/?p=cas#/certExtend/buy)

Free certificate request steps: log in to Aliyun -> SSL Certificates -> Free Certificates -> Create Certificate -> fill in the application -> verify the information -> the certificate is issued as soon as verification succeeds


1.4.4.2 Download and Extract the Harbor Offline Package

Upload the SSL certificate files downloaded in step 1 to the designated directory on the Harbor node, then edit the Harbor configuration file harbor.yml.

root@k8s-harbor1:/usr/local/src# mkdir /apps/
root@k8s-harbor1:/usr/local/src# mv harbor-offline-installer-v2.5.6.tgz /apps/
root@k8s-harbor1:/usr/local/src# cd /apps/
root@k8s-harbor1:/apps# tar xvf harbor-offline-installer-v2.5.6.tgz
root@k8s-harbor1:/apps# cd harbor/
root@k8s-harbor1:/apps/harbor# mkdir certs
root@k8s-harbor1:/apps/harbor# cd certs/
root@k8s-harbor1:/apps/harbor/certs# unzip 9751772_harbor.idockerinaction.info_nginx.zip
root@k8s-harbor1:/apps/harbor/certs# cd ..
root@k8s-harbor1:/apps/harbor# cp harbor.yml.tmpl harbor.yml
root@k8s-harbor1:/apps/harbor# grep -v "#" harbor.yml | grep -v "^$"
#make sure to adjust the hostname, certificate, private_key, harbor_admin_password and password settings

hostname: harbor.idockerinaction.info
http:
  port: 80
https:
  port: 443
  certificate: /apps/harbor/certs/9751772_harbor.idockerinaction.info.pem
  private_key: /apps/harbor/certs/9751772_harbor.idockerinaction.info.key
harbor_admin_password: 123456
database:
  password: root123
  max_idle_conns: 100
  max_open_conns: 900
data_volume: /data

1.4.4.3 Run the Harbor Installation

root@k8s-harbor1:/apps/harbor# ./install.sh --help
root@k8s-harbor1:/apps/harbor# ./install.sh --with-trivy --with-chartmuseum

1.4.4.4 Harbor High-Availability Configuration

If Harbor runs as two nodes for high availability, steps 1.4.4.1 through 1.4.4.3 must be performed on each Harbor node.

1.4.4.4.1 Adjust the keepalived Configuration

The Harbor domain must resolve to the load balancer's virtual address; the keepalived configuration was already completed in 1.4.2.2. In practice this only requires adding the Harbor virtual address 192.168.1.192 to keepalived.conf.

1.4.4.4.2 Harbor Image Replication Configuration

If image replication is used to keep the two Harbor instances in sync, log in to each Harbor by IP address in a browser and configure the following. When the Harbor domain uses a commercially issued certificate and HTTPS, the endpoint URL passes the connectivity test only with the domain name, not with an IP; yet even with the domain name the two Harbor instances did not actually replicate images during testing, for reasons not yet found. In production, consider backing both Harbor instances with shared storage, which removes the need for replication, or simply run a single Harbor. When Harbor is accessed over HTTP, replication can be set up as described below.

1.4.4.4.2.1 Registries: Create a New Endpoint

The endpoint name is the project name. With an SSL certificate, the endpoint URL must use the domain form https://harbor.idockerinaction.info; an IP fails the connectivity test. Without an SSL certificate, use the http://IP form.


1.4.4.4.2.2 Replications: Create a New Rule

For the destination registry, select the endpoint created under registry management; set the trigger mode to event based and tick "delete remote resources when locally deleted".


1.4.4.5 Test Logging In to Harbor via the Web UI


1.4.4.6 Test Logging In to Harbor with nerdctl

Install containerd and nerdctl on any node and test logging in to the Harbor server, to verify that login and image pull/push work.

root@ubuntu4:~# nerdctl login harbor.idockerinaction.info
root@ubuntu4:~# nerdctl tag nginx:latest harbor.idockerinaction.info/test/nginx:latest
root@ubuntu4:~# nerdctl push harbor.idockerinaction.info/test/nginx:latest
root@ubuntu4:~# nerdctl rmi nginx:latest
root@ubuntu4:~# nerdctl rmi harbor.idockerinaction.info/test/nginx:latest
root@ubuntu4:~# nerdctl pull harbor.idockerinaction.info/test/nginx:latest

1.4.4.7 Test Logging In to Harbor with docker

On a node with Docker installed, test login, push, and pull of an image.

docker login harbor.idockerinaction.info
docker pull harbor.idockerinaction.info/test/nginx:latest
docker push harbor.idockerinaction.info/test/nginx:latest

1.4.4.8 Add a harbor systemd Service

Harbor does not start automatically after installation; if the Harbor node reboots, you would have to run docker-compose start manually. Adding the following service unit makes Harbor start on boot.

vi /etc/systemd/system/harbor.service #edit the harbor service file

[Unit]
Description=Harbor
After=docker.service systemd-networkd.service systemd-resolved.service
Requires=docker.service
Documentation=http://github.com/goharbor/harbor

[Service]
Type=simple
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml up
ExecStop=/usr/bin/docker-compose -f /apps/harbor/docker-compose.yml down

[Install]
WantedBy=multi-user.target

#enable the harbor service
systemctl daemon-reload
systemctl enable harbor

1.5 Deploy Node Initialization

The deploy node is 192.168.1.129; its main roles are:

1. Download installation resources from the internet
2. Optionally retag selected images and push them to the company's internal registry
3. Initialize the master nodes
4. Initialize the worker nodes
5. Ongoing cluster maintenance:
 adding and removing master nodes
 adding and removing worker nodes
 etcd data backup and restore

The deployment project is kubeasz, https://github.com/easzlab/kubeasz. The deploy node uses Docker to download the images and binaries needed while deploying Kubernetes, so it needs a Docker environment; it will also push and pull images to Harbor later, so it needs to docker login to Harbor as well.

apt-get update
apt-get install  ca-certificates  curl  gnupg
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg |  gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
#apt-cache madison docker-ce | awk '{ print $3 }'
VERSION_STRING=5:20.10.19~3-0~ubuntu-jammy
apt-get install docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io docker-buildx-plugin docker-compose-plugin
cp /usr/libexec/docker/cli-plugins/docker-compose /usr/bin/
docker run hello-world
docker info
docker-compose version
docker login harbor.idockerinaction.info
docker pull harbor.idockerinaction.info/test/nginx
docker tag hello-world harbor.idockerinaction.info/test/hello-world
docker push harbor.idockerinaction.info/test/hello-world
apt install ansible

1.6 Deploy Highly Available Kubernetes with kubeasz

Project: https://github.com/easzlab/kubeasz

kubeasz aims to provide a tool for quickly deploying highly available K8s clusters, and also to serve as a reference for K8s practice and usage. It deploys from binaries and automates the process with ansible-playbook, offering both a one-click install script and step-by-step installation of the individual components following the guide.
kubeasz assembles the full cluster from individual components and provides very flexible configuration: almost any parameter of any component can be set. At the same time it ships a well-tuned default configuration for new clusters, and can even automatically create a BGP Route Reflector network mode suited to large clusters.

1.6.1 Passwordless Login Configuration

Distribute the deploy node's public key to the master, node, and etcd machines.

apt install ansible
ssh-keygen -t rsa-sha2-512 -b 4096
apt install sshpass #sshpass is used to push the public key to each K8s server
vi key-scp.sh

#!/bin/bash
#target host list
IP="
192.168.1.101
192.168.1.102
192.168.1.103
192.168.1.106
192.168.1.107
192.168.1.108
192.168.1.111
192.168.1.112
192.168.1.113
"
REMOTE_PORT="22"
REMOTE_USER="root"
REMOTE_PASS="123456"
for REMOTE_HOST in ${IP};do
 REMOTE_CMD="echo ${REMOTE_HOST} is reachable"
 #add the remote host's key to known_hosts
 ssh-keyscan -p "${REMOTE_PORT}" "${REMOTE_HOST}" >> ~/.ssh/known_hosts
 #use sshpass to set up passwordless login, then create a python3 symlink on the remote host
 sshpass -p "${REMOTE_PASS}" ssh-copy-id "${REMOTE_USER}@${REMOTE_HOST}"
 ssh ${REMOTE_HOST} ln -sv /usr/bin/python3 /usr/bin/python
 echo ${REMOTE_HOST} passwordless login configured
done
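
The script is then run once from the deploy node (REMOTE_PASS must match the root password of the target hosts):

root@k8s-deploy:~# bash key-scp.sh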

1.6.2 Download the kubeasz Project and Components

Before downloading, check which K8s versions the chosen kubeasz release supports.

root@k8s-deploy:~# apt install git
root@k8s-deploy:~# export release=3.5.3
root@k8s-deploy:~# wget https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
root@k8s-deploy:~# vim ezdown #adjust and customize the component versions to download, e.g. K8S_BIN_VER
root@k8s-deploy:~# chmod +x ./ezdown
root@k8s-deploy:~# ./ezdown -D
root@k8s-deploy:~# ll /etc/kubeasz/

1.6.3 Generate the hosts and config.yml for the Cluster

root@k8s-deploy:~# cd /etc/kubeasz/
root@k8s-deploy:~# vim ezdown #adjust and customize the component versions to download, e.g. K8S_BIN_VER
root@k8s-deploy:/etc/kubeasz# ./ezctl new k8s-cluster1
root@k8s-deploy:/etc/kubeasz# vim clusters/k8s-cluster1/hosts #set the node IPs and hostnames, SERVICE_CIDR, CLUSTER_CIDR, bin_dir (see the sketch after this block)
root@k8s-deploy:/etc/kubeasz# vim clusters/k8s-cluster1/config.yml
#adjust the following settings in config.yml as needed
K8S_VER, ETCD_DATA_DIR #the etcd data directory should preferably be on SSD
CONTAINERD_STORAGE_DIR #mount on a high-performance disk if possible
MASTER_CERT_HOSTS #set to the API virtual address
MAX_PODS #maximum number of pods per node
CALICO_RR_ENABLED #recommended when the cluster exceeds 50 nodes
CALICO_NETWORKING_BACKEND #on public clouds without BGP support, change to vxlan
dns_install #set to no; CoreDNS is installed manually later
ENABLE_LOCAL_DNS_CACHE #set to false
metricsserver_install #set to no; also installed manually later
dashboard_install #set to no
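
For the hosts file edited in the step above, the changes are limited to the inventory groups and a few variables. A rough sketch, assuming kubeasz's default group names and the addresses from section 1.2 (the CIDR values are examples only; check the generated file for the exact layout):

[etcd]
192.168.1.106
192.168.1.107
192.168.1.108

[kube_master]
192.168.1.101
192.168.1.102

[kube_node]
192.168.1.111
192.168.1.112

[all:vars]
SERVICE_CIDR="10.100.0.0/16"
CLUSTER_CIDR="10.200.0.0/16"
bin_dir="/usr/local/bin"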

1.6.4 Deploy the K8s Cluster

1.6.4.1 Environment Initialization

root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 01

1.6.4.2 Deploy the etcd Cluster

Custom settings such as the startup script path and version can be adjusted.

root@k8s-deploy:/etc/kubeasz# ln -sv /usr/bin/python3 /usr/bin/python #if this step is skipped, setup step 02 fails with the error below
 TASK [etcd : 创建etcd证书请求] ***********************************************************************************************************
fatal: [192.168.1.106]: FAILED! => {"msg": "Failed to get information on remote file (/etc/kubeasz/clusters/k8s-cluster1/ssl/etcd-csr.json): /bin/sh: 1: /usr/bin/python: not found\n"}

root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 02

Verify the etcd service on every etcd server; run the following command on etcd node1.

root@etcd1:~# export NODE_IPS="192.168.1.106 192.168.1.107 192.168.1.108"
root@etcd1:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
https://192.168.1.106:2379 is healthy: successfully committed proposal: took = 20.353684ms
https://192.168.1.107:2379 is healthy: successfully committed proposal: took = 14.213269ms
https://192.168.1.108:2379 is healthy: successfully committed proposal: took = 12.753629ms
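
Optionally, endpoint status shows the leader and DB size at a glance (etcdctl's table output; same certificate paths as the health check above):

root@etcd1:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem --write-out=table endpoint status; done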

1.6.4.3 Deploy the Container Runtime (containerd)

Deploy containerd on the master and node machines. Note the following points before deploying:
1. If the certificate is self-signed, distribute the crt file first (with a script or a playbook, as you prefer) to /etc/docker/certs.d/harbor.idockerinaction.info/ on all master and node machines, and make sure every node can resolve the private registry domain harbor.idockerinaction.info.
2. If the certificate was issued by a public CA, no distribution step is needed because it is already trusted. All nodes must still be able to resolve the private Harbor domain; CA-issued domains are normally resolvable on the public internet, so nodes with internet access need no extra setup, while nodes without it need internal DNS or hosts-file entries.
3. SANDBOX_IMAGE in /etc/kubeasz/clusters/k8s-cluster1/config.yml is pulled from the internet by default; it is recommended to change it to the private registry address harbor.idockerinaction.info/baseimages/pause:3.9 and push the pause:3.9 image to the private Harbor (a mirroring example follows the playbook edits below).
4. The kubeasz containerd playbook does not install nerdctl by default; nerdctl can be added by editing the playbook on the deploy node as follows.

vi /etc/kubeasz/roles/containerd/tasks/main.yml
    - name: 准备containerd相关目录
      file: name={{ item }} state=directory
      with_items:
      - "{{ bin_dir }}" #修改二进制文件存放的目录,默认剧本里面是{{ bin_dir }}/containerd-bin
      - "/etc/containerd"
      - "/etc/nerdctl" #添加创建nerdctl的目录

    - name: 下载 containerd 二进制文件
      copy: src={{ item }} dest={{ bin_dir }}/ mode=0755 #dest changed from the playbook default {{ bin_dir }}/containerd-bin/; remember to also place the nerdctl binary in /etc/kubeasz/bin/containerd-bin/ so it gets copied
      with_fileglob:
      - "{{ base_dir }}/bin/containerd-bin/*"
      tags: upgrade

    #add the following three lines, and place nerdctl.toml.j2 at /etc/kubeasz/roles/containerd/templates/nerdctl.toml.j2
    - name: 创建 nerdctl 配置文件
      template: src=nerdctl.toml.j2 dest=/etc/nerdctl/nerdctl.toml
      tags: upgrade

vi /etc/kubeasz/roles/containerd/templates/nerdctl.toml.j2, add the following content
namespace    = "k8s.io"
debug        = false
debug_full   = false
insecure_registry = true

vi /etc/kubeasz/roles/containerd/templates/containerd.service.j2, and make sure the path settings inside match the new binary location
Environment="PATH={{ bin_dir }}:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStart={{ bin_dir }}/containerd
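
For point 3 above, mirroring the pause image into the private Harbor can be done on the deploy node roughly as follows (the upstream source registry.k8s.io and the baseimages project are assumptions; use whatever source you actually pull pause:3.9 from):

docker pull registry.k8s.io/pause:3.9
docker tag registry.k8s.io/pause:3.9 harbor.idockerinaction.info/baseimages/pause:3.9
docker push harbor.idockerinaction.info/baseimages/pause:3.9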

5. After the steps above are complete, run the command to start deploying containerd

root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 03

6. After deployment, verify that containerd is running and that nerdctl works properly

root@k8s-master1:~# containerd -v
containerd github.com/containerd/containerd v1.6.20 2806fc1057397dbaeefbea0e4e17bddfbd388f38
root@k8s-master1:~# nerdctl version
WARN[0000] unable to determine buildctl version: exec: "buildctl": executable file not found in $PATH
Client:
 Version:       v1.3.0
 OS/Arch:       linux/amd64
 Git commit:    c6ddd63dea9aa438fdb0587c0d3d9ae61a60523e
 buildctl:
  Version:

Server:
 containerd:
  Version:      v1.6.20
  GitCommit:    2806fc1057397dbaeefbea0e4e17bddfbd388f38
 runc:
  Version:      1.1.5
  GitCommit:    v1.1.5-0-gf19387a6

root@k8s-master1:~# nerdctl images
REPOSITORY    TAG    IMAGE ID    CREATED    PLATFORM    SIZE    BLOB SIZE
root@k8s-master1:~# nerdctl pull harbor.idockerinaction.info/test/alpine:latest
WARN[0000] skipping verifying HTTPS certs for "harbor.idockerinaction.info"
harbor.idockerinaction.info/test/alpine:latest:                                   resolved       |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:e7d88de73db3d3fd9b2d63aa7f447a10fd0220b7cbf39803c803f2af9ba256b3: done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:c059bfaa849c4d8e4aecaeb3a10c2d9b3d85f5165c66ad3a4d937758128c4d18:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:59bf1c3509f33515622619af21ed55bbe26d24913cedbca106468a5fb37a50c3:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 0.5 s                                                                    total:  2.7 Mi (5.4 MiB/s)                        
root@k8s-master1:~# nerdctl images
REPOSITORY                                 TAG       IMAGE ID        CREATED          PLATFORM       SIZE       BLOB SIZE
harbor.idockerinaction.info/test/alpine    latest    e7d88de73db3    9 seconds ago    linux/amd64    5.6 MiB    2.7 MiB

1.6.4.4 Deploy the Kubernetes Master Nodes

root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 04
root@k8s-deploy:/etc/kubeasz# kubectl get node

1.6.4.5 Deploy the Kubernetes Worker Nodes

root@k8s-deploy:/etc/kubeasz# vim roles/kube-node/tasks/main.yml #settings can be customized here
root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 05

root@k8s-deploy:/etc/kubeasz# kubectl get nodes
NAME            STATUS                     ROLES    AGE   VERSION
192.168.1.101   Ready,SchedulingDisabled   master   10m   v1.26.2
192.168.1.102   Ready,SchedulingDisabled   master   10m   v1.26.2
192.168.1.111   Ready                      node     14s   v1.26.2
192.168.1.112   Ready                      node     14s   v1.26.2

1.6.4.6 Deploy the Network Plugin (Calico)

Check the Calico version that will be deployed:

root@k8s-deploy:/etc/kubeasz# cat clusters/k8s-cluster1/config.yml |grep calico_ver:
calico_ver: "v3.24.5"

Check the Calico images configured by default in the deployment playbook:

root@k8s-deploy:/etc/kubeasz# grep "image:"  roles/calico/templates/calico-v3.24.yaml.j2

          image: easzlab.io.local:5000/calico/cni:{{ calico_ver }}
          image: easzlab.io.local:5000/calico/node:{{ calico_ver }}
          image: easzlab.io.local:5000/calico/node:{{ calico_ver }}
          image: easzlab.io.local:5000/calico/kube-controllers:{{ calico_ver }}

Check the Calico images present locally on the deploy node:

root@k8s-deploy:/etc/kubeasz# docker images |grep calico

calico/kube-controllers                                     v3.24.5   38b76de417d5   5 months ago    71.4MB
easzlab.io.local:5000/calico/kube-controllers               v3.24.5   38b76de417d5   5 months ago    71.4MB
calico/cni                                                  v3.24.5   628dd7088041   5 months ago    198MB
easzlab.io.local:5000/calico/cni                            v3.24.5   628dd7088041   5 months ago    198MB
calico/node                                                 v3.24.5   54637cb36d4a   5 months ago    226MB
easzlab.io.local:5000/calico/node                           v3.24.5   54637cb36d4a   5 months ago    226MB

Retag the local Calico images and push them to the private Harbor:

docker tag easzlab.io.local:5000/calico/kube-controllers:v3.24.5 harbor.idockerinaction.info/baseimages/calico-kube-controllers:v3.24.5
docker tag easzlab.io.local:5000/calico/cni:v3.24.5 harbor.idockerinaction.info/baseimages/calico-cni:v3.24.5
docker tag easzlab.io.local:5000/calico/node:v3.24.5 harbor.idockerinaction.info/baseimages/calico-node:v3.24.5

docker push harbor.idockerinaction.info/baseimages/calico-kube-controllers:v3.24.5
docker push harbor.idockerinaction.info/baseimages/calico-cni:v3.24.5
docker push harbor.idockerinaction.info/baseimages/calico-node:v3.24.5

Change the image addresses in the playbook's YAML template:

root@k8s-deploy:/etc/kubeasz# vi roles/calico/templates/calico-v3.24.yaml.j2
root@k8s-deploy:/etc/kubeasz# grep image: roles/calico/templates/calico-v3.24.yaml.j2
          image: harbor.idockerinaction.info/baseimages/calico-cni:v3.24.5
          image: harbor.idockerinaction.info/baseimages/calico-node:v3.24.5
          image: harbor.idockerinaction.info/baseimages/calico-node:v3.24.5
          image: harbor.idockerinaction.info/baseimages/calico-kube-controllers:v3.24.5

If the images are kept in a private project instead of a public one, registry credentials must be added to the containerd configuration on every node before they can be pulled; the test environment uses a public project, so no credentials are needed. Then run the following command to install Calico on the master and node machines.

root@k8s-deploy:/etc/kubeasz# ./ezctl setup k8s-cluster1 06

Verify Calico on the masters and nodes:

root@k8s-master1:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.1.111 | node-to-node mesh | up    | 11:40:37 | Established |
| 192.168.1.112 | node-to-node mesh | up    | 11:40:40 | Established |
| 192.168.1.102 | node-to-node mesh | up    | 11:40:42 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

root@k8s-node1:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.1.101 | node-to-node mesh | up    | 11:40:37 | Established |
| 192.168.1.112 | node-to-node mesh | up    | 11:40:40 | Established |
| 192.168.1.102 | node-to-node mesh | up    | 11:40:42 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

1.6.4.7 Verify Pod Communication

root@k8s-deploy:~# scp .kube/config 192.168.1.101:/root/.kube/

root@k8s-master1:~# kubectl run net-test1 --image=alpine sleep 360000
pod/net-test1 created
root@k8s-master1:~# kubectl run net-test2 --image=alpine sleep 360000
pod/net-test2 created
root@k8s-master1:~# kubectl get pod -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP             NODE            NOMINATED NODE   READINESS GATES
net-test1   1/1     Running   0          58s   10.200.81.1    192.168.1.112   <none>           <none>
net-test2   1/1     Running   0          32s   10.200.117.1   192.168.1.111   <none>           <none>
root@k8s-master1:~# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping 10.200.117.1
PING 10.200.117.1 (10.200.117.1): 56 data bytes
64 bytes from 10.200.117.1: seq=0 ttl=62 time=0.693 ms
64 bytes from 10.200.117.1: seq=1 ttl=62 time=0.755 ms
^C
--- 10.200.117.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.693/0.724/0.755 ms
/ # ping 223.5.5.5
PING 223.5.5.5 (223.5.5.5): 56 data bytes
64 bytes from 223.5.5.5: seq=0 ttl=118 time=5.147 ms
64 bytes from 223.5.5.5: seq=1 ttl=118 time=5.687 ms
^C
--- 223.5.5.5 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 5.147/5.417/5.687 ms
/ # ping 1.2.4.8
PING 1.2.4.8 (1.2.4.8): 56 data bytes
64 bytes from 1.2.4.8: seq=0 ttl=53 time=16.634 ms
64 bytes from 1.2.4.8: seq=1 ttl=53 time=16.252 ms
^C
--- 1.2.4.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 16.252/16.443/16.634 ms

1.6.5 K8s Cluster Node Scaling

Cluster management mainly covers node management and monitoring: adding masters, adding nodes, deleting masters, and deleting nodes, starting from the current cluster state.

1.6.5.1 Add a Worker Node

root@k8s-deploy:/etc/kubeasz# ./ezctl add-node k8s-cluster1 192.168.1.113
root@k8s-master1:~# kubectl get nodes
NAME            STATUS                     ROLES    AGE   VERSION
192.168.1.101   Ready,SchedulingDisabled   master   28h   v1.26.2
192.168.1.102   Ready,SchedulingDisabled   master   28h   v1.26.2
192.168.1.111   Ready                      node     28h   v1.26.2
192.168.1.112   Ready                      node     28h   v1.26.2
192.168.1.113   Ready                      node     39s   v1.26.2

1.6.5.2 Add a Master Node

root@k8s-deploy:/etc/kubeasz# ./ezctl add-master k8s-cluster1 192.168.1.103

root@k8s-master1:~# kubectl get nodes
NAME            STATUS                     ROLES    AGE   VERSION
192.168.1.101   Ready,SchedulingDisabled   master   28h   v1.26.2
192.168.1.102   Ready,SchedulingDisabled   master   28h   v1.26.2
192.168.1.103   Ready,SchedulingDisabled   master   70s   v1.26.2
192.168.1.111   Ready                      node     28h   v1.26.2
192.168.1.112   Ready                      node     28h   v1.26.2
192.168.1.113   Ready                      node     14m   v1.26.2

1.6.5.3 Delete a Worker Node

root@k8s-deploy:/etc/kubeasz# ./ezctl del-node k8s-cluster1 192.168.1.113

root@k8s-master1:~# kubectl get nodes
NAME            STATUS                     ROLES    AGE   VERSION
192.168.1.101   Ready,SchedulingDisabled   master   29h   v1.26.2
192.168.1.102   Ready,SchedulingDisabled   master   29h   v1.26.2
192.168.1.103   Ready,SchedulingDisabled   master   15m   v1.26.2
192.168.1.111   Ready                      node     28h   v1.26.2
192.168.1.112   Ready                      node     28h   v1.26.2

#to add a deleted node back to the cluster, reboot the node first and then run the add-node command again
root@k8s-deploy:/etc/kubeasz# ./ezctl add-node k8s-cluster1 192.168.1.113

1.6.5.4 Verify Calico Status

root@k8s-master1:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+---------------+-------------------+-------+----------+-------------+
| PEER ADDRESS  |     PEER TYPE     | STATE |  SINCE   |    INFO     |
+---------------+-------------------+-------+----------+-------------+
| 192.168.1.102 | node-to-node mesh | up    | 15:23:48 | Established |
| 192.168.1.111 | node-to-node mesh | up    | 15:23:58 | Established |
| 192.168.1.112 | node-to-node mesh | up    | 15:24:08 | Established |
| 192.168.1.103 | node-to-node mesh | up    | 15:23:33 | Established |
| 192.168.1.113 | node-to-node mesh | up    | 15:43:32 | Established |
+---------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

1.6.5.5 Verify Node Routes

root@k8s-master1:~# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    0      0        0 eth0
10.200.81.0     192.168.1.112   255.255.255.192 UG    0      0        0 tunl0
10.200.82.192   192.168.1.103   255.255.255.192 UG    0      0        0 tunl0
10.200.117.0    192.168.1.111   255.255.255.192 UG    0      0        0 tunl0
10.200.171.128  0.0.0.0         255.255.255.192 U     0      0        0 *
10.200.182.128  192.168.1.113   255.255.255.192 UG    0      0        0 tunl0
10.200.222.192  192.168.1.102   255.255.255.192 UG    0      0        0 tunl0
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0

1.6.6 Upgrade the Cluster

Before upgrading, download the binaries of the target version from the official Kubernetes site.

1.6.6.1 Manual Cluster Upgrade

This exercise demonstrates upgrading K8s from 1.26.2 to 1.26.4.

Option 1: copy the new binaries to a separate path and edit the service files to load them, i.e. switch to the new version alongside the old one.
Option 2: cordon and drain the node -> stop the services -> copy the binaries over the old ones -> start the services -> uncordon the node, i.e. replace the old version in place. Option 2 is demonstrated below.

1. Download the 1.26.4 binary archives from the Kubernetes GitHub release page and upload them to /usr/local/src on the deploy node

| filename | sha512 hash |
| --- | --- |
| kubernetes.tar.gz | 308c5584f353fb2a59e91f28c9d70693e6587c45c80c95f44cb1be85b6bae02705eaa95e5a2d8e729d6b2c52d529f1d159aeddbfe4ca23b653a5fd5b7ea908b7 |
| kubernetes-client-linux-amd64.tar.gz | ef75896aa6266dc71b1491761031b9acf6bf51f062a42e7e965b0d8bee059761d4b40357730c13f5a17682bda36bc2dce1dd6d8e57836bf7d66fe5a889556ce9 |
| kubernetes-server-linux-amd64.tar.gz | db686014c2e6ff9bc677f090a1c342d669457dc22716e2489bb5265e3f643a116efff117563d7994b19572f132b36ad8634b8da55501350d6200e1813ad43bdf |
| kubernetes-node-linux-amd64.tar.gz | fe025affdc41fc3ab55a67ce3298d0282aa18925f65b1f52e726a0cfaddee1e979eecd97f31da236e5508f649ec60d3e6ac5ecc44ed012e2d65076d595e8b170 |
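
The archives can also be fetched directly on the deploy node from the official download site and checked against the sha512 values above (dl.k8s.io URL pattern):

root@k8s-deploy:/usr/local/src# wget https://dl.k8s.io/v1.26.4/kubernetes-server-linux-amd64.tar.gz
root@k8s-deploy:/usr/local/src# sha512sum kubernetes-server-linux-amd64.tar.gz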

2. Upgrade node3

root@k8s-master1:~# kubectl cordon 192.168.1.113
root@k8s-master1:~# kubectl drain 192.168.1.113 --ignore-daemonsets

root@k8s-node3:~# systemctl stop kubelet kube-proxy

root@k8s-deploy:/usr/local/src/# tar zxvf kubernetes.tar.gz
root@k8s-deploy:/usr/local/src/# tar zxvf kubernetes-client-linux-amd64.tar.gz
root@k8s-deploy:/usr/local/src/# tar zxvf kubernetes-node-linux-amd64.tar.gz
root@k8s-deploy:/usr/local/src/# tar zxvf kubernetes-server-linux-amd64.tar.gz
root@k8s-deploy:/usr/local/src/kubernetes/server/bin# scp kubelet kube-proxy kubectl 192.168.1.113:/usr/local/bin/

root@k8s-node3:/usr/local/bin#  systemctl start kubelet kube-proxy

root@k8s-deploy:/usr/local/src/kubernetes/server/bin# kubectl get nodes
NAME            STATUS                     ROLES    AGE   VERSION
192.168.1.101   Ready,SchedulingDisabled   master   44h   v1.26.2
192.168.1.102   Ready,SchedulingDisabled   master   44h   v1.26.2
192.168.1.103   Ready,SchedulingDisabled   master   15h   v1.26.2
192.168.1.111   Ready                      node     43h   v1.26.2
192.168.1.112   Ready                      node     43h   v1.26.2
192.168.1.113   Ready,SchedulingDisabled   node     15h   v1.26.4

root@k8s-master1:~# kubectl uncordon 192.168.1.113

3. Upgrade the master nodes (not yet tested)

Manually upgrading a master is similar to upgrading a node: stop the services -> copy the binaries -> start the services.

k8s-master1:~# systemctl stop kube-apiserver kube-scheduler kube-controller-manager kube-proxy kubelet
k8s-deploy# scp kube-apiserver kube-scheduler kube-controller-manager kube-proxy kubelet kubectl 192.168.1.101:/usr/local/bin/
k8s-master1:~# systemctl start kube-apiserver kube-scheduler kube-controller-manager kube-proxy kubelet

1.6.6.2 Batch Upgrade of Nodes with kubeasz

root@k8s-deploy:/usr/local/src/kubernetes/server/bin# cp kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy kubectl /etc/kubeasz/bin/
root@k8s-deploy:/etc/kubeasz/bin# cd /etc/kubeasz/bin
root@k8s-deploy:/etc/kubeasz/bin# ./kube-proxy --version #verify the new binaries
Kubernetes v1.26.4
root@k8s-deploy:/etc/kubeasz/bin# ./kube-apiserver --version
Kubernetes v1.26.4

root@k8s-deploy:/etc/kubeasz/bin# cd /etc/kubeasz/
root@k8s-deploy:/etc/kubeasz# ./ezctl upgrade k8s-cluster1 #run the upgrade

root@k8s-master1:~# kubectl get nodes
NAME            STATUS                     ROLES    AGE   VERSION
192.168.1.101   Ready,SchedulingDisabled   master   44h   v1.26.4
192.168.1.102   Ready,SchedulingDisabled   master   44h   v1.26.4
192.168.1.103   Ready,SchedulingDisabled   master   15h   v1.26.4
192.168.1.111   Ready                      node     44h   v1.26.4
192.168.1.112   Ready                      node     44h   v1.26.4
192.168.1.113   Ready                      node     15h   v1.26.4

1.7 The CoreDNS Component

The Kubernetes DNS component has gone through three generations: SkyDNS, kube-dns, and CoreDNS. kube-dns and CoreDNS resolve the IP addresses behind service names in the K8s cluster.
SkyDNS was used first; kube-dns and CoreDNS are the two commonly used components today, both usable up to K8s 1.17.x, while kube-dns is no longer supported starting with Kubernetes v1.18.
https://console.cloud.google.com/gcr/images/google-containers/GLOBAL #Google's image registry

1.7.1 Deploy CoreDNS

https://github.com/coredns/coredns
https://coredns.io/
https://github.com/coredns/deployment/tree/master/kubernetes #deployment manifests

root@k8s-master1:~# nerdctl pull coredns/coredns:1.9.4
root@k8s-master1:~# nerdctl tag coredns/coredns:1.9.4 harbor.idockerinaction.info/baseimages/coredns:1.9.4
root@k8s-master1:~# nerdctl push harbor.idockerinaction.info/baseimages/coredns:1.9.4
root@k8s-master1:~# kubectl apply -f coredns-v1.9.4.yaml #remember to change the image address and the service CIDR inside
root@k8s-master1:~# kubectl exec -it net-test1 sh #test DNS resolution
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping www.baidu.com
PING www.baidu.com (36.152.44.95): 56 data bytes
64 bytes from 36.152.44.95: seq=0 ttl=55 time=9.506 ms
64 bytes from 36.152.44.95: seq=1 ttl=55 time=9.354 ms
^C
--- www.baidu.com ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
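
In-cluster service name resolution can also be checked from the test pod; kubernetes.default is the API service that exists in every cluster (the resolved ClusterIP depends on your SERVICE_CIDR):

/ # nslookup kubernetes.default.svc.cluster.local
/ # nslookup harbor.idockerinaction.info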

1.8 Deploy a Dashboard

The dashboard is the web management UI for K8s.

1.8.1 Official K8s Dashboard

1.8.1.1 Download and Push the Images, Then Deploy

root@k8s-master1:~# nerdctl pull kubernetesui/metrics-scraper:v1.0.8
root@k8s-master1:~# nerdctl tag kubernetesui/metrics-scraper:v1.0.8 harbor.idockerinaction.info/baseimages/kubernetesui/metrics-scraper:v1.0.8
root@k8s-master1:~# nerdctl push harbor.idockerinaction.info/baseimages/kubernetesui/metrics-scraper:v1.0.8
root@k8s-master1:~# nerdctl pull kubernetesui/dashboard:v2.7.0
root@k8s-master1:~# nerdctl tag kubernetesui/dashboard:v2.7.0 harbor.idockerinaction.info/baseimages/kubernetesui/dashboard:v2.7.0
root@k8s-master1:~# nerdctl push harbor.idockerinaction.info/baseimages/kubernetesui/dashboard:v2.7.0
root@k8s-master1:~# vim dashboard-v2.7.0.yaml #change the image URLs inside to harbor.idockerinaction.info
root@k8s-master1:~# kubectl apply -f dashboard-v2.7.0.yaml -f admin-user.yaml -f admin-secret.yaml

1.8.1.2 Get the Token and Log In to the Dashboard via a Node Address

root@k8s-master1:~# kubectl get secret -A | grep admin
root@k8s-master1:~# kubectl -n kubernetes-dashboard describe secret dashboard-admin-user
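
On Kubernetes 1.24 and later, a short-lived token can also be issued directly instead of reading the secret (this assumes the ServiceAccount created by admin-user.yaml is named admin-user in the kubernetes-dashboard namespace):

root@k8s-master1:~# kubectl -n kubernetes-dashboard create token admin-user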

The NodePort is 30000.


1.8.1.3 Log In with a kubeconfig File

Create the kubeconfig file:

root@k8s-master1:~# cp /root/.kube/config /opt/kubeconfig
root@k8s-master1:~# vim /opt/kubeconfig

Append the token obtained in 1.8.1.2 to the end of the kubeconfig file, as shown below. Then copy the kubeconfig file to the client machine; when logging in from the browser, choose the kubeconfig option and select this file.
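
The token is appended under the user entry of the kubeconfig; a minimal sketch of the resulting stanza (the user name and the certificate fields stay exactly as in the copied config, only the token line is added):

users:
- name: admin
  user:
    client-certificate-data: <unchanged>
    client-key-data: <unchanged>
    token: <token obtained in 1.8.1.2>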


1.8.2 Rancher

Rancher was not installed or tested in this exercise; see the [official site](https://www.rancher.com/) if needed.

1.8.3 Kuboard

https://kuboard.cn/overview/#kuboard在线体验 #official site
https://github.com/eip-work/kuboard-press #GitHub

#deploy Kuboard in the Kubernetes environment
root@k8s-ha1:~# apt install nfs-server
root@k8s-ha1:~# mkdir -p /data/k8sdata/kuboard
root@k8s-ha1:~# vi /etc/exports
 /data/k8sdata/kuboard *(rw,no_root_squash)
root@k8s-ha1:~# systemctl restart nfs-server.service
root@k8s-ha1:~# systemctl enable nfs-server.service
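
The export can be verified from the NFS server before Kuboard mounts it (standard NFS utilities; 192.168.1.109 is k8s-ha1 per the table in 1.2):

root@k8s-ha1:~# exportfs -v
root@k8s-ha1:~# showmount -e 192.168.1.109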

#Option 1: run Kuboard as a standalone Docker container
root@k8s-master1:~# docker run -d \
 --restart=unless-stopped \
 --name=kuboard \
 -p 80:80/tcp \
 -p 10081:10081/tcp \
 -e KUBOARD_ENDPOINT="http://192.168.1.101:80" \
 -e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
 -v /root/kuboard-data:/data \
 swr.cn-east-2.myhuaweicloud.com/kuboard/kuboard:v3

#Option 2: run as a K8s pod; write a kuboard-all-in-one.yaml yourself and adjust the NFS server address it mounts
root@k8s-master1:~/20230416-cases/3.kuboard# kubectl apply -f kuboard-all-in-one.yaml

Open http://node-host-ip:30080 in a browser to reach the Kuboard v3.x UI. Login credentials:
Username: admin
Password: Kuboard123

Log in to Kuboard

After logging in, add the K8s cluster

Choose how to add the cluster

Fill in the details and click OK

Select the newly added cluster to log in

Choose the login identity

Login successful

1.8.4 KubeSphere

1.8.4.1 Deploy KubeSphere

4.KubeSphere# ll
total 20
drwxr-xr-x 3 root root 109 Apr 12 14:51 ./
drwxr-xr-x 6 root root 86 Apr 12 14:38 ../
drwxr-xr-x 2 root root 82 Apr 12 14:48 1.nfs-stroageclass-cases/
-rw-r--r-- 1 root root 4553 Feb 3 12:37 2.kubesphere-installer.yaml
-rw-r--r-- 1 root root 10072 Feb 3 12:37 3.cluster-configuration.yaml
4.KubeSphere# cd 1.nfs-stroageclass-cases/
4.KubeSphere/1.nfs-stroageclass-cases# kubectl apply -f 1-rbac.yaml
4.KubeSphere/1.nfs-stroageclass-cases# kubectl apply -f 2-storageclass.yaml
4.KubeSphere/1.nfs-stroageclass-cases# kubectl apply -f 3-nfs-provisioner.yaml
4.KubeSphere/1.nfs-stroageclass-cases# kubectl get storageclasses.storage.k8s.io
4.KubeSphere# wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
4.KubeSphere# kubectl apply -f kubesphere-installer.yaml
4.KubeSphere# wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml
4.KubeSphere# kubectl apply -f cluster-configuration.yaml

Check the installation logs:
4.KubeSphere# kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f

1.8.4.2 Log In to KubeSphere

Default account: admin
Default password: P@88w0rd
