Creating a development/test k8s cluster with kind
kind runs each cluster node as a container, so a complete k8s cluster can be created on a single machine.
Internally it bootstraps the cluster with kubeadm.
Environment
- CentOS 8.2
- Docker-CE 18.06+
- Kernel parameters (one way to apply them is shown after this list):
- net.ipv4.ip_forward=1
- net.bridge.bridge-nf-call-iptables=1
- vm.swappiness=0
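These parameters can be applied and persisted with sysctl; a minimal sketch (the file names 99-kind.conf and br_netfilter.conf are arbitrary examples):
# br_netfilter provides the net.bridge.* keys
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# persist the values and reload all sysctl config files
cat <<EOF > /etc/sysctl.d/99-kind.conf
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
vm.swappiness=0
EOF
sysctl --system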
Installation
kind is distributed as a single binary: download it, make it executable, and move it into a directory on your PATH.
Install on Linux:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
mv ./kind /usr/bin/kind
macOS and Windows installation is omitted here.
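To confirm the binary is on the PATH and executable, print its version:
kind version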
Usage
https://kind.sigs.k8s.io/docs/user/quick-start/
By default, "kind create cluster" creates a single-node cluster that is reachable only from the host and cannot be accessed externally.
[root@vm kind]# kind create cluster
Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.21.1) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-kind"
You can now use your cluster with:
kubectl cluster-info --context kind-kind
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
[root@vm kind]#
[root@vm kind]# kubectl get node -o wide
NAME                 STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION          CONTAINER-RUNTIME
kind-control-plane   Ready    control-plane,master   7m1s   v1.21.1   172.18.0.2    <none>        Ubuntu 21.04   4.18.0-193.el8.x86_64   containerd://1.5.2
[root@vm kind]# docker ps |grep kind
e027620b386c kindest/node:v1.21.1 "/usr/local/bin/entr…" 8 minutes ago Up 8 minutes 127.0.0.1:46475->6443/tcp kind-control-plane
[root@vm kind]#
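The apiserver of this default cluster is published only on a random loopback port of the host (127.0.0.1:46475 above), and kind writes that endpoint into the kubeconfig. To see which endpoint the current context actually talks to (assuming the kind-kind context is active):
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'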
The cluster can be customized through a config file; an annotated example follows:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: cluster01
networking:
  # address on the host the apiserver binds to; set it to the host IP if external access is needed
  apiServerAddress: "127.0.0.1"
  # port on the host the apiserver binds to; use a different port for each cluster, or when this one is already taken
  apiServerPort: 16443
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
  # set to true to disable the default CNI plugin (kindnet)
  disableDefaultCNI: false
  # network mode used by kube-proxy; "none" means no kube-proxy component is deployed
  kubeProxyMode: "ipvs"
nodes:
# master node; each list entry adds one node
- role: control-plane
  # custom node image and version
  image: kindest/node:v1.22.5
  # file-sharing mounts between the host and the node
  extraMounts:
  # host directory
  - hostPath: /kind/cluster1
    # directory inside the node container
    containerPath: /data
    readOnly: false
    selinuxRelabel: false
    propagation: HostToContainer
  # node-port-to-host-port mappings
  extraPortMappings:
  # port on the node (e.g. a NodePort)
  - containerPort: 38080
    # port on the host
    hostPort: 18080
    # listen address on the host; set to "0.0.0.0" if external access is needed
    listenAddress: "127.0.0.1"
    protocol: TCP
# worker nodes; same options as the master node
- role: worker
  image: kindest/node:v1.22.5
- role: worker
  image: kindest/node:v1.22.5
- role: worker
  image: kindest/node:v1.22.5
Create the custom multi-node cluster:
# custom cluster: 1 master and 3 worker nodes
[root@vm kind]# kind create cluster --config cluster1.yml
Creating cluster "cluster1" ...
✓ Ensuring node image (kindest/node:v1.22.5) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-cluster1"
You can now use your cluster with:
kubectl cluster-info --context kind-cluster1
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
# check cluster info; after creation kind automatically merges the kubeconfig into ~/.kube/config
[root@vm kind]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.xx.xx:16443
CoreDNS is running at https://192.168.xx.xx:16443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
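When several kind clusters exist, kind can list them and export a standalone kubeconfig for one of them (the output path below is just an example):
kind get clusters
kind get kubeconfig --name cluster1 > /tmp/cluster1.kubeconfig
kubectl --kubeconfig /tmp/cluster1.kubeconfig get node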
# list the cluster nodes
[root@vm kind]# kubectl get node -o wide
NAME                     STATUS   ROLES                  AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION          CONTAINER-RUNTIME
cluster1-control-plane   Ready    control-plane,master   2m1s   v1.22.5   172.18.0.5    <none>        Ubuntu 21.10   4.18.0-193.el8.x86_64   containerd://1.5.9
cluster1-worker          Ready    <none>                 86s    v1.22.5   172.18.0.4    <none>        Ubuntu 21.10   4.18.0-193.el8.x86_64   containerd://1.5.9
cluster1-worker2         Ready    <none>                 86s    v1.22.5   172.18.0.2    <none>        Ubuntu 21.10   4.18.0-193.el8.x86_64   containerd://1.5.9
cluster1-worker3         Ready    <none>                 86s    v1.22.5   172.18.0.3    <none>        Ubuntu 21.10   4.18.0-193.el8.x86_64   containerd://1.5.9
# list all resources in the cluster
[root@vm kind]# kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-78fcd69978-7cq66 1/1 Running 0 3m15s
kube-system pod/coredns-78fcd69978-vwzrc 1/1 Running 0 3m15s
kube-system pod/etcd-cluster1-control-plane 1/1 Running 0 3m28s
kube-system pod/kindnet-4hrl2 1/1 Running 0 2m57s
kube-system pod/kindnet-5qcpj 1/1 Running 0 3m15s
kube-system pod/kindnet-755hw 1/1 Running 0 2m57s
kube-system pod/kindnet-z2tbn 1/1 Running 0 2m57s
kube-system pod/kube-apiserver-cluster1-control-plane 1/1 Running 0 3m28s
kube-system pod/kube-controller-manager-cluster1-control-plane 1/1 Running 0 3m28s
kube-system pod/kube-proxy-4mt7g 1/1 Running 0 2m57s
kube-system pod/kube-proxy-5b9d4 1/1 Running 0 2m57s
kube-system pod/kube-proxy-n8b4k 1/1 Running 0 2m57s
kube-system pod/kube-proxy-qr62h 1/1 Running 0 3m15s
kube-system pod/kube-scheduler-cluster1-control-plane 1/1 Running 0 3m28s
local-path-storage pod/local-path-provisioner-74567d47b4-m5wn8 1/1 Running 0 3m15s
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3m30s
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3m28s
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kindnet 4 4 4 4 4 <none> 3m25s
kube-system daemonset.apps/kube-proxy 4 4 4 4 4 kubernetes.io/os=linux 3m28s
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 2/2 2 2 3m28s
local-path-storage deployment.apps/local-path-provisioner 1/1 1 1 3m24s
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-78fcd69978 2 2 2 3m15s
local-path-storage replicaset.apps/local-path-provisioner-74567d47b4 1 1 1 3m15s
[root@vm kind]#
Because the config file binds the cluster's apiserver to an address on the host, the cluster can be reached from external clients; from this point on, using it is no different from using any other kind of cluster.
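As a quick end-to-end check of the extraPortMappings entry, you can expose a test workload through the mapped node port. A minimal sketch (names and image are examples; note that 38080 is outside the default NodePort range 30000-32767, so either use an in-range port in both the kind config and the Service, or extend the apiserver's service-node-port-range):
# hypothetical test deployment plus a NodePort Service on the mapped containerPort
kubectl create deployment web --image=nginx
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 80
    nodePort: 38080
EOF
# traffic path: host 18080 -> control-plane node 38080 (NodePort) -> pod
# listenAddress was 127.0.0.1, so test from the host; with "0.0.0.0" use the host IP instead
curl http://127.0.0.1:18080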