[Kubernetes] Setting up a highly available cluster locally with kind

kind installation (official docs): https://kind.sigs.k8s.io/docs/user/quick-start/#installing-with-a-package-manager
kind configuration file reference: https://kind.sigs.k8s.io/docs/user/configuration/
kind on GitHub: https://github.com/kubernetes-sigs/kind
Kubernetes official docs: https://kubernetes.io/docs/tasks/tools/

Introduction to kind

There are many ways to install Kubernetes on a single machine, for example:

  • One-click deployment via the Docker Desktop app
  • Deploying a single-node Kubernetes cluster with minikube
  • Deploying and managing a Kubernetes cluster with the kubeadm tool
  • Deploying a Kubernetes cluster with kind, which supports multiple nodes and requires Docker

kind stands for Kubernetes IN Docker. As the name suggests, it is a tool that runs Docker containers as nodes and deploys Kubernetes into them. Under the hood it also uses kubeadm to bootstrap and manage the cluster.
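
Because each node is just a container, a running kind cluster can be inspected with plain Docker tooling. A quick illustration (assuming the my-test-cluster cluster created later in this post already exists):

# every kind node shows up as an ordinary container; names follow
# the <cluster-name>-control-plane / <cluster-name>-worker pattern
docker ps --filter "name=my-test-cluster"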

Installing kind

Installing from the binary release

Here we install the prebuilt binary, which is fast and easy.

# for Intel Macs
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.13.0/kind-darwin-amd64
# for M1 / ARM Macs
[ $(uname -m) = arm64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.13.0/kind-darwin-arm64
chmod +x ./kind
mv ./kind /some-dir-in-your-PATH/kind
# in my case: mv ./kind ~/devTools/k8s/kind

# add the kind directory to the PATH (edit your shell profile)
vim ~/.zprofile
export PATH=${PATH}:~/devTools/k8s
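
Reload the profile (or open a new terminal) so the updated PATH takes effect:

source ~/.zprofile
which kind   # should resolve to the binary under ~/devTools/k8s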

$ kind version
kind v0.13.0 go1.18 darwin/arm64
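
Note that kind does not install kubectl, which the rest of this post uses to talk to the clusters. If you do not have it yet, install it following the Kubernetes docs linked above; on macOS, for example, Homebrew works (assuming Homebrew is already installed):

brew install kubectl
kubectl version --client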

kind commands

First, let's see which operations the kind command supports:

$ ./kind --help
kind creates and manages local Kubernetes clusters using Docker container 'nodes'

Usage:
  kind [command]

Available Commands:
  build       Build one of [node-image]
  completion  Output shell completion code for the specified shell (bash, zsh or fish)
  create      Creates one of [cluster]
  delete      Deletes one of [cluster]
  export      Exports one of [kubeconfig, logs]
  get         Gets one of [clusters, nodes, kubeconfig]
  help        Help about any command
  load        Loads images into nodes
  version     Prints the kind CLI version

Flags:
  -h, --help              help for kind
      --loglevel string   DEPRECATED: see -v instead
  -q, --quiet             silence all stderr output
  -v, --verbosity int32   info log verbosity, higher value produces more output
      --version           version for kind

Use "kind [command] --help" for more information about a command.

The kind create cluster command

$ ./kind create cluster --help
Creates a local Kubernetes cluster using Docker container 'nodes'

Usage:
  kind create cluster [flags]

Flags:
      --config string       path to a kind config file
  -h, --help                help for cluster
      --image string        node docker image to use for booting the cluster
      --kubeconfig string   sets kubeconfig path instead of $KUBECONFIG or $HOME/.kube/config
      --name string         cluster name, overrides KIND_CLUSTER_NAME, config (default kind)
      --retain              retain nodes for debugging when cluster creation fails
      --wait duration       wait for control plane node to be ready (default 0s)

Global Flags:
      --loglevel string   DEPRECATED: see -v instead
  -q, --quiet             silence all stderr output
  -v, --verbosity int32   info log verbosity, higher value produces more output

Creating a default single-node cluster

# create a default cluster with a given name; the name must match `^[a-z0-9.-]+$`
$ ./kind create cluster --name my-test-cluster
Creating cluster "my-test-cluster" ...
✓ Ensuring node image (kindest/node:v1.24.0) 🖼 
✓ Preparing nodes 📦  
✓ Writing configuration 📜 
✓ Starting control-plane 🕹️ 
✓ Installing CNI 🔌 
✓ Installing StorageClass 💾 
Set kubectl context to "kind-my-test-cluster"
You can now use your cluster with:

kubectl cluster-info --context kind-my-test-cluster

Thanks for using kind! 😊

# use the kind-my-test-cluster context to interact with this cluster
$ kubectl cluster-info --context kind-my-test-cluster
Kubernetes control plane is running at https://127.0.0.1:56882
CoreDNS is running at https://127.0.0.1:56882/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

# this is a single-node cluster
$ kubectl get nodes
NAME                            STATUS   ROLES           AGE   VERSION
my-test-cluster-control-plane   Ready    control-plane   22m   v1.24.0
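
To confirm the cluster can actually schedule workloads, a quick smoke test (the pod name and the nginx image are arbitrary choices):

kubectl run test-nginx --image=nginx --restart=Never
kubectl get pod test-nginx          # wait until STATUS shows Running
kubectl delete pod test-nginx       # clean up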

Other commands

# list all kind clusters
$ kind get clusters
my-test-cluster

$ kind get nodes --name my-test-cluster
my-test-cluster-control-plane

# print the kubeconfig for a cluster
$ kind get kubeconfig --name my-test-cluster
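
Another subcommand from the help output worth knowing is load, which copies a locally built image into the node containers so pods can use it without pushing to a registry. A sketch, assuming a locally built image named my-app:dev:

docker build -t my-app:dev .
kind load docker-image my-app:dev --name my-test-cluster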

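Before deleting a cluster you can also dump its logs for later debugging, using the export subcommand listed in the help output (the output directory is arbitrary):

kind export logs ./kind-logs --name my-test-cluster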

# delete a cluster
$ kind delete cluster --name my-test-cluster
Deleting cluster "my-test-cluster" ...

Creating a multi-node cluster

# create the cluster from a config file
$ kind create cluster --config kind-example-config.yaml --name my-multi-cluster
Creating cluster "my-multi-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-my-multi-cluster"
You can now use your cluster with:

kubectl cluster-info --context kind-my-multi-cluster

Thanks for using kind! 😊

# list all clusters
$ kind get clusters
my-multi-cluster

# confirm that the cluster has multiple nodes
$ kubectl get nodes
NAME                             STATUS   ROLES           AGE   VERSION
my-multi-cluster-control-plane   Ready    control-plane   81s   v1.24.0
my-multi-cluster-worker          Ready    <none>          60s   v1.24.0
my-multi-cluster-worker2         Ready    <none>          60s   v1.24.0

The configuration file used above looks like this:

# three node (two workers) cluster config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

https://github.com/kubernetes-sigs/kind/blob/main/site/content/docs/user/kind-example-config.yaml
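
With more than one worker node you can watch the scheduler spread pods across them. A minimal check (the deployment name and image are arbitrary):

kubectl create deployment web --image=nginx --replicas=4
kubectl get pods -o wide    # the NODE column should list both workers
kubectl delete deployment web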

Creating a cluster with an HA control plane

$ kind create cluster --config kind-ha-config.yaml --name my-ha-cluster
Creating cluster "my-ha-cluster" ...
 ✓ Ensuring node image (kindest/node:v1.24.0) 🖼 
 ✓ Preparing nodes 📦 📦 📦 📦 📦 📦  
 ✓ Configuring the external load balancer ⚖️ 
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining more control-plane nodes 🎮 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-my-ha-cluster"
You can now use your cluster with:

kubectl cluster-info --context kind-my-ha-cluster  # --context targets this cluster; to switch the default, run kubectl config use-context kind-my-ha-cluster

Have a nice day! 👋

# there are now two clusters
$ kind get clusters
my-ha-cluster
my-multi-cluster

# check the nodes to confirm that this is an HA cluster
$ kubectl get nodes -o wide
NAME                           STATUS   ROLES           AGE     VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION      CONTAINER-RUNTIME
my-ha-cluster-control-plane    Ready    control-plane   4m38s   v1.24.0   172.19.0.6    <none>        Ubuntu 21.10   5.10.104-linuxkit   containerd://1.6.4
my-ha-cluster-control-plane2   Ready    control-plane   4m18s   v1.24.0   172.19.0.10   <none>        Ubuntu 21.10   5.10.104-linuxkit   containerd://1.6.4
my-ha-cluster-control-plane3   Ready    control-plane   3m19s   v1.24.0   172.19.0.9    <none>        Ubuntu 21.10   5.10.104-linuxkit   containerd://1.6.4
my-ha-cluster-worker           Ready    <none>          3m11s   v1.24.0   172.19.0.8    <none>        Ubuntu 21.10   5.10.104-linuxkit   containerd://1.6.4
my-ha-cluster-worker2          Ready    <none>          3m12s   v1.24.0   172.19.0.5    <none>        Ubuntu 21.10   5.10.104-linuxkit   containerd://1.6.4
my-ha-cluster-worker3          Ready    <none>          3m11s   v1.24.0   172.19.0.7    <none>        Ubuntu 21.10   5.10.104-linuxkit   containerd://1.6.4
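
The "Configuring the external load balancer" step above is what lets the three API servers sit behind a single endpoint: kind starts an extra haproxy container in front of the control-plane nodes. You should be able to spot it next to the node containers:

# node containers plus an extra <cluster-name>-external-load-balancer container
docker ps --filter "name=my-ha-cluster" --format "{{.Names}}"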

The configuration file used above looks like this:

# a cluster with 3 control-plane nodes and 3 workers
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
- role: worker
- role: worker

About the kind configuration file

As shown above, a configuration file can be used to create single-node, multi-node, and HA clusters. The files above only use the simplest options; many more are available, see the official configuration reference: https://kind.sigs.k8s.io/docs/user/configuration/
The file below simply gathers the common options in one place for easier reading; consult the official documentation for the exact constraints on each field.

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: app-1-cluster
featureGates:
  # any feature gate can be enabled here with "Name": true
  # or disabled here with "Name": false
  # not all feature gates are tested, however
  "CSIMigration": true
runtimeConfig:
  "api/alpha": "false"
networking:
  ipFamily: ipv6
  # WARNING: It is _strongly_ recommended that you keep this the default
  # (127.0.0.1) for security reasons. However it is possible to change this.
  apiServerAddress: "127.0.0.1"
  # By default the API server listens on a random open port.
  # You may choose a specific port but probably don't need to in most cases.
  # Using a random port makes it easier to spin up multiple clusters.
  apiServerPort: 6443
  podSubnet: "10.244.0.0/16"
  serviceSubnet: "10.96.0.0/12"
  # the default CNI will not be installed
  disableDefaultCNI: true
  kubeProxyMode: "ipvs"
# One control plane node and three "workers".
#
# While these will not add more real compute capacity and
# have limited isolation, this can be useful for testing
# rolling updates etc.
#
# The API-server and other control plane components will be
# on the control-plane node.
#
# You probably don't need this unless you are testing Kubernetes itself.
nodes:
- role: control-plane
  # kind manages the cluster via kubeadm; kubeadm init patches can target InitConfiguration, ClusterConfiguration, KubeProxyConfiguration and KubeletConfiguration
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "my-label=true"
- role: worker
- role: worker
- role: worker
  image: kindest/node:v1.16.4@sha256:b91a2c2317a000f3a783489dfb755064177dbc3a0b2f4147d50f04825d016f55
  # add a mount from /path/to/my/files on the host to /files on the node
  extraMounts:
  - hostPath: /path/to/my/files/
    containerPath: /files
    # optional: if set, the mount is read-only.
    # default false
    readOnly: true
    # optional: if set, the mount needs SELinux relabeling.
    # default false
    selinuxRelabel: false
    # optional: set propagation mode (None, HostToContainer or Bidirectional)
    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
    # default None
    propagation: HostToContainer
  # port forward 80 on the host to 80 on this node
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    # optional: set the bind address on the host
    # 0.0.0.0 is the current default
    listenAddress: "127.0.0.1"
    # optional: set the protocol to one of TCP, UDP, SCTP.
    # TCP is the default
    protocol: TCP
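
As a quick sanity check after creating a cluster from (a trimmed copy of) this file, the effects of kubeadmConfigPatches and extraPortMappings are easy to observe. The file name below is just an example, and note that options such as ipFamily: ipv6 and disableDefaultCNI: true will change behaviour if you keep them:

kind create cluster --config kind-full-config.yaml
# the control-plane node should carry the label set via kubeletExtraArgs
kubectl get nodes -l my-label=true
# extraPortMappings maps host port 80 to port 80 on that worker node, which
# becomes useful once an ingress controller or hostPort workload listens there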