Installing Kubernetes and KubeSphere with KubeKey

KubeKey

KubeKey (written in Go) is a new installation tool that replaces the earlier Ansible-based installer. KubeKey gives you flexible installation options: you can install Kubernetes alone, or install Kubernetes and KubeSphere together.

Typical KubeKey use cases:

  • Install Kubernetes only;
  • Install Kubernetes and KubeSphere together with a single command;
  • Scale a cluster in or out;
  • Upgrade a cluster;
  • Install Kubernetes-related add-ons (Chart or YAML).

KubeKey documentation: https://kubesphere.io/zh/docs/v3.3/installing-on-linux/introduction/multioverview/

1. Prepare the environment

Prepare 5 virtual machines:

master01  CentOS 7.4  2 cores  6 GB RAM   192.168.0.226
master02  CentOS 7.4  2 cores  6 GB RAM   192.168.0.227
master03  CentOS 7.4  2 cores  6 GB RAM   192.168.0.228
worker01  CentOS 7.4  2 cores  10 GB RAM  192.168.0.230
worker02  CentOS 7.4  2 cores  10 GB RAM  192.168.0.231
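Optionally, the hostnames above can be added to /etc/hosts on every node so the machines can resolve each other by name. A convenience sketch (run as root on each node):

```shell
# Append name -> IP mappings for all five nodes (run as root on every node)
cat >> /etc/hosts <<'EOF'
192.168.0.226 master01
192.168.0.227 master02
192.168.0.228 master03
192.168.0.230 worker01
192.168.0.231 worker02
EOF
```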


1.1 Disable the firewall

systemctl stop firewalld
systemctl disable firewalld  # keep it disabled across reboots

1.2 Set the hostname

hostnamectl set-hostname master01  # adjust for each node
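With passwordless SSH in place (section 1.5 below), the renaming can be scripted from the KubeKey machine instead of logging in to each node. A sketch, assuming root SSH access to every IP in the table above; the loop only prints the commands, so remove the leading `echo` to actually run them:

```shell
# Map each node name to its IP and print the hostnamectl command for it.
NODES=(master01 master02 master03 worker01 worker02)
IPS=(192.168.0.226 192.168.0.227 192.168.0.228 192.168.0.230 192.168.0.231)
for i in "${!NODES[@]}"; do
  # drop `echo` to execute the remote command for real
  echo ssh "root@${IPS[$i]}" hostnamectl set-hostname "${NODES[$i]}"
done
```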

1.3 Install dependency packages

Check whether they are already installed. (The official docs suggest picking packages by Kubernetes version, but I recommend installing all of them regardless of version, to avoid missing one during a later upgrade.)

rpm -qa | grep socat
yum install socat      # install if missing
yum install conntrack  # install if missing
yum install ebtables   # install if missing
yum install ipset      # install if missing
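The four checks above can be folded into one loop that reports anything still missing:

```shell
# Check each required package with rpm; collect the names that are absent.
missing=""
for pkg in socat conntrack ebtables ipset; do
  rpm -q "$pkg" >/dev/null 2>&1 || missing="$missing $pkg"
done
echo "missing:$missing"  # nothing after the colon means everything is installed
```

Anything reported can then be installed in one go with `yum install -y $missing`.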


1.4 Time synchronization

For periodic synchronization you need a scheduled sync job; the Aliyun time source is stable and recommended.

yum install ntpdate
ntpdate ntp1.aliyun.com
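For the periodic sync mentioned above, a cron entry is the simplest auto-sync job. A sketch, assuming ntpdate was installed as shown (the binary path may differ on your system):

```shell
# Sync against the Aliyun time source at the top of every hour (run as root)
echo '0 * * * * root /usr/sbin/ntpdate ntp1.aliyun.com >/dev/null 2>&1' >> /etc/crontab
```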

1.5 Passwordless SSH login

This step is only needed on one machine: the one that will run KubeKey.

ssh-keygen                      # press Enter through all prompts
ssh-copy-id root@192.168.0.227  # repeat for every node
ssh root@192.168.0.227          # test

2. VIP: 192.168.0.235

Because this is a high-availability deployment (multiple masters), prepare a VIP in advance using Keepalived.

Reference: the 日行一善 blog
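For reference, a minimal Keepalived sketch for this VIP is below. The interface name `eth0`, `virtual_router_id`, `priority`, and auth password are all placeholder assumptions that must be adapted to your environment; on the other two masters set `state BACKUP` and lower priorities (e.g. 90 and 80):

```shell
# Install Keepalived and write a minimal VRRP config holding the VIP (run as root)
yum install -y keepalived
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER             # BACKUP on the other masters
    interface eth0           # adjust to your NIC name
    virtual_router_id 51
    priority 100             # highest priority holds the VIP
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8svip
    }
    virtual_ipaddress {
        192.168.0.235
    }
}
EOF
systemctl enable --now keepalived
```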

3. Install KubeKey

I recommend deploying from the downloaded release package; the official online-download method failed when I tried it.

wget https://github.com/kubesphere/kubekey/releases/download/v2.2.2/kubekey-v2.2.2-linux-amd64.tar.gz
tar -xzf kubekey-v2.2.2-linux-amd64.tar.gz

4. Start the deployment

4.1 Generate the configuration file

./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f /root/kubesphere.yaml  # kk currently supports 1.22.10, not 1.24.x

Parameter notes:

  • Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey installs Kubernetes v1.23.7 by default.
  • If you omit the --with-kubesphere flag in this step, KubeSphere is not deployed; it can then only be installed via the addons field in the configuration file, or by adding the flag again when you later run ./kk create cluster.
  • If you add --with-kubesphere without specifying a KubeSphere version, the latest KubeSphere release is installed.

4.2 Edit the YAML file and run the installer

./kk create cluster -f kubesphere.yaml
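While `./kk create cluster` runs, KubeSphere's own deployment progress can be followed from a second terminal with the standard ks-installer log command (the installer prints a welcome banner with the console address when it finishes):

```shell
# Tail the ks-installer pod logs to watch KubeSphere components come up
kubectl logs -n kubesphere-system \
  "$(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}')" -f
```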

Note:

controlPlaneEndpoint must be set to the load balancer VIP; per the official description, the VIP therefore has to be prepared in advance.

The YAML is as follows:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master01, address: 192.168.0.226, internalAddress: 192.168.0.226, user: root, password: "123456"}
  - {name: master02, address: 192.168.0.227, internalAddress: 192.168.0.227, user: root, password: "123456"}
  - {name: master03, address: 192.168.0.228, internalAddress: 192.168.0.228, user: root, password: "123456"}
  - {name: worker01, address: 192.168.0.230, internalAddress: 192.168.0.230, user: root, password: "123456"}
  - {name: worker02, address: 192.168.0.231, internalAddress: 192.168.0.231, user: root, password: "123456"}
  roleGroups:
    etcd:
    - master01
    - master02
    - master03
    control-plane:
    - master01
    - master02
    - master03
    worker:
    - worker01
    - worker02
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: "192.168.0.235"
    port: 6443
  kubernetes:
    version: 1.22.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: 3.3.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: true
    # resources: {}
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 1200m
    jenkinsJavaOpts_Xmx: 1600m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: true
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: true
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: true
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  terminal:
    timeout: 600

Execution output:

(screenshots of the installation run omitted)

Installation succeeded.
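After the installer reports success, a few basic sanity checks (a sketch; run wherever kubectl is configured, e.g. on master01):

```shell
kubectl get nodes -o wide   # all five nodes should report Ready
kubectl get pods -A         # system pods should settle into Running
# The KubeSphere console is exposed on NodePort 30880 (per the config above),
# e.g. http://192.168.0.226:30880, default account admin / P@88w0rd
```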

Problems encountered during deployment:

Without the VIP prepared in advance, kubelet initialization fails.


posted @ 2022-10-12 10:52 王叫兽