Part 2: Installing an NFS Dynamic-Provisioning StorageClass on Kubernetes (k8s) and Installing KubeSphere
About KubeSphere
According to its official website, KubeSphere is a full-stack, Kubernetes-based container-cloud PaaS solution. To me, it is above all a powerful graphical interface for Kubernetes. It integrates the following components (the list below is taken from the official website):
Kubernetes DevOps system
CI/CD pipelines powered by Jenkins, with built-in Source-to-Image and Binary-to-Image tools for automated packaging and deployment
Istio-based microservice governance
Fine-grained traffic management, traffic monitoring, canary releases, and distributed tracing, with a visual traffic topology
Rich cloud-native observability
Multi-dimensional, multi-tenant monitoring, logging, events, and audit search; multiple alerting policies and notification channels; log forwarding
Cloud-native app store
A Helm-based app store and app repositories, with built-in application templates and application lifecycle management
Kubernetes multi-cluster management
Unified application distribution across clouds and clusters, best practices for cluster high availability and disaster recovery, and cross-cluster observability
Kubernetes edge node management
Unified distribution and management of applications and workloads across cloud and edge nodes, based on KubeEdge, covering application delivery, operations, and control for massive numbers of edge devices
Its capabilities go well beyond this list; visit the KubeSphere website to learn more: https://www.kubesphere.io/zh/
Environment preparation
KubeSphere
(from the official documentation)
Your Kubernetes version must be v1.20.x, v1.21.x, *v1.22.x, *v1.23.x, *v1.24.x, *v1.25.x, or *v1.26.x. On the starred versions, some edge-node features may be unavailable; if you need edge nodes, v1.21.x is recommended.
Make sure your machines meet the minimum hardware requirements: CPU > 1 core, memory > 2 GB.
Before installation, a default storage type (StorageClass) must be configured in the Kubernetes cluster (this article covers setting one up).
I already have a Kubernetes cluster prepared, as shown in the figure.
NFS dynamic provisioning
First you need an NFS server. For convenience, I will use my master node, k8s-master, as the NFS server.
Installing NFS dynamic provisioning
Setting up NFS
First, install the nfs-utils package on the NFS server (mine is the same machine as the master) and on every Kubernetes node (both master and worker nodes need it):
yum install -y nfs-utils
Next, pick a directory to share over NFS; this time I will use /data/nfs/dynamic-provisioner as the shared directory. Run the following commands to create and export it:
# Create the directory
mkdir -p /data/nfs/dynamic-provisioner

# Append the directory to /etc/exports so NFS exposes it to the local network
cat >> /etc/exports << EOF
/data/nfs/dynamic-provisioner *(rw,sync,no_root_squash)
EOF

# Enable and start the NFS service
systemctl enable --now nfs-server
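The export options matter here: rw allows read-write access, sync commits writes to disk before the server replies, and no_root_squash keeps root as root, which the provisioner relies on to create per-volume subdirectories. Before restarting nfs-server, the entry's format can be sanity-checked; the sketch below works on a throwaway copy rather than your real /etc/exports:

```shell
# Sanity-check the export entry on a temporary copy (not the real /etc/exports).
exports_copy=$(mktemp)
echo '/data/nfs/dynamic-provisioner *(rw,sync,no_root_squash)' > "$exports_copy"

# A valid entry has exactly two whitespace-separated fields: path and client(options).
awk 'NF != 2 { exit 1 }' "$exports_copy" && echo "format OK"
# The provisioner needs root access on the share, so no_root_squash must be present.
grep -q 'no_root_squash' "$exports_copy" && echo "root squash disabled"
rm -f "$exports_copy"
```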
Check that the export is visible (substitute your NFS server address):
showmount -e {NFS server address}
showmount -e 10.0.8.16
Downloading the dynamic-provisioning driver
Kubernetes does not ship with an NFS dynamic-provisioning driver, so we need a third-party one. The Kubernetes documentation recommends two third-party drivers to choose from, as shown in the figure.
I find the NFS subdir driver the more convenient of the two, so that is what we will use to set up dynamic provisioning. Go to its project page, https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner, and find the latest release.
At the time of writing, the latest release is 4.0.18, so that is the version we will download. You can also fetch it directly from the command line:
wget https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/releases/download/nfs-subdir-external-provisioner-4.0.18/nfs-subdir-external-provisioner-4.0.18.tgz
Extract the archive:
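The release archive is a gzipped tarball, so it unpacks with tar -zxvf; for the real file that would be `tar -zxvf nfs-subdir-external-provisioner-4.0.18.tgz` (the extracted directory name is whatever the later cd commands assume). The flags are demonstrated below on a throwaway archive:

```shell
# z = gunzip, x = extract, v = verbose, f = archive file.
# Demonstrated on a sample .tgz built on the fly.
workdir=$(mktemp -d)
mkdir -p "$workdir/demo/deploy"
echo 'kind: Deployment' > "$workdir/demo/deploy/deployment.yaml"
tar -C "$workdir" -czf "$workdir/demo.tgz" demo   # pack a sample archive
rm -rf "$workdir/demo"
tar -C "$workdir" -zxvf "$workdir/demo.tgz"       # unpack it again
cat "$workdir/demo/deploy/deployment.yaml"        # prints: kind: Deployment
rm -rf "$workdir"
```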
Modifying the driver manifests
Change into the deploy directory inside the extracted folder:
cd nfs-subdir-external-provisioner-nfs-subdir-external-provisioner-4.0.18/deploy/
It contains several YAML files, some of which need to be modified.
The first is deployment.yaml; open it with vim:
vim deployment.yaml
The first issue is the image: it is pulled from Google's official Kubernetes registry (registry.k8s.io), which is unreachable from mainland China, so it has to be changed.
I have already pulled the image and pushed it to an Alibaba Cloud registry in China, so you can substitute the image below:
# The original image is hosted on Google's registry and cannot be pulled from mainland China
# image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
# Use this mirror instead, which I pulled from Google and pushed to Alibaba Cloud
image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/nfs-subdir-external-provisioner:v4.0.2
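If you prefer not to edit the file interactively, the same substitution can be scripted with sed; a sketch against a minimal stand-in for deploy/deployment.yaml (run the sed line against the real file from inside the deploy/ directory):

```shell
# Swap the unreachable registry.k8s.io image for the Alibaba Cloud mirror.
f=$(mktemp)
echo '          image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2' > "$f"
sed -i 's#registry.k8s.io/sig-storage#registry.cn-shenzhen.aliyuncs.com/xiaohh-docker#' "$f"
grep 'image:' "$f"   # now points at the aliyuncs.com mirror
rm -f "$f"
```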
We also need to update the NFS server address and the shared directory further down in the file:
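For reference (the original screenshot is missing), in the upstream deploy/deployment.yaml these values appear twice, once as container environment variables and once in the volume definition, roughly like the fragment below; here it is filled in with the server address and directory used in this article, and both places must stay consistent:

```yaml
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 10.0.8.16                      # your NFS server address
            - name: NFS_PATH
              value: /data/nfs/dynamic-provisioner  # your exported directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.8.16                       # must match NFS_SERVER
            path: /data/nfs/dynamic-provisioner     # must match NFS_PATH
```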
Running the following script shows that quite a few resources are still placed in the default namespace:
cd nfs-subdir-external-provisioner-nfs-subdir-external-provisioner-4.0.18/deploy
yamls=$(grep -rl 'namespace: default' ./)
for yaml in ${yamls}; do
    echo ${yaml}
    cat ${yaml} | grep 'namespace: default'
done
Output:
It is cleaner to create a dedicated namespace for this driver, which also makes it easier to manage later, so I will create one named nfs-provisioner. For simplicity, create it directly from the command line rather than with a YAML file:
kubectl create namespace nfs-provisioner
kubectl get namespace
Quite a few files reference this namespace, so rather than editing them one by one, we change them all with a single line:
cd /home/nfs-subdir-external-provisioner-nfs-subdir-external-provisioner-4.0.18/deploy
sed -i 's/namespace: default/namespace: nfs-provisioner/g' `grep -rl 'namespace: default' ./`
This one batch replacement updates the namespace in every file:
Installing the dynamic provisioner
With all the YAML manifests modified, installation is very simple, a single command:
cd nfs-subdir-external-provisioner-nfs-subdir-external-provisioner-4.0.18/deploy
kubectl apply -k .
Check whether the deployment has finished:
kubectl get all -o wide -n nfs-provisioner
When READY shows 1/1 and STATUS shows Running, the dynamic provisioner is deployed:
Query the name of the newly installed dynamic-provisioning StorageClass:
kubectl get storageclass
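KubeSphere's prerequisites above call for a default StorageClass. If nfs-client is the class you want PVCs to use when no class is specified, it can be marked as the cluster default with the standard annotation; a sketch of the relevant manifest fragment (the provisioner name is the upstream default and may differ if you changed PROVISIONER_NAME):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # marks this class as the default
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner  # upstream default provisioner name
```

The same annotation can be applied in place with: kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'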
That completes the NFS dynamic-provisioning setup.
If the dynamic-provisioning StorageClass is all you wanted, you can stop here; everything that follows is about KubeSphere.
Installing KubeSphere
Downloading the KubeSphere YAML manifests
This time we install the latest KubeSphere, v3.4.0. Download the two manifest files with:
wget \
    https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml \
    https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml
[root@master kubesphere]# cat kubesphere-installer-3.4.yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: clusterconfigurations.installer.kubesphere.io
spec:
  group: installer.kubesphere.io
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true
            status:
              type: object
              x-kubernetes-preserve-unknown-fields: true
  scope: Namespaced
  names:
    plural: clusterconfigurations
    singular: clusterconfiguration
    kind: ClusterConfiguration
    shortNames:
      - cc
---
apiVersion: v1
kind: Namespace
metadata:
  name: kubesphere-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ks-installer
  namespace: kubesphere-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ks-installer
rules:
  # One identical rule per API group: full access to all resources
  # (compacted to flow style here; semantically identical to the release file).
  - { apiGroups: [""], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["apps"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["extensions"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["batch"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["rbac.authorization.k8s.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["apiregistration.k8s.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["apiextensions.k8s.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["tenant.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["certificates.k8s.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["devops.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["monitoring.coreos.com"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["logging.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["jaegertracing.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["storage.k8s.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["admissionregistration.k8s.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["policy"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["autoscaling"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["networking.istio.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["config.istio.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["iam.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["notification.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["auditing.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["events.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["core.kubefed.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["installer.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["storage.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["security.istio.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["monitoring.kiali.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["kiali.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["networking.k8s.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["edgeruntime.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["types.kubefed.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["monitoring.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["application.kubesphere.io"], resources: ["*"], verbs: ["*"] }
  - { apiGroups: ["alerting.kubesphere.io"], resources: ["*"], verbs: ["*"] }
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ks-installer
subjects:
  - kind: ServiceAccount
    name: ks-installer
    namespace: kubesphere-system
roleRef:
  kind: ClusterRole
  name: ks-installer
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    app: ks-installer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ks-installer
  template:
    metadata:
      labels:
        app: ks-installer
    spec:
      serviceAccountName: ks-installer
      containers:
        - name: installer
          image: kubesphere/ks-installer:v3.4.0
          imagePullPolicy: "Always"
          resources:
            limits:
              cpu: "1"
              memory: 1Gi
            requests:
              cpu: 20m
              memory: 100Mi
          volumeMounts:
            - mountPath: /etc/localtime
              name: host-time
              readOnly: true
      volumes:
        - hostPath:
            path: /etc/localtime
            type: ""
          name: host-time
[root@master kubesphere]# cat cluster-configuration.yaml
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.0
spec:
  persistence:
    storageClass: "nfs-client"  # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
  authentication:
    # adminPassword: ""         # Custom password of the admin user. If the parameter exists but the value is empty, a random password is generated. If the parameter does not exist, P@88w0rd is used.
    jwtSecret: ""               # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
  local_registry: ""            # Add your private registry address if it is needed.
  # dev_tag: ""                 # Add your kubesphere image tag you want to install, by default it's same as ks-installer release version.
  etcd:
    monitoring: false           # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
    endpointIps: localhost      # etcd cluster EndpointIps. It can be a bunch of IPs here.
    port: 2379                  # etcd port.
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true  # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
        port: 30880
        type: NodePort
    # apiserver:                # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi           # Redis PVC size.
    openldap:
      enabled: false
      volumeSize: 2Gi           # openldap PVC size.
    minio:
      volumeSize: 20Gi          # Minio PVC size.
    monitoring:
      # type: external          # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090  # Prometheus endpoint to get metrics data.
      GPUMonitoring:            # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero.
        enabled: false
    gpu:                        # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs.
      kinds:
        - resourceName: "nvidia.com/gpu"
          resourceType: "GPU"
          default: true
    es:                         # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi       # The volume size of Elasticsearch master nodes.
      #   replicas: 1           # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi      # The volume size of Elasticsearch data nodes.
      #   replicas: 1           # The total number of data nodes.
      #   resources: {}
      enabled: false
      logMaxAge: 7              # Log retention time in built-in Elasticsearch. It is 7 days by default.
      elkPrefix: logstash       # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:                 # Storage backend for logging, events and auditing.
      # master:
      #   volumeSize: 4Gi       # The volume size of Opensearch master nodes.
      #   replicas: 1           # The total number of master nodes. Even numbers are not allowed.
      #   resources: {}
      # data:
      #   volumeSize: 20Gi      # The volume size of Opensearch data nodes.
      #   replicas: 1           # The total number of data nodes.
      #   resources: {}
      enabled: true
      logMaxAge: 7              # Log retention time in built-in Opensearch. It is 7 days by default.
      opensearchPrefix: whizard # The string making up index names. The index name will be formatted as ks-<opensearchPrefix>-logging.
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:                     # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
    enabled: false              # Enable or disable the KubeSphere Alerting System.
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:                     # Provide a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
    enabled: false              # Enable or disable the KubeSphere Auditing Log System.
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:                       # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
    enabled: false              # Enable or disable the KubeSphere DevOps System.
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi       # Recommend keep same as requests.memory.
    jenkinsVolumeSize: 16Gi
  events:                       # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
    enabled: false              # Enable or disable the KubeSphere Events System.
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    ruler:
      enabled: true
      replicas: 2
      # resources: {}
  logging:                      # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
    enabled: false              # Enable or disable the KubeSphere Logging System.
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:               # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
    enabled: false              # Enable or disable metrics-server.
  monitoring:
    storageClass: "nfs-client"  # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1             # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
    #   volumeSize: 20Gi        # Prometheus PVC size.
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1             # AlertManager Replicas.
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:                        # GPU monitoring-related plug-in installation.
      nvidia_dcgm_exporter:     # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly.
        enabled: false          # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes.
        # resources: {}
  multicluster:
    clusterRole: none           # host | member | none. You can install a solo cluster, or specify it as the Host or Member Cluster.
  network:
    networkpolicy:              # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). Make sure that the CNI network plugin used by the cluster supports NetworkPolicy, e.g. Calico, Cilium, Kube-router, Romana or Weave Net.
      enabled: false            # Enable or disable network policies.
    ippool:                     # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
      type: none                # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
    topology:                   # Use Service Topology to view Service-to-Service communication based on Weave Scope.
      type: none                # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
  openpitrix:                   # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
    store:
      enabled: false            # Enable or disable the KubeSphere App Store.
  servicemesh:                  # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
    enabled: false              # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
    istio:                      # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
      components:
        ingressGateways:
          - name: istio-ingressgateway
            enabled: false
        cni:
          enabled: false
  edgeruntime:                  # Add edge nodes to your cluster and deploy workloads on edge nodes.
    enabled: false
    kubeedge:                   # kubeedge configurations
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:     # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:                   # Provide admission policy and rule management, a validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent.
    enabled: false              # Enable or disable Gatekeeper.
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    # image: 'alpine:3.15'      # There must be an nsenter program in the image
    timeout: 600                # Container timeout, if set to 0, no timeout will be used. The unit is seconds
The roles of the two files:
- kubesphere-installer.yaml: the KubeSphere installer
- cluster-configuration.yaml: the KubeSphere cluster configuration file
We need to edit cluster-configuration.yaml. Remember the StorageClass we created earlier? Note its name:
kubectl get storageclass
Now edit the file:
vim cluster-configuration.yaml
The trailing comment on the persistence setting explains what is expected, so we write the name of our StorageClass, nfs-client, there:
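The same edit can be scripted; a sketch against a stand-in for the relevant lines (a freshly downloaded cluster-configuration.yaml has storageClass: "" under spec.persistence, to be filled with the name reported by kubectl get storageclass):

```shell
# Fill the empty storageClass field with the StorageClass name.
f=$(mktemp)
cat > "$f" << 'EOF'
  persistence:
    storageClass: ""
EOF
sed -i 's/storageClass: ""/storageClass: "nfs-client"/' "$f"
cat "$f"   # persistence.storageClass now reads "nfs-client"
rm -f "$f"
```

Note that run against the real file, this sed replaces every empty storageClass field (there is a second one under monitoring), which matches the file dumped above.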
Installing KubeSphere
First, create the resources defined in kubesphere-installer.yaml:
kubectl apply -f kubesphere-installer.yaml
You can see a number of resources being created.
Then check that the installer pod started successfully:
kubectl get pod -o wide -n kubesphere-system
Follow KubeSphere's installation log with the following command:
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
After a while, output like this indicates a successful installation.
Since I am running on cloud servers, I can reach KubeSphere at any node's public IP address plus the port. The default username/password is admin / P@88w0rd:
On first login you are required to change the admin user's password:
After that, you land on the KubeSphere home page:
Meanwhile, looking at the NFS server's shared directory, you can see KubeSphere's persistent data stored there:
Deploying an application with KubeSphere
Creating a project
KubeSphere organizes management around projects, so we first need to create one. Start by clicking on the workspace:
Create a project
Create a test project:
Creating a project is really just creating a namespace:
Deploying MySQL
Now let's deploy MySQL. Click into the project we just created:
Then click Workloads -> StatefulSets -> Create:
Fill in the details for the test database deployment, then click Next:
Click Add Container:
Search for the desired image and fill in the name of the container to create:
Scrolling down, you can set CPU and memory limits as well as the ports to expose:
Scroll further down, check Environment Variables, then click Create Secret:
Here we set the MySQL password. The Secret's name can be anything, as long as you remember it:
Choose the Default type, then click Add Data:
Set the MySQL root user's password here:
Then click Create:
The Secret we just created is then filled in automatically. Note that the environment variable that sets the MySQL root password cannot be named arbitrarily: the official MySQL image requires it to be MYSQL_ROOT_PASSWORD:
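In plain Kubernetes YAML, what the form builds corresponds roughly to the container fragment below; the Secret name and data key are whatever you chose in the steps above (mysql-secret here is a placeholder), and only the variable name MYSQL_ROOT_PASSWORD is fixed:

```yaml
      containers:
        - name: mysql
          image: mysql:8.0                  # whichever image tag you picked in the form
          env:
            - name: MYSQL_ROOT_PASSWORD     # fixed name, required by the mysql image
              valueFrom:
                secretKeyRef:
                  name: mysql-secret        # the Secret created above (your own name)
                  key: MYSQL_ROOT_PASSWORD  # the data key you added to the Secret
```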
Check the option to synchronize the host time zone:
Click the checkmark ✅ below:
Finally, click Next:
On the next page, click Add Persistent Volume Claim Template:
Fill it in as prompted:
Then click Next:
Click Create:
Click into the MySQL workload we just deployed:
Here you can see the container status and quickly scale the containers:
Once the indicator turns green, the deployment is ready:
This article is from cnblogs (博客园). Author: IT老登. When reposting, please cite the original link: https://www.cnblogs.com/nb-blog/p/17970547