Kubernetes --- Helm and Other Functional Components (Dashboard, etc.)
⒈ What is Helm
Before Helm, deploying an application to Kubernetes meant creating the Deployment, Service, and other objects one by one, which is tedious. As more projects move to microservices, deploying and managing complex applications in containers becomes even harder. Helm packages these resources together and supports release versioning and control, which greatly simplifies deploying and managing Kubernetes applications.
In essence, Helm makes the management of Kubernetes applications (Deployment, Service, and so on) configurable and dynamic: it renders the Kubernetes resource manifests (deployment.yaml, service.yaml) from templates and then invokes kubectl to deploy the resources automatically.
Helm is the officially provided package manager for Kubernetes, similar to YUM; it encapsulates the deployment workflow. Helm has two important concepts: chart and release.
A chart is the collection of information needed to create an application, including configuration templates for the various Kubernetes objects, parameter definitions, dependencies, and documentation. A chart is the self-contained logical unit of an application deployment; think of it as a software package in apt or yum.
A release is a running instance of a chart and represents a running application. When a chart is installed into a Kubernetes cluster, a release is created. The same chart can be installed into the same cluster multiple times, and each installation is a separate release.
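As a hedged illustration of the chart/release relationship (the release names web-a and web-b and the chart stable/nginx-ingress are only examples, not part of the original steps), installing the same chart twice yields two independent releases:
helm install --name web-a stable/nginx-ingress
helm install --name web-b stable/nginx-ingress
helm ls   # shows two releases, web-a and web-b, both created from the same chart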
Helm consists of two components: the Helm client and the Tiller server.
The Helm client is responsible for creating and managing charts and releases and for communicating with Tiller. The Tiller server runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API Server.
⒉ Deploying Helm
More and more companies and teams are adopting Helm, the Kubernetes package manager, and we will also use Helm to install common Kubernetes components. Helm consists of the helm command-line client and the server-side tiller, and installation is simple. Download the helm command-line tool to /usr/local/bin on the master node (node1); here we use version 2.13.1:
ntpdate ntp1.aliyun.com
wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz
wget https://get.helm.sh/helm-v3.2.0-linux-amd64.tar.gz
tar -zxvf helm-v2.13.1-linux-amd64.tar.gz
cd linux-amd64/
cp -a helm /usr/local/bin/
chmod a+x /usr/local/bin/helm
Create rbac-config.yaml with a ServiceAccount for Tiller and a ClusterRoleBinding to the cluster-admin role:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
kubectl create -f rbac-config.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
helm init --service-account tiller --skip-refresh
⒊ By default, Tiller is deployed into the kube-system namespace of the Kubernetes cluster
$ kubectl get pod -n kube-system -l app=helm
NAME                            READY   STATUS    RESTARTS   AGE
tiller-deploy-c4fd4cd68-dwkhv   1/1     Running   0          83s
$ helm version
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f837a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f837a5d39fbb4", GitTreeState:"clean"}
⒋ Custom Helm charts
# Create a directory for the chart
$ mkdir ./hello-world
$ cd ./hello-world
# Create the self-describing file Chart.yaml; it must define name and version
$ cat <<'EOF' > ./Chart.yaml
name: hello-world
version: 1.0.0
EOF
# Create the template files used to render the Kubernetes resource manifests; the template directory must be named templates
$ mkdir ./templates
$ cat <<'EOF' > ./templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hub.coreqi.cn/library/myapp:v1
          ports:
            - containerPort: 80
              protocol: TCP
EOF
$ cat <<'EOF' > ./templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  selector:
    app: hello-world
EOF
# Use helm install RELATIVE_PATH_TO_CHART to create a release; helm install . installs from the current directory
$ helm install .
# List the deployed releases
$ helm ls
# Query the status of a specific release
$ helm status RELEASE_NAME
# Remove all Kubernetes resources associated with this release
$ helm delete cautious-shrimp
# helm rollback RELEASE_NAME REVISION_NUMBER
$ helm rollback cautious-shrimp 1
# helm delete --purge RELEASE_NAME removes all Kubernetes resources of the release and all records of the release itself
$ helm delete --purge cautious-shrimp
$ helm ls --deleted
# Configuration lives in values.yaml
$ cat <<'EOF' > ./values.yaml
image:
  repository: gcr.io/google-samples/node-hello
  tag: '1.0'
EOF
# Values defined in this file are available in the templates through the .Values object
$ cat <<'EOF' > ./templates/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
          ports:
            - containerPort: 8080
              protocol: TCP
EOF
# Values in values.yaml can be overridden at deploy time with --values YAML_FILE_PATH or --set key1=value1,key2=value2
$ helm install --set image.tag="latest" .
# Upgrade a release to a new revision
$ helm upgrade -f values.yaml test .
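As a hedged sketch of the --values form (the file name prod-values.yaml, the release name test, and the tag value are only examples), you can keep an environment-specific override file and pass it at install or upgrade time:
$ cat <<'EOF' > ./prod-values.yaml
image:
  repository: gcr.io/google-samples/node-hello
  tag: '2.0'
EOF
$ helm install --name test --values ./prod-values.yaml .
$ helm upgrade --values ./prod-values.yaml test .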
⒌ Debug
# When templates are used to generate the Kubernetes manifests dynamically, it is very useful to preview the rendered result first.
# Use --dry-run --debug to print the rendered manifests without actually deploying anything
$ helm install . --dry-run --debug --set image.tag=latest
⒍ Deploying the dashboard with Helm
helm fetch stable/kubernetes-dashboard
tar -zxvf kubernetes-dashboard-1.8.0.tgz
kubernetes-dashboard.yaml:
image:
  repository: k8s.gcr.io/kubernetes-dashboard-amd64
  tag: v1.10.1
ingress:
  enabled: true
  hosts:
    - k8s.frognew.com
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  tls:
    - secretName: frognew-com-tls-secret
      hosts:
        - k8s.frognew.com
rbac:
  clusterAdminRole: true
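The tls section above references a secret named frognew-com-tls-secret that the chart does not create. A hedged sketch of creating it beforehand, assuming you already have a certificate and key for k8s.frognew.com saved as tls.crt and tls.key:
kubectl create secret tls frognew-com-tls-secret \
  --cert=tls.crt --key=tls.key \
  -n kube-system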
helm install stable/kubernetes-dashboard -n kubernetes-dashboard --namespace kube-system -f kubernetes-dashboard.yaml
kubectl -n kube-system get secret | grep kubernetes-dashboard-token
kubernetes-dashboard-token-pkm2s   kubernetes.io/service-account-token   3   3m7s
kubectl describe -n kube-system secret/kubernetes-dashboard-token-pkm2s
Name:         kubernetes-dashboard-token-pkm2s
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubernetes-dashboard
              kubernetes.io/service-account.uid: 2f0781dd-156a-11e9-b0f0-080027bb7c43
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9... (rest of the long base64 token omitted)
kubectl edit svc kubernetes-dashboard -n kube-system
# change the Service type from ClusterIP to NodePort
type: NodePort
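After switching to NodePort, a hedged way to find the assigned port and reach the dashboard (log in with the token retrieved above):
kubectl get svc kubernetes-dashboard -n kube-system
# access https://<NodeIP>:<NodePort> in a browser and sign in with the token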
⒎ Deploying metrics-server with Helm
As the Heapster GitHub page <https://github.com/kubernetes/heapster> shows, Heapster has been DEPRECATED.
The Heapster deprecation timeline shows that starting with Kubernetes 1.12, Heapster is removed from the various Kubernetes installation scripts. Kubernetes recommends metrics-server instead, so here we also deploy metrics-server with Helm.
metrics-server.yaml:
args:
- --logtostderr
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
helm install stable/metrics-server -n metrics-server --namespace kube-system -f metrics-server.yaml
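Once the metrics-server pod is running, the metrics API should be registered with the apiserver; a hedged way to verify it (v1beta1.metrics.k8s.io is the APIService that metrics-server normally registers):
kubectl get pod -n kube-system | grep metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io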
Use the following command to get basic metrics about the cluster nodes:
kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node1   650m         32%    1276Mi          73%
node2   73m          3%     527Mi           30%
kubectl top pod --all-namespaces
NAMESPACE       NAME                                             CPU(cores)   MEMORY(bytes)
ingress-nginx   nginx-ingress-controller-6f5687c58d-jdxzk        3m           142Mi
ingress-nginx   nginx-ingress-controller-6f5687c58d-lxj5q        5m           146Mi
ingress-nginx   nginx-ingress-default-backend-6dc6c46dcc-lf882   1m           4Mi
kube-system     coredns-86c58d9df4-k5jkh                         2m           15Mi
kube-system     coredns-86c58d9df4-rw6tt                         3m           23Mi
kube-system     etcd-node1                                       20m          86Mi
kube-system     kube-apiserver-node1                             33m          468Mi
kube-system     kube-controller-manager-node1                    29m          89Mi
kube-system     kube-flannel-ds-amd64-8nr5j                      2m           13Mi
kube-system     kube-flannel-ds-amd64-bmncz                      2m           21Mi
kube-system     kube-proxy-d5gxv                                 2m           18Mi
kube-system     kube-proxy-zm29n                                 2m           16Mi
kube-system     kube-scheduler-node1                             8m           28Mi
kube-system     kubernetes-dashboard-788c98d699-qd2cx            2m           16Mi
kube-system     metrics-server-68785fbcb4-k4g9v                  3m           12Mi
kube-system     tiller-deploy-c4fd4cd68-dwkhv                    1m           24Mi
⒏ Deploying Prometheus
1. Clone the kube-prometheus repository:
git clone https://github.com/coreos/kube-prometheus.git
cd /root/kube-prometheus/manifests
2. Modify grafana-service.yaml so that Grafana is accessed through a NodePort:
vim grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  type: NodePort        # added
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 30100     # added
  selector:
    app: grafana
3. Modify prometheus-service.yaml to use a NodePort as well:
vim prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 30200
  selector:
    app: prometheus
    prometheus: k8s
4. Modify alertmanager-service.yaml to use a NodePort:
vim alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 30300
  selector:
    alertmanager: main
    app: alertmanager
kubectl get service -n monitoring | grep grafana
grafana   NodePort   10.107.56.143   <none>   3000:30100/TCP   20h
As shown above, Grafana is exposed on port 30100. Open http://MasterIP:30100 in a browser; the default username and password are admin/admin.
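Prometheus and Alertmanager were exposed the same way on NodePorts 30200 and 30300; a hedged check that lists all of the monitoring NodePorts at once:
kubectl get svc -n monitoring
# Prometheus: http://MasterIP:30200   Alertmanager: http://MasterIP:30300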
6. Horizontal Pod Autoscaling (HPA)
Horizontal Pod Autoscaling automatically scales the number of Pods in a Replication Controller, Deployment, or ReplicaSet based on CPU utilization.
kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m --expose --port=80
Create the HPA controller; for details of the scaling algorithm, refer to the relevant documentation.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
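As a hedged alternative to the imperative command above, the same autoscaler can also be written as a manifest (this YAML is an equivalent sketch, not part of the original steps):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50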
Increase the load and watch the number of pods:
$ kubectl run -i --tty load-generator --image=busybox /bin/sh
$ while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
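While the load generator is running, a hedged way to watch the autoscaler raise the replica count:
kubectl get hpa php-apache -w
kubectl get deployment php-apache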
7. Resource limits - Pod
spec:
  containers:
  - image: xxxx
    imagePullPolicy: Always
    name: auth
    ports:
    - containerPort: 8080
      protocol: TCP
    resources:
      limits:
        cpu: "4"
        memory: 2Gi
      requests:
        cpu: 250m
        memory: 250Mi
1. Compute resource quota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
  namespace: spark-cluster
spec:
  hard:
    pods: "20"
    requests.cpu: "20"
    requests.memory: 100Gi
    limits.cpu: "40"
    limits.memory: 200Gi
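A hedged way to apply the quota and check how much of it is in use (the file name compute-resources.yaml is assumed, and the spark-cluster namespace must already exist):
kubectl apply -f compute-resources.yaml
kubectl describe resourcequota compute-resources -n spark-cluster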
2. Object count quota
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
  namespace: spark-cluster
spec:
  hard:
    configmaps: "10"
    persistentvolumeclaims: "4"
    replicationcontrollers: "20"
    secrets: "10"
    services: "10"
    services.loadbalancers: "2"
3. CPU and memory LimitRange
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 50Gi
      cpu: 5
    defaultRequest:
      memory: 1Gi
      cpu: 1
    type: Container
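Similarly, a hedged sketch of applying and inspecting the LimitRange (the file name and the target namespace spark-cluster are assumptions; containers created in that namespace without explicit resources then receive these defaults):
kubectl apply -f mem-limit-range.yaml -n spark-cluster
kubectl describe limitrange mem-limit-range -n spark-cluster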
⒐ Deploying the EFK stack
1. Add the Google incubator repository
helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
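After adding the repository, refreshing the local chart index is a reasonable (hedged) extra step before fetching charts:
helm repo update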
2. Deploy Elasticsearch
kubectl create namespace efk
helm fetch incubator/elasticsearch
tar -zxvf elasticsearch-1.10.2.tgz
helm install --name els1 --namespace=efk -f values.yaml incubator/elasticsearch
kubectl run cirror-$RANDOM --rm -it --image=cirros -- /bin/sh
curl Elasticsearch:Port/_cat/nodes
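In the curl command above, Elasticsearch:Port is a placeholder. A hedged way to resolve it (the exact client service name depends on how the chart names its services, and 9200 is the usual Elasticsearch HTTP port):
kubectl get svc -n efk
# inside the cirros test pod, substitute the ClusterIP of the Elasticsearch client service:
curl http://<CLUSTER-IP>:9200/_cat/nodes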
3. Deploy Fluentd
helm fetch stable/fluentd-elasticsearch
tar -zxvf fluentd-elasticsearch-2.0.7.tgz
vim values.yaml   # change the Elasticsearch host to the address of the Elasticsearch service
kubectl get svc -n efk   # use the CLUSTER-IP of the Elasticsearch service
helm install --name flu1 --namespace=efk -f values.yaml stable/fluentd-elasticsearch
4. Deploy Kibana
helm fetch stable/kibana --version 0.14.8
tar -zxvf kibana-0.14.8.tgz
# change the Elasticsearch URL in values.yaml
helm install --name kib1 --namespace=efk -f values.yaml stable/kibana --version 0.14.8
5. Configure external access
kubectl edit svc kib1-kibana -n efk
# change the Service type from ClusterIP to NodePort
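Once the type is NodePort, a hedged way to find the assigned port and open Kibana (the service name kib1-kibana follows the release name used above):
kubectl get svc kib1-kibana -n efk
# then browse to http://<NodeIP>:<NodePort>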