K8s Learning (10) -- Helm

Learning goals: understand how Helm works, customize Helm templates, and use Helm to deploy some common add-ons.

I. Helm is the officially provided package manager for Kubernetes, similar to yum: it packages up the workflow of deploying an environment. Helm has two important concepts: chart and release.

  A. A chart is the collection of information needed to create an application: configuration templates for the various Kubernetes objects, parameter definitions, dependencies, documentation, and so on. A chart is the self-contained logical unit of application deployment; think of it as a software package in apt or yum.

  B. A release is a running instance of a chart and represents a running application. When a chart is installed into a Kubernetes cluster, a release is created. The same chart can be installed into the same cluster many times, and each installation is a separate release.
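
  For orientation, a minimal chart on disk is just a small directory tree; this is the layout built by hand in the hands-on steps below (the file roles are standard Helm conventions):

    hello-world/
      Chart.yaml        # chart name and version
      values.yaml       # default configuration values
      templates/        # Kubernetes manifest templates rendered with the values
        deployment.yaml
        service.yaml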


    The Helm client is responsible for creating and managing charts and releases and for talking to Tiller. The Tiller server runs inside the Kubernetes cluster; it handles requests from the Helm client and interacts with the Kubernetes API Server.

  C. Deploying Helm

    1. Download the package: wget https://storage.googleapis.com/kubernetes-helm/helm-v2.13.1-linux-amd64.tar.gz

    2. Extract it: tar -zxvf helm-v2.13.1-linux-amd64.tar.gz

    3. Copy the helm binary from the extracted directory to /usr/local/bin/: cp -a linux-amd64/helm /usr/local/bin/

    4. Make it executable: chmod a+x /usr/local/bin/helm

    5. Create the ServiceAccount used by Tiller and bind it to the cluster-admin ClusterRole (manifest below; the apply command follows it)

      kind: ServiceAccount
      apiVersion: v1
      metadata:
        name: tiller
        namespace: kube-system
      ---
      kind: ClusterRoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: tiller
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: cluster-admin
      subjects:
        - kind: ServiceAccount
          name: tiller
          namespace: kube-system
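
      Assuming the manifest above is saved as tiller-rbac.yaml (the file name is arbitrary), it can be applied and checked with:

        kubectl apply -f tiller-rbac.yaml
        kubectl get sa tiller -n kube-system    # confirm the ServiceAccount exists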

    6. Initialize Helm

      helm init --service-account tiller --skip-refresh
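
      To confirm that Tiller came up, something like the following works (the exact pod name will differ):

        kubectl get pods -n kube-system | grep tiller    # wait for the tiller-deploy pod to be Running
        helm version                                     # should print both Client and Server versions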

    7. Hands-on: build a minimal chart

      a. mkdir test && cd test

      b. vim Chart.yaml

        name: hello-world
        version: v1.0.0

      c. vim values.yaml

        image:
          repository: wangyanglinux/myapp
          tag: 'v2'

      d. mkdir templates && cd templates

      e. vim deployment.yaml

        kind: Deployment
        apiVersion: extensions/v1beta1
        metadata:
          name: hello-world-deployment
        spec:
          replicas: 1
          template:
            metadata:
              labels:
                app: hello-world
            spec:
              containers:
                - name: hello-world-container
                  image: {{.Values.image.repository}}:{{.Values.image.tag}}
                  ports:
                  - containerPort: 80
                    protocol: TCP

      f. vim service.yaml

        kind: Service
        apiVersion: v1
        metadata:
          name: hello-world-service
        spec:
          type: NodePort
          ports:
            - port: 80
              targetPort: 80
              protocol: TCP
          selector:
            app: hello-world

      g. Go back to the chart root (cd ..) and install: helm install .

      h. Upgrade:

        1) helm upgrade <release-name> .

        2) helm upgrade <release-name> --set key=value . (e.g. helm upgrade singing-clam --set image.tag='v3' .)
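
      A few other Helm v2 commands that are handy for managing the release created above (singing-clam is just the example release name from the upgrade step):

        helm list                            # list installed releases
        helm status singing-clam             # show the Kubernetes resources that belong to a release
        helm history singing-clam            # list the revisions of a release
        helm rollback singing-clam 1         # roll back to revision 1
        helm delete --purge singing-clam     # delete the release and its revision history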

II. Deploying the Kubernetes dashboard with Helm

  A. cd /usr/local/install-k8s/plugin/ && mkdir dashboard && cd dashboard

  B. helm repo update

  C. helm fetch stable/kubernetes-dashboard

  D. tar -zxvf xxxxxxx (the downloaded archive)

  E. Enter the extracted directory

  F. vim kubernetes-dashboard.yaml (values overrides for the chart)

    image:
      repository: k8s.gcr.io/kubernetes-dashboard-amd64
      tag: v1.10.1
    ingress:
      enabled: true
      hosts:
        - k8s.frognew.com
      annotations:
        nginx.ingress.kubernetes.io/ssl-redirect: "true"
        nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
      tls:
        - secretName: frognew-com-tls-secret
          hosts:
          - k8s.frognew.com
    rbac:
      clusterAdminRole: true

  G. helm install . -n kubernetes-dashboard --namespace kube-system -f kubernetes-dashboard.yaml

  H. kubectl edit svc kubernetes-dashboard -n kube-system

    Change .spec.type to NodePort

  I. Open the dashboard in a Firefox browser

  J. Get the login token

    1. kubectl -n kube-system get secret | grep kubernetes-dashboard-token

    2. kubectl describe secret <secret-name> -n kube-system
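
    With the Service switched to NodePort in step H, the dashboard is reachable on every node; a quick way to look up the port:

      kubectl get svc kubernetes-dashboard -n kube-system    # note the NodePort mapped to port 443
      # then open https://<any-node-ip>:<nodePort> and sign in with the token from step J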

III. Deploying Prometheus (kube-prometheus)

  A. Components:

    1. Metrics Server: the aggregator of resource-usage data in a Kubernetes cluster; it collects metrics for consumers inside the cluster such as kubectl, the HPA, and the scheduler.

    2. Prometheus Operator: a toolkit for system monitoring and alerting; it deploys and manages the Prometheus instances that store the monitoring data.

    3. NodeExporter: exposes key metrics about the state of each node.

    4. kube-state-metrics: collects the state of the resource objects in the Kubernetes cluster, which alerting rules can then be built on.

    5. Prometheus: pulls (scrapes) metrics over HTTP from components such as the apiserver, scheduler, controller-manager, and kubelet.

    6. Grafana: a platform for visualizing statistics and monitoring data.

  B. Installing Prometheus

    1. Go to /usr/local/install-k8s/plugin/prometheus

    2. git clone https://github.com/coreos/kube-prometheus.git

    3. cd kube-prometheus/manifests

    4. Edit grafana-service.yaml to expose Grafana through a NodePort:

      apiVersion: v1
      kind: Service
      metadata:
        labels:
          app: grafana
        name: grafana
        namespace: monitoring
      spec:
        type: NodePort
        ports:
          - name: http
            port: 3000
            targetPort: http
            nodePort: 30100
        selector:
          app: grafana

    5. Likewise edit prometheus-service.yaml:

      apiVersion: v1
      kind: Service
      metadata:
        labels:
          prometheus: k8s
        name: prometheus-k8s
        namespace: monitoring
      spec:
        type: NodePort
        ports:
          - name: web
            port: 9090
            targetPort: web
            nodePort: 30200
        selector:
          app: prometheus
          prometheus: k8s
        sessionAffinity: ClientIP

    6. Likewise edit alertmanager-service.yaml:

      apiVersion: v1
      kind: Service
      metadata:
        labels:
          alertmanager: main
        name: alertmanager-main
        namespace: monitoring
      spec:
        type: NodePort
        ports:
          - name: web
            port: 9093
            targetPort: web
            nodePort: 30300
        selector:
          alertmanager: main
          app: alertmanager
        sessionAffinity: ClientIP
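
    After editing the three Services, the stack can be created from the manifests directory. A minimal sketch (with some kube-prometheus versions the apply has to be run twice so the CRDs are registered before the custom resources that use them):

      kubectl apply -f .               # run from the kube-prometheus/manifests directory
      kubectl get pods -n monitoring   # wait until all pods are Running
      # Grafana:      http://<node-ip>:30100
      # Prometheus:   http://<node-ip>:30200
      # Alertmanager: http://<node-ip>:30300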

IV. HPA (Horizontal Pod Autoscaler)

  A. Hands-on

    1. kubectl run php-apache --image=gcr.io/google_containers/hpa-example --requests=cpu=200m --expose --port=80

    2. Create the HPA controller

      kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10

    3. Generate load and watch the number of replicas

      kubectl run -i --tty load-generator --image=busybox /bin/sh

      while true; do wget -q -O- http://php-apache.default.svc.cluster.local; done
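
    While the load generator runs, the scale-out can be watched from another terminal (after the load stops, the HPA scales back down once its cooldown period passes):

      kubectl get hpa php-apache -w            # current CPU utilization vs. target, and replica count
      kubectl get deployment php-apache -w     # replica count of the Deployment itself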

V. Resource limits: Pods

  A. Kubernetes enforces resource limits through cgroups. A cgroup is a set of related kernel attributes that control how the processes in the group are run; there are cgroup controllers for memory, CPU, and various devices.

  B. By default a pod runs with no CPU or memory limits, which means any pod in the system can consume as much CPU and memory as the node it runs on provides. Resource limits are usually set per application through the requests and limits fields of resources: requests is the amount of resources reserved for the container (used for scheduling), limits is the maximum it may consume. They can loosely be thought of as the initial value and the maximum value.

  C. Example

    spec:
      containers:
        - name: auth
          image: xxxx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              protocol: TCP
          resources:
            limits:
              cpu: "4"
              memory: 2Gi
            requests:
              cpu: 250m
              memory: 250Mi
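
    The requests/limits combination also determines the pod's QoS class (Guaranteed, Burstable, or BestEffort), which can be checked once the pod is running; <pod-name> below is a placeholder:

      kubectl get pod <pod-name> -o jsonpath='{.status.qosClass}'
      kubectl describe pod <pod-name>    # also shows the effective requests and limits per container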

VI. Resource limits: namespaces

  A. Compute resource quota

    kind: ResourceQuota
    apiVersion: v1
    metadata:
      name: compute-resources
      namespace: spark-cluster
    spec:
      hard:
        pods: "20"
        requests.cpu: "20"
        requests.memory: 100Gi
        limits.cpu: "40"
        limits.memory: 200Gi

  B. Object count quota

    kind: ResourceQuota
    apiVersion: v1
    metadata:
      name: object-counts
      namespace: spark-cluster
    spec:
      hard:
        configmaps: "10"
        persistentvolumeclaims: "4"
        replicationcontrollers: "20"
        secrets: "10"
        services: "10"
        services.loadbalancers: "2"

  C. Configure CPU and memory defaults with a LimitRange

    default sets the default limit; defaultRequest sets the default request.

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: mem-limit-range
    spec:
      limits:
        - default:
            memory: 50Gi
            cpu: 5
          defaultRequest:
            memory: 1Gi
            cpu: 1
          type: Container
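
  To apply and check the objects above, a sketch (the file names compute-resources.yaml, object-counts.yaml and mem-limit-range.yaml are assumptions):

    kubectl create namespace spark-cluster
    kubectl apply -f compute-resources.yaml -f object-counts.yaml
    kubectl apply -f mem-limit-range.yaml -n spark-cluster      # the LimitRange above declares no namespace
    kubectl describe quota -n spark-cluster                     # shows used vs. hard for both quotas
    kubectl describe limitrange mem-limit-range -n spark-cluster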

VII. Deploying EFK

  A. Add the Google incubator repository

    helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator

  B. Create the efk namespace

    kubectl create ns efk

  C. Deploy Elasticsearch

    1. helm fetch incubator/elasticsearch

    2. Edit the values file: set the replica counts to 1 and disable persistence/PVs (the lab machine cannot handle more)

    3. helm install --name els1 --namespace=efk -f values.yaml .
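
    As an optional check, a throwaway pod can query the Elasticsearch Service. The Service name below is an assumption based on the els1 release name and the chart's usual naming; use whatever kubectl get svc -n efk actually shows:

      kubectl get pods -n efk
      kubectl get svc -n efk
      kubectl run es-test --rm -i --tty --image=busybox /bin/sh
      # inside the pod (assumed Service name):
      # wget -q -O- http://els1-elasticsearch-client.efk.svc.cluster.local:9200/_cat/nodes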

 

  D. Deploy Fluentd

    1. helm fetch stable/fluentd-elasticsearch

    2. Edit the values file and set the Elasticsearch access address

    3. helm install --name flu1 --namespace=efk -f values.yaml .

  E. Deploy Kibana

    1. helm fetch stable/kibana --version 0.14.8

    2. Edit the Elasticsearch address in the values file

    3. helm install --name kib1 --namespace=efk -f values.yaml .
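
    Kibana's Service is ClusterIP by default; one way to reach it from outside is to switch it to NodePort, as was done for the dashboard earlier. The Service name follows the release name and may differ, so check it first:

      kubectl get svc -n efk
      kubectl edit svc kib1-kibana -n efk    # change .spec.type to NodePort (kib1-kibana is an assumed name)
      # then browse http://<node-ip>:<nodePort>; Kibana listens on port 5601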
