Managing k8s Clusters with Rancher 2.6

I. Introduction to Rancher

1. What is Rancher

  Rancher is an open-source, enterprise-grade multi-cluster Kubernetes management platform. It centralizes the deployment and management of Kubernetes clusters across hybrid clouds and on-premises data centers, keeps those clusters secure, and helps accelerate enterprise digital transformation.

  Official Rancher documentation: https://docs.rancher.cn/

2. How Rancher relates to k8s

  Rancher and k8s are both container scheduling and orchestration systems, but Rancher does more than manage application containers: crucially, it also manages k8s clusters themselves. Rancher 2.x is built on the k8s scheduling engine; through Rancher's abstractions, users can deploy containers to a k8s cluster without being familiar with k8s concepts.

  To make this possible, Rancher ships its own set of components for managing k8s, including the Rancher API Server, Cluster Controller, Cluster Agent, Node Agent, and others. These components cooperate to give Rancher control over each k8s cluster, consolidating the management and use of multiple clusters into a single Rancher platform. Rancher also extends several k8s features and presents them in a user-friendly way.

  In short, Rancher extends k8s with convenient tooling for interacting with clusters: running commands, managing multiple k8s clusters, viewing the state of cluster nodes, and so on.

II. Installing Rancher

1. Lab environment setup

1) Configure the hosts file

  Configure the hosts file on each of the nodes rancher-admin, k8s-master1, k8s-node1, and k8s-node2 with the following content:

cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.130  rancher-admin
10.0.0.131  k8s-master1
10.0.0.132  k8s-node1
10.0.0.133  k8s-node2

2) Set up SSH trust from rancher-admin to the k8s hosts

  Generate an SSH key pair; press Enter through the prompts and do not set a passphrase:

[root@rancher-admin ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)?

  Install the local SSH public key into the matching account on each remote host:

[root@rancher-admin ~]# ssh-copy-id rancher-admin
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'rancher-admin (10.0.0.130)' can't be established.
ECDSA key fingerprint is SHA256:J9UnR8HG9Iws8xvmhv4HMjfjJUgOGgEV/3yQ/kFT87c.
ECDSA key fingerprint is MD5:af:38:29:b9:6b:1c:eb:03:bd:93:ad:0d:5a:68:4d:06.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.
                (if you think this is a mistake, you may want to use -f option)

[root@rancher-admin ~]# ssh-copy-id k8s-master1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-master1 (10.0.0.131)' can't be established.
ECDSA key fingerprint is SHA256:O2leSOvudbcqIRBokjf4cUtbvjzdf/Yl49VkIQGfLxE.
ECDSA key fingerprint is MD5:de:41:d0:68:53:e3:08:09:b0:7a:55:2e:b6:1d:af:d3.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-master1'"
and check to make sure that only the key(s) you wanted were added.

[root@rancher-admin ~]# ssh-copy-id k8s-node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node1 (10.0.0.132)' can't be established.
ECDSA key fingerprint is SHA256:O2leSOvudbcqIRBokjf4cUtbvjzdf/Yl49VkIQGfLxE.
ECDSA key fingerprint is MD5:de:41:d0:68:53:e3:08:09:b0:7a:55:2e:b6:1d:af:d3.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node1'"
and check to make sure that only the key(s) you wanted were added.

[root@rancher-admin ~]# ssh-copy-id k8s-node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-node2 (10.0.0.133)' can't be established.
ECDSA key fingerprint is SHA256:O2leSOvudbcqIRBokjf4cUtbvjzdf/Yl49VkIQGfLxE.
ECDSA key fingerprint is MD5:de:41:d0:68:53:e3:08:09:b0:7a:55:2e:b6:1d:af:d3.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'k8s-node2'"
and check to make sure that only the key(s) you wanted were added.

[root@rancher-admin ~]#
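The four ssh-copy-id runs above can be condensed into a loop. A minimal sketch that prints the commands as a dry run (remove the echo to actually run them):

```shell
# Print one ssh-copy-id command per node; drop "echo" to execute them.
nodes="rancher-admin k8s-master1 k8s-node1 k8s-node2"
for host in $nodes; do
  echo ssh-copy-id "root@${host}"
done
```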

3) Firewall and SELinux are disabled

[root@rancher-admin ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@rancher-admin ~]# getenforce
Disabled

4) Swap is disabled

[root@rancher-admin ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3931         286        2832          11         813        3415
Swap:             0           0           0
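Here free -m already shows swap at 0. On a fresh host, swap can be turned off now and kept off across reboots with the following sketch (run as root; standard CentOS fstab layout assumed):

```shell
# Disable swap immediately
swapoff -a
# Comment out uncommented swap entries in /etc/fstab so swap stays off after reboot
sed -i '/^[^#].* swap /s/^/#/' /etc/fstab
```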

5) Enable forwarding

  The br_netfilter module makes bridged traffic traverse the iptables chains; load it and enable the forwarding-related kernel parameters:

[root@rancher-admin ~]# modprobe br_netfilter
[root@rancher-admin ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@rancher-admin ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@rancher-admin ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
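Appending `modprobe br_netfilter` to /etc/profile only takes effect on interactive logins. A more robust alternative on a systemd host is to register the module with systemd-modules-load:

```shell
# Have systemd load br_netfilter automatically at every boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
```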

6) Install docker-ce

[root@rancher-admin ~]# yum install docker-ce docker-ce-cli containerd.io -y
[root@rancher-admin ~]# systemctl start docker && systemctl enable docker.service
# Configure registry mirrors
[root@rancher-admin ~]# cat /etc/docker/daemon.json
{
        "registry-mirrors": ["https://docker.mirrors.ustc.edu.cn","https://reg-mirror.qiniu.com/","https://hub-mirror.c.163.com/","https://registry.docker-cn.com","https://dockerhub.azk8s.cn","http://qtid6917.mirror.aliyuncs.com"],
        "exec-opts": ["native.cgroupdriver=systemd"]
}
# Reload the configuration and restart Docker
[root@rancher-admin ~]# systemctl daemon-reload
[root@rancher-admin ~]# systemctl restart docker
[root@rancher-admin ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2022-12-11 12:44:27 CST; 8s ago
     Docs: https://docs.docker.com
 Main PID: 4708 (dockerd)
    Tasks: 8
   Memory: 25.7M
   CGroup: /system.slice/docker.service
           └─4708 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.124138353+08:00" level=info msg="ccResolverWrapper: sending update to cc: {[{...dule=grpc
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.124150996+08:00" level=info msg="ClientConn switching balancer to \"pick_firs...dule=grpc
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.134861031+08:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.137283344+08:00" level=info msg="Loading containers: start."
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.312680540+08:00" level=info msg="Default bridge (docker0) is assigned with an... address"
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.381480257+08:00" level=info msg="Loading containers: done."
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.396206717+08:00" level=info msg="Docker daemon" commit=3056208 graphdriver(s)...=20.10.21
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.396311425+08:00" level=info msg="Daemon has completed initialization"
Dec 11 12:44:27 rancher-admin systemd[1]: Started Docker Application Container Engine.
Dec 11 12:44:27 rancher-admin dockerd[4708]: time="2022-12-11T12:44:27.425944033+08:00" level=info msg="API listen on /var/run/docker.sock"
Hint: Some lines were ellipsized, use -l to show in full.

2. Install Rancher

  Rancher 2.6.4 supports importing existing k8s 1.23+ clusters, so Rancher 2.6.4 is the version installed here.

  Pull the rancher-agent image on each k8s node ahead of time:

[root@k8s-master1 ~]# docker pull rancher/rancher-agent:v2.6.4
[root@k8s-node1 ~]# docker pull rancher/rancher-agent:v2.6.4
[root@k8s-node2 ~]# docker pull rancher/rancher-agent:v2.6.4 

1) Start the Rancher container

[root@rancher-admin rancher]# docker pull rancher/rancher:v2.6.4
[root@rancher-admin rancher]# docker run -d --restart=unless-stopped -p 80:80 -p 443:443 --privileged rancher/rancher:v2.6.4
0a3209f670cc5c9412d5c34dd20275686c2526865ddfe20b60d65863b346d0d2

Note: with unless-stopped, the container is always restarted when it exits, except for containers that were already stopped when the Docker daemon started.
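The run command above keeps all Rancher state inside the container, so it is lost if the container is removed. If persistence is wanted, a bind mount of /var/lib/rancher can be added; a sketch (the host path /opt/rancher is an arbitrary choice):

```shell
# Variant of the run command with Rancher state persisted on the host
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:v2.6.4
```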

2) Verify that Rancher is running

[root@rancher-admin rancher]# docker ps | grep rancher
0a3209f670cc   rancher/rancher:v2.6.4   "entrypoint.sh"   About a minute ago   Up 46 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   affectionate_rosalind

3) Log in to the Rancher UI

  Open http://10.0.0.130 in a browser.

  Click Advanced; the following page appears.

  Click "Proceed to 10.0.0.130 (unsafe)"; the following page appears:

(1) Get the bootstrap password

  Find the ID of the Rancher container:

[root@rancher-admin rancher]# docker ps | grep rancher
0a3209f670cc   rancher/rancher:v2.6.4   "entrypoint.sh"   6 minutes ago   Up 43 seconds   0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp   affectionate_rosalind

The output shows that the container ID is 0a3209f670cc.

  Run the following command to retrieve the password:

[root@rancher-admin rancher]#  docker logs 0a3209f670cc 2>&1 | grep "Bootstrap Password:"
2022/12/11 05:11:56 [INFO] Bootstrap Password: mgrb9rgbl2gxvgjmz5xdwct899b28swnr4ssfwnmhqhwsqf9fhnwdx

The password retrieved here is mgrb9rgbl2gxvgjmz5xdwct899b28swnr4ssfwnmhqhwsqf9fhnwdx.

  Enter the retrieved password on the login page.

  Click "Log in with Local User"; on the page that appears, choose to set a new password.

(2) Set a new password

(3) Log in

  After clicking Continue, the following is displayed:

(4) Set the UI language

III. Managing an Existing k8s Cluster with Rancher

1. Import an existing k8s cluster

  Choose "Import Existing"; the following page appears.

  Choose "Generic"; the following page appears.

  Enter the cluster name k8s-rancher and click Create.

  The following page appears:

  Copy the command shown in the red box and run it on the k8s control-plane node:

[root@k8s-master1 ~]# curl --insecure -sfL https://10.0.0.130/v3/import/s7l7wzbkj5pnwh7wl7lrjt54l2x659mfhc5qlhmntbjflqx4rdbqsm_c-m-86g26jzn.yaml | kubectl apply -f -
clusterrole.rbac.authorization.k8s.io/proxy-clusterrole-kubeapiserver created
clusterrolebinding.rbac.authorization.k8s.io/proxy-role-binding-kubernetes-master created
namespace/cattle-system created
serviceaccount/cattle created
clusterrolebinding.rbac.authorization.k8s.io/cattle-admin-binding created
secret/cattle-credentials-1692b54 created
clusterrole.rbac.authorization.k8s.io/cattle-admin created
deployment.apps/cattle-cluster-agent created
service/cattle-cluster-agent created

  Verify that the rancher-agent deployed successfully:

[root@k8s-master1 ~]# kubectl get pods -n cattle-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
cattle-cluster-agent-867ff9c57f-ndspc   1/1     Running   0          18s   10.244.159.188   k8s-master1   <none>           <none>
[root@k8s-master1 ~]# kubectl get pods -n cattle-system -o wide
NAME                                    READY   STATUS        RESTARTS   AGE   IP               NODE          NOMINATED NODE   READINESS GATES
cattle-cluster-agent-5967bb5986-rhzz8   1/1     Running       0          39s   10.244.36.88     k8s-node1     <none>           <none>
cattle-cluster-agent-867ff9c57f-ndspc   1/1     Terminating   0          61s   10.244.159.188   k8s-master1   <none>           <none>
[root@k8s-master1 ~]# kubectl get pods -n cattle-system -o wide
NAME                                    READY   STATUS              RESTARTS   AGE   IP             NODE        NOMINATED NODE   READINESS GATES
cattle-cluster-agent-5967bb5986-dbmlj   0/1     ContainerCreating   0          15s   <none>         k8s-node2   <none>           <none>
cattle-cluster-agent-5967bb5986-rhzz8   1/1     Running             0          55s   10.244.36.88   k8s-node1   <none>           <none>
[root@k8s-master1 ~]# kubectl get pods -n cattle-system -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP               NODE        NOMINATED NODE   READINESS GATES
cattle-cluster-agent-5967bb5986-dbmlj   1/1     Running   0          39s   10.244.169.154   k8s-node2   <none>           <none>
cattle-cluster-agent-5967bb5986-rhzz8   1/1     Running   0          79s   10.244.36.88     k8s-node1   <none>           <none>

  Once the cattle-cluster-agent pods are Running, the rancher-agent has been deployed successfully.
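Instead of polling kubectl get pods, the rollout can be waited on directly; a sketch:

```shell
# Block until the agent deployment is fully rolled out, or time out
kubectl -n cattle-system rollout status deployment/cattle-cluster-agent --timeout=120s
```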

  Check the result in the Rancher UI:

  The page at https://10.0.0.130/dashboard/home shows the following:

  This confirms that the k8s cluster has been imported into Rancher; its k8s version is 1.20.6.

2. Deploy a Tomcat service from the Rancher dashboard

  Click the k8s-rancher cluster.

  The following page appears:

1) Create a namespace

2) Create a deployment

  Select the namespace tomcat-test; set the deployment name to tomcat-test, replicas to 2, container name to tomcat-test, image to tomcat:8.5-jre8-alpine, and pull policy to IfNotPresent.

  Add the label app=tomcat, and apply the same app=tomcat label to the pods.

  When everything is set, click Create:

  Verify that the deployment was created successfully.
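For reference, the same deployment can be expressed as a manifest and applied with kubectl apply -f (field values taken from the UI settings above):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-test
  namespace: tomcat-test
  labels:
    app: tomcat
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tomcat
  template:
    metadata:
      labels:
        app: tomcat
    spec:
      containers:
        - name: tomcat-test
          image: tomcat:8.5-jre8-alpine
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
```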

3) Create a service

  Expose the Tomcat pods outside the k8s cluster.

  Choose Node Port.

  Set the service name to tomcat-svc, the port name to tomcat-port, the listening port to 8080, the target port to 8080, and the node port to 30080.

  Add the selector app=tomcat and click Create.
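Equivalently, as a manifest (ports and selector as configured above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
  namespace: tomcat-test
spec:
  type: NodePort
  selector:
    app: tomcat
  ports:
    - name: tomcat-port
      port: 8080
      targetPort: 8080
      nodePort: 30080
```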

  Verify that the service was created successfully:

  Tomcat can now be reached on port 30080 of any k8s node.
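This can also be checked from the command line; any node IP works because the NodePort is opened on every node:

```shell
# Fetch Tomcat's response headers through the NodePort
curl -I http://10.0.0.131:30080/
```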

4) Create an Ingress resource

(1) Install the ingress-nginx controller (layer-7 proxy)

  Download the manifest from https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/baremetal/deploy.yaml and make a few changes. The modified file is shown below:

cat deploy.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx

---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      hostNetwork: true
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/name: ingress-nginx
              topologyKey: kubernetes.io/hostname
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.1.0
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

  Run the following on the k8s-master1 node to install the ingress controller:

[root@k8s-master1 ~]# cd nginx-ingress/
[root@k8s-master1 nginx-ingress]# ll
total 20
-rw-r--r-- 1 root root 19435 Sep 17 16:18 deploy.yaml
[root@k8s-master1 nginx-ingress]# kubectl apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
[root@k8s-master1 nginx-ingress]# kubectl get pods -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-gxx5m        0/1     Completed   0          88s
ingress-nginx-admission-patch-5tfmc         0/1     Completed   1          88s
ingress-nginx-controller-6c8ffbbfcf-rnbtd   1/1     Running     0          89s
ingress-nginx-controller-6c8ffbbfcf-zknjx   1/1     Running     0          89s

(2) Create an ingress rule

  Set the ingress name to tomcat-test, the request host to tomcat-test.example.com, the path to /, the target service to tomcat-svc, and the port to 8080.

  Add the annotation kubernetes.io/ingress.class: nginx.

  Verify that the ingress was created successfully.
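The equivalent Ingress manifest, matching the values entered in the UI:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat-test
  namespace: tomcat-test
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: tomcat-test.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tomcat-svc
                port:
                  number: 8080
```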

(3) Configure the local hosts file

  Add a local hosts entry: append the line 10.0.0.131  tomcat-test.example.com to C:\Windows\System32\drivers\etc\hosts.

(4) Access from a browser

  Open http://tomcat-test.example.com/ in the browser (the ingress-nginx controller runs with hostNetwork, so it listens on port 80 of its node directly). The result is as follows:
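The ingress rule can also be tested without touching the hosts file by supplying the Host header explicitly (10.0.0.131 is assumed here to be a node running the hostNetwork ingress controller):

```shell
# Send the request to a node running the ingress controller,
# with the Host header matching the ingress rule
curl -H "Host: tomcat-test.example.com" http://10.0.0.131/
```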

posted @ 2022-12-11 15:33 出水芙蓉·薇薇