Kubernetes Series (10): Layer-7 Proxying with Ingress

1. Ingress Basics

1.1 Ingress Overview

The Service object built into Kubernetes only provides layer-4 proxying, i.e. exposure in the form of IP:Port. To proxy at layer 7, that is, to bind services to a domain name, a different API is needed: the Ingress API.

- The Ingress API, introduced upstream in v1.11, provides layer-7 proxying
- An Ingress must be bound to a domain name
1.2 How It Works and What It Consists Of

Ingress can be understood as the Service of Services. It consists of two parts:

- Ingress controller
  - A standard with many implementations, of which ingress-nginx is the most common
  - Runs in the cluster as pods
- Ingress policies
  - A set of declarative rules carried in YAML manifests
  - The ingress-controller dynamically generates its configuration file (e.g. nginx.conf) according to these policies
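To make "generates a configuration file from the policies" concrete, here is a sketch of the kind of server block ingress-nginx might render for a rule routing host example.com to service my-svc on port 80. The host, service, and upstream names are hypothetical, and the real generated nginx.conf is far more elaborate:

```nginx
# Hypothetical fragment of the nginx.conf rendered by ingress-nginx
# for an Ingress rule: host example.com -> service my-svc:80
server {
    listen 80;
    server_name example.com;

    location / {
        # the upstream resolves to the pod endpoints behind my-svc
        proxy_pass http://upstream_default_my-svc_80;
        proxy_set_header Host $host;
    }
}
```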
1.3 References

- Ingress-Nginx GitHub repo
- Ingress-Nginx official website
2. Ways to Deploy Ingress

2.1 Overview

Deploying ingress requires thinking about two things:

- The ingress-controller runs as pods, so what is the best way to deploy those pods?
- Ingress solves routing external requests into the cluster, but how should the controller itself be exposed to the outside?

The following sections list the common deployment and exposure patterns; which one to use depends on your actual requirements.
2.2 Deployment + LoadBalancer Service

If ingress is to run on a public cloud, this pattern is a good fit. Deploy the ingress-controller with a Deployment, then create a Service of type LoadBalancer targeting those pods. Most public clouds automatically provision a load balancer for a LoadBalancer Service, usually with a public IP attached. Point your domain's DNS at that address and the cluster's services are exposed to the outside.

Note: this requires purchasing the cloud provider's load-balancer service!
2.2 Deployment+NodePort模式的Service
同样用deployment模式部署ingress-controller,并创建对应的服务,但是type为NodePort。这样,ingress就会暴露在集群节点ip的特定端口上。由于nodeport暴露的端口是随机端口,一般会在前面再搭建一套负载均衡器来转发请求。该方式一般用于宿主机是相对固定的环境ip地址不变的场景。
缺点:
- NodePort方式暴露ingress虽然简单方便,但是NodePort多了一层NAT,在请求量级很大时可能对性能会有一定影响。
- 请求节点会是类似
https://www.xx.com:30076
,其中30076
是kubectl get svc -n ingress-nginx
的svc暴露出来的nodeport
端口
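One way to avoid the random port is to pin nodePort in the controller's Service, within the cluster's node-port range (default 30000-32767). A sketch, with an illustrative port value and the selector labels assumed to match your controller pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443   # pinned instead of randomly assigned (illustrative value)
```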
2.4 DaemonSet + HostNetwork + nodeSelector (Recommended)

Use a DaemonSet together with a nodeSelector to deploy the ingress-controller onto specific nodes, then use HostNetwork to share the host node's network namespace with the pod, so services are reachable directly on the host's ports 80/443. The nodes running the ingress-controller then act much like the edge nodes of a traditional architecture, e.g. the nginx servers at a datacenter's entrance.

Pros:

- The simplest request path of the three patterns, with better performance than the NodePort mode

Cons:

- Because it uses the host node's network and ports directly, each node can run only one ingress-controller pod
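In manifest form, two fields implement this pattern: hostNetwork and nodeSelector. Excerpted from a DaemonSet spec (the label key edgenode matches the node labels applied later in this post):

```yaml
kind: DaemonSet
spec:
  template:
    spec:
      hostNetwork: true      # pod shares the node's network namespace, binding the node's 80/443
      nodeSelector:
        edgenode: 'true'     # schedule only onto the labeled edge nodes
```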
3. Deployment + NodePort Mode

3.1 Install ingress-nginx from the Official Manifest

```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/deploy.yaml
```
3.2 Create the Deployment and Service

- The Service here is defined as ClusterIP (the default, since no type is specified)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tocgenerator-deploy
  namespace: default
  labels:
    app: tocgenerator-deploy
spec:
  replicas: 2
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: tocgenerator-server
  template:
    metadata:
      labels:
        app: tocgenerator-server
    spec:
      containers:
      - name: tocgenerator
        image: lzw5399/tocgenerator:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: tocgenerator-svc
spec:
  selector:
    app: tocgenerator-server
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
```
3.3 Create the TLS Secret for HTTPS

- Option 1: create directly from the certificate files

```shell
kubectl create secret tls mywebsite-secret --key tls.key --cert tls.crt
```
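If you do not have a certificate yet, a self-signed one can be generated first. A sketch using openssl; the CN matches the toc.codepie.fun domain used later in this post:

```shell
# Generate a self-signed certificate/key pair for the ingress host
# (self-signed certs trigger browser warnings; fine for testing)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=toc.codepie.fun"
```

The resulting tls.key and tls.crt files feed directly into the kubectl create secret tls command above.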
- Option 2: create from a YAML manifest

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mywebsite-secret
type: kubernetes.io/tls
data:
  tls.crt: **************************
  tls.key: **************************
```
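Note that the values under data: must be the base64-encoded file contents on a single line (the masked values above stand in for real encodings). For example, with GNU coreutils:

```shell
# base64-encode a file for a Secret's data: field.
# A placeholder file is used here so the example is self-contained;
# in practice you would encode your real tls.crt and tls.key.
printf 'dummy-cert' > demo.crt
base64 -w0 < demo.crt    # -w0 disables line wrapping
```

Option 1 (kubectl create secret tls) performs this encoding for you automatically.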
3.4 Create the Ingress Policy

The Ingress policy must be in the same namespace as the Service it routes to! Note that secretName references the mywebsite-secret created in 3.3.

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tocgenerator-ingress
spec:
  tls:
  - hosts:
    - toc.codepie.fun
    secretName: mywebsite-secret
  rules:
  - host: toc.codepie.fun
    http:
      paths:
      - path: /
        backend:
          serviceName: tocgenerator-svc
          servicePort: 80
```
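The extensions/v1beta1 Ingress API used above was removed in Kubernetes v1.22. On current clusters the same policy is written against networking.k8s.io/v1; a sketch, where ingressClassName: nginx assumes the cluster's ingress-nginx class is named nginx:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tocgenerator-ingress
spec:
  ingressClassName: nginx   # assumes the ingress-nginx class name
  tls:
  - hosts:
    - toc.codepie.fun
    secretName: mywebsite-secret
  rules:
  - host: toc.codepie.fun
    http:
      paths:
      - path: /
        pathType: Prefix    # required in the v1 API
        backend:
          service:
            name: tocgenerator-svc
            port:
              number: 80
```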
3.5 Find the ingress-controller's NodePort and Access the Service

- Check the nodeport exposed by the ingress-controller

```shell
$ kubectl get svc -n ingress-nginx
NAME                       TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   NodePort   10.104.80.142   <none>        80:30122/TCP,443:30577/TCP   21h
```

- Access the service at https://toc.codepie.fun:30577 (30577 is the HTTPS nodeport from the output above)
4. DaemonSet + HostNetwork + nodeSelector Mode (Recommended)

4.1 Overview

To make ingress in Kubernetes highly available while exposing only a single access point outside the cluster, use keepalived to eliminate the single point of failure, and deploy the ingress-controller onto edge nodes as a DaemonSet.

4.2 Edge Nodes

First, what is an edge node (Edge Node)? An edge node is a node inside the cluster that exposes the cluster's services to the outside; external clients reach in-cluster services through it. It is the endpoint where the cluster meets the outside world.

Edge nodes raise two concerns:

- High availability: there must be no single point of failure, or every service the cluster exposes becomes unreachable
- A consistent external entry point: a single external IP and port
4.3 Architecture

We use keepalived to satisfy these edge-node requirements.

After creating the ingress in Kubernetes, add an A record in DNS whose name is the host in your ingress and whose IP is the keepalived VIP. External clients can then reach your services by domain name, and the single point of failure is eliminated.

Choose some Kubernetes nodes to serve as edge nodes and install keepalived on them.
4.4 Install keepalived

Note: keepalived must be installed on every machine that will act as an edge node, typically the worker nodes.

- Install keepalived

```shell
yum install -y keepalived
```

- Edit the keepalived configuration, changing the value shown in the original post's screenshot to edgenode (screenshot not reproduced here)
- Start keepalived and enable it at boot

```shell
systemctl start keepalived
systemctl enable keepalived
```
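Since the post's keepalived.conf screenshot is missing, here is a minimal sketch of what such a config typically looks like. The interface name, virtual_router_id, priorities, and VIP are all assumptions to adjust for your environment:

```
! /etc/keepalived/keepalived.conf (sketch)
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the other edge node
    interface eth0          # NIC that will carry the VIP (assumption)
    virtual_router_id 51
    priority 100            # lower value (e.g. 90) on the BACKUP node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.40.0.200         # the VIP your DNS A record points at (assumption)
    }
}
```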
4.5 Install ingress-nginx-controller

- Label the edge nodes

```shell
kubectl label nodes k8s-node01 edgenode=true
kubectl label nodes k8s-node02 edgenode=true
```

- Install ingress-nginx-controller as a DaemonSet
  - Note: the manifest below is long but can be copied as-is; it creates the full set of resources the ingress-nginx-controller needs
  - Save the manifest as ingress-nginx.yaml
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        edgenode: 'true'
      containers:
      - name: nginx-ingress-controller
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.20.0
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        # livenessProbe:
        #   failureThreshold: 3
        #   httpGet:
        #     path: /healthz
        #     port: 10254
        #     scheme: HTTP
        #   initialDelaySeconds: 10
        #   periodSeconds: 10
        #   successThreshold: 1
        #   timeoutSeconds: 1
        # readinessProbe:
        #   failureThreshold: 3
        #   httpGet:
        #     path: /healthz
        #     port: 10254
        #     scheme: HTTP
        #   periodSeconds: 10
        #   successThreshold: 1
        #   timeoutSeconds: 1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - "extensions"
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - "extensions"
  resources:
  - ingresses/status
  verbs:
  - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  resourceNames:
  # Defaults to "<election-id>-<ingress-class>"
  # Here: "<ingress-controller-leader>-<nginx>"
  # This has to be adapted if you change either parameter
  # when launching the nginx-ingress-controller.
  - "ingress-controller-leader-nginx"
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - endpoints
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
```
- Apply the manifest

```shell
kubectl apply -f ingress-nginx.yaml
```

4.6 Verify the Installation

```shell
[root@k8s-master01 ingress]# kubectl get ds -n ingress-nginx
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
nginx-ingress-controller   2         2         2       2            2           edgenode=true   57m
[root@k8s-master01 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                    READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE
default-http-backend-86569b9d95-x4bsn   1/1     Running   12         24d   172.17.65.6   10.40.0.105   <none>
nginx-ingress-controller-5b7xg          1/1     Running   0          58m   10.40.0.105   10.40.0.105   <none>
nginx-ingress-controller-b5mxc          1/1     Running   0          58m   10.40.0.106   10.40.0.106   <none>
```