Knative
1. Installation
1.1 kn
kn
https://github.com/knative/client
cd knative/cmd/kn && go build main.go
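Once the binary is built (or installed from a release), a quick sanity check is to print the client version and list services; this assumes your kubeconfig already points at the cluster:
kn version
kn service list -A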
1.2 serving
Install the CRDs (after downloading each manifest, apply it with kubectl apply -f <file>)
curl -LO https://github.com/knative/serving/releases/download/knative-v1.11.1/serving-crds.yaml
Install the core components
curl -LO https://github.com/knative/serving/releases/download/knative-v1.11.1/serving-core.yaml
Install a networking layer plugin (Istio)
Refer to the Istio documentation for installing Istio itself
curl -LO https://github.com/knative/net-istio/releases/download/knative-v1.11.0/net-istio.yaml
1.3 eventing
Install the CRDs
curl -LO https://github.com/knative/eventing/releases/download/knative-v1.11.4/eventing-crds.yaml
Install the core components
curl -LO https://github.com/knative/eventing/releases/download/knative-v1.11.4/eventing-core.yaml
2. Components
serverless: an architecture in which the platform, not the application team, manages the servers
Knative is a platform-agnostic solution for running serverless deployments
It has two core components, Serving and Eventing
2.1 serving
Serving defines a set of objects as Kubernetes Custom Resource Definitions (CRDs). These resources are used to define and control how your serverless workloads behave on the cluster
2.1.1 service
A CRD-backed resource
(base) gu@python:~/下载$ kubectl api-resources |grep knative|grep ksvc
services kservice,ksvc serving.knative.dev/v1 true Service
Automatically manages the whole lifecycle of a workload. It controls the creation of the other objects so that every update to the Service produces a Route, a Configuration and a new Revision. A Service can be defined to always route traffic to the latest revision or to a pinned revision.
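A minimal Service sketch (the image and env value are placeholders in the style used later in these notes); the Configuration, Revision and Route are all derived from it:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: quanheng.com/k8s/hello-go:v1   # placeholder image
        ports:
        - containerPort: 8080
        env:
        - name: TARGET
          value: "hello-world"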
2.1.1.1 Reconcilers
Knative's controller component appears in Kubernetes as a single controller, but it actually contains the implementation of a whole set of Knative features; it is effectively a controller of controllers. To distinguish them, the individual controllers inside this Knative component are referred to here as reconcilers
They fall into two groups
1. Developer-facing
Configuration reconciler
Revision reconciler
Route reconciler
Service reconciler
2. Infrastructure-facing
Label reconciler: sets labels on Kubernetes objects; Istio uses these labels for traffic management
ServerlessService reconciler: responsible for part of the activator's work
GC reconciler
2.1.1.2 webhook
Validates the resources supplied by users, such as a Service
Sets defaults, including timeouts, concurrency limits, container resource limits, garbage-collection windows, and so on
Injects routing and networking configuration into Kubernetes
Validates configuration correctness and fixes up some container image references by adding digests, e.g. resolving a latest tag
2.1.1.3 Network controller
Provides certificates and traffic ingress, e.g. Istio
2.1.1.4 Autoscaler / activator / queue-proxy
1. When the ingress receives a request, it sends it to the configured activator (see section 4, traffic flow)
2. The activator places the request in its buffer
3. The activator reports to the KPA (the autoscaler), including information about the requests waiting in the buffer, and asks for an immediate scale-up
4. Because requests are waiting and no instance is available to serve them, the autoscaler scales up immediately, handing the target to the serving module (which starts instances)
5. Once the serving module has carried out the KPA's instruction, the activator checks whether an instance is available to handle the requests; if so, it proxies the buffered requests to the instance
6. When the instance responds, the proxy component returns the response to the ingress
If enough instances already exist, traffic is routed directly to them without passing through the activator
After the activator has started enough instances, it also updates the route so that traffic goes directly to the instances
2.1.1.5 How the KPA works
The KPA does not scale purely on the raw number of incoming requests; it scales based on how many requests the currently available instances can serve
# Panic mode
Panic mode differs from stable mode and introduces two new rules
1. The metrics window shrinks to 6s, making the autoscaler far more sensitive to traffic (in stable mode the window is 60s)
2. Scale-down requests are ignored until panic mode is exited
# KPA with a small number of instances
In this case traffic has two levels of buffering: the activator and the queue-proxy sidecar. The sidecar both load-balances the traffic and collects metrics, which it reports to the autoscaler; when the traffic is too large to be absorbed, the activator steps in and a scale-up is performed
# KPA at zero instances
See 2.1.1.4 for how the activator works
# KPA with a large number of instances
Traffic flows directly to the queue-proxy and no longer passes through the activator
2.1.1.5.1 Scaling settings
# note: the settings in this ConfigMap live under .data. (here under the _example key)
(base) gu@python:~/k8s/yaml/knative/serving$ kubectl get cm -n knative-serving config-autoscaler -oyaml|grep -v '#'|egrep -v '^$'
apiVersion: v1
data:
_example: |
container-concurrency-target-percentage: "70" # the autoscaler targets this percentage of the configured concurrency limit
container-concurrency-target-default: "100" # default concurrency target per container
requests-per-second-target-default: "200" # mutually exclusive with the above; uses RPS as the scaling metric instead
target-burst-capacity: "211" # how much burst traffic the ready instances must be able to absorb before the activator is removed from the path as a buffer
stable-window: "60s" # metrics window in stable mode
panic-window-percentage: "10.0" # panic-mode metrics window, as a percentage of the stable window
panic-threshold-percentage: "200.0" # threshold percentage that triggers panic mode
max-scale-up-rate: "1000.0" # maximum factor by which running instances may scale up at once, e.g. 2 instances can grow to at most 2*1000=2000
max-scale-down-rate: "2.0" # maximum factor by which running instances may scale down at once
enable-scale-to-zero: "true" # whether instances may be scaled down to zero
scale-to-zero-grace-period: "30s" # grace period before the last instance is removed from the traffic path when scaling to zero
scale-to-zero-pod-retention-period: "0s"
pod-autoscaler-class: "kpa.autoscaling.knative.dev" # which pod autoscaler class to use, e.g. hpa
activator-capacity: "100.0"
initial-scale: "1" # how many instances to start right after creation
allow-zero-initial-scale: "false" # whether initialization may complete with zero instances
min-scale: "0" # minimum number of replicas per revision
max-scale: "0" # maximum number of replicas per revision (0 = unlimited)
scale-down-delay: "0s"
max-scale-limit: "0" # hard upper bound on the number of instances
kind: ConfigMap
metadata:
annotations:
knative.dev/example-checksum: 47c2487f
kubectl.kubernetes.io/last-applied-configuration: |
creationTimestamp: "2023-10-11T07:39:19Z"
labels:
app.kubernetes.io/component: autoscaler
app.kubernetes.io/name: knative-serving
app.kubernetes.io/version: 1.11.1
name: config-autoscaler
namespace: knative-serving
resourceVersion: "9342681"
uid: 53b70fcb-9313-46ae-8655-251bd7da1802
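These cluster-wide defaults can also be overridden per workload with annotations on the revision template. A minimal sketch (the service name and numbers are illustrative; the annotation keys are the standard autoscaling.knative.dev ones):
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev   # use the KPA (the default)
        autoscaling.knative.dev/metric: concurrency                  # scale on concurrent requests
        autoscaling.knative.dev/target: "50"                         # soft concurrency target per instance
        autoscaling.knative.dev/min-scale: "1"                       # never scale below one instance
        autoscaling.knative.dev/max-scale: "5"                       # never scale above five instances
    spec:
      containers:
      - image: quanheng.com/k8s/hello-go:v1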
2.1.2 route
A CRD-backed resource
(base) gu@python:~/下载$ kubectl api-resources |grep knative|grep route
routes rt serving.knative.dev/v1 true Route
Maps an endpoint to one or more revisions
Traffic can be managed in several ways, including custom traffic percentages and named (tagged) routes
2.1.2.1 Viewing routes
### kn
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kn route list
NAME URL READY
hello http://hello.default.knative.com True
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kn route describe hello
Name: hello
Namespace: default
Age: 1h
URL: http://hello.default.knative.com
Service: hello
Traffic Targets:
100% @latest (hello-00002) # 100% of the traffic is routed to the latest revision (00002)
Conditions:
OK TYPE AGE REASON
++ Ready 7m
++ AllTrafficAssigned 1h # whether all traffic targets have been resolved
++ CertificateProvisioned 1h TLSNotEnabled # whether TLS is configured
++ IngressReady 7m # whether the ingress layer (e.g. istio) is ready
### kubectl
kubectl get route hello -oyaml
2.1.2.2 Route spec
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get route hello -oyaml
apiVersion: serving.knative.dev/v1
kind: Route
metadata:
annotations:
serving.knative.dev/creator: admin
serving.knative.dev/lastModifier: admin
creationTimestamp: "2023-10-16T01:39:39Z"
finalizers:
- routes.serving.knative.dev
generation: 1
labels:
serving.knative.dev/service: hello
name: hello
namespace: default
ownerReferences:
- apiVersion: serving.knative.dev/v1
blockOwnerDeletion: true
controller: true
kind: Service
name: hello
uid: 4797750f-d893-46b9-a584-8780ea1a202d
resourceVersion: "10191338"
uid: ffebeebe-717c-4197-af30-ee5c67e5f6cd
spec:
traffic: # the traffic targets, a list
- configurationName: hello
latestRevision: true # when enabled, traffic is routed to the latest revision by default; you cannot pin a named revision here, only reference the service/configuration
percent: 100
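To pin traffic to specific revisions instead of following the latest one, the traffic list can name revisions and optionally tag them; a hedged sketch reusing the revision names from this document:
spec:
  traffic:
  - revisionName: hello-00001
    percent: 90
    latestRevision: false
  - revisionName: hello-00002
    percent: 10
    tag: candidate   # also exposed at http://candidate-hello.default.knative.com in addition to the split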
2.1.2.3 Traffic management
# Tag revisions
kn service update hello --tag hello-00001=secret,hello-00001=cm # tag revisions of service hello; multiple tags are comma-separated
# Inspect the service's route
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kn route describe hello
Name: hello
Namespace: default
Age: 1h
URL: http://hello.default.knative.com
Service: hello
Traffic Targets:
100% @latest (hello-00002)
0% hello-00001 #secret
URL: http://secret-hello.default.knative.com
0% hello-00001 #cm
URL: http://cm-hello.default.knative.com
Conditions:
OK TYPE AGE REASON
++ Ready 44s
++ AllTrafficAssigned 1h
++ CertificateProvisioned 1h TLSNotEnabled
++ IngressReady 44s
# Configure the traffic split
kn service update hello --traffic secret=50,@latest=50 # the percentages must add up to 100, otherwise the command fails; @latest always points at the latest revision
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kn route describe hello
Name: hello
Namespace: default
Age: 1h
URL: http://hello.default.knative.com
Service: hello
Traffic Targets: # the traffic has been split
50% hello-00001 #secret
URL: http://secret-hello.default.knative.com
50% hello-00002 #cm
URL: http://cm-hello.default.knative.com
Conditions:
OK TYPE AGE REASON
++ Ready 1m
++ AllTrafficAssigned 1h
++ CertificateProvisioned 1h TLSNotEnabled
++ IngressReady 1m
# Remove a tag
kn service update --untag hello-00001=one
(base) gu@python:~/k8s/yaml/knative/serving$ kn revision list
NAME          SERVICE   TRAFFIC   TAGS    GENERATION   AGE   CONDITIONS   READY   REASON
hello-00004   hello     40%               4            18m   3 OK / 4     True
hello-00003   hello               three   3            29m   3 OK / 4     True
hello-00002   hello     30%       two     2            35m   3 OK / 4     True
hello-00001   hello     30%               1            36m   3 OK / 4     True
Once a tag is removed, the untagged revision can still be reached through the main URL, but no longer through its dedicated tag URL.
If the tag is assigned again later, the traffic rules have to be recreated; traffic for the old tag is not routed to the new tag.
2.1.3 Configurations
A CRD-backed resource
(base) gu@python:~/下载$ kubectl api-resources |grep knative|grep config
configurations config,cfg serving.knative.dev/v1 true Configuration
Maintains the desired state for a deployment (a stateless workload)
Describes how the software is deployed and how it behaves
Modifying a configuration creates a new revision.
2.1.3.1 A short history of deployment strategies
Blue-green deployment: bring up a second set of instances identical in size to the current version, then switch traffic over
Canary deployment: add a few new instances alongside the existing ones; once they are confirmed healthy, gradually increase their share until the old instances can be retired
Progressive (rolling) deployment: pick a subset of the existing instances and update them in a rolling fashion
2.1.3.2 Ways to create a configuration
Command line
kn service create hello-world \
> --image quanheng.com/k8s/hello-go@sha256:cdad55f08f800a3419a030e01e15c82901dd39963e2a4c9d18d564102ccc3ae6 \
> --env TARGET="first hello"
yaml file // re-applying the file after updating it creates a new revision
apiVersion: serving.knative.dev/v1
kind: Configuration
metadata:
name: hello
spec: part of the Kubernetes convention; belongs to the Configuration
template: the desired state of the revision template
spec: the desired state of the revision
containers:
- image: quanheng.com/k8s/hello-go:v1
ports:
- containerPort: 8080
env:
- name: TARGET
value: "hello-world-2"
The two approaches above are equivalent
2.1.3.3 Inspecting a configuration
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get configurations.serving.knative.dev -n knative-prod -o json|jq '.items[0].status'
{
"conditions": [
{
"lastTransitionTime": "2023-10-13T06:42:40Z",
"status": "True",
"type": "Ready"
}
],
"latestCreatedRevisionName": "hello-00002", 最后创建的版本
"latestReadyRevisionName": "hello-00002", #最后可以使用的版本
"observedGeneration": 2 这个数值会作为revision的版本
}
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl describe configurations.serving.knative.dev -n knative-prod
Name: hello
Namespace: knative-prod
Labels: <none>
Annotations: serving.knative.dev/creator: admin # the annotations record the creator and last modifier
serving.knative.dev/lastModifier: admin
API Version: serving.knative.dev/v1
Kind: Configuration
Metadata:
Creation Timestamp: 2023-10-13T06:34:34Z
Generation: 2 # generation (version) information
Managed Fields:
API Version: serving.knative.dev/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:template:
.:
f:spec:
.:
f:containers:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2023-10-13T06:42:29Z
API Version: serving.knative.dev/v1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
f:latestCreatedRevisionName:
f:latestReadyRevisionName:
f:observedGeneration:
Manager: controller
Operation: Update
Subresource: status
Time: 2023-10-13T06:42:40Z
Resource Version: 9995892
UID: ede6d8c8-48b4-4128-adcf-e15358c8af83
Spec:
Template:
Metadata:
Creation Timestamp: <nil>
Spec:
Container Concurrency: 0
Containers:
Env:
Name: TARGET
Value: hello-world-3
Image: quanheng.com/k8s/hello-go@sha256:cdad55f08f800a3419a030e01e15c82901dd39963e2a4c9d18d564102ccc3ae6
Name: user-container
Ports:
Container Port: 8080
Protocol: TCP
Readiness Probe:
Success Threshold: 1
Tcp Socket:
Port: 0
Resources:
Enable Service Links: false
Timeout Seconds: 300
Status:
Conditions:
Last Transition Time: 2023-10-13T06:42:40Z
Status: True
Type: Ready
Latest Created Revision Name: hello-00002
Latest Ready Revision Name: hello-00002
Observed Generation: 2
Events: # events reported to the Kubernetes cluster API
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Created 17m configuration-controller Created Revision "hello-00001"
Normal ConfigurationReady 17m configuration-controller Configuration becomes ready
Normal LatestReadyUpdate 17m configuration-controller LatestReadyRevisionName updated to "hello-00001"
Normal Created 9m43s configuration-controller Created Revision "hello-00002"
Normal LatestReadyUpdate 9m32s configuration-controller LatestReadyRevisionName updated to "hello-00002"
2.1.4 revision
A CRD-backed resource
(base) gu@python:~/下载$ kubectl api-resources |grep knative|grep revi
revisions rev serving.knative.dev/v1 true Revision
Can be thought of as a point-in-time snapshot of the code and configuration for every change made to the workload
Revisions are immutable objects; they can be kept for as long as they are useful
They are scaled automatically according to traffic
2.1.4.1 Basics
View revision information
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kn revision list -A
NAMESPACE NAME SERVICE TRAFFIC TAGS GENERATION AGE CONDITIONS READY REASON
knative-prod hello-00002 # the name can be changed by editing the yaml 2 16m 3 OK / 4 True
knative-prod hello-00001 1 24m 3 OK / 4 True
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get revision -n knative-prod hello-00001 -ojson|jq '.metadata'
The annotations follow the <component>.knative.dev/<subject> naming format
{
"annotations": {
"serving.knative.dev/creator": "admin", 创建者
"serving.knative.dev/routingStateModified": "2023-10-13T06:34:34Z"
},
"creationTimestamp": "2023-10-13T06:34:34Z",
"generation": 1,
"labels": {
"serving.knative.dev/configuration": "hello", 所属的配置名称
"serving.knative.dev/configurationGeneration": "1",
"serving.knative.dev/configurationUID": "ede6d8c8-48b4-4128-adcf-e15358c8af83",
"serving.knative.dev/routingState": "pending"
},
"name": "hello-00001",
"namespace": "knative-prod",
"ownerReferences": [
{
"apiVersion": "serving.knative.dev/v1",
"blockOwnerDeletion": true,
"controller": true,
"kind": "Configuration",
"name": "hello",
"uid": "ede6d8c8-48b4-4128-adcf-e15358c8af83"
}
],
"resourceVersion": "9991657",
"uid": "18e1a8f0-15f4-4f2b-9e96-66f1e0196a91"
}
2.1.4.2 Configuration
Similar to pods in Kubernetes, a revision materializes as instances, and those instances can be configured much like pods
# Declaring and using variables
apiVersion: v1
kind: Secret
metadata:
name: hello-world-secret
namespace: default
type: Opaque
data:
TARGET: aGVsbG8td29ybGQtc2VjcmV0Cg==
---
kind: ConfigMap
apiVersion: v1
metadata:
name: hello-world-cm
data:
TARGET: hello-world-cm
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: hello
spec:
template:
spec:
containers:
- image: quanheng.com/k8s/hello-go@sha256:cdad55f08f800a3419a030e01e15c82901dd39963e2a4c9d18d564102ccc3ae6
ports:
- containerPort: 8080
env: # set environment variables
- name: TARGET
value: "hello-world-3"
- name: TARGET
envFrom:
- configMapRef:
name: hello-world-cm
- secretRef:
name: hello-world-secret
env:
- name: TARGET
valueFrom:
configMapKeyRef:
name: hello-world-cm
key: TARGET
- name: TARGET
valueFrom:
secretKeyRef:
name: hello-world-secret
key: TARGET
Volume configuration and mounts: see Kubernetes
Probe configuration: see Kubernetes
Resource limits: see Kubernetes
Container concurrency and request timeouts are covered in detail with the other components; a sketch follows below
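A hedged sketch of those two revision-level fields (values are illustrative): containerConcurrency caps in-flight requests per instance and timeoutSeconds bounds how long a request may take:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containerConcurrency: 10   # hard limit of concurrent requests per instance (0 = unlimited)
      timeoutSeconds: 120        # request timeout enforced by the queue-proxy
      containers:
      - image: quanheng.com/k8s/hello-go:v1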
2.2 eventing
A set of APIs that let you use an event-driven architecture in your applications. With these APIs you create components that route events from event producers (called sources) to event consumers (called sinks). Sinks can also be configured to respond to HTTP requests by sending back a response event
Supports various kinds of workloads, including standard Kubernetes Services and Knative Serving Services.
2.2.1 cloudevent
2.2.1.1 Attributes
# Required attributes
specversion: the CloudEvents specification version in use
source: where the event came from
type: the kind of event
id: identifies the event; each id is unique per source
# Optional attributes
datacontenttype: content type of the data
dataschema: schema of the data
subject: similar to source, related to it roughly the way an instance relates to a class
time: when the event occurred
# Extension attributes
dataref: a pointer to the data rather than the data itself, used for large payloads or information that must be encrypted
traceparent|tracestate: distributed tracing
rate: sampling rate, the ratio of events collected to events sent
2.2.1.2 Formats and protocols
An event binding is a data format plus a transport protocol
HTTP is one such protocol; Kafka, MQ and others work as well
JSON is one data format; other data formats can also be used
2.2.1.3 Content modes
# json (structured)
# binary
A binary-mode message stores the event data in the message body, while the event attributes are carried as part of the message metadata.
Binary content mode accommodates event data of any shape and allows efficient transfer without any transcoding.
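A hedged illustration of binary mode (namespace and broker name are placeholders, in the style of the broker example in section 5.1.2.1): the attributes travel as ce-* headers and the body carries only the data:
curl -X POST http://broker-ingress.knative-eventing.svc.cluster.local/<namespace>/<broker-name> \
  -H "content-type: application/json" \
  -H "ce-specversion: 1.0" \
  -H "ce-type: my.demo.event" \
  -H "ce-source: my/curl/command" \
  -H "ce-id: 0815" \
  -d '{"value":"Hello Knative"}'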
# batched
2.2.2 broker
2.2.2.1 How brokers work
A Kubernetes custom resource
Defines an event mesh that collects a list of sources
The broker is the event entry point and provides a discoverable endpoint
Triggers are used to deliver events to sinks
Event producers can send events to the broker by POSTing them.
Broker and trigger together abstract a producer/consumer routing model
2.2.2.2 Broker flavors
Brokers are implemented on top of channels; Kafka and MQ are the common choices, and there is also an in-memory channel
2.2.2.2.1 kafka
# kafka-broker
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
annotations:
# case-sensitive
eventing.knative.dev/broker.class: Kafka
# Optional annotation to point to an externally managed kafka topic:
# kafka.eventing.knative.dev/external.topic: <topic-name>
name: kafka-test
namespace: default
spec:
# Configuration specific to this broker.
config:
apiVersion: v1
kind: ConfigMap
name: kafka-broker-config
namespace: knative-eventing
2.2.2.2.2 In-memory channel based
# Create the channel resource
After creation, the admission webhook sets spec.channelTemplate based on the default channel implementation
apiVersion: messaging.knative.dev/v1
kind: Channel
metadata:
name: <example-channel>
namespace: <namespace>
2.2.3 trigger
Concept:
A trigger subscribes to events from a particular broker, filters them against user-defined conditions, and delivers the matches to a subscriber
2.2.3.1 Example
# Route everything from the default broker to the ksvc my-service
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: my-service-trigger
spec:
broker: default
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: my-service
2.2.3.2 filter
# From the default broker, select only events with type=dev.knative.foo.bar and myextension=my-extension-value
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: my-service-trigger
spec:
broker: default
filter:
attributes:
type: dev.knative.foo.bar
myextension: my-extension-value
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: my-service
# Inspect a trigger
(base) gu@python:~/k8s/yaml/knative/knative-project/evetning$ kn trigger describe hello-cloud
Name: hello-cloud
Namespace: default
Labels: eventing.knative.dev/broker=kafka-test
Annotations: eventing.knative.dev/creator=admin, eventing.knative.dev/lastModifier=admin
Age: 21h
Broker: kafka-test # the broker the trigger consumes from
Sink: # the sink/subscriber
Name: cloud-eventing
Resource: Service (serving.knative.dev/v1)
Conditions: # status information
OK TYPE AGE REASON
++ Ready 18m
++ BrokerReady 18m # the broker can receive, filter and deliver events
++ DeadLetterSinkResolved 21h DeadLetterSinkNotConfigured # no dead-letter sink configured
++ DependencyReady 1h
++ SubscriberResolved 3h SubscriberResolved # whether the subscriber's URL could be resolved
++ SubscriptionReady 21h # whether the subscription is healthy
2.2.4 source (event sources)
# Concept
Where events are produced
A Kubernetes CR
Acts as the link between an event producer and an event sink
The sink can be a Kubernetes Service (svc), including Knative Services (ksvc), a channel, or a broker that receives events from the source
# List the supported source types
# Official docs: https://knative.dev/docs/eventing/sources/
(base) gu@python:~/k8s/yaml/knative/knative-project/evetning$ kn sources list-types
2.2.4.1 Anatomy of a source
An event source has two key fields (any resource carrying both fields can act as an event source, much like satisfying an interface in Go); see the sketch after this list
sink
cloudeventoverrides
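A minimal sketch of that duck-typed shape (PingSource is just one concrete kind that implements it; the names are placeholders):
apiVersion: sources.knative.dev/v1
kind: PingSource                 # any source kind exposes the same two fields
metadata:
  name: example-source
spec:
  sink:                          # where the events are delivered
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
  ceOverrides:                   # extension attributes stamped onto every outgoing CloudEvent
    extensions:
      environment: demo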
2.2.4.2 pingsource
# Official configuration reference
https://knative.dev/docs/eventing/sources/ping-source/reference/
# pingsource
An event source that fires on a schedule, similar to a cronjob
Create an event source
# Fire every 2 minutes and send value: hello to the ksvc mysvc
kn source ping create my-ping --schedule "*/2 * * * *" --data '{ value: "hello" }' --sink ksvc:mysvc
kn source ping create test-ping-source --schedule "*/2 * * * *" --sink ksvc:cloud-eventing
# yaml form
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
name: <pingsource-name>
namespace: <namespace>
spec:
schedule: "<cron-schedule>"
contentType: "<content-type>"
data: '<data>'
sink:
ref:
apiVersion: v1
kind: <sink-kind>
name: <sink-name>
# Check the status
(base) gu@python:~/k8s/yaml/knative/knative-project/evetning$ kn source ping describe test-ping-source
Name: test-ping-source
Namespace: default
Annotations: sources.knative.dev/creator=admin, sources.knative.dev/lastModifier=admin
Age: 39s
Schedule: * * * * *
Data: {"name":"fulifang"}
Sink:
Name: cloud-eventing
Resource: Service (serving.knative.dev/v1)
Conditions:
OK TYPE AGE REASON
++ Ready 38s
++ Deployed 38s
++ SinkProvided 38s
2.2.4.3 kafkasource
# Create a Kafka topic
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
name: knative-kafka-topic
labels:
strimzi.io/cluster: knative-kafka
spec:
partitions: 10
replicas: 3
config:
retention.ms: 7200000
segment.bytes: 1073741824
# Create the source
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
name: test-kafka-source
spec:
consumerGroup: knative-group
bootstrapServers:
- knative-kafka-kafka-bootstrap.kafka:9092
topics:
- knative-broker-default-kafka-test
sink:
ref: # the receiver of the events
apiVersion: serving.knative.dev/v1
kind: Service
name: event-display
# Test
kubectl -n kafka run kafka-producer -ti --image=quanheng.com/k8s/kafka:0.37.0-kafka-3.5.1 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list knative-kafka-kafka-bootstrap.kafka:9092 --topic knative-broker-default-kafka-test
>{"name":"guquanheng"}
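To verify that events arrive, tail the sink's logs; a hedged sketch assuming the sink is the event-display sample Service (user-container is the default container name Knative assigns to the workload):
kubectl logs -l serving.knative.dev/service=event-display -c user-container --tail=50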
2.2.4.4 apiserversource
Listens for events emitted by the Kubernetes API server (pod creation, deployment updates, and so on) and forwards them to a sink as CloudEvents
This source is installed with eventing by default
# Create an ApiServerSource
# Create the ServiceAccount (plus Role/RoleBinding)
apiVersion: v1
kind: ServiceAccount
metadata:
name: apiserver-source-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: apiserver-source-r
rules:
- apiGroups:
- ""
resources:
- events
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: apiserver-source-rb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: apiserver-source-r
subjects:
- kind: ServiceAccount
name: apiserver-source-sa
---
apiVersion: v1
kind: Secret
metadata:
name: apiserver-source-secret
annotations:
kubernetes.io/service-account.name: apiserver-source-sa
type: kubernetes.io/service-account-token
# Create the ApiServerSource
Using kn
kn source apiserver create apisource \
  --mode "Resource" \
  --resource "Event:v1" \
  --service-account apiserver-source-sa \
  --sink http://event-display.svc.cluster.local
# --mode "Resource" sends the full object (vs. Reference, a pointer only); --resource selects the resource and version to watch
yaml form (reference: https://knative.dev/docs/eventing/sources/apiserversource/reference/)
apiVersion: sources.knative.dev/v1
kind: ApiServerSource
metadata:
name: apiserver-source
spec:
mode: Reference # send only a reference to the matched resources (Resource would send the full object)
resources: # a list; multiple resources may be defined (mind the SA's RBAC)
- apiVersion: v1
kind: Pod
selector: # label selection over the resources defined above
matchExpressions: # the same label-selection mechanism as Kubernetes
- key: app
operator: In # In, NotIn, Exists and DoesNotExist
values:
- nginx
serviceAccountName: apiserver-source-sa
sink: # where the events are sent
ref:
apiVersion: v1
kind: Service
name: event-display
ceOverrides: # extra attributes injected into each event (values must be strings, otherwise the webhook rejects it)
extensions:
extra: "this is an extra attribute"
additional: "42"
2.2.4.5 customsource
# Required components
Receive adapter: defines how messages are received from the producer and at which URI, and converts them into the CloudEvents format
Kubernetes controller: manages the event source and keeps the adapter deployed as required
CRD: a Kubernetes custom resource that supplies the configuration the controller uses to generate adapters
# Writing a custom adapter requires Go development experience
2.2.5 sink (receivers)
Concept:
When you create an event source you specify the sink that its events are delivered to
A sink is an addressable or callable resource, such as a broker, channel, ksvc, or svc
Sinks receive events over HTTP at their resolved address; a Kubernetes svc can also be used directly as a URI target
2.2.5.1 Sink parameters
# Reference
https://knative.dev/docs/eventing/sinks/
# Example
apiVersion: sources.knative.dev/v1
kind: SinkBinding
metadata:
name: bind-heartbeat
spec:
...
sink:
ref:
apiVersion: v1
kind: Service
namespace: default
name: mysink
uri: /extra/path
# If uri and ref are both set, uri is appended to ref as a path suffix, resolving to http://mysink.default.svc.cluster.local/extra/path
2.2.5.2 Using a CRD as a sink
1. The CRD must expose an addressable object, similar to a svc
2. Create a ClusterRole
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: kafkasinks-addressable-resolver
labels:
kafka.eventing.knative.dev/release: devel
duck.knative.dev/addressable: "true"
# Do not use this role directly. These rules will be added to the "addressable-resolver" role.
rules: # allow read access to kafkasinks resources in the eventing.knative.dev group
- apiGroups:
- eventing.knative.dev
resources:
- kafkasinks
- kafkasinks/status
verbs:
- get
- list
- watch
2.2.5.3 Filtering with a trigger
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: <trigger-name> # trigger name
spec:
...
subscriber: # the sink
ref:
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
name: <kafka-sink-name>
2.2.5.4 kafkasink
Persists incoming CloudEvents to a configurable Apache Kafka topic
# Create
apiVersion: eventing.knative.dev/v1alpha1
kind: KafkaSink
metadata:
name: test-kafka-sink
spec:
topic: knative-broker-default-kafka-test
bootstrapServers:
- knative-kafka-kafka-bootstrap.kafka:9092
contentMode: binary # or structured; content mode used for the persisted events
# Configuration
kubectl get cm config-kafka-sink-data-plane -n knative-eventing
For the full producer parameters see the Kafka documentation: https://kafka.apache.org/documentation/#producerconfigs
# Enable debug logging
apiVersion: v1
kind: ConfigMap
metadata:
name: kafka-config-logging
namespace: knative-eventing
data:
config.xml: |
<configuration>
<appender name="jsonConsoleAppender" class="ch.qos.logback.core.ConsoleAppender">
<encoder class="net.logstash.logback.encoder.LogstashEncoder"/>
</appender>
<root level="DEBUG"> # 日志级别
<appender-ref ref="jsonConsoleAppender"/>
</root>
</configuration>
2.2.6 Subscription (handling delivery failures)
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
name: example-subscription
namespace: example-namespace
spec:
delivery:
deadLetterSink: # where to send events whose delivery failed; the target can be a ksvc or a URI
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: example-sink
backoffDelay: <duration> # initial delay between retries
backoffPolicy: <policy-type> # retry policy: exponential or linear backoff
retry: <integer> # number of retries
---
# Add failure handling to a broker
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
name: with-dead-letter-sink
spec:
delivery:
deadLetterSink:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: example-sink
backoffDelay: <duration>
backoffPolicy: <policy-type>
retry: <integer>
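A hedged example with concrete values filled in (the sink name is a placeholder; backoffDelay uses the ISO-8601 duration format Knative expects, as in the config-br-defaults ConfigMap in section 5.1.1):
spec:
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: example-sink
    retry: 3                    # try each event up to 3 more times
    backoffPolicy: exponential  # delays grow exponentially between attempts
    backoffDelay: PT0.5S        # 0.5s initial delay (ISO-8601 duration)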
2.2.7 Flows
2.2.7.1 Parallel
# Define two filter/subscriber pairs: each filter matches events that satisfy its condition and forwards them to its subscriber; the subscribers' output is sent to the service defined under reply
apiVersion: flows.knative.dev/v1
kind: Parallel
metadata:
name: odd-even-parallel
spec:
channelTemplate: # optional; a channel is created to wire this flow together
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
branches: # a list of n filter/subscriber pairs
- filter:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: even-filter
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: even-transformer
- filter:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: odd-filter
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: odd-transformer
reply:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: event-display
2.2.7.2 Sequence
# Define a sequence that is exposed as a single unit; the steps are invoked in order and the final result is sent to the specified destination
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
name: sequence
spec:
channelTemplate:
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
steps:
- ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: first
- ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: second
- ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: third
reply: # unlike Parallel (where reply receives each branch's output), in a Sequence the output of the last step is sent here
ref:
kind: Broker
apiVersion: eventing.knative.dev/v1
name: default
2.2.8 channel
Concept:
A channel provides an event delivery mechanism that can deliver received events to multiple destinations (sinks) via subscriptions
Example sinks include brokers and ksvcs.
# Configuration
https://knative.dev/docs/install/operator/configuring-eventing-cr/#setting-a-default-channel
Per-namespace configuration overrides the cluster default
2.2.8.1 Channel types
Generic (the Channel resource)
Implementation-specific, e.g. KafkaChannel and InMemoryChannel
KafkaChannel supports an ordered consumer delivery guarantee: a blocking consumer per partition waits for a successful response from the CloudEvent subscriber before delivering the next message of that partition (see the sketch below)
The in-memory channel must not be used in production
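A hedged sketch of a KafkaChannel (name and sizing are illustrative; the spec fields match the kafka-channel ConfigMap template shown in section 5.1.4):
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
metadata:
  name: kafka-channel-demo
  namespace: default
spec:
  numPartitions: 3        # partitions of the backing topic
  replicationFactor: 1    # replication factor of the backing topic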
2.2.9 subscription
Connects a channel to a sink
# Create
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
# subscription name
name: <subscription-name>
namespace: default
spec:
# the channel this subscription connects to
channel:
apiVersion: messaging.knative.dev/v1
kind: Channel
name: <channel-name>
# optional settings
delivery:
# dead-letter settings: failed messages are sent to the specified destination and are not retried further,
deadLetterSink:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: <service-name>
# optional: where replies from the subscriber are sent
reply:
ref:
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
name: <service-name>
# subscriber (sink) settings
subscriber:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: <service-name>
2.2.10 eventtype
Concept:
A Knative custom resource
Makes it easier to discover which event types are available from channels and brokers
Maintains a catalog of the event types a broker or channel can consume; the event types stored in the registry carry all the information a consumer needs to create a trigger, without resorting to out-of-band mechanisms.
# Create
apiVersion: eventing.knative.dev/v1beta2
kind: EventType
metadata:
name: dev.knative.source.github.push-34cnb
namespace: default
labels:
eventing.knative.dev/sourceName: github-sample
spec:
# the event type entering the event mesh; triggers can be created against this field
type: dev.knative.source.github.push
# the source of the events; consumers can filter on this attribute when creating triggers
source: https://github.com/knative/eventing
# a valid URI for the event type's schema (such as a JSON schema or protobuf schema); optional
schema:
# a description of the event type's content; optional
description:
# a reference to the component that provides this event type, here a broker
reference:
apiVersion: eventing.knative.dev/v1
kind: Broker
name: default
# eventtype status: readiness based on whether the referenced KReference exists
status:
conditions:
- status: "True"
type: ReferenceExists
- status: "True"
type: Ready
# Usage
Once you know which events are available from the broker's event mesh, you can create a trigger that subscribes to specific events
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
name: kafka-knative-demo-trigger
namespace: default
spec:
broker: default
filter:
attributes:
# filter events coming from kafka with type dev.knative.kafka.event
type: dev.knative.kafka.event
source: /apis/v1/namespaces/default/kafkasources/kafka-sample#knative-demo
subscriber:
ref:
# consumed by the svc below
apiVersion: serving.knative.dev/v1
kind: Service
name: kafka-knative-demo-service
2.3 istio
Configuration
(base) gu@python:~/k8s/istio/istio-1.18.0/manifests/charts/gateways$ kubectl get cm -n knative-serving config-istio -oyaml|grep -v "#"
apiVersion: v1
data:
_example: |
# the gateway Knative uses with Istio by default; change this entry to use a custom gateway
gateway.knative-serving.knative-ingress-gateway: "istio-ingressgateway.knative-prod.svc.cluster.local"
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
creationTimestamp: "2023-10-12T04:04:26Z"
labels:
app.kubernetes.io/component: net-istio
app.kubernetes.io/name: knative-serving
app.kubernetes.io/version: 1.11.0
networking.knative.dev/ingress-provider: istio
name: config-istio
namespace: knative-serving
resourceVersion: "9951412"
uid: 63fbfb78-eaad-4fa4-ba18-1806a77aaa92
3. Extensions
3.1 Plugins
3.1.1 func
Use the CLI to easily create, build and deploy stateless, event-driven functions as Knative Services
When a function is built or run, a container image in the Open Container Initiative (OCI) format is generated automatically and stored in a container registry. Each time the code is updated and then run or deployed, the container image is updated as well
// Install
https://github.com/knative/func/archive/refs/tags/knative-v1.11.1.tar.gz
cd func/
make
// Usage
Create: func create -l go hello generates a hello directory in the current directory containing a Go implementation of the function
Build the image: cd hello ; func run automatically builds the image and runs it, but does not deploy it to the cluster
// Check that the function is running
func invoke
// Deploy
cd hello;kn func deploy --registry <registry>
3.2 eventing-kafka
# Deploy a Kafka cluster
# Install the operator controller
helm repo add strimzi https://strimzi.io/charts
helm pull strimzi/strimzi-kafka-operator --version 0.38.0 # the version can be adjusted
Point the images at a private registry if needed
helm install kafka-oprator -n kafka ./
# Install Kafka
https://github.com/strimzi/strimzi-kafka-operator/tree/main/examples/kafka
Choose one of the example manifests to install Kafka
kafka-persistent.yaml
Deploys a persistent cluster with three ZooKeeper and three Kafka nodes.
kafka-jbod.yaml
Deploys a persistent cluster with three ZooKeeper and three Kafka nodes, each using multiple persistent volumes.
kafka-persistent-single.yaml
Deploys a persistent cluster with a single ZooKeeper node and a single Kafka node.
kafka-ephemeral.yaml
Deploys an ephemeral cluster with three ZooKeeper and three Kafka nodes.
kafka-ephemeral-single.yaml
Deploys an ephemeral cluster with three ZooKeeper nodes and one Kafka node
# We use kafka-persistent.yaml
Adjust the configuration
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
name: my-cluster
spec:
kafka:
version: 3.6.0
replicas: 3
listeners:
- name: plain
port: 9092
type: internal
tls: false
- name: tls
port: 9093
type: internal
tls: true
config:
offsets.topic.replication.factor: 3
transaction.state.log.replication.factor: 3
transaction.state.log.min.isr: 2
default.replication.factor: 3
min.insync.replicas: 2
inter.broker.protocol.version: "3.6"
storage:
type: jbod
volumes:
- id: 0
type: persistent-claim
size: 10Gi
deleteClaim: false
class: nfs # add a StorageClass to provision the PVCs
zookeeper:
replicas: 3
storage:
type: persistent-claim
size: 10Gi # size as needed; 10Gi is enough for a lab environment
deleteClaim: false
class: nfs # add a StorageClass to provision the PVCs
entityOperator:
topicOperator: {}
userOperator: {}
# kubectl apply -f xx.yaml and wait until the pods are Running
# Deploy kafka-ui
helm repo add kafka-ui https://provectus.github.io/kafka-ui-charts
helm pull kafka-ui/kafka-ui
Adjust the values, deploy, and expose the service
image:
registry: quanheng.com
repository: k8s/kafka-ui
pullPolicy: IfNotPresent
tag: "v1"
yamlApplicationConfig:
kafka:
clusters:
- name: knative-kafka # Kafka cluster name
  bootstrapServers: knative-kafka-kafka-bootstrap.kafka.svc.cluster.local:9092 # the Kafka bootstrap service
spring:
security:
oauth2:
auth:
type: disabled
management:
health:
ldap:
enabled: false
4. Traffic flow
Deploy a sample function
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: hello
spec:
template:
spec:
containers:
- image: quanheng.com/k8s/hello-go@sha256:cdad55f08f800a3419a030e01e15c82901dd39963e2a4c9d18d564102ccc3ae6
ports:
- containerPort: 8080
env:
- name: TARGET
value: "helloworld"
---
kubectl apply -f xx.yaml
# Look up the domain the function is exposed on (it is accessed through this domain); functions rely on DNS resolution, but this example does not expose it through DNS
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get ksvc
NAME URL LATESTCREATED LATESTREADY READY REASON
hello http://hello.default.knative.com hello-00001 hello-00001 True
# Inspect the VirtualService in the default ns (this example uses Istio for traffic management)
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get vs hello-ingress -oyaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
annotations:
networking.internal.knative.dev/rollout: '{"configurations":[{"configurationName":"hello","percent":100,"revisions":[{"revisionName":"hello-00001","percent":100}],"stepParams":{}}]}'
networking.knative.dev/ingress.class: istio.ingress.networking.knative.dev
serving.knative.dev/creator: admin
serving.knative.dev/lastModifier: admin
creationTimestamp: "2023-10-12T08:37:09Z"
generation: 1
labels:
networking.internal.knative.dev/ingress: hello
serving.knative.dev/route: hello
serving.knative.dev/routeNamespace: default
name: hello-ingress
namespace: default
ownerReferences:
- apiVersion: networking.internal.knative.dev/v1alpha1
blockOwnerDeletion: true
controller: true
kind: Ingress
name: hello
uid: 9353a99d-fdc7-49bc-9738-4ffb3530ffeb
resourceVersion: "9708555"
uid: 1de941cb-5699-4583-9037-f87760a8b04b
spec:
gateways: # accept traffic from the knative-ingress-gateway and knative-local-gateway gateways in the knative-serving namespace
- knative-serving/knative-ingress-gateway
- knative-serving/knative-local-gateway
hosts: # accept requests whose host header matches one of the entries below
- hello.default
- hello.default.knative.com
- hello.default.svc
- hello.default.svc.cluster.local
http:
- headers:
request:
set:
K-Network-Hash: 5bf690dc2ed46a5a3a24a89fdf59cd9263168ae313fcad76203abede516340f6
match:
- authority:
prefix: hello.default
gateways:
- knative-serving/knative-local-gateway
headers:
K-Network-Hash:
exact: override
retries: {}
route: # route the traffic to the hello-00001 svc in the default namespace
- destination:
host: hello-00001.default.svc.cluster.local
port:
number: 80
headers:
request:
set:
Knative-Serving-Namespace: default
Knative-Serving-Revision: hello-00001
weight: 100
- match:
- authority:
prefix: hello.default
gateways:
- knative-serving/knative-local-gateway
retries: {}
route:
- destination:
host: hello-00001.default.svc.cluster.local
port:
number: 80
headers:
request:
set:
Knative-Serving-Namespace: default
Knative-Serving-Revision: hello-00001
weight: 100
- headers:
request:
set:
K-Network-Hash: 5bf690dc2ed46a5a3a24a89fdf59cd9263168ae313fcad76203abede516340f6
match:
- authority:
prefix: hello.default.knative.com
gateways:
- knative-serving/knative-ingress-gateway
headers:
K-Network-Hash:
exact: override
retries: {}
route:
- destination:
host: hello-00001.default.svc.cluster.local
port:
number: 80
headers:
request:
set:
Knative-Serving-Namespace: default
Knative-Serving-Revision: hello-00001
weight: 100
- match:
- authority:
prefix: hello.default.knative.com
gateways:
- knative-serving/knative-ingress-gateway
retries: {}
route:
- destination:
host: hello-00001.default.svc.cluster.local
port:
number: 80
headers:
request:
set:
Knative-Serving-Namespace: default
Knative-Serving-Revision: hello-00001
weight: 100
# Inspect the svc in the default namespace
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get svc hello-00001 -oyaml
apiVersion: v1
kind: Service
metadata:
annotations:
autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
serving.knative.dev/creator: admin
creationTimestamp: "2023-10-12T08:36:57Z"
labels:
app: hello-00001
networking.internal.knative.dev/serverlessservice: hello-00001
networking.internal.knative.dev/serviceType: Public
serving.knative.dev/configuration: hello
serving.knative.dev/configurationGeneration: "1"
serving.knative.dev/configurationUID: 9fea0307-2598-45b8-8f82-c9fb6b33d45e
serving.knative.dev/revision: hello-00001
serving.knative.dev/revisionUID: 35506455-4f2f-4e6f-b059-7530e7ca94f0
serving.knative.dev/service: hello
serving.knative.dev/serviceUID: af620f21-3e8d-4571-88b9-1996e1cbf291
name: hello-00001
namespace: default
ownerReferences:
- apiVersion: networking.internal.knative.dev/v1alpha1
blockOwnerDeletion: true
controller: true
kind: ServerlessService
name: hello-00001
uid: 3d7698c3-f0a6-4983-9991-01baca3260ed
resourceVersion: "9708401"
uid: 26f4b80f-b3ef-45e3-aa56-4bbf3bdc3d18
spec: # no selector; check the endpoints (ep) instead
clusterIP: 10.173.56.193
clusterIPs:
- 10.173.56.193
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8012
- name: https
port: 443
protocol: TCP
targetPort: 8112
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
# Check which pod the svc forwards to (the traffic ends up at the knative activator component)
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get ep hello-00001
NAME ENDPOINTS AGE
hello-00001 10.182.169.155:8012,10.182.169.155:8112 16h
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get pod -owide -A|grep 10.182.169.155
knative-serving activator-55dd777987-7f4fw 1/1 Running 2 (17m ago) 41h 10.182.169.155 worker-02 <none> <none>
# Inspect the route: how and to which revisions the traffic is routed
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get route hello-world -oyaml
apiVersion: serving.knative.dev/v1
kind: Route
metadata:
annotations:
serving.knative.dev/creator: admin
serving.knative.dev/lastModifier: admin
creationTimestamp: "2023-10-13T01:49:25Z"
finalizers:
- routes.serving.knative.dev
generation: 2
labels:
serving.knative.dev/service: hello-world
name: hello-world
namespace: default
ownerReferences:
- apiVersion: serving.knative.dev/v1
blockOwnerDeletion: true
controller: true
kind: Service
name: hello-world
uid: 81d9a713-cfe0-40cd-b798-f99bc14d5fbc
resourceVersion: "9834400"
uid: b93237a3-81ab-4cda-b8f4-cf19fe92b835
spec:
traffic: # traffic ends up split 50/50 between revisions 00001 and 00004
- latestRevision: false
percent: 50
revisionName: hello-world-00001
- latestRevision: false
percent: 50
revisionName: hello-world-00004
# Inspect the gateway
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get gateway -n knative-serving knative-ingress-gateway -oyaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.istio.io/v1beta1","kind":"Gateway","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"net-istio","app.kubernetes.io/name":"knative-serving","app.kubernetes.io/version":"1.11.0","networking.knative.dev/ingress-provider":"istio"},"name":"knative-ingress-gateway","namespace":"knative-serving"},"spec":{"selector":{"istio":"ingressgateway"},"servers":[{"hosts":["*"],"port":{"name":"http","number":80,"protocol":"HTTP"}}]}}
creationTimestamp: "2023-10-12T04:04:25Z"
generation: 1
labels:
app.kubernetes.io/component: net-istio
app.kubernetes.io/name: knative-serving
app.kubernetes.io/version: 1.11.0
networking.knative.dev/ingress-provider: istio
name: knative-ingress-gateway
namespace: knative-serving
resourceVersion: "9540382"
uid: 73ec896f-f2a4-43fa-a7e2-12d6ee01cbf1
spec: # match HTTP traffic on port 80 for any host; pods labeled istio=ingressgateway handle the traffic
selector:
istio: ingressgateway
servers:
- hosts:
- '*'
port:
name: http
number: 80
protocol: HTTP
# Find the istio ingress gateway pod
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get pod -A -l istio=ingressgateway
NAMESPACE NAME READY STATUS RESTARTS AGE
istio-system istio-ingressgateway-5998d8ccf-6bzwb 1/1 Running 2 (20m ago) 40h
Find the istio ingress gateway's svc
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get svc -n istio-system istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway NodePort 10.173.124.221 <none> 15021:14407/TCP,80:22920/TCP,443:27657/TCP,8081:31772/TCP 40h
# Find the node the pod runs on
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ kubectl get pod -owide -n istio-system istio-ingressgateway-5998d8ccf-6bzwb
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
istio-ingressgateway-5998d8ccf-6bzwb 1/1 Running 2 (22m ago) 40h 10.182.36.80 worker-01 <none> <none>
# Access the service through the NodePort
(base) gu@python:~/k8s/yaml/knative/knative-project/serving$ curl 172.31.3.17:22920 -H "Host:hello.default.knative.com"
Hello helloworld!
5. Configuration management (ops-focused)
5.1 Eventing configuration
5.1.1 Broker defaults
(base) gu@python:~$ kubectl get cm -n knative-eventing config-br-defaults -oyaml
apiVersion: v1
data:
default-br-config: |
clusterDefault:
brokerClass: MTChannelBasedBroker # default broker class
apiVersion: v1
kind: ConfigMap
name: config-br-default-channel
namespace: knative-eventing
delivery: # dead-letter settings
retry: 10 # number of retries
backoffPolicy: exponential # backoff policy
backoffDelay: PT0.2S # backoff delay
namespaceDefaults: # per-namespace settings, either via a separate config file per ns or inline like this
namespace-1:
apiVersion: v1
kind: ConfigMap
name: kafka-channel
namespace: knative-eventing
namespace-2:
apiVersion: v1
kind: ConfigMap
name: kafka-channel
namespace: knative-eventing
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/name: knative-eventing
app.kubernetes.io/version: 1.11.4
name: config-br-defaults
namespace: knative-eventing
5.1.2 Istio settings
5.1.2.1 Integrating Istio with Knative brokers
# Enable automatic sidecar injection
kubectl label namespace knative-eventing istio-injection=enabled
# Delete the existing broker ingress pod so it gets recreated with the sidecar injected
kubectl delete pod <broker-ingress-pod-name> -n knative-eventing
# Create a new broker
kn broker create mt-test
# Start a test pod
kubectl -n default run curl --image=quanheng.com/k8s/busyboxplus:curl -i --tty
# Send a test message
curl -X POST -v -H "content-type: application/json" -H "ce-specversion: 1.0" -H "ce-source: my/curl/command" -H "ce-type: my.demo.event" -H "ce-id: 0815" -d '{"value":"Hello Knative"}' http://broker-ingress.knative-eventing.svc.cluster.local/default/mt-test
5.1.3 Channel settings
$ kubectl get cm -n knative-eventing default-ch-webhook -oyaml
apiVersion: v1
kind: ConfigMap
metadata:
name: default-ch-webhook
namespace: knative-eventing
labels:
eventing.knative.dev/release: devel
app.kubernetes.io/version: devel
app.kubernetes.io/part-of: knative-eventing
data:
default-ch-config: |
clusterDefault: # cluster-wide default
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
namespaceDefaults: # a list
some-namespace: # default for a specific ns
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
5.1.4 kafka-channel configuration
# Create a template
apiVersion: v1
kind: ConfigMap
metadata:
name: kafka-channel
namespace: knative-eventing
data:
channel-template-spec: |
apiVersion: messaging.knative.dev/v1beta1
kind: KafkaChannel
spec:
numPartitions: 3 # number of partitions
replicationFactor: 1 # replication factor
# When creating a kafka-backed broker, point it at this configuration
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
annotations:
eventing.knative.dev/broker.class: MTChannelBasedBroker
name: kafka-backed-broker
namespace: default
spec:
config: # the kafka configuration to use
apiVersion: v1
kind: ConfigMap
name: kafka-channel
namespace: knative-eventing
5.1.5 pingsource settings
$ kubectl get cm -n knative-eventing config-ping-defaults
apiVersion: v1
kind: ConfigMap
metadata:
name: config-ping-defaults
namespace: knative-eventing
data:
data-max-size: -1
6. Observability settings (ops-focused)
6.1 Tracing
# Configuration: zipkin is used by default and spans are sent to its API; for SkyWalking or another backend consult its API
kubectl edit cm -n knative-eventing config-tracing
backend: zipkin
sample-rate: "0.1"
zipkin-endpoint: http://zipkin.istio-system.svc.cluster.local:9411/api/v2/spans
6.2 Log collection
https://github.com/knative/docs/raw/main/docs/serving/observability/logging/fluent-bit-collector.yaml
# Collect logs with fluent-bit and forward them to a log system such as ELK
6.3 Monitoring
# Install
# Metrics collection
https://raw.githubusercontent.com/knative-sandbox/monitoring/main/servicemonitor.yaml
# Grafana dashboards