An analysis of Istio's automatic sidecar injection
Istio implements automatic sidecar injection through the mutating webhook admission controller mechanism: the Istio sidecar is injected automatically whenever a pod is created for a service.
Checking the sidecar auto-injection prerequisites
Check kube-apiserver
Webhook support requires Kubernetes 1.9 or later; verify with the following command:
[root@test1 ~]# kubectl api-versions | grep admissionregistration
admissionregistration.k8s.io/v1beta1
Also confirm that kube-apiserver has the MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission plugins enabled.
If Kubernetes was installed from binaries and kube-proxy is not running on the master node, kube-apiserver additionally needs the flag --enable-aggregator-routing=true.
Check the sidecar-injector ConfigMap
Automatic injection is enabled when the policy field in the istio-sidecar-injector ConfigMap is set to enabled:
[root@test1 ~]# kubectl describe cm istio-sidecar-injector -n istio-system
Name: istio-sidecar-injector
Namespace: istio-system
Labels: app=istio
chart=istio-1.0.3
heritage=Tiller
istio=sidecar-injector
release=istio
...
Data
====
config:
----
policy: enabled
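Besides policy, the ConfigMap's config entry also carries the injection template that defines what gets added to each pod. Roughly (a heavily abbreviated sketch; the real template is long and versioned with the chart):

```yaml
# Sketch of the ConfigMap's "config" entry (abbreviated)
policy: enabled
template: |-
  initContainers:
  - name: istio-init
    ...
  containers:
  - name: istio-proxy
    ...
```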
Check the namespace label
Namespaces that need automatic injection must carry the label istio-injection=enabled:
[root@test1 ~]# kubectl get namespace -L istio-injection
NAME STATUS AGE ISTIO-INJECTION
default Active 3d enabled
istio-system Active 3d
kube-public Active 3d
kube-system Active 3d
If a namespace is missing the label, add it with:
kubectl label namespace default istio-injection=enabled
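Injection can also be overridden per workload with the sidecar.istio.io/inject annotation on the pod template. A minimal sketch, using a hypothetical Deployment name:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app                 # hypothetical workload name
spec:
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
      annotations:
        sidecar.istio.io/inject: "false"   # opt these pods out of injection
    spec:
      containers:
      - name: app
        image: nginx:alpine
```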
The sidecar auto-injection process
The webhook flow
Inspect the sidecar webhook configuration:
[root@test1 ~]# kubectl get MutatingWebhookConfiguration -n istio-system
NAME CREATED AT
istio-sidecar-injector 2018-11-12T09:14:44Z
[root@test1 ~]# kubectl describe MutatingWebhookConfiguration istio-sidecar-injector -n istio-system
Name: istio-sidecar-injector
Namespace:
Labels: app=istio-sidecar-injector
chart=sidecarInjectorWebhook-1.0.3
heritage=Tiller
release=istio
... ...
Webhooks:
Client Config:
... ...
Service:
Name: istio-sidecar-injector
Namespace: istio-system
Path: /inject
Failure Policy: Fail
Name: sidecar-injector.istio.io
Namespace Selector:
Match Labels:
istio-injection: enabled
Rules:
API Groups:
API Versions:
v1
Operations:
CREATE
Resources:
pods
As shown above, the sidecar webhook fires when a pod is created: the API server sends an inject request to the istio-sidecar-injector service (POST https://istio-sidecar-injector.istio-system.svc:443/inject?timeout=30s).
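The body of that request is an AdmissionReview object. In simplified form it looks roughly like this (fields abbreviated, pod object trimmed; the UID matches the injector log below):

```json
{
  "apiVersion": "admission.k8s.io/v1beta1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "67d96021-e3ea-11e8-a721-00163e0c1d10",
    "kind": { "group": "", "version": "v1", "kind": "Pod" },
    "namespace": "default",
    "operation": "CREATE",
    "object": { "kind": "Pod", "spec": { "containers": [ { "name": "nginx" } ] } }
  }
}
```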
Check the istio-sidecar-injector logs:
[root@test-1 ~]# kubectl get pods -n istio-system | grep istio-sidecar
istio-sidecar-injector-d96cd9459-lbf66 1/1 Running 0 13d
[root@test-1 ~]# kubectl logs istio-sidecar-injector-d96cd9459-lbf66 -n istio-system
2018-11-09T06:40:53.895979Z info AdmissionReview for Kind=/v1, Kind=Pod Namespace=default Name= () UID=67d96021-e3ea-11e8-a721-00163e0c1d10 Rfc6902PatchOperation=CREATE UserInfo={system:unsecured [system:masters system:authenticated] map[]}
2018-11-09T06:40:53.897821Z info AdmissionResponse: patch=[{"op":"add","path":"/spec/initContainers","value":[{"name":"istio-init","image":"docker.io/istio/proxy_init:1.0.0","args":["-p","15001","-u","1337","-m","REDIRECT","-i","10.0.0.1/24","-x","","-b","80,","-d",""] ... ...},{"op":"add","path":"/spec/containers/-","value":{"name":"istio-proxy","image":"docker.io/istio/proxyv2:1.0.0","args":["proxy","sidecar",... ...\"initContainers\":[\"istio-init\"],\"containers\":[\"istio-proxy\"],\"volumes\":[\"istio-envoy\",\"istio-certs\"],\"imagePullSecrets\":null}"}}]
After handling the inject request, the webhook returns a patch that adds two containers, istio-init and istio-proxy. Let's examine them in detail.
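The patch in the log is a list of RFC 6902 (JSONPatch) operations, which the API server applies to the pod object before it is persisted. A minimal Python sketch of how those two "add" operations transform the pod spec (illustrative only, not real apiserver code):

```python
def apply_add_ops(obj, ops):
    """Apply a list of JSONPatch 'add' operations to a dict in place."""
    for op in ops:
        assert op["op"] == "add"
        parts = op["path"].strip("/").split("/")
        target = obj
        for key in parts[:-1]:          # walk down to the parent of the target
            target = target[key]
        last = parts[-1]
        if last == "-":                 # "-" means append to the end of a list
            target.append(op["value"])
        else:                           # otherwise set (or create) the member
            target[last] = op["value"]
    return obj

pod = {"spec": {"containers": [{"name": "nginx"}]}}
patch = [
    {"op": "add", "path": "/spec/initContainers",
     "value": [{"name": "istio-init"}]},
    {"op": "add", "path": "/spec/containers/-",
     "value": {"name": "istio-proxy"}},
]
apply_add_ops(pod, patch)
print([c["name"] for c in pod["spec"]["containers"]])  # ['nginx', 'istio-proxy']
```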
Get the pod details:
[root@test-1 ~]#kubectl describe pod nginx-dm-fff68d674-9tv9w
Name: nginx-dm-fff68d674-9tv9w
Namespace: default
Node: 10.0.3.126/10.0.3.126
Start Time: Fri, 09 Nov 2018 14:40:53 +0800
Labels: name=nginx
pod-template-hash=999248230
Annotations: sidecar.istio.io/status={"version":"5aa52d92ced8dab93e04a5a4701773b2f3d78968c04b05bb430f32e80a4d9be1","initContainers":["istio-init"],"containers":["istio-proxy"],...
Status: Running
IP: 172.30.2.21
Controlled By: ReplicaSet/nginx-dm-fff68d674
Init Containers:
istio-init:
Container ID: docker://43668b6cf4bb331542b8d98348a7670dad99b735aa0ef0ca572bf4ee1966538b
Image: docker.io/istio/proxy_init:1.0.0
Image ID: docker-pullable://istio/proxy_init@sha256:345c40053b53b7cc70d12fb94379e5aa0befd979a99db80833cde671bd1f9fad
Port: <none>
Host Port: <none>
Args:
-p
15001
... ...
Containers:
nginx:
Container ID: docker://d917ffa9282bc4f82a0af1c8cbd6b51c0392fca6a85de6f8db6da128700db204
Image: nginx:alpine
Image ID:
Port: 80/TCP
Host Port: 0/TCP
istio-proxy:
Container ID: docker://932a8bc6b85f1106cde057bd55598337bf7f9963fc4e796d3d88907d717a8eff
Image: docker.io/istio/proxyv2:1.0.0
Image ID: docker-pullable://istio/proxyv2@sha256:77915a0b8c88cce11f04caf88c9ee30300d5ba1fe13146ad5ece9abf8826204c
Port: <none>
Host Port: <none>
Args:
proxy
sidecar
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
... ...
As the pod details show, two extra containers were injected alongside the pod's own container. This is the work of istio-sidecar-injector.
proxy_init
proxy_init runs as an init container. Init containers perform a pod's initialization tasks; only after they finish and exit do the regular containers start.
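As a general illustration (not Istio-specific, hypothetical names), init containers are declared alongside regular containers and must run to completion first:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                  # hypothetical pod name
spec:
  initContainers:
  - name: setup                    # runs and exits before "app" starts
    image: busybox
    command: ["sh", "-c", "echo initializing"]
  containers:
  - name: app
    image: nginx:alpine
```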
[root@test-1 ~]# docker inspect docker.io/istio/proxy_init:1.0.0
[
{
"RepoTags": [
"istio/proxy_init:1.0.0",
"gcr.io/istio-release/proxy_init:1.0.0"
],
"ContainerConfig": {
...
"Cmd": [
"/bin/sh",
"-c",
"#(nop) ",
"ENTRYPOINT [\"/usr/local/bin/istio-iptables.sh\"]"
],
...
},
]
As the Cmd/ENTRYPOINT above shows, this container does nothing but run the istio-iptables.sh script.
The script's contents:
...
while getopts ":p:u:g:m:b:d:i:x:h" opt; do
case ${opt} in
p)
PROXY_PORT=${OPTARG}
;;
u)
...
The script sets up iptables rules to hijack the pod's traffic. Combined with the -p 15001 flag seen earlier, this redirects the pod's traffic to Envoy's port 15001.
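In essence the script installs NAT rules along these lines (a simplified sketch inferred from the flags above, not the script's full logic):

```shell
# Create a chain that redirects TCP traffic to Envoy's port (-p 15001)
iptables -t nat -N ISTIO_REDIRECT
iptables -t nat -A ISTIO_REDIRECT -p tcp -j REDIRECT --to-ports 15001
# Send outbound TCP from the pod through that chain, except traffic
# generated by Envoy itself (UID 1337, from -u 1337), to avoid a loop
iptables -t nat -A OUTPUT -p tcp -m owner --uid-owner 1337 -j RETURN
iptables -t nat -A OUTPUT -p tcp -j ISTIO_REDIRECT
```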
proxyv2
Inspect the istio-proxy processes inside the pod:
[root@test-1 ~]# kubectl exec nginx-dm-fff68d674-9tv9w -c istio-proxy -- ps -ef
UID PID PPID C STIME TTY TIME CMD
istio-p+ 1 0 0 Nov09 ? 00:00:12 /usr/local/bin/pilot-agent proxy sidecar --configPath /etc/istio/proxy --binaryPath /usr/local/bin/envoy --serviceCluster istio-proxy --drainDuration 45s --parentShutdownDuration 1m0s --discoveryAddress istio-pilot.istio-system:15007 --discoveryRefreshDelay 1s --zipkinAddress zipkin.istio-system:9411 --connectTimeout 10s --statsdUdpAddress istio-statsd-prom-bridge.istio-system:9125 --proxyAdminPort 15000 --controlPlaneAuthPolicy NONE
istio-p+ 24 1 0 Nov09 ? 00:42:50 /usr/local/bin/envoy -c /etc/istio/proxy/envoy-rev0.json --restart-epoch 0 --drain-time-s 45 --parent-shutdown-time-s 60 --service-cluster istio-proxy --service-node sidecar~172.30.2.21~nginx-dm-fff68d674-9tv9w.default~default.svc.cluster.local --max-obj-name-len 189 -l warn --v2-config-only
Two processes are running: pilot-agent and envoy.
pilot-agent generates Envoy's configuration (drawing on the Kubernetes API) and manages Envoy's lifecycle: starting it, hot-restarting it, and shutting it down. The generated configuration is at /etc/istio/proxy/envoy-rev0.json; inspect it yourself for the details.
envoy is started by pilot-agent. It reads the bootstrap file pilot-agent generated for it (envoy-rev0.json), obtains Pilot's address from that file, and then pulls dynamic configuration from Pilot over the xDS interface of the standard data-plane API.
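For orientation, the shape of envoy-rev0.json is roughly the following (a heavily abbreviated sketch of an Envoy v2 bootstrap; the Pilot address and port here are illustrative):

```json
{
  "node": {
    "id": "sidecar~172.30.2.21~nginx-dm-fff68d674-9tv9w.default~default.svc.cluster.local"
  },
  "dynamic_resources": {
    "lds_config": { "ads": {} },
    "cds_config": { "ads": {} },
    "ads_config": {
      "api_type": "GRPC",
      "grpc_services": [ { "envoy_grpc": { "cluster_name": "xds-grpc" } } ]
    }
  },
  "static_resources": {
    "clusters": [
      {
        "name": "xds-grpc",
        "type": "STRICT_DNS",
        "hosts": [ { "socket_address": { "address": "istio-pilot.istio-system", "port_value": 15010 } } ]
      }
    ]
  }
}
```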
References:
1.https://istio.io/docs/setup/kubernetes/sidecar-injection/
2.https://zhaohuabing.com/post/2018-09-25-istio-traffic-management-impl-intro/