
Kubernetes -------- The Workflow of kubeadm init

Understanding the kubeadm init workflow through its initialization output.

Initialization output

W0519 01:47:26.317891    2272 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 10.0.0.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.0.0.50 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.0.0.50 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0519 01:50:33.295075    2272 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0519 01:50:33.296322    2272 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.005020 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: q1k6jx.wnw5mn8qqt0ia3wc
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.50:6443 --token q1k6jx.wnw5mn8qqt0ia3wc \
    --discovery-token-ca-cert-hash sha256:c99d2b8e33ab852ec41e9a0fe816683d5d791d7afa8041a802b986cac8456553 

1. Preflight Checks

  1. Is the Linux kernel version 3.10 or above?
  2. Is the Linux Cgroups module available?
  3. Does the machine's hostname meet the standard? In Kubernetes, node names, like every API object stored in etcd, must use standard DNS naming (RFC 1123).
  4. Do the installed kubeadm and kubelet versions match?
  5. Are Kubernetes binaries already installed on the machine?
  6. Are the Kubernetes working ports 10250/10251/10252 already occupied?
  7. Do Linux commands such as ip and mount exist?
  8. Is Docker installed?

and so on. The preflight phase can also be run on its own, as sketched below.
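
A minimal sketch, assuming kubeadm v1.18 as in the log above (kubeadm has supported running individual init phases since v1.13):

  # run only the pre-flight checks (this phase also pre-pulls images)
  kubeadm init phase preflight

  # check by hand whether the control-plane ports are already occupied
  ss -tlnp | grep -E ':(6443|10250|10251|10252)'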

2. Generating the certificates needed to serve, and their directory

When Kubernetes serves clients, kube-apiserver must be reached over HTTPS.
Location of the generated CA: /etc/kubernetes/pki/ca.{crt,key}

When kube-apiserver calls a kubelet, e.g. for streaming operations such as fetching container logs with kubectl, it also needs a secure connection.
Location of the generated certificate: /etc/kubernetes/pki/apiserver-kubelet-client.{crt,key}

The certificate kube-apiserver uses when asking etcd to persist state: /etc/kubernetes/pki/apiserver-etcd-client.{crt,key}

...
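
The generated certificates can be inspected with openssl. For example, to confirm the SANs reported for the apiserver serving certificate in the log above:

  # print the Subject Alternative Names baked into the apiserver cert
  openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
      | grep -A1 'Subject Alternative Name'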

3. Generating the kubeconfig files for accessing kube-apiserver

kubeadm generates the configuration files that the other components need in order to access kube-apiserver.

Path of the configuration files: /etc/kubernetes/xxx.conf
[root@master kubelet]# ll /etc/kubernetes/*.conf
-rw-------. 1 root root 5445 May 19 01:50 /etc/kubernetes/admin.conf
-rw-------. 1 root root 5481 May 19 01:50 /etc/kubernetes/controller-manager.conf
-rw-------. 1 root root 1857 May 19 01:50 /etc/kubernetes/kubelet.conf
-rw-------. 1 root root 5433 May 19 01:50 /etc/kubernetes/scheduler.conf

These files record the Master node's IP address, port, certificate data, and working context. For example:

[root@master kubelet]# cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EVXhPREUzTlRBek1Gb1hEVE13TURVeE5qRTNOVEF6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWF6CjdKcEkrL0JjVkRlMUFpNy80WDFaYmE2b1IwcE8zb1MxVTVXREllRmJacEpaVlpUcEtFSDJUK0phSEZLTEU3Q3MKc1lNQ3R5aHMwSEdYeHkwY211N1lPR2N1N2JHcUgrajR2MDM4ckpLWm9XM0x5bkVtOUs1MXUvZmNDR0NsQ3NxTgplQWVWSWtQWUlkeWprZEkvckdKZWZQTG82M2xWSHI1Yi9LTUtETTNQQndiZlE3NFRWNS9rVE5ZdjY5VDNIdm4wClBDUUVnMXMxeFMvSk5PdTQxQy9mMVczNUxIZHYvVlg1YnI5WXN5cG1RSG1RRUQxU2xEa0xyQ28vZ016cTdlMEwKNnhHbEZtYUlYMkd6RmF5MGNyZVVGTkFtTUQwaGx5WHF0MnUrdndjSEFZZERGNzhSekRvY2ZEa2VGMGl0TnEvOQp0cWNNbUpCOUlOWjZ1b2ovcC9rQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKSWJRME85dHlWTVdUaUVWYU9TdUQrWEFjTlMKRXkzT2JCR1NyM1RlTnA1bWFpazg0TjE3b2g5Z0FSQVhPMS8xckpPeWpIK1FrOE4zRnY2WXBzNVY0MktnTWx3SQplT0VFdy9YSlZGU3JkVi9ob0pUMUxyV1dGY1pqTis3eGw5bkJ0MmdyTXo2K1JxcDQ4alpPUWpIRytZcEJvNWkyCktvMWFKcHFGL0hzM2RzbytsVWNrWitRRS9NZ3ViVkloRE96ZUF6LzkwMFNlcC9XNkl2NTVkTUJsbTh1SEFNQmQKTEsvdlJ2K2J3bHdwZ1dhVVErNlFmL1ZMRzYxYXRiUmR4WW8wVWJDRUtyL3dxU1N5MURyQmRvOUFsKyt3c0JBWgpYa3YzSzFHL29zUGRIbnVnOUdjak1WZWFrcDhVYzlJVXVpREhoNVBUN1VtYmYyNmNWaEh0VG5mdjRoND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.0.0.50:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJTHJPNTd3UDUyZkV3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBMU1UZ3hOelV3TXpCYUZ3MHlNVEExTVRneE56VXdNekphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJ0U3I4ZzB6Qlp4aTFxTEYKU2pJd09NeVVYVXovbjFIcGNVL0hjQ0tSalQ0L1k5dUtZNVMrTWJXVVg3UkxZTXBzcWhKb1ZRSTNRYzdRRUw1TwpjdUlIVDFKbDIxVFh5Z3JlQlZNRGR4cEtCcnFZQll2YThkVjZpWlZEL1VieXJCSCt5SnA5WGxoWE83NEovOEZJCnNmVDltR3pPRENyS1VTOE1nU1lxTnI3elZLN3BVSEh1ZVlHMC9aTVhqSmF4NkFaeXdEU2FNUnBtRllLRkNEMVUKM0dqTVVJVmtqNFVEZHZRN1QxOExDK0grUFVhSzgyQ1p2MHR4b243cE9ZZ3JobmFFUEQ0WFZzV3g5OVA3Y24xUwpkSXFMUFNlUXhrMXk2N3orcnpYV3pSL0ZPVGJGaGdtWlNvaThaSmtyTmdmZFVSWlNoazdCcU5KWGlkdFdDQTRUCmJiV0lSd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFISjdQUEZjUFRDN3FBTnN2T3hqd2NUVXowK0FobGJ5cDhmVgpaTE0vdmJCUFRHOU9uRitNQkZieVBmR1pPN0FHcGRtREY3UTR5WGFyK3FIMHJiY3pXWmJqMW9WRkcwWXlUaDhECmFEZjd0RmdQMkJTUktqclpzODdJcUtEYXhCWUhmaG9SS1AxdFpUN1UyTUFQUXg3OUdUUjgxNmNIMzVidlp2d2kKTlpjUEU0NFBsNUhuUTN2U3E1RzMzcEJPSFd5cStUOTlEZUo1ci9oOHJVTnFidDN0cGhEaThLVys0Rkl4RlFKSQpqdGJ6c3RZSkR5YWRWNDFlV1d0RVYwTDdDWUovWWVCRi9PWWMwNU1CSVRIMkcvYmd1bDJwbjhQREkwUU5MWG5hCllUOTV5dm10RHhvWnpmYjdlYnV5UVVkZkRJcnB4YUprL2FOV2dkYi9jYldoNTUwODhQST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMnRTcjhnMHpCWnhpMXFMRlNqSXdPTXlVWFV6L24xSHBjVS9IY0NLUmpUNC9ZOXVLClk1UytNYldVWDdSTFlNcHNxaEpvVlFJM1FjN1FFTDVPY3VJSFQxSmwyMVRYeWdyZUJWTURkeHBLQnJxWUJZdmEKOGRWNmlaVkQvVWJ5ckJIK3lKcDlYbGhYTzc0Si84RklzZlQ5bUd6T0RDcktVUzhNZ1NZcU5yN3pWSzdwVUhIdQplWUcwL1pNWGpKYXg2QVp5d0RTYU1ScG1GWUtGQ0QxVTNHak1VSVZrajRVRGR2UTdUMThMQytIK1BVYUs4MkNaCnYwdHhvbjdwT1lncmhuYUVQRDRYVnNXeDk5UDdjbjFTZElxTFBTZVF4azF5Njd6K3J6WFd6Ui9GT1RiRmhnbVoKU29pOFpKa3JOZ2ZkVVJaU2hrN0JxTkpYaWR0V0NBNFRiYldJUndJREFRQUJBb0lCQVFDMW0ydjduSktzWkdYdQpoUFZBcHpnMzJ5aUI2ZVgyM2E3ajEvYkhEQmxKWTlDTjJlUVcwcG1wZlcxZW82MHU3YStTMFdYK3JyRVhEMERECnRIdzhnWExabEtOdGpCTHQzV2oyZURkVy85MVJpa2VoeXJod25OOXVFUTkwd2cyaFdlbmRwOERGcklEdzFyMUwKb0tmbzhFNEowcnFKaEhXVlBIdWZMd0kzbnU4b1pkdHFGdUtMQnBSWnNyQzNKWXZlRkRkODZtMWR4TmVWczc4Qwo3Q1VRZzVJOUp6Vkl3WDVoQVhZbWdtTm1YNDA3cnpHTERzYlpYN0JhNmlrTDJHSUNiQngzMlh0QXVCNnRBdEIyCk5ZWmNrSk9mZHlvV1BtWjU0V1JOZEpGYWlmZXBYZ2M4N28wTlhTV2Z0SDdNUEx5eWlYWmRwSVVMRllpbjhHL1IKbGR6WjhMNHhBb0dCQU9FNzZEdE04MTh3VFZCZkVxc0RZRVJpcnphQitUa1Q5TmhxNXRLSUtGb3JYYURTRk5sZwpvVUtSejBvdVZleTNuenpQa004cTc3b2t4YkFaR1Q2QnB4WVJzcWNaWlJTTzErOU1rVEZOZGhqSzhVM0ZoNnlFCjNJTDJPVURYNjNrTnprNm8rRyt6Z21WaStteFFKZERNVmhKbnNCQjVRMzFhdHNjYUtDWFROTm1KQW9HQkFQaTQKMmlDVzY2TVFMMHFpdjJPekZ5cCs1SlZ5MHh4OTcwV1dtTlZKNWY3dGh5SG1melR4aW51Nkl4c2Y4bG1NdGpjbApWSnJBcjBoOTRGMkl0aERORWtYSlZIaTNQK3lQUWh0QWs4WmJmdGJtTWEzcjZnNTlpNjN0S1p1SHZHWS9vVTBhCnpZams4Z29TNWZqTEZhQnZLaWpPWU5tN2VIeVR2ZFV5a0Iya2RHOVBBb0dBVVFWeTBib3BwZWxETnBFc3J2WGsKOEZTcmdLa2FsTXkzL0EzZ2dJVllOcTk0Mjd3V29lZWZ1c21tenFHQ2FVZlljVkNkWDlpcktjUEdsVVZDRG5rbgpPTW9mQVBzaW9GV09HZGZxTnRrTmpYZWJmQVY5ZTdMRGZCekVsYTNXVjlKK2owODdKenRrd2NIc0lZQm5TZ2ZuClFuR29KUlRxRVRMTG95Mm1tWXl6YXprQ2dZQTZnM0o5bkVQUFZ1MXBSNlJ6Rmh6ckdITTZYWXNnOXRlbHJXcEQKTTJGeWVmc0NsTEYwaVNhbE9RTXRUSFM5Y0ljbHJoaWJWNTFsRm9nRU9UZHIrSExHRERsZE5POUsvZUQxZkZuSApucHJXZjgxTU4yWVhCcDRueXRXeEMxdmRTamJ1WnRIWDFOVEVYZkhRZUNhY1djcTNVdVlpRXlLalhEYWF3NHg0CkRNcC9yUUtCZ0F6bCtjSDVDMkhOdUJZeGV4My9rMXM4d3lFQVdHTUcrU0NWRWd0QTNPaW5GUFp1SmhpN3pXcjEKaVZRL05UdDE2WGdZZFY4WVJidG5Wa25qKzNXckRCVVE1Qk5JVkUzMzNZMUFFL3NnS3VjS1NtQ2hCWFl2UEd4VApKYlVtTUROY1ZNeUd4UzdaS0t5TUQrSlZvVFI2ZlhqeHhKUzdFRW8vdEFmZ2xmaWZGcnBxCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
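
Any of these kubeconfig files can be passed to kubectl with --kubeconfig; the mkdir/cp steps in the init output above simply install admin.conf as the default. A quick check:

  # talk to the cluster as kubernetes-admin without copying the file
  kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes

  # show the contents in readable form, with certificate data redacted
  kubectl --kubeconfig /etc/kubernetes/admin.conf config view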

4. Generating the Pod manifests for the Master components

kubeadm generates YAML manifests for kube-apiserver, kube-controller-manager, kube-scheduler, and etcd, and starts these components as Static Pods.

Paths of the component YAML manifests:

[root@master ~]# ll /etc/kubernetes/manifests/
total 16
-rw-------. 1 root root 1855 May 19 01:50 etcd.yaml
-rw-------. 1 root root 2726 May 19 01:50 kube-apiserver.yaml
-rw-------. 1 root root 2594 May 19 01:50 kube-controller-manager.yaml
-rw-------. 1 root root 1149 May 19 01:50 kube-scheduler.yaml

 

When the kubelet starts, it automatically watches this directory, loads all of the Pod YAML files in it, and starts the corresponding Pods.
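
The watched directory is set by staticPodPath in the kubelet configuration written during the [kubelet-start] step; this can be verified on the node:

  # kubeadm points the kubelet at the manifests directory
  grep staticPodPath /var/lib/kubelet/config.yaml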

Once the Master containers are up, kubeadm performs health checks against localhost:6443/healthz.
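
The same endpoint can be queried by hand; in a default kubeadm cluster /healthz is readable anonymously, so -k (skip certificate verification) is all curl needs:

  # prints "ok" once the apiserver is healthy
  curl -k https://localhost:6443/healthz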

5. Generating the bootstrap token

Worker nodes can use this token to join the cluster via kubeadm join.
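
Bootstrap tokens expire (24 hours by default), and kubeadm can list or mint them after the fact:

  # list existing bootstrap tokens and their TTLs
  kubeadm token list

  # create a fresh token and print the matching kubeadm join command
  kubeadm token create --print-join-command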

6. Saving configuration information

kubeadm saves the Master's important configuration into etcd as ConfigMaps, for worker nodes to consume. As the log above shows, the init configuration is stored in the "kubeadm-config" ConfigMap in the kube-system namespace, and the discovery information in the "cluster-info" ConfigMap in the kube-public namespace.
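
Both can be read back with kubectl:

  # the ClusterConfiguration used for this init
  kubectl -n kube-system get configmap kubeadm-config -o yaml

  # the discovery information consumed by joining nodes
  kubectl -n kube-public get configmap cluster-info -o yaml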

7. Installing the default add-ons

CoreDNS and kube-proxy are the two add-ons that must be installed.
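
Right after init, their workloads show up in kube-system (CoreDNS as a Deployment, kube-proxy as a DaemonSet):

  kubectl -n kube-system get deployment coredns
  kubectl -n kube-system get daemonset kube-proxy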

The workflow of kubeadm join

The joining node reads the address, port, and CA certificate stored in the cluster-info ConfigMap, then uses the bootstrap token generated by kubeadm init to contact kube-apiserver in a "secure mode"; with the RBAC rules configured during init, the node can then post a CSR and obtain its long-term client certificate.
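
The --discovery-token-ca-cert-hash passed to kubeadm join above is the SHA-256 of the cluster CA's public key; it can be recomputed from ca.crt on the master with the standard openssl recipe:

  # derive the sha256:<hash> value expected by kubeadm join
  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'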
