Analyzing the log output when initializing the master node with kubeadm init
root@master:~/code/shell# kubeadm init --image-repository registry.aliyuncs.com/google_containers
++ kubeadm init --image-repository registry.aliyuncs.com/google_containers
I0520 19:44:29.163146    5438 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0520 19:44:29.163265    5438 version.go:97] falling back to the local client version: v1.14.2
[init] Using Kubernetes version: v1.14.2
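
Two things stand out here: the version probe to https://dl.k8s.io timed out (expected on a network that cannot reach it), so kubeadm fell back to the local client version v1.14.2; and --image-repository redirects every control-plane image pull to the Aliyun mirror. As a rough sketch, the image names that result look like the following — the tags are assumed from the kubeadm v1.14 defaults, not printed in this log:

```shell
# Sketch: expand the image names kubeadm v1.14.2 would pull once
# --image-repository points at the Aliyun mirror. Tags below are assumed
# from the kubeadm v1.14 defaults, not taken from this log.
REPO=registry.aliyuncs.com/google_containers
IMAGES=""
for img in kube-apiserver:v1.14.2 kube-controller-manager:v1.14.2 \
           kube-scheduler:v1.14.2 kube-proxy:v1.14.2 \
           pause:3.1 etcd:3.3.10 coredns:1.3.1; do
  IMAGES="$IMAGES $REPO/$img"
  echo "$REPO/$img"
done
```

These images can also be pre-pulled before init with `kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers`, which is exactly what the preflight hint below suggests.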
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.16.151.146 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.16.151.146 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.151.146]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
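
The [certs] phase generates the whole cluster PKI under /etc/kubernetes/pki. The SANs printed above (master, kubernetes.default.svc.cluster.local, the service IP 10.96.0.1, the node IP 172.16.151.146) can be verified directly against the generated certificate; a quick check, assuming openssl is installed on the master:

```shell
# Print the Subject Alternative Names baked into the apiserver serving cert;
# they should match the DNS names and IPs listed in the [certs] log lines.
# CERT defaults to the path kubeadm reported above.
CERT=${CERT:-/etc/kubernetes/pki/apiserver.crt}
[ -f "$CERT" ] && openssl x509 -in "$CERT" -noout -text \
  | grep -A1 'Subject Alternative Name'
```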
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.503432 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: bggbum.mj3ogzhnm1wz07mj
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
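
The bootstrap token printed in the [bootstrap-token] phase is what worker nodes present when joining. Bootstrap tokens always have the fixed form of a 6-character ID, a dot, and a 16-character secret, all over [a-z0-9]; a quick format check on the token from this log:

```shell
# Bootstrap tokens have the format <6-char id>.<16-char secret>, charset [a-z0-9].
TOKEN='bggbum.mj3ogzhnm1wz07mj'
echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' \
  && echo "token format OK" || echo "token format BAD"
```

Note that bootstrap tokens expire (24 hours by default); a fresh token together with a complete join command can be regenerated later with `kubeadm token create --print-join-command`.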
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.16.151.146:6443 --token bggbum.mj3ogzhnm1wz07mj \
    --discovery-token-ca-cert-hash sha256:8f02f83357a965b5db5c5c70bc1dec4c57d507ebae8c50b0d2efef4c32f5d106
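
The sha256 value in --discovery-token-ca-cert-hash is not secret material: it is a hash of the cluster CA's public key, which joining nodes use to verify they are talking to the right API server. If the hash is lost, it can be recomputed on the master from the ca.crt generated in the [certs] phase; a sketch assuming openssl is available (CA path from the log above):

```shell
# Recompute the discovery hash from the cluster CA certificate:
# extract the public key, DER-encode it, and take its SHA-256 digest.
CA=${CA:-/etc/kubernetes/pki/ca.crt}
openssl x509 -pubkey -in "$CA" 2>/dev/null \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The 64-hex-digit output, prefixed with `sha256:`, is exactly what goes into the join command.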
posted @ 2019-05-21 10:52  lakeslove