Notes from installing Kubernetes 1.7.2
System information
cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
Environment
| IP address  | Hostname |
| ----------- | -------- |
| 10.10.6.11  | master   |
| 10.10.6.12  | node1    |
| 10.10.6.13  | node2    |
Part 1
Base environment setup (required on all three machines; the master is used as the example below)
Set the hostname
hostnamectl set-hostname master
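The three hostnames are not resolvable through DNS (kubeadm later warns that hostname "master" could not be reached), so adding them to /etc/hosts on every machine is a reasonable extra step; a minimal sketch using the addresses from the table above:
# optional: map the cluster hostnames locally (addresses from the environment table)
cat >> /etc/hosts <<EOF
10.10.6.11 master
10.10.6.12 node1
10.10.6.13 node2
EOF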
Disable SELinux and firewalld
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
systemctl disable firewalld
systemctl stop firewalld
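The sed above only takes effect after a reboot; an optional way to apply and verify the change in the current session (not part of the original steps):
setenforce 0                     # switch SELinux to permissive immediately
getenforce                       # should print Permissive (or Disabled after a reboot)
systemctl is-active firewalld    # should print inactive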
Set kernel parameters (sysctl)
cat >> /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
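If sysctl -p reports that the net.bridge.* keys do not exist, the bridge netfilter module is probably not loaded yet; loading it first usually resolves this (assumes the br_netfilter module is available on this kernel):
modprobe br_netfilter                # load the bridge netfilter module
sysctl -p /etc/sysctl.d/k8s.conf     # re-apply the settings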
Configure the Docker and Kubernetes yum repositories
cat >> /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF

cat >> /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
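Optionally refresh the yum metadata and confirm that both repositories are visible (a quick sanity check, not part of the original steps):
yum clean all
yum makecache fast
yum repolist | grep -iE 'docker-repo|kubernetes'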
Part 2 (run on all three machines)
Install Docker and kubeadm
yum install -y docker-ce
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://vaflkxbk.mirror.aliyuncs.com"]
}
EOF
Start Docker, then check the Docker information with docker version:
docker version
Client:
 Version:       17.12.0-ce
 API version:   1.35
 Go version:    go1.9.2
 Git commit:    c97c6d6
 Built:         Wed Dec 27 20:10:14 2017
 OS/Arch:       linux/amd64

Server:
 Engine:
  Version:      17.12.0-ce
  API version:  1.35 (minimum version 1.12)
  Go version:   go1.9.2
  Git commit:   c97c6d6
  Built:        Wed Dec 27 20:12:46 2017
  OS/Arch:      linux/amd64
  Experimental: false
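With Docker running, docker info can also be used to confirm that the registry mirror configured in /etc/docker/daemon.json is active (optional check):
docker info | grep -A 1 'Registry Mirrors'   # should list https://vaflkxbk.mirror.aliyuncs.com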
Install the Kubernetes packages
# the delimiter is quoted so the ${...} version variables expand when the script runs, not when the file is written
cat > /root/kubernetes.sh <<'EOF'
KUBE_VERSION=1.7.2
KUBE_PAUSE_VERSION=3.0
KUBE_CNI_VERSION=0.5.1
ETCD_VERSION=3.0.17

yum install -y kubernetes-cni-${KUBE_CNI_VERSION}-0.x86_64 \
               kubelet-${KUBE_VERSION}-0.x86_64 \
               kubectl-${KUBE_VERSION}-0.x86_64 \
               kubeadm-${KUBE_VERSION}-0.x86_64
EOF
chmod +x /root/kubernetes.sh && sh /root/kubernetes.sh
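A quick way to confirm that the expected package versions were installed (optional check):
rpm -qa | grep -E 'kubelet|kubeadm|kubectl|kubernetes-cni'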
Set the kubelet cgroup driver to cgroupfs
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
## cgroupfs is chosen to match the Cgroup Driver: cgroupfs value reported by docker info.
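Since a systemd drop-in was edited, reloading systemd and double-checking both sides of the setting is a reasonable follow-up (optional check, assuming Docker is already running):
systemctl daemon-reload
docker info 2>/dev/null | grep -i 'cgroup driver'                          # expected: cgroupfs
grep cgroup-driver /etc/systemd/system/kubelet.service.d/10-kubeadm.conf   # expected: --cgroup-driver=cgroupfs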
Enable and start the services
systemctl enable docker
systemctl enable kubelet
systemctl start docker
systemctl start kubelet
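At this point the kubelet may keep restarting until kubeadm init (or kubeadm join) has written its configuration; that is expected, and the service state and logs can still be inspected (optional check):
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager | tail -n 20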
Download the images
cat images.sh
set -o errexit
set -o nounset
set -o pipefail

KUBE_VERSION=v1.7.2
KUBE_PAUSE_VERSION=3.0
ETCD_VERSION=3.0.17
DNS_VERSION=1.14.4
FLANNEL=v0.8.0-amd64

GCR_URL=gcr.io/google_containers
ALIYUN_URL=registry.cn-hangzhou.aliyuncs.com/szss_k8s

images=(kube-proxy-amd64:${KUBE_VERSION}
        kube-scheduler-amd64:${KUBE_VERSION}
        kube-controller-manager-amd64:${KUBE_VERSION}
        kube-apiserver-amd64:${KUBE_VERSION}
        pause-amd64:${KUBE_PAUSE_VERSION}
        etcd-amd64:${ETCD_VERSION}
        k8s-dns-sidecar-amd64:${DNS_VERSION}
        k8s-dns-kube-dns-amd64:${DNS_VERSION}
        k8s-dns-dnsmasq-nanny-amd64:${DNS_VERSION}
        flannel:${FLANNEL})

for imageName in ${images[@]} ; do
  docker pull $ALIYUN_URL/$imageName
  docker tag $ALIYUN_URL/$imageName $GCR_URL/$imageName
  docker rmi $ALIYUN_URL/$imageName
done
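Assuming the script is saved as images.sh in the current directory (as the cat above implies), it can be run the same way as the install script:
chmod +x images.sh && sh images.sh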
Check the downloaded images to confirm they are correct
docker images
REPOSITORY                                                TAG            IMAGE ID       CREATED         SIZE
gcr.io/google_containers/kube-apiserver-amd64             v1.7.2         4935105a20b1   6 months ago    186MB
gcr.io/google_containers/kube-proxy-amd64                 v1.7.2         13a7af96c7e8   6 months ago    115MB
gcr.io/google_containers/kube-controller-manager-amd64    v1.7.2         2790e95830f6   6 months ago    138MB
gcr.io/google_containers/kube-scheduler-amd64             v1.7.2         5db1f9874ae0   6 months ago    77.2MB
gcr.io/google_containers/flannel                          v0.8.0-amd64   9db3bab8c19e   6 months ago    50.7MB
gcr.io/google_containers/k8s-dns-sidecar-amd64            1.14.4         38bac66034a6   7 months ago    41.8MB
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64      1.14.4         f7f45b9cb733   7 months ago    41.4MB
gcr.io/google_containers/kubernetes-dashboard-amd64       v1.6.0         8b3d11182363   10 months ago   109MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64           1.14.4         f8363dbf447b   11 months ago   52.4MB
gcr.io/google_containers/etcd-amd64                       3.0.17         243830dae7dd   11 months ago   169MB
gcr.io/google_containers/pause-amd64                      3.0            99e59f495ffa   21 months ago   747kB
Part 3
Run on the master (10.10.6.11)
kubeadm init --apiserver-advertise-address=10.10.6.11 --kubernetes-version=v1.7.2 --token=863f67.19babbff7bfe8543 --pod-network-cidr=10.244.0.0/16
Output
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 1.12
[preflight] WARNING: hostname "master" could not be reached
[preflight] WARNING: hostname "master" lookup master on 114.114.114.114:53: no such host
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.6.11]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 31.001278 seconds
[token] Using token: 863f67.19babbff7bfe8543
[apiconfig] Created RBAC rules
[addons] Applied essential addon: kube-proxy
[addons] Applied essential addon: kube-dns

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 863f67.19babbff7bfe8543 10.10.6.11:6443
Set the KUBECONFIG environment variable; here the variable is placed in /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf
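To make the variable persistent across logins as described above, it can be appended to /etc/profile (a minimal sketch):
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
source /etc/profile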
Download kube-flannel-rbac.yml and kube-flannel.yml
wget https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel-rbac.yml
wget https://raw.githubusercontent.com/coreos/flannel/v0.8.0/Documentation/kube-flannel.yml
The flannel image referenced in kube-flannel.yml must match the flannel image downloaded above.
vi kube-flannel-rbac.yml
# Create the clusterrole and clusterrolebinding:
# $ kubectl create -f kube-flannel-rbac.yml
# Create the pod using the same namespace used by the flannel serviceaccount:
# $ kubectl create --namespace kube-system -f kube-flannel.yml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
vi kube-flannel.yml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      containers:
      - name: kube-flannel
        image: gcr.io/google_containers/flannel:v0.8.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: gcr.io/google_containers/flannel:v0.8.0-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
Run the following commands:
kubectl --namespace kube-system apply -f kube-flannel-rbac.yml
kubectl --namespace kube-system apply -f kube-flannel.yml
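A quick way to confirm the flannel DaemonSet came up, using the names and labels from the manifest above (optional check):
kubectl get daemonset kube-flannel-ds -n kube-system
kubectl get pods -n kube-system -l app=flannel -o wide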
Run on both node machines
kubeadm join --token 863f67.19babbff7bfe8543 10.10.6.11:6443 --skip-preflight-checks
Output
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "10.10.6.11:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.10.6.11:6443"
[discovery] Cluster info signature and contents are valid, will use API Server "https://10.10.6.11:6443"
[discovery] Successfully established connection with API Server "10.10.6.11:6443"
[bootstrap] Detected server version: v1.7.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request
[csr] Received signed certificate from the API server, generating KubeConfig...
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
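As the join output suggests, the new nodes can then be verified from the master (optional check):
kubectl get nodes -o wide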
Check the cluster state on the master
[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-master                       1/1       Running   0          2h
kube-system   kube-apiserver-master             1/1       Running   0          2h
kube-system   kube-controller-manager-master    1/1       Running   0          2h
kube-system   kube-dns-2425271678-glrxd         3/3       Running   0          2h
kube-system   kube-flannel-ds-7tb2x             2/2       Running   0          2h
kube-system   kube-flannel-ds-pvwfv             2/2       Running   0          2h
kube-system   kube-flannel-ds-t5b3t             2/2       Running   1          2h
kube-system   kube-proxy-2k10j                  1/1       Running   0          2h
kube-system   kube-proxy-6tdhl                  1/1       Running   0          2h
kube-system   kube-proxy-dgfrb                  1/1       Running   0          2h
kube-system   kube-scheduler-master             1/1       Running   0          2h

[root@master ~]# kubectl get pods -n kube-system -o wide
NAME                              READY     STATUS    RESTARTS   AGE       IP           NODE
etcd-master                       1/1       Running   0          2h        10.10.6.11   master
kube-apiserver-master             1/1       Running   0          2h        10.10.6.11   master
kube-controller-manager-master    1/1       Running   0          2h        10.10.6.11   master
kube-dns-2425271678-glrxd         3/3       Running   0          2h        10.244.0.3   master
kube-flannel-ds-7tb2x             2/2       Running   0          2h        10.10.6.13   node2
kube-flannel-ds-pvwfv             2/2       Running   0          2h        10.10.6.11   master
kube-flannel-ds-t5b3t             2/2       Running   1          2h        10.10.6.12   node1
kube-proxy-2k10j                  1/1       Running   0          2h        10.10.6.13   node2
kube-proxy-6tdhl                  1/1       Running   0          2h        10.10.6.12   node1
kube-proxy-dgfrb                  1/1       Running   0          2h        10.10.6.11   master
kube-scheduler-master             1/1       Running   0          2h        10.10.6.11   master
Make sure every pod is in the Running state.
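As an optional smoke test (a hypothetical nginx deployment, not part of the original record), scheduling and pod networking can be exercised end to end:
kubectl run nginx --image=nginx --replicas=2 --port=80   # creates a Deployment in Kubernetes 1.7
kubectl get pods -o wide                                 # pods should land on node1/node2 with 10.244.x.x addresses
kubectl delete deployment nginx                          # clean up afterwards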