Ubuntu 18.04: Installing Kubernetes (k8s) with kubeadm
-
Switch the Docker registry mirror
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
}
EOF
-
Restart Docker
systemctl daemon-reload && systemctl restart docker
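To confirm the mirror took effect, you can check the daemon info; the Aliyun mirror should appear under "Registry Mirrors":
docker info | grep -A 1 "Registry Mirrors"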
-
Disable swap
# Edit the partition config file /etc/fstab and comment out the swap line
# Note: Linux must be rebooted after this change
vim /etc/fstab
# Before: /swap.img none swap sw 0 0
# After:  # /swap.img none swap sw 0 0
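If you would rather turn swap off for the current session as well (kubelet refuses to start while swap is enabled), swapoff also works; the fstab edit above is what keeps it off after the reboot:
swapoff -a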
-
Reboot the system
reboot
-
Add the Aliyun Kubernetes apt repository and key, then update apt
# Open the /etc/apt/sources.list file and add the following line:
# deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
apt install curl -y && curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add - && apt update
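As a quick check that the repository was picked up, you can list the kubeadm versions it offers; 1.17.17-00 should be among them:
apt-cache madison kubeadm | grep 1.17.17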
-
Install kubectl, kubelet, and kubeadm (1.17.17)
apt install kubeadm=1.17.17-00 kubelet=1.17.17-00 kubectl=1.17.17-00
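Optionally, put the three packages on hold so a later apt upgrade cannot silently move the cluster to a different version:
apt-mark hold kubeadm kubelet kubectl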
-
Pull the Kubernetes base images. Because of the GFW, the native Google images cannot be pulled, so pull the mirrored copies from the Aliyun registry and re-tag them as k8s.gcr.io images
# pull_k8s.sh
images=(
    kube-apiserver:v1.17.17
    kube-controller-manager:v1.17.17
    kube-scheduler:v1.17.17
    kube-proxy:v1.17.17
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)
for imageName in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
-
Make the shell script executable and run it
chmod +x pull_k8s.sh && bash pull_k8s.sh
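To verify the pull and re-tagging worked, list the local images; all seven should now appear under the k8s.gcr.io prefix:
docker images | grep k8s.gcr.io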
-
Run the kubelet command to test that kubelet can start normally
kubelet # if no abnormal errors are reported, kubelet will run normally
-
Run the following command to initialize the k8s master node
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.17.17 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
If the following output appears, the k8s master node was initialized successfully:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.102:6443 --token dgt6ww.466onmm47x5hifwl \
--discovery-token-ca-cert-hash sha256:7033f3d2597cfe04ecdc0c97c0908ca46f749c4c1e4b7b0e6ddc59fe51aa1cc4
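The token and hash above are specific to this example. If you need the join command again later (bootstrap tokens expire after 24 hours by default), you can regenerate one on the master:
kubeadm token create --print-join-command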
-
Copy the kubeconfig file
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
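As a sanity check, kubectl should now be able to reach the API server; the master will show NotReady until the Pod network is deployed below:
kubectl get nodes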
-
Check the status of the master node's control-plane components
kubectl get cs
If scheduler and controller-manager show Unhealthy, do the following:
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
vim /etc/kubernetes/manifests/kube-scheduler.yaml
# In each file, comment out the line "- --port=0"
After commenting it out, wait a moment and query again:
kubectl get cs
The two components should now report Healthy.
-
Check the status of each Pod on the master node
kubectl get pod -n kube-system
If the coredns-xx-xx Pods are Pending, the kube-flannel network add-on is not yet installed and running; run the following command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If this instead prints the error
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
then it is a network (GFW) problem; save the manifest below locally as kube-flannel.yml and apply that file instead.
# kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: quay.io/coreos/flannel:v0.15.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.15.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
Run
kubectl apply -f kube-flannel.yml
Once the following output appears, wait a moment and the two coredns-xx-xx Pods will enter the Running state:
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
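You can also confirm that the flannel DaemonSet Pod itself is Running (the app=flannel label comes from the manifest above):
kubectl get pods -n kube-system -l app=flannel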
-
Start a Pod declaratively from a manifest file
# nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Pod
metadata:
  name: nginxpod
  namespace: dev
spec:
  containers:
  - name: nginx-containers
    image: nginx:latest
Run
kubectl apply -f nginx.yaml
This creates the Pod. If the Pod is to run on the master node, you must first remove the master's taint so the Pod can be scheduled there; by default the master node is tainted to prevent Pods from being scheduled onto it. The following command removes this taint from all nodes:
kubectl taint nodes --all node-role.kubernetes.io/master-
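To verify the taint was actually removed, you can inspect the taints on each node; the output should no longer list node-role.kubernetes.io/master:NoSchedule:
kubectl describe nodes | grep -i taint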
-
Run
kubectl get pod -n dev
When the created Pod appears with status Running, it was created and started successfully.
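As a final check, you can request the nginx welcome page from a cluster node; <POD_IP> below is a placeholder for the address shown by -o wide:
kubectl get pod nginxpod -n dev -o wide   # note the Pod IP
curl http://<POD_IP>                      # should return the nginx welcome page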