Installing and Deploying Kubernetes (K8S) on CentOS 7

Steps for all nodes

For reference, a K8S installation and deployment guide is available at: http://m.bubuko.com/infodetail-3144195.html

 

The following operations must be performed on every machine.

Disable the firewall on each node:

# systemctl stop firewalld

# systemctl disable firewalld

Disable SELinux:

# setenforce 0

# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

SELINUX=disabled
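To confirm SELinux is no longer enforcing, a quick check is:

# getenforce

It should print Permissive now, and Disabled after a reboot.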

Create the file /etc/sysctl.d/k8s.conf and add the following content:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

Run the following commands to apply the changes:

# modprobe br_netfilter

# sysctl -p /etc/sysctl.d/k8s.conf
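The modprobe above lasts only until the next reboot; to have br_netfilter loaded automatically at boot as well, one option (not in the original steps) is a systemd modules-load entry:

# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf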

Run the following script on all Kubernetes nodes:

# cat > /etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/bash

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack_ipv4

EOF

# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

The script above creates /etc/sysconfig/modules/ipvs.modules, which ensures the required kernel modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the required kernel modules have been loaded correctly.

Next, make sure the ipset package is installed on every node:

# yum -y install ipset

 

To make it easier to inspect IPVS proxy rules, it is also worth installing the management tool ipvsadm:

# yum -y install ipvsadm
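Once kube-proxy is running in IPVS mode (configured later), the virtual server table can then be listed with, for example:

# ipvsadm -Ln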

 

Deploy the master node

Install kubeadm and kubelet:
Configure the kubernetes.repo source. Because the official repository is not reachable from mainland China, the Alibaba Cloud yum mirror is used here.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a proxy, or simply use the Alibaba Cloud mirror configured above:

# curl https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

 

# yum -y makecache fast

# yum install -y kubelet kubeadm kubectl
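This installs the latest packages from the mirror. If you want the tool versions to match the kubernetesVersion set later in kubeadm.yaml (v1.15.0 in this guide), you can pin them instead, for example:

# yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0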

Since Kubernetes 1.8, swap must be disabled on the system; otherwise, with the default configuration, kubelet will fail to start. Disable swap as follows:

#  swapoff -a

 

Edit /etc/fstab and comment out the swap auto-mount entry, for example:

# UUID=2d1e946c-f45d-4516-86cf-946bde9bdcd8 swap                    swap    defaults        0 0
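The same edit can be made non-interactively; one possible one-liner (adjust it to your own fstab) is:

# sed -ri 's/.*swap.*/#&/' /etc/fstab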

 

Use free -m to confirm that swap is off. Also tune the swappiness parameter by adding the following line to /etc/sysctl.d/k8s.conf:

 vm.swappiness=0

 

Apply the change:

 # sysctl -p /etc/sysctl.d/k8s.conf

Initialize the cluster with kubeadm init
Enable the kubelet service at boot:

systemctl enable kubelet.service

 

Configure the master node

# mkdir working && cd working

 

Generate the configuration file:

# kubeadm config print init-defaults > kubeadm.yaml

 

Edit the configuration file:

# vim kubeadm.yaml

 

# Change imageRepository (default: k8s.gcr.io)

 imageRepository: registry.aliyuncs.com/google_containers

# Set kubernetesVersion to v1.15.0

 kubernetesVersion: v1.15.0

# Set the master (API server advertise) IP

 advertiseAddress: 192.168.1.21

# Configure the cluster networks

 networking:

   dnsDomain: cluster.local

   podSubnet: 10.244.0.0/16

   serviceSubnet: 10.96.0.0/12

 scheduler: {}
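Optionally, since the IPVS kernel modules were loaded earlier, kube-proxy can be switched to IPVS mode by appending an extra document to the same kubeadm.yaml (a minimal sketch, not part of the original file):

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs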

kubeadm init --config kubeadm.yaml --ignore-preflight-errors=Swap

If initialization fails with "[kubelet-check] Initial timeout of 40s passed.", refer to:

https://blog.csdn.net/gs80140/article/details/92798027

Note: save this join command; it will be needed later to add nodes to the cluster.
kubeadm join 192.168.169.21:6443 --token 4qcl2f.gtl3h8e5kjltuo0r \
    --discovery-token-ca-cert-hash sha256:7ed5404175cc0bf18dbfe53f19d4a35b1e3d40c19b10924275868ebf2a3bbe6e

The following commands configure a regular user to access the cluster with kubectl:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
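Alternatively, when working as root you can point kubectl directly at the admin kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf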

 

Check the cluster status and confirm that all components are Healthy:

# kubectl get cs
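You can also confirm that the master node has registered; it will report NotReady until a Pod network add-on is installed in the next step:

# kubectl get nodes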

If cluster initialization runs into problems, clean up with the following commands:

# kubeadm reset

# ifconfig cni0 down

# ip link delete cni0

# ifconfig flannel.1 down

# ip link delete flannel.1

# rm -rf /var/lib/cni/

Install the Pod network
Next, install the flannel network add-on:

# mkdir -p ~/k8s/

# cd ~/k8s

# curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# kubectl apply -f  kube-flannel.yml

 

 kubectl get pod -n kube-system

Test whether cluster DNS works:

kubectl run curl --image=radial/busyboxplus:curl -it
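Inside the shell that opens, a simple check (assuming the default cluster.local domain configured above) is to resolve the kubernetes service:

nslookup kubernetes.default

It should return the ClusterIP of the kubernetes service (10.96.0.1 with the default service subnet).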

 

Deploy the worker nodes

yum install -y yum-utils device-mapper-persistent-data lvm2 

yum-config-manager     --add-repo     https://download.docker.com/linux/centos/docker-ce.repo 

yum install -y --setopt=obsoletes=0 docker-ce

Install kubeadm and kubelet:
Configure the kubernetes.repo source. Because the official repository is not reachable from mainland China, the Alibaba Cloud yum mirror is used here.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

Test whether https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 is reachable; if it is not, you will need a proxy, or simply use the Alibaba Cloud mirror configured above:

# curl https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

# yum -y makecache fast

# yum install -y kubelet kubeadm kubectl

 

systemctl start docker

systemctl enable docker

Join the node to the cluster

kubeadm join 192.168.30.30:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:eac10da3dbc0414542f3a4c0f220264706b693467611e856844229d1b96b9f6d

Edit /etc/sysconfig/kubelet and set KUBELET_EXTRA_ARGS="--fail-swap-on=false" (without this, the node stayed NotReady).
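After editing, restart kubelet so the new argument takes effect, then re-check the node from the master:

systemctl restart kubelet
kubectl get nodes    # run on the master; the node should eventually report Ready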

 

 

 

 

Deploying Kubernetes with kubeadm

 

Environment

 

master01: 192.168.1.110 (at least 2 CPUs)

 

node01: 192.168.1.100

 

Planning

 

Service network: 10.96.0.0/12

 

Pod network: 10.244.0.0/16

 

1. Configure /etc/hosts so the hosts can resolve each other

 

vim /etc/hosts

 

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

 

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.110 master01
192.168.1.100 node01

 

2. Synchronize the time on all hosts

 

yum install -y ntpdate

 

ntpdate time.windows.com

 

14 Mar 16:51:32 ntpdate[46363]: adjust time server 13.65.88.161 offset -0.001108 sec
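ntpdate only adjusts the clock once; to keep the hosts synchronized, one option (not in the original steps) is a periodic cron job, for example:

(crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com") | crontab -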

 

3. Disable swap and SELinux

 

swapoff -a

 

vim /etc/selinux/config

 

 

 

# This file controls the state of SELinux on the system.

 

# SELINUX= can take one of these three values:

 

#     enforcing - SELinux security policy is enforced.

 

#     permissive - SELinux prints warnings instead of enforcing.

 

#     disabled - No SELinux policy is loaded.

 

SELINUX=disabled

 

4. Install docker-ce

 

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce

 

Fix for the warning that may appear after installing Docker: "WARNING: bridge-nf-call-iptables is disabled"

 

vim /etc/sysctl.conf

 

 

 

# sysctl settings are defined through files in

 

# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.

 

#

 

# Vendors settings live in /usr/lib/sysctl.d/.

 

# To override a whole file, create a new file with the same in

 

# /etc/sysctl.d/ and put new settings there. To override

 

# only specific settings, add a file with a lexically later

 

# name in /etc/sysctl.d/ and put new settings there.

 

#

 

# For more information, see sysctl.conf(5) and sysctl.d(5).

 

net.bridge.bridge-nf-call-ip6tables=1

 

net.bridge.bridge-nf-call-iptables=1

 

net.bridge.bridge-nf-call-arptables=1
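Reload the settings so they take effect (loading br_netfilter first if it is not already loaded):

modprobe br_netfilter
sysctl -p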

 

systemctl enable docker && systemctl start docker

 

5. Install Kubernetes

 

cat <<EOF > /etc/yum.repos.d/kubernetes.repo

 

[kubernetes]

 

name=Kubernetes

 

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

 

enabled=1

 

gpgcheck=1

 

repo_gpgcheck=1

 

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

 

setenforce 0
yum install -y kubelet kubeadm kubectl

 

systemctl enable kubelet && systemctl start kubelet

 

6. Initialize the cluster

 

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1 --pod-network-cidr=10.244.0.0/16

 

Your Kubernetes master has initialized successfully!

 

 

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

 

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

 

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

 

 

You should now deploy a pod network to the cluster.

 

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

 

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of machines by running the following on each node

 

as root:

 

 

 

  kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0

 

7. Deploy flannel manually

 

flannel project page: https://github.com/coreos/flannel

 

for Kubernetes v1.7+

 

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

 

podsecuritypolicy.extensions/psp.flannel.unprivileged created

 

clusterrole.rbac.authorization.k8s.io/flannel created

 

clusterrolebinding.rbac.authorization.k8s.io/flannel created

 

serviceaccount/flannel created

 

configmap/kube-flannel-cfg created

 

daemonset.extensions/kube-flannel-ds-amd64 created

 

daemonset.extensions/kube-flannel-ds-arm64 created

 

daemonset.extensions/kube-flannel-ds-arm created

 

daemonset.extensions/kube-flannel-ds-ppc64le created

 

daemonset.extensions/kube-flannel-ds-s390x created

 

8. Node configuration

 

Install docker, kubelet, and kubeadm

 

Install docker as in step 4; install kubelet and kubeadm as in step 5.

 

9. Join the node to the master

 

kubeadm join 192.168.1.110:6443 --token wgrs62.vy0trlpuwtm5jd75 --discovery-token-ca-cert-hash sha256:6e947e63b176acf976899483d41148609a6e109067ed6970b9fbca8d9261c8d0

 

kubectl get nodes  # check node status

 

NAME                    STATUS     ROLES    AGE     VERSION

 

localhost.localdomain   NotReady   <none>   130m    v1.13.4

 

master01                Ready      master   4h47m   v1.13.4

 

node01                  Ready      <none>   94m     v1.13.4

 

 

 

kubectl get cs  # check component status

 

NAME                 STATUS    MESSAGE              ERROR

 

scheduler            Healthy   ok                   

 

controller-manager   Healthy   ok                   

 

etcd-0               Healthy   {"health": "true"}  

 

 

 

kubectl get ns  # list namespaces

 

NAME          STATUS   AGE

 

default       Active   4h41m

 

kube-public   Active   4h41m

 

kube-system   Active   4h41m

 

 

 

kubectl get pods -n kube-system  # check pod status

 

NAME                               READY   STATUS    RESTARTS   AGE

 

coredns-78d4cf999f-bszbk           1/1     Running   0          4h44m

 

coredns-78d4cf999f-j68hb           1/1     Running   0          4h44m

 

etcd-master01                      1/1     Running   0          4h43m

 

kube-apiserver-master01            1/1     Running   1          4h43m

 

kube-controller-manager-master01   1/1     Running   2          4h43m

 

kube-flannel-ds-amd64-27x59        1/1     Running   1          126m

 

kube-flannel-ds-amd64-5sxgk        1/1     Running   0          140m

 

kube-flannel-ds-amd64-xvrbw        1/1     Running   0          91m

 

kube-proxy-4pbdf                   1/1     Running   0          91m

 

kube-proxy-9fmrl                   1/1     Running   0          4h44m

 

kube-proxy-nwkl9                   1/1     Running   0          126m

 

kube-scheduler-master01            1/1     Running   2          4h43m

 
