k8s Intranet Installation and Deployment (Part 2)

Continued from the previous post:

https://www.cnblogs.com/wangql/p/13397034.html

I. Installing with kubeadm

1. Prerequisites for enabling IPVS in kube-proxy

modprobe br_netfilter    # load the br_netfilter module

 

 

cat > /etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/bash

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack_ipv4

EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules &&  lsmod | grep -e ip_vs -e nf_conntrack_ipv4
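br_netfilter is loaded so that bridged pod traffic is visible to iptables. If the matching sysctls were not already set in the previous post, a minimal sketch (the file name /etc/sysctl.d/k8s.conf is my own choice; adjust as needed):

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system    # reload all sysctl configuration files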

2. Installing Docker

Download URL: https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/test/Packages/

docker-ce-17.03.3.ce-1.el7.x86_64.rpm 

docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm  

yum install -y yum-utils device-mapper-persistent-data lvm2  bind-utils

 

yum -y install docker-ce

 

## Create the /etc/docker directory

mkdir /etc/docker

 

# Configure the daemon (registry mirrors are unnecessary on an intranet;
# only the private registry -- 192.168.4.88:5000 here is my intranet registry -- needs to be
# configured. JSON does not allow comments, so keep the file itself comment-free.)
cat > /etc/docker/daemon.json <<EOF
{
  "insecure-registries": ["192.168.4.88:5000"]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart the Docker service

systemctl daemon-reload && systemctl restart docker && systemctl enable docker
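A quick check that Docker picked up the insecure registry (assuming 192.168.4.88:5000 is a standard registry:2 instance):

docker info 2>/dev/null | grep -A 3 -i "insecure registries"    # should list 192.168.4.88:5000
curl http://192.168.4.88:5000/v2/_catalog                       # registry v2 API: list repositories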

Online installation method (if the host has Internet access):

yum install -y yum-utils device-mapper-persistent-data lvm2

# Add the Aliyun Docker CE repository
yum-config-manager \
  --add-repo \
  http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum update -y && yum install -y docker-ce

## Create the /etc/docker directory

mkdir /etc/docker

# Configure the daemon

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart the Docker service

systemctl daemon-reload && systemctl restart docker && systemctl enable docker
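Optionally verify the cgroup driver now matches what kubelet expects:

docker info 2>/dev/null | grep -i "cgroup driver"    # expected output: Cgroup Driver: systemd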

 

Reboot the system and check whether the kernel has changed (i.e., that the new kernel has taken effect).

 

3. Installing kubeadm (on both master and worker nodes)

Build the offline RPM packages into a local yum repository, then install:

yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1

systemctl enable kubelet.service 

Extract the image archive:

tar -xvf kubeadm-basic.images.tar.gz    # the image package can be obtained from my WeChat official account 【大隆爱分享】
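The extracted files then need to be loaded into Docker. A minimal sketch, assuming the archive unpacks into a directory of *.tar image files (the directory name kubeadm-basic.images is an assumption; adjust to whatever the archive actually contains):

for img in kubeadm-basic.images/*.tar; do
    docker load -i "$img"      # import each saved image into the local Docker daemon
done
docker images                  # confirm the Kubernetes images are now available locally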

 

4. Initializing the Master Node

Note: if cluster initialization runs into problems, clean up with the following command before retrying:

kubeadm reset 
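kubeadm reset does not remove everything; a sketch of the extra cleanup that its own output usually suggests (only needed before a retry):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X   # flush leftover rules
ipvsadm --clear                              # only if IPVS mode was in use and ipvsadm is installed
rm -rf /etc/cni/net.d $HOME/.kube/config     # leftover CNI config and stale kubeconfig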

 

1) Configure the private registry address

 

Add your own private registry address ("insecure-registries") to /etc/docker/daemon.json on the master, then restart Docker for it to take effect:

[root@k8s-master01 flannel]# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["192.168.4.88:5000"]
}

 

 

Initialize the master node (run this only on the master):

kubeadm config print init-defaults > kubeadm-config.yaml


vim kubeadm-config.yaml

 

12   advertiseAddress: 192.168.4.10      # address of this master node
32 imageRepository: 192.168.4.88:5000    # your private registry address
34 kubernetesVersion: v1.15.1            # Kubernetes version
36   dnsDomain: cluster.local
37   podSubnet: "10.244.0.0/16"          # add this line: the pod network CIDR
38   serviceSubnet: 10.96.0.0/12         # keep the default

Append the following at the end of the file to switch the kube-proxy scheduling mode to IPVS by default:

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
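For reference, a sketch of the same edits written out in one piece, in the multi-document form that kubeadm 1.15 accepts (values are the ones used in this post; fields not shown keep the defaults that "kubeadm config print init-defaults" generated):

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.4.10
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
imageRepository: 192.168.4.88:5000
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
EOF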

 

kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
# --config points at the yaml file above, --experimental-upload-certs uploads the control-plane
# certificates; all output is also written to kubeadm-init.log

............

...........

 

Your Kubernetes control-plane has initialized successfully!    # this line indicates initialization succeeded

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

Then you can join any number of worker nodes by running the following on each as root:

 

kubeadm join 192.168.4.10:6443 --token abcdef.0123456789abcdef \

    --discovery-token-ca-cert-hash sha256:bb6ae2db244800ce95a72e47e715a01dbc1aa712d0fec5a252e572b5a33cd083

 

Run:

cd /etc/kubernetes/pki/

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config    # copy the cluster admin kubeconfig
sudo chown $(id -u):$(id -g) $HOME/.kube/config             # make the current user its owner

 

[root@k8s-master01 ~]# kubectl get node    # list the cluster nodes

NAME           STATUS     ROLES    AGE     VERSION

k8s-master01   NotReady   master   4m37s   v1.15.1

5. Deploying the Pod Network

mkdir install-k8s

mv kubeadm-config.yaml kubeadm-init.log install-k8s/    # keep the important files together here

 

 cd install-k8s/

mkdir core

mv kubeadm-* core/

 

 

 mkdir plugin

 cd plugin/

 mkdir flannel

cd flannel/

Download URL:

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

[root@k8s-master01 flannel]# vim kube-flannel.yml

172 image: 192.168.4.88:5000/flannel:v1    # image address pointing at the private registry
186 image: 192.168.4.88:5000/flannel:v1    # change every image reference in the file
192 - --iface=eth0                         # specify the host NIC; change all occurrences

Apply kube-flannel.yml:

kubectl apply -f kube-flannel.yml

If every pod is Running, the deployment succeeded:

[root@k8s-master01 flannel]# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6f5f787f5b-cch5j                1/1     Running   0          15m
coredns-6f5f787f5b-fscnt                1/1     Running   0          15m
etcd-k8s-master01                       1/1     Running   0          15m
kube-apiserver-k8s-master01             1/1     Running   0          14m
kube-controller-manager-k8s-master01    1/1     Running   0          15m
kube-flannel-ds-amd64-q4hnk             1/1     Running   0          10m
kube-proxy-pfhj2                        1/1     Running   0          15m
kube-scheduler-k8s-master01             1/1     Running   0          15m

[root@k8s-master01 flannel]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
k8s-master01   Ready    master   17m   v1.15.1
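The flannel image referenced above (192.168.4.88:5000/flannel:v1) has to exist in the private registry first. A sketch of pushing it there from an Internet-connected machine (the upstream tag v0.11.0-amd64 is an assumption; use whatever tag your kube-flannel.yml originally referenced):

docker pull quay.io/coreos/flannel:v0.11.0-amd64
docker tag quay.io/coreos/flannel:v0.11.0-amd64 192.168.4.88:5000/flannel:v1
docker push 192.168.4.88:5000/flannel:v1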

 

6. Joining Worker Nodes

Run the last lines of the init log on each worker node; they are recorded in kubeadm-init.log:

kubeadm join 192.168.4.10:6443 --token abcdef.0123456789abcdef \

    --discovery-token-ca-cert-hash sha256:bb6ae2db244800ce95a72e47e715a01dbc1aa712d0fec5a252e572b5a33cd083
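After the join command completes, a quick check back on the master:

kubectl get nodes                                                        # the new node appears, Ready once flannel is up
kubectl get pod -n kube-system -o wide | grep -E 'flannel|kube-proxy'    # one of each per node, all Running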

7. Downloading Images for the Nodes

kubeadm config print init-defaults >kubeadm.conf

Change imageRepository: in the config file to your own private registry:

imageRepository: docker.emarbox.com/google_containers

Change kubernetesVersion to the version you are running:

kubernetesVersion: v1.15.1

kubeadm config images list --config kubeadm.conf 

kubeadm config images pull --config kubeadm.conf
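If the private registry itself still needs to be populated, a sketch for an Internet-connected host: list the official images for this version, then retag and push each one into the intranet registry (registry address as used throughout this post):

for img in $(kubeadm config images list --kubernetes-version v1.15.1); do
    docker pull "$img"                                  # e.g. k8s.gcr.io/kube-apiserver:v1.15.1
    target="192.168.4.88:5000/$(basename "$img")"       # e.g. 192.168.4.88:5000/kube-apiserver:v1.15.1
    docker tag "$img" "$target"
    docker push "$target"
done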

 

8. Operations on Worker Nodes

Pull the images (these images are already in my private registry):

docker pull 192.168.4.88:5000/flannel:v1

docker pull 192.168.4.88:5000/pause:3.1

docker pull 192.168.4.88:5000/kube-proxy:v1.15.1

9. Troubleshooting

Error message:

error execution phase preflight: couldn't validate the identity of the API Server: abort connecting to API servers after timeout of 5m0s

Cause: authentication with the API server failed, most likely because the token has expired.

List the existing tokens:

kubeadm token list

Create a new token:

kubeadm token create

kubeadm token list

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
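A simpler alternative that avoids the manual openssl step: have kubeadm create a fresh token and print the complete join command in one go.

kubeadm token create --print-join-command    # prints: kubeadm join <apiserver> --token ... --discovery-token-ca-cert-hash sha256:...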

 

 

Replace the token in the join command with the newly created one (and the hash, if regenerated), then try joining again:

kubeadm join 192.168.4.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:eb1e1a3ce9e819ebafdf73b8a4819e2e40d9da6dfdb0272a4ab1925be3fc12f3

 

kubectl does not work on worker nodes

[root@k8s-node02 ~]# kubectl  get node

The connection to the server localhost:8080 was refused - did you specify the right host or port?

 

Copy /etc/kubernetes/admin.conf from the master node to the same directory on the worker node:

 scp /etc/kubernetes/admin.conf 192.168.4.63:/etc/kubernetes/.

On the node:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

source ~/.bash_profile

 

II. Removing a Node

 

On the master:

[root@k8s-master01 ~]# kubectl  drain k8s-node02 --delete-local-data  --force  --ignore-daemonsets

node/k8s-node02 cordoned

WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-l4j57, kube-system/kube-proxy-9d9nv

node/k8s-node02 drained

[root@k8s-master01 ~]# kubectl  delete node k8s-node02

node "k8s-node02" deleted

[root@k8s-master01 ~]# kubectl  get node

NAME           STATUS   ROLES    AGE     VERSION

k8s-master01   Ready    master   4d19h   v1.15.1

k8s-node01     Ready    <none>   5m49s   v1.15.1

To add the node back:

[root@k8s-node02 docker.service.d]# systemctl  stop kubelet

[root@k8s-node02 docker.service.d]# rm -rf /etc/kubernetes/*

[root@k8s-node02 docker.service.d]# kubeadm join 192.168.4.10:6443 --token v2xaat.qip3csxdge8vicxj     --discovery-token-ca-cert-hash sha256:eb1e1a3ce9e819ebafdf73b8a4819e2e40d9da6dfdb0272a4ab1925be3fc12f3

[root@k8s-node02 docker.service.d]# kubectl  get nodes

NAME           STATUS   ROLES    AGE     VERSION

k8s-master01   Ready    master   4d19h   v1.15.1

k8s-node01     Ready    <none>   21m     v1.15.1

k8s-node02     Ready    <none>   18s     v1.15.1

 

 

There is still a lot I have not had time to organize; I will keep updating. Likes and follows are welcome.

 
