CentOS + K8s (Single-Master) Cluster Setup

K8s cluster: 1 master node, 2 worker nodes

 

The network layout and main software versions:

Name       Version      IP
master     CentOS 7.2   172.30.31.80
node1      CentOS 7.2   172.30.31.90
node2      CentOS 7.2   172.30.31.91
docker-ce  20.10.17
kubectl    1.19.0

Reference: 马哥 video course, K8S-docker安装.docx

Reference: https://blog.51cto.com/loong576/2398136

All scripts and config files used in this article are on GitHub: https://github.com/loong576/Centos7.6-install-k8s-v1.14.2-cluster.git

1. System Preparation (on all machines)

1.1 Verify the MAC address and product UUID

Every node must have a unique MAC address and product_uuid.

cat /sys/class/net/ens32/address      # replace ens32 with your interface name
cat /sys/class/dmi/id/product_uuid

1.2 Disable the firewall

For testing only; do not do this in production.

systemctl disable --now firewalld

### disable SELinux
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
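The sed edit above can be dry-run on a scratch copy before touching the real /etc/selinux/config; a minimal sketch:

```shell
# Dry-run the SELinux config edit on a scratch file
tmp=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$tmp"
sed -i 's/^SELINUX=.*/SELINUX=disabled/' "$tmp"
grep '^SELINUX=' "$tmp"    # prints: SELINUX=disabled
rm -f "$tmp"
```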

1.3 Set the timezone and synchronize time

timedatectl set-timezone Asia/Shanghai
systemctl enable --now chronyd

# keep the hardware clock (RTC) in UTC
timedatectl set-local-rtc 0

# restart services that depend on the system time
systemctl restart rsyslog && systemctl restart crond

2. Install Docker (on all machines)

2.1 Install prerequisites

yum install -y yum-utils device-mapper-persistent-data lvm2

2.2 Add the Aliyun docker-ce yum repo

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# or download the repo file directly:
# cd /etc/yum.repos.d/
# wget http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Note: if the yum variables in the downloaded docker-ce.repo do not expand correctly on your system, hard-code them: set $releasever to 7 (CentOS 7) or 8 (CentOS 8), and $basearch to x86_64.

sed -i -e 's/\$releasever/7/g' -e 's/\$basearch/x86_64/g' /etc/yum.repos.d/docker-ce.repo

### rebuild the yum cache
yum makecache fast
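To see what the variable substitution does, pipe a sample baseurl line through the same sed (the URL here is only an illustration):

```shell
# Substitute yum variables the same way the repo-file fix does
echo 'baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable' \
  | sed -e 's/\$releasever/7/g' -e 's/\$basearch/x86_64/g'
# prints: baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable
```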

2.3 Install a specific Docker version

yum install -y docker-ce-20.10.17    # version per the table above

systemctl start docker
systemctl enable docker

2.4 Configure a registry mirror

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://cd6xo91e.mirror.aliyuncs.com"]
}
EOF
## reload and restart the service
systemctl daemon-reload
systemctl restart docker
## verify
docker --version
docker run hello-world

3. Install K8s (kubeadm)

The firewall and SELinux were already disabled and the Aliyun repos configured in section 1. Run this entire section on both the master and the nodes.

3.1 Configure the hostname and hosts file

more /etc/hostname    # check the current hostname
cat >> /etc/hosts << EOF
172.30.31.80    master
172.30.31.90    node01
172.30.31.91    node02
EOF

3.2 Disable swap

## temporary (until reboot)
swapoff -a
## permanent: comment out the swap entries in /etc/fstab
sed -i.bak '/swap/s/^/#/' /etc/fstab
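The fstab edit only comments out lines containing "swap"; a quick demonstration on a scratch file (the device names are made up):

```shell
# Show the fstab edit on a scratch copy (real target: /etc/fstab)
tmp=$(mktemp)
printf '/dev/sda1 / xfs defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' > "$tmp"
sed -i.bak '/swap/s/^/#/' "$tmp"
cat "$tmp"
# the swap line is now commented out:
# /dev/sda1 / xfs defaults 0 0
# #/dev/sda2 swap swap defaults 0 0
rm -f "$tmp" "$tmp.bak"
```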

3.3 Kernel parameters: make bridged traffic visible to iptables

## temporary (if these keys are missing, load the module first: modprobe br_netfilter)
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
## permanent
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

3.4 Change the cgroup driver

# edit daemon.json and add '"exec-opts": ["native.cgroupdriver=systemd"]'

cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://cd6xo91e.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

### reload and restart

systemctl daemon-reload
systemctl restart docker
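A malformed daemon.json stops Docker from starting, so it is worth validating the JSON before the restart; a sketch assuming python3 is available on the host:

```shell
# Validate daemon.json syntax before restarting Docker (assumes python3 is installed)
python3 -m json.tool /etc/docker/daemon.json >/dev/null \
  && echo "daemon.json: valid JSON" \
  || echo "daemon.json: SYNTAX ERROR - fix it before restarting docker"
```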

3.5 Install and start kubelet, kubeadm and kubectl

### 1) configure the Kubernetes yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# (optional) drop stale Aliyun-internal mirror entries from CentOS-Base.repo
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

## refresh the cache
yum clean all
yum -y makecache

### 2) list available versions
yum list kubelet --showduplicates | sort -r

### 3) install and start kubelet, kubeadm and kubectl
yum install -y kubelet-1.19.0 kubeadm-1.19.0 kubectl-1.19.0
# kubelet will restart in a loop until 'kubeadm init' runs; that is expected
systemctl enable kubelet && systemctl start kubelet

4. Master Node Setup

4.1 Swap k8s.gcr.io for Aliyun and pull the K8s images

hostnamectl set-hostname master
##==== create image.sh with the following content ===================
​
tee image.sh <<-'EOF'
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.19.0
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]}; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done
EOF
​
##==== end ========================================================
​
chmod u+x image.sh
./image.sh
docker images
#+--------------------------------------------------------------------------------------------------+
#|success looks like this:
#|k8s.gcr.io/kube-proxy                v1.19.0      bc9c328f379c        2 months ago        118MB
#|k8s.gcr.io/kube-controller-manager   v1.19.0      09d665d529d0        2 months ago        111MB
#|k8s.gcr.io/kube-apiserver            v1.19.0      1b74e93ece2f        2 months ago        119MB
#|k8s.gcr.io/kube-scheduler            v1.19.0      cbdc8369d8b1        2 months ago        45.6MB
#|k8s.gcr.io/etcd                      3.4.9-1      d4ca8726196c        4 months ago        253MB
#|k8s.gcr.io/coredns                   1.7.0        bfe3a36ebd25        4 months ago        45.2MB
#|k8s.gcr.io/pause                     3.2          80d28bedfe5d        8 months ago        683kB
#+--------------------------------------------------------------------------------------------------+
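image.sh relies on awk to strip the `k8s.gcr.io/` prefix from the `kubeadm config images list` output; its behavior on two sample lines:

```shell
# What the awk in image.sh does, shown on sample input
printf 'k8s.gcr.io/kube-apiserver:v1.19.0\nk8s.gcr.io/coredns:1.7.0\n' \
  | awk -F '/' '{print $2}'
# prints:
# kube-apiserver:v1.19.0
# coredns:1.7.0
```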

 

4.2 Initialize the master

--apiserver-advertise-address sets the address the API server advertises (the master's IP), and --pod-network-cidr sets the Pod network range; flannel is used here, so the range is 10.244.0.0/16.

kubeadm init --apiserver-advertise-address 172.30.31.80 --pod-network-cidr=10.244.0.0/16

Or, following 马哥's course, pin the version:

kubeadm init --kubernetes-version="v1.19.0" --pod-network-cidr="10.244.0.0/16" --ignore-preflight-errors=Swap

The line "Your Kubernetes control-plane has initialized successfully!" indicates success.

4.3 Record the kubeadm join command

Record the kubeadm join command printed by kubeadm init; it is run later on each node to join it to the cluster.

kubeadm join 172.30.31.80:6443 --token cgfdbp.o6s4db05la737szv \
    --discovery-token-ca-cert-hash sha256:f8bc85cecd7fa7b5a7b176cfd70047d583d62da8ef1ff4d29ebffd1e94189ec5

4.4 Load the environment variables

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
​
# all operations in this article run as root; for a non-root user, do this instead:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

4.5 Install the Pod network

raw.githubusercontent.com is often unreachable (DNS-poisoned) from mainland China. Use https://ping.chinaz.com/ to find a reachable IP for it, such as 185.199.109.133, and pin it in /etc/hosts:

echo -e "185.199.109.133   raw.githubusercontent.com" >>/etc/hosts
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

4.6 Verify

You should see the master node and the kube-system pods.

kubectl get nodes
# NAME     STATUS     ROLES    AGE   VERSION
# master   NotReady   master   10m   v1.19.0
# only the master so far; it stays NotReady until the Pod network is running
​
kubectl get pods -n kube-system
# NAME                             READY   STATUS    RESTARTS   AGE
# coredns-f9fd979d6-c5jbg          1/1     Running   0          88m
# coredns-f9fd979d6-zczxg          1/1     Running   0          88m
# etcd-master                      1/1     Running   0          88m
# kube-apiserver-master            1/1     Running   0          88m
# kube-controller-manager-master   1/1     Running   0          88m
# kube-proxy-7vg4f                 1/1     Running   0          88m
# kube-proxy-ck99m                 1/1     Running   0          24m
# kube-scheduler-master            1/1     Running   0          88m
# if any pod is not Running, use kubectl describe to find and fix the cause
 

5. Node Setup

hostnamectl set-hostname node01    # use node02 on the second node

5.1 Swap k8s.gcr.io for Aliyun and pull the K8s images (same image.sh as on the master)

tee image.sh <<-'EOF'
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.19.0
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]}; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done
EOF
​
chmod u+x image.sh
./image.sh
docker images

5.2 Join the cluster

## run the join command recorded on the master (section 4.3)
kubeadm join 172.30.31.80:6443 --token cgfdbp.o6s4db05la737szv \
    --discovery-token-ca-cert-hash sha256:f8bc85cecd7fa7b5a7b176cfd70047d583d62da8ef1ff4d29ebffd1e94189ec5

  • Issue 1: if you forgot to record the output in 4.3, print it again on the master:

kubeadm token create --print-join-command

  • Issue 2: tokens expire after 24 hours and must then be regenerated

#1 list tokens
kubeadm token list
# j5eoyz.zu0x6su7wzh752b3   <invalid>   2019-06-04T17:40:41+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
# the token from kubeadm init has expired
#2 create a new token
kubeadm token create
# 1zl3he.fxgz2pvxa3qkwxln
#3 compute the CA cert hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null |  openssl dgst -sha256 -hex | sed 's/^.* //'
# 5f656ae26b5e7d4641a979cbfdffeb7845cc5962bbfcd1d5435f00a25c02ea50
#4 join from each node with the new token and hash:
kubeadm join 172.30.31.80:6443 --token 1zl3he.fxgz2pvxa3qkwxln --discovery-token-ca-cert-hash sha256:5f656ae26b5e7d4641a979cbfdffeb7845cc5962bbfcd1d5435f00a25c02ea50
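The hash pipeline in step #3 can be exercised on a throwaway self-signed certificate (not your cluster CA) to confirm it emits a 64-character hex digest:

```shell
# Demo of the CA-hash pipeline on a throwaway cert
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null
openssl x509 -pubkey -in "$tmpdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
rm -rf "$tmpdir"
```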

5.3 Verify (on the master)

kubectl get nodes
# NAME     STATUS   ROLES    AGE    VERSION
# master   Ready    master   100m   v1.19.0
# node01   Ready    <none>   36m    v1.19.0
# node01 has joined and is Ready

5.4 Add the remaining node

Repeat sections 1, 2, 3 and 5 on node02, then check the result:

kubectl get node
#NAME     STATUS   ROLES    AGE    VERSION
#master   Ready    master   2d1h   v1.19.0
#node01   Ready    <none>   2d     v1.19.0
#node02   Ready    <none>   31h    v1.19.0

6. Install the Dashboard

6.1 Prepare the yaml file

1) Download the yaml

echo -e "185.199.109.133   raw.githubusercontent.com" >>/etc/hosts
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml

# the default registry is unreachable; switch to an Aliyun mirror
sed -i 's/k8s.gcr.io/registry.cn-hangzhou.aliyuncs.com\/kuberneters/g' recommended.yaml

2) Change the Service type to NodePort

sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' recommended.yaml
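The sed one-liner appends two lines after the `targetPort: 8443` line. Its effect can be previewed on a trimmed Service fragment (GNU sed syntax):

```shell
# Effect of the NodePort sed on a trimmed Service fragment
tmp=$(mktemp)
printf '  ports:\n    - port: 443\n      targetPort: 8443\n  selector:\n' > "$tmp"
sed -i '/targetPort: 8443/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' "$tmp"
cat "$tmp"    # nodePort: 30001 and type: NodePort now follow the targetPort line
rm -f "$tmp"
```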

3) Create an admin service account

cat >> recommended.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF

6.2 Deploy the dashboard

# deploy
kubectl apply -f recommended.yaml

# check status
kubectl get all -n kubernetes-dashboard

# get the login token
kubectl describe secrets -n kubernetes-dashboard dashboard-admin
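The token shown by `kubectl describe` is already decoded; if you instead read the Secret's data field directly, the value is base64-encoded. A hedged alternative (the `dashboard-admin` service-account name comes from the yaml in 6.1):

```shell
# The token field in a Secret is base64; decoding works like this (sample value):
echo 'dG9rZW4tZGVtbw==' | base64 -d && echo    # prints: token-demo
# against the live cluster, pull and decode the real token in one line:
# kubectl -n kubernetes-dashboard get secret \
#   $(kubectl -n kubernetes-dashboard get sa dashboard-admin -o jsonpath='{.secrets[0].name}') \
#   -o jsonpath='{.data.token}' | base64 -d
```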

6.3 Access check

Open https://master_ip:30001/ in Firefox (it lets you accept the self-signed certificate) and log in with the token obtained above.

 

7. Cluster Test

7.1 Deploy an application

1) Imperative: deploy an nginx pod

# create an nginx-deploy pod; --replicas=3 is requested but ignored
kubectl run nginx-deploy --image=nginx:1.14-alpine --port=80 --replicas=3
# you will get this warning:
# Flag --replicas has been deprecated, has no effect and will be removed in the future.
# pod/nginx-deploy created

# check
kubectl get pods

# delete the pod
kubectl delete pods nginx-deploy

Note: since K8s v1.18.0, --replicas on kubectl run is deprecated; create pods from a manifest instead.

2) Declarative: deploy nginx from a manifest

cat > nginx.yaml << EOF
# API version
apiVersion: apps/v1
# resource kind, e.g. Pod/ReplicationController/Deployment/Service/Ingress
kind: Deployment
metadata:
  # name of this Deployment
  name: nginx-app
spec:
  selector:
    matchLabels:
      # pod label; a Service's selector must match this
      app: nginx
  # number of replicas
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # containers is a list, so a pod may run several
      containers:
      # container name
      - name: nginx
        # container image
        image: nginx:1.17
        # pull only if the image is absent locally
        imagePullPolicy: IfNotPresent
        ports:
        # container port
        - containerPort: 80
EOF

# create the Deployment
kubectl apply -f nginx.yaml

7.2 Check status

1) Via kubectl

#----------- 1) pods
kubectl get pods
# NAME                         READY   STATUS    RESTARTS   AGE
# nginx-app-7f4fc68488-lg6l2   1/1     Running   0          58s
# nginx-app-7f4fc68488-s2g58   1/1     Running   0          58s

#----------- 2) pods in all namespaces
kubectl get pod --all-namespaces
# NAMESPACE           NAME                             READY   STATUS    RESTARTS   AGE
# default             nginx-app-7f4fc68488-lg6l2       1/1     Running   0          2m7s
# default             nginx-app-7f4fc68488-s2g58       1/1     Running   0          2m7s
# kube-flannel        kube-flannel-ds-b4t5c            1/1     Running   0          2d3h
# ...

#----------- 3) replica counts and placement
kubectl get deployments
# NAME        READY   UP-TO-DATE   AVAILABLE   AGE
# nginx-app   2/2     2            2           80s
kubectl get pod -o wide
# NAME                        READY  STATUS   RESTARTS  AGE  IP          NODE    NOMINATED NODE   READINESS GATES
# nginx-app-7f4fc68488-lg6l2  1/1    Running  0         10m  10.244.3.3  node02  <none>    <none>
# nginx-app-7f4fc68488-s2g58  1/1    Running  0         10m  10.244.1.4  node01  <none>    <none>
# the 2 nginx replicas are spread evenly across the 2 worker nodes

#----------- 4) deployment details
kubectl describe deployments

#----------- 5) core component status (deprecated in 1.19 but still informative)
kubectl get cs

2) Via the dashboard

https://172.30.31.80:30001

7.3 Expose the service

kubectl expose deployment nginx-app --port=80 --type=LoadBalancer

Check the service status (note the mapped NodePort):

kubectl get services
# NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# kubernetes   ClusterIP      10.96.0.1      <none>        443/TCP        2d4h
# nginx-app    LoadBalancer   10.100.15.94   <pending>     80:30531/TCP   16s
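The node port can be pulled out of the PORT(S) column with standard text tools, shown here on the sample line above:

```shell
# Extract the NodePort from a sample 'kubectl get services' line
line='nginx-app    LoadBalancer   10.100.15.94   <pending>     80:30531/TCP   16s'
echo "$line" | awk '{print $5}' | cut -d: -f2 | cut -d/ -f1
# prints: 30531
# live equivalent via jsonpath:
# kubectl get svc nginx-app -o jsonpath='{.spec.ports[0].nodePort}'
```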

Verify in a browser

http://<master or any node IP>:<NodePort>, e.g.:
http://172.30.31.80:30531

This completes the single-master K8s v1.19.0 cluster deployment on CentOS 7.2.

posted on 2022-10-19 10:26 by lxsky
