Deploying k8s + KubeEdge + Sedna

1. Installation Requirements

 

Before you begin, the machines used to deploy the Kubernetes cluster must meet the following requirements:

 

  • One or more machines running Ubuntu 18.04
  • Hardware: 2 GB RAM or more, 2 CPUs or more, 70 GB of disk or more (30 GB also works here)
  • Internet access to pull images; if a server cannot reach the Internet, download the images in advance and import them on the node
  • Swap must be disabled
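  A quick way to sanity-check these requirements on each machine (a minimal sketch, not part of the original steps):

nproc                                  # number of CPUs (should be at least 2)
free -h                                # total memory (should be at least 2 GB)
df -h /                                # available disk space
curl -I https://mirrors.aliyun.com     # rough check of outbound network access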

2. Prepare the Environment

 Create one virtual machine (it will be cloned later) and install Docker on it:

 2.1 Switch APT sources

gedit /etc/apt/sources.list
#Comment out all of the original contents, use the Aliyun mirrors below, then save and close.
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
#After editing, run the following commands
sudo apt-get update
sudo apt-get upgrade

2.2 System Configuration

2.2.1 Disable the ufw firewall
ufw disable
2.2.2 Enable IPv4 forwarding and configure iptables parameters
modprobe br_netfilter
cat >> /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p                                            #apply the configuration
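Note that the br_netfilter module loaded above does not survive a reboot. A minimal, hedged sketch for verifying the settings and loading the module at boot (the file name k8s.conf is my own choice, not from the original):

lsmod | grep br_netfilter                                        # confirm the module is loaded
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward    # both should print 1
echo "br_netfilter" > /etc/modules-load.d/k8s.conf               # load the module at boot (file name is an assumption)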

2.2.3 Disable the swap partition

swapoff -a                                          #disable immediately (until reboot)
sed -ri 's/.*swap.*/#&/' /etc/fstab                 #disable permanently
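A quick check (not in the original) that swap is really off:

free -h | grep -i swap     # the Swap line should show 0B
swapon --show              # should print nothing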

 

2.3 Install Docker

2.3.1 Install dependencies

apt-get -y install apt-transport-https ca-certificates curl software-properties-common wget

2.3.2 Use the Aliyun mirror (the official Docker repository is slow to download from in mainland China)

curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu \
$(lsb_release -cs) stable"

2.3.3 Check available versions and install

apt-cache madison docker-ce  #list available versions
apt-get install docker-ce=5:18.09.0~3-0~ubuntu-bionic docker-ce-cli=5:18.09.0~3-0~ubuntu-bionic containerd.io     #18.09 is installed here

2.3.4 Start Docker and enable it at boot

systemctl enable docker
systemctl start docker

2.3.5 Configure the Docker registry mirror and the systemd cgroup driver, then restart Docker

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload                                 #reload the configuration
systemctl restart docker 
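To confirm the new daemon settings took effect, a hedged check (not in the original):

docker info | grep -i "cgroup driver"        # should report: Cgroup Driver: systemd
docker info | grep -iA1 "registry mirrors"   # should list the Aliyun mirror configured above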

    At this point the base VM environment is complete. Shut the VM down and clone it into three VMs, then configure a static IP on each of them. Leave this original VM alone after cloning; if one of the other VMs breaks later, you can simply clone it again.

 

2.4 Node Configuration

  Configure a static IP on each of the three VMs. The hostnames and IPs are:

| Hostname | IP            |
| -------- | ------------- |
| master01 | 192.168.32.41 |
| node01   | 192.168.32.42 |
| edge01   | 192.168.32.43 |

  All nodes need this configuration; the steps below use the master as an example:

2.4.1 Check netplan

ls /etc/netplan/ #Ubuntu 18.04 uses netplan to manage the network

2.4.2 Edit the netplan config

vi /etc/netplan/01-network-manager-all.yaml

#Replace the file contents with the following
network:
  version: 2
  renderer: NetworkManager
  ethernets:
    ens32:
      dhcp4: no  #no: use a static IP instead of DHCP
      addresses: [192.168.32.41/24] #static IP address
      gateway4: 192.168.32.2        #gateway
      nameservers:
        addresses: [192.168.32.2]  

 2.4.3 Apply the settings

netplan apply
ip address           #check that the change took effect

2.4.4 Change the hostname

hostnamectl set-hostname master01
bash

2.4.5 Host name resolution (on master01 only)

cat >> /etc/hosts <<EOF
192.168.32.41   master01
192.168.32.42   node01
192.168.32.43   edge01
EOF
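A quick connectivity check (a hedged addition, assuming all three VMs are already up) to confirm the new entries resolve:

ping -c 2 node01     # should reach 192.168.32.42
ping -c 2 edge01     # should reach 192.168.32.43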

   At this point the environment is ready. Next, install kubeadm, kubelet, and kubectl.

 

3 Install kubeadm, kubelet, and kubectl

  The following steps are performed only on master01 and node01.

3.1 Add the Aliyun Kubernetes APT source

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
apt-get update

3.2 Check available versions

apt-cache madison kubeadm

3.3 Install kubeadm, kubelet, and kubectl

apt install -y kubelet=1.20.15-00 kubeadm=1.20.15-00 kubectl=1.20.15-00    #version 1.20.15-00

3.3.1 Enable kubelet at boot and start it

systemctl enable kubelet && systemctl start kubelet
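Optionally (not part of the original steps), pin the packages so that a routine apt upgrade does not move the cluster to a different version:

apt-mark hold kubelet kubeadm kubectl    # prevent unintended upgrades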

 3.4 Initialize the control plane (on master01)

#--apiserver-advertise-address: change to your master IP
#--image-repository: use the Aliyun image registry
#--kubernetes-version: the Kubernetes version to deploy
#--service-cidr: the Service IP range
#--pod-network-cidr: the Pod IP range
kubeadm init \
  --apiserver-advertise-address=192.168.32.41 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.20.15 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all

  Output on success:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.32.41:6443 --token xgxodx.x80z8fh2wiak91or \
    --discovery-token-ca-cert-hash sha256:a1ff3f60c712623bbe4adb0b73bfe31016f6b2fcd9359ed95febe6933e17098d 

3.4.1 Run the commands from the output

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
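A quick, hedged check that kubectl can now reach the API server:

kubectl get nodes                  # master01 should be listed (NotReady is expected until the network plugin is installed)
kubectl get pods -n kube-system    # control-plane pods should be Running or starting up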

 3.4.2 Join the worker node to the cluster (on node01)

kubeadm join 192.168.32.41:6443 --token drzx3o.u8ibteijya3zmerg \
--discovery-token-ca-cert-hash sha256:19c59defc6f01a5d099daf1ea4d2b08a90a529ebd74819684873bf7d8a663902

  Output:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
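The bootstrap token in the join command expires after 24 hours by default. If it has expired, a fresh join command can be printed on master01:

kubeadm token create --print-join-command    # run on master01 to generate a new join command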

3.4.3 Configure command completion

apt-get -y install bash-completion
source <(kubectl completion bash)                               #takes effect in the current shell
echo "source <(kubectl completion bash)" >> ~/.bashrc           #takes effect permanently

3.4.4 Install the network plugin

wget https://docs.projectcalico.org/manifests/calico.yaml   #on all nodes
kubectl apply -f calico.yaml                #on master01 only
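To watch the Calico pods come up before checking node status, a hedged check (not in the original):

kubectl get pods -n kube-system | grep calico    # wait until all Calico pods are Running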

  If the installation does not succeed, refer to this article.

  Run kubectl get nodes on the master and you should see that the nodes are in the Ready state.

root@master01:/home/zys# kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   36m   v1.22.11
node01     Ready    <none>                 11m   v1.22.11

  If a node is NotReady, refer to this article.

3.4.5 Set up completion hints

  Edit the /etc/profile file and insert the command source <(kubectl completion bash) as its first line.

gedit /etc/profile

  Apply the change:

source /etc/profile

  At this point, the Kubernetes cluster on Ubuntu 18.04 is up and running. You can now inspect the cluster:

root@master01:/home/zys# kubectl cluster-info
Kubernetes control plane is running at https://192.168.32.41:6443
CoreDNS is running at https://192.168.32.41:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

root@master01:/home/zys# kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master01   Ready    control-plane,master   77m   v1.22.11
node01     Ready    <none>                 52m   v1.22.11

root@master01:/home/zys# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-69f595f8f8-2br2c   1/1     Running   0          44m
kube-system   calico-node-rg6fv                          1/1     Running   0          44m
kube-system   calico-node-zjdbq                          1/1     Running   0          44m
kube-system   coredns-7f6cbbb7b8-c9rh6                   1/1     Running   0          77m
kube-system   coredns-7f6cbbb7b8-q5fgl                   1/1     Running   0          77m
kube-system   etcd-master01                              1/1     Running   0          77m
kube-system   kube-apiserver-master01                    1/1     Running   0          77m
kube-system   kube-controller-manager-master01           1/1     Running   0          77m
kube-system   kube-proxy-9hpzl                           1/1     Running   0          52m
kube-system   kube-proxy-t5w4f                           1/1     Running   0          77m
kube-system   kube-scheduler-master01                    1/1     Running   0          77m
root@master01:/home/zys# 

 

 

4 Deploy KubeEdge

  The KubeEdge version used is 1.8.2, and it must be deployed on both the master and the edge node. Prerequisites:

       · Kubernetes has been deployed successfully on the master, and the master node is in the Ready state.
       · The edge node has not run the kubeadm join command.

4.1 Prepare the installation environment (on both the master and the edge node)

wget https://raw.githubusercontent.com/ansjin/katacoda-scenarios/main/getting-started-with-kubeedge/assets/daemon.json

mv daemon.json /etc/docker/daemon.json

systemctl daemon-reload

service docker restart 

docker info | grep -i cgroup

   If you see WARNING: No swap limit support, refer to this article.

4.2 Install Go and gcc (on both master01 and edge01)

  This step is optional, but recommended if they will be needed later.

apt install golang-go
apt-get install gcc
go version && gcc -v   #check versions

4.3 On master01:

4.3.1 Deploy KubeEdge with keadm

#Alternatively, download this from the official website
wget https://github.com/kubeedge/kubeedge/releases/download/v1.8.2/keadm-v1.8.2-linux-amd64.tar.gz
#Extract the archive
tar -zxvf keadm-v1.8.2-linux-amd64.tar.gz
#Deploy KubeEdge on the master
cd keadm-v1.8.2-linux-amd64/keadm
#In the keadm directory, run the init operation (the IP is the master node IP):
./keadm init --advertise-address="192.168.32.41" --kubeedge-version=1.8.2
#[Note] An error may occur here because raw.githubusercontent.com is unreachable.
#Workaround: add the following entries to /etc/hosts:
# GitHub Start
185.199.109.133 raw.githubusercontent.com
185.199.108.133 raw.githubusercontent.com
185.199.111.133 raw.githubusercontent.com
185.199.110.133 raw.githubusercontent.com
# GitHub End
#Then run init again

   If the KubeEdge release fails to download because of connection errors, download kubeedge-v1.8.2-linux-amd64.tar.gz yourself, place it in the /etc/kubeedge directory, extract it, and then run the init command again.

mv kubeedge-v1.8.2-linux-amd64.tar.gz /etc/kubeedge
cd /etc/kubeedge
tar -zxvf kubeedge-v1.8.2-linux-amd64.tar.gz 

    In the keadm-v1.8.2-linux-amd64/keadm directory, run ./keadm gettoken to obtain the token.
   Output on success:

KubeEdge cloudcore is running, For logs visit:  /var/log/kubeedge/cloudcore.log
CloudCore started

root@master01:/home/zys/keadm-v1.8.2-linux-amd64/keadm# ./keadm gettoken
c40f88cd04548c8e84473873f7eb0979e5f6ce8e17797df2a35678e56ec289d1.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTY0OTg4NTJ9.NTYtv5vT-H0t1BqEiYlog3058sgloQTWVktGeni72zk
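As a hedged check (not in the original), confirm that cloudcore is running and inspect its log at the path shown above:

ps -ef | grep cloudcore                       # the cloudcore process should be running
tail -n 20 /var/log/kubeedge/cloudcore.log    # log path taken from the CloudCore output above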

 

 

4.4 On edge01:

  This is essentially the same as 4.3, except that join is used instead of init.

#In the keadm directory, run the join operation (adjust the IP and edgenode-name, and append the token obtained on the master after --token):
./keadm join --cloudcore-ipport=192.168.32.41:10000 --edgenode-name=test --kubeedge-version=1.8.2 --token=c40f88cd04548c8e84473873f7eb0979e5f6ce8e17797df2a35678e56ec289d1.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTY0OTg4NTJ9.NTYtv5vT-H0t1BqEiYlog3058sgloQTWVktGeni72zk
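On the edge side, a hedged way to check that edgecore came up, assuming keadm registered it as a systemd service named edgecore (an assumption based on the default keadm behavior, not stated in the original):

systemctl status edgecore        # the edgecore service name is assumed from the default keadm setup
journalctl -u edgecore -n 20     # recent edgecore logs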

 Then verify on master01 that the join succeeded:

kubectl get nodes -owide

  If a node whose version string contains kubeedge shows up, the deployment succeeded.

 

5 Deploy EdgeMesh

  Just follow the official documentation.

6 Install Sedna

 The official documentation link is here.

  The download steps for sections 5 and 6 also draw on this article; see it for the details, as it is important too.

 

References:
    1: Detailed tutorial on deploying KubeEdge 1.6 online (Ubuntu)
    2: Deploying a Kubernetes cluster with kubeadm
    3: Building a Kubernetes cluster on Ubuntu 18.04
    4: Installing Docker on Ubuntu (detailed)
    5: Building a Kubernetes cluster with the Calico network plugin
    6: KubeEdge & EdgeMesh & Sedna configuration

 

 
