k8s 1.22.12 + kubeedge 1.12.1 Offline Deployment Guide
Summary: MEF requires the user to provide an open-source k8s and kubeedge environment, and proxy configuration on the lab intranet is complicated, so this guide installs kubernetes 1.22.12 + kubeedge 1.12.1 from offline packages.
Manual offline installation guide for kubernetes 1.22.12 + kubeedge 1.12.1
k8s 1.19.16 arm64 offline installation package, verified to work on Ubuntu 22.04:
https://onebox.huawei.com/p/2fcc721a484a3ecf68fb167485c2b352
cloudcore 1.12.2 arm64 offline installation package, matched to the MEFEdge edgecore version:
https://onebox.huawei.com/p/d4c539b70d8f3d42e8ed24c1cf48517a
Installation order: k8s first, then cloudcore.
Extract each package in turn, run its install script, and enter the local IP address when prompted.
Known issue with the k8s offline install script:
- On openEuler, after the script finishes you must manually change the cgroup driver of kubelet and docker to cgroupfs and restart docker and kubelet; otherwise the workload containers and coredns will fail to start.
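The openEuler fix above can be sketched as follows. This writes a local daemon.json with the standard docker `exec-opts` key, to be copied into place afterwards; note that kubelet's own cgroup driver setting must be changed to match as well, and where that setting lives depends on how kubelet was configured on your system.

```shell
# Generate a daemon.json that switches docker to the cgroupfs driver
# (the openEuler case described above). Written locally first; copy it
# into place and restart the services on the actual node.
cat > daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
# Then, on the node:
# cp -f daemon.json /etc/docker/
# systemctl daemon-reload && systemctl restart docker kubelet
```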
Original manual installation guide, for reference
Target system and architecture: Linux, Ubuntu 22.04, arm64
Known issue: when installing k8s on openEuler, the cgroup driver of docker and kubelet must be set to cgroupfs, otherwise the k8s component coredns will fail to start.
Prerequisites:
- A docker environment compatible with k8s 1.22
- Clean up any other programs or software running on the host (any other process embedding edged or kubelet will compete with k8s for control of the docker containers and break the installation)
https://onebox.huawei.com/p/0bfbe253b61d132431732c04fdcc1e83
Click the link to download the packaged archive.
The structure inside the package is shown in the figure.
Offline installation of k8s 1.22.12
k8s and the kubeedge cloudcore component are installed on the cloud-side node.
First, load all the required docker container images:
the 7 k8s component images (kube-images directory),
```shell
docker load < coredns_v1.8.4.tar
docker load < etcd_3.5.0-0.tar
docker load < pause_3.5.tar
docker load < kube-apiserver_v1.22.12.tar
docker load < kube-controller-manager_v1.22.12.tar
docker load < kube-proxy_v1.22.12.tar
docker load < kube-scheduler_v1.22.12.tar
```
and the 4 calico images (plugin/calico directory):
```shell
docker load < calico-cni-v3.17.6.tar
docker load < calico-kube-controllers-v3.17.6.tar
docker load < calico-node-v3.17.6.tar
docker load < calico-pod2daemon-flexvol-v3.17.6.tar
```
Then perform the following steps in order:
- Install the cni binaries
Create the cni directory:
```shell
mkdir -p /opt/cni/bin
```
Extract the contents of the cni archive into the new directory and make the files executable.
- Install the crictl binary
Extract the crictl binary from its archive into /usr/local/bin (on the user PATH) and make it executable.
- Install the kubeadm, kubelet, and kubectl binaries
Put them into /usr/bin (on the system PATH) and make them executable.
- Install the configuration files and start the kubelet service
```shell
# destination for 10-kubeadm.conf
mkdir -p /etc/systemd/system/kubelet.service.d/

# destination for kubelet.service
mkdir -p /etc/systemd/system/
```
Create the service directories, copy the configuration files from the k8s directory of the package into them one by one, then start the kubelet system service:
```shell
systemctl enable --now kubelet
```
- Check the docker settings
kubelet can only run properly when it uses the same cgroup driver as docker. The default kubelet configuration sets the cgroup driver to systemd, while docker defaults to cgroupfs. Since most Linux distributions also use systemd as the cgroup manager, the k8s community recommends matching the system, so check that the configuration file at /etc/docker/daemon.json sets the driver to systemd:
```shell
# create the file and its directory if they do not exist
vim /etc/docker/daemon.json
```
Set native.cgroupdriver to systemd:
```json
{
  ...
  "exec-opts": ["native.cgroupdriver=systemd"],
  ...
}
```
Then reload and restart docker:
```shell
systemctl daemon-reload
systemctl restart docker
```
- Initialize k8s with kubeadm
```shell
# first, turn off the swap partition
swapoff -a

# reset k8s and clean up related paths
kubeadm reset
rm -rf /etc/cni/net.d
rm -rf /root/.kube/

# disable proxies
unset http_proxy
unset https_proxy

# run the kubeadm initialization, setting the version, the pod network CIDR,
# and the apiserver advertise address (the local IP)
kubeadm init --kubernetes-version=v1.22.12 --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=51.38.66.67
```
The installation log reports missing supporting tools such as socat and conntrack-tools; if anything is missing, install it before running the k8s initialization.
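As a quick preflight, the presence of these tools can be checked before running kubeadm init. The tool names come from the log message described above (`conntrack` is the binary shipped by conntrack-tools):

```shell
# Report any helper binaries that kubeadm's preflight checks would complain about.
for tool in socat conntrack; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```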
A successful start produces output like the following.
If the k8s initialization fails for whatever reason, then after ruling out other problems you must clean up the data left by the failed attempt before initializing again:
```shell
# reset k8s and clean up related paths
kubeadm reset

# run the initialization again
kubeadm init --kubernetes-version=v1.22.12 --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=51.38.66.67
```
Following the echoed instructions, create the default k8s configuration path; once this is done, kubectl can query cluster information:
```shell
mkdir -p $HOME/.kube
cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
```
At this point, querying nodes and pods with kubectl shows the node as not ready and coredns not running.
- Configure calico (the network plugin k8s depends on)
```shell
# after the calico images have been loaded, apply calico.yaml directly with
# kubectl; the file is in the plugin/calico/ subdirectory of the k8s
# directory in the package
kubectl apply -f calico.yaml
```
If the master-node taint has not been handled, the calico kube-controllers and node pods will fail to start. By default k8s does not schedule containers on the master node, so remove the taint:
```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
```
After the steps above, query the current node and pods with kubectl; if the installation succeeded, every pod in the kube-system namespace is in the Running state.
Troubleshooting notes:
- After a device reboot, the kubelet service fails to start with a warning that swap is not supported; some systems re-enable the swap partition on reboot, so turn it off again:
```shell
swapoff -a
```
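swapoff -a only lasts until the next reboot. A common way to make it stick is to comment out the swap entries in /etc/fstab; the sketch below demonstrates the sed on a sample line (the device name is made up), and the same command can be run against /etc/fstab on the node, assuming swap is mounted through fstab:

```shell
# Demonstrated on a sample file; on the node, point the sed at /etc/fstab.
printf '/dev/sda2 none swap sw 0 0\n' > fstab.sample
# Comment out any uncommented line containing a "swap" field.
sed -i -E 's|^([^#].*[[:space:]]swap[[:space:]])|#\1|' fstab.sample
```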
- The kubelet service reports errors during installation. Inspect the kubelet service log to diagnose:
```shell
journalctl -u kubelet
```
If the docker cgroup settings were not checked beforehand, a likely cause is that docker and k8s have different cgroup drivers; fix the docker configuration:
```shell
# create the file and its directory if they do not exist
vim /etc/docker/daemon.json
```
Set native.cgroupdriver to systemd:
```json
{
  ...
  "exec-opts": ["native.cgroupdriver=systemd"],
  ...
}
```
Then reload and restart docker:
```shell
systemctl daemon-reload
systemctl restart docker
```
Once the node registers successfully, kubelet starts correctly.
- coredns cannot start: the network plugin is missing; once calico is running this resolves itself.
- The calico kube-controllers and node pods cannot start: by default the k8s master node cannot run containers; remove the taint:
```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
```
Installing kubeedge (version 1.12.1)
The most convenient way to install kubeedge is the keadm binary, but keadm needs to reach external https sources during installation and the proxy TLS problem has not been solved, so the manual offline procedure below is used instead.
Cloud-side installation
First, apply the kubeedge device-management CRD configuration files from the offline package with kubectl:
```shell
kubectl apply -f cluster_objectsync_v1alpha1.yaml
kubectl apply -f devices_v1alpha2_device.yaml
kubectl apply -f devices_v1alpha2_devicemodel.yaml
kubectl apply -f objectsync_v1alpha1.yaml
kubectl apply -f router_v1_rule.yaml
kubectl apply -f router_v1_ruleEndpoint.yaml
```
```shell
# create the kubeedge working directory; if it already exists,
# clean out its ./ca/ and ./certs/ subdirectories
mkdir -p /etc/kubeedge/
```
Put the cloudcore binary from the package into /usr/local/bin/ and make it executable.
```shell
# the /etc/kubeedge/config/ directory must be created manually
mkdir -p /etc/kubeedge/config/

# generate the minimal cloudcore configuration
cloudcore --minconfig > /etc/kubeedge/config/cloudcore.yaml
```
Edit the cloudcore.yaml configuration file:
- add a kubeAPIConfig section to import the k8s configuration
- set advertiseAddress to the cloud-side IP address
- set the https.enable item under cloudhub to true, to enable cloudhub
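The three edits can be made with any editor; as a sketch, the sed commands below apply them to a minimal stand-in file. The field layout is an assumption based on the kubeedge v1.12 configuration format, so verify the paths against your generated cloudcore.yaml; 51.38.66.67 is the example cloud IP used elsewhere in this guide.

```shell
CLOUD_IP=51.38.66.67   # example cloud-side IP from this guide

# Minimal stand-in for the generated cloudcore.yaml (field layout assumed).
cat > cloudcore.sample.yaml <<'EOF'
kubeAPIConfig:
  kubeConfig: /root/.kube/config
modules:
  cloudHub:
    advertiseAddress:
    - 0.0.0.0
    https:
      enable: false
EOF

# Set advertiseAddress to the cloud IP and switch cloudhub https on.
sed -i "s/- 0.0.0.0/- ${CLOUD_IP}/" cloudcore.sample.yaml
sed -i '/https:/{n;s/enable: false/enable: true/;}' cloudcore.sample.yaml
```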
After the configuration is done, use the certificate generation script certgen.sh from the kubeedge directory of the offline package. keadm runs this script automatically during installation; the community documentation does not mention it for the manual binary procedure, but this step must not be skipped.
```shell
# place the script in any directory and run it; the final argument is the cloud-side IP
chmod +x ./certgen.sh
bash certgen.sh genCertAndKey server /etc/kubeedge/certs 51.38.66.67
```
After the script runs, the generated /ca/ and /certs/ directories, with the certificates inside them, appear under /etc/kubeedge/.
Note: set the device time correctly before running the certificate generation script, to avoid certificate validity-period problems.
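A quick sanity check of the clock before running certgen.sh, since the certificates are timestamped from it:

```shell
# Print the current UTC time; confirm it is correct before generating certs.
date -u +"%Y-%m-%d %H:%M:%S"
# If the clock is wrong and NTP is unreachable, it can be set by hand, e.g.:
# date -s "2024-01-15 12:00:00"
```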
With the configuration above complete, start cloudcore on the cloud side:
```shell
# the config path and log path can be changed as needed
cloudcore --config /etc/kubeedge/config/cloudcore.yaml > cloudcore.log 2>&1 &
```
If its run log contains the following output, the cloud side started normally.
Now obtain the token the edge side needs with the following command; it will be used in the edge deployment below:
```shell
kubectl get secret -nkubeedge tokensecret -o=jsonpath='{.data.tokendata}' | base64 -d
```
The steps above also cover the environment that needs to be installed for the MEF cloud side.
The k8s install script:
```shell
#!/bin/bash
read -p "Enter the local IP address: " hostip

echo "\n1/7 ..........resetting kubeadm and related config paths..........\n"
echo kubeadm reset -f
kubeadm reset -f
# remove leftover k8s cni configuration
echo rm -rf /etc/cni/net.d
rm -rf /etc/cni/net.d
echo rm -rf /root/.kube/config
rm -rf /root/.kube/config

echo "\n\n2/7 ..........importing k8s binaries..........\n"
echo chmod +x kubeadm kubectl kubelet
chmod +x kubeadm kubectl kubelet
echo cp -f kubeadm /usr/bin/
cp -f ./kubeadm /usr/bin/
echo cp -f kubectl /usr/bin/
cp -f ./kubectl /usr/bin/
echo cp -f kubelet /usr/bin/
cp -f ./kubelet /usr/bin/

echo "\n\n3/7 ..........importing plugin binaries..........\n"
echo chmod +x crictl socat conntrack
chmod +x crictl socat conntrack
echo cp -f crictl /usr/local/bin
cp -f crictl /usr/local/bin
echo cp -f socat /usr/local/bin
cp -f socat /usr/local/bin
echo cp -f conntrack /usr/local/bin
cp -f conntrack /usr/local/bin
echo mkdir -p /opt/cni/bin
mkdir -p /opt/cni/bin
echo chmod +x cni-plugins-linux-arm64-v1.1.1/*
chmod +x cni-plugins-linux-arm64-v1.1.1/*
echo cp -f cni-plugins-linux-arm64-v1.1.1/* /opt/cni/bin
cp -f cni-plugins-linux-arm64-v1.1.1/* /opt/cni/bin

echo "\n\n4/7 .........creating the kubelet service..........\n"
echo mkdir -p /etc/systemd/system/kubelet.service.d/
mkdir -p /etc/systemd/system/kubelet.service.d/
echo cp -f ./kubelet-service-config/10-kubeadm.conf /etc/systemd/system/kubelet.service.d
cp -f ./kubelet-service-config/10-kubeadm.conf /etc/systemd/system/kubelet.service.d
echo mkdir -p /etc/systemd/system/
mkdir -p /etc/systemd/system/
echo cp -f ./kubelet-service-config/kubelet.service /etc/systemd/system/
cp -f ./kubelet-service-config/kubelet.service /etc/systemd/system/
echo systemctl enable --now kubelet
systemctl enable --now kubelet

echo "\n\n5/7 ..........updating docker config and loading images.........\n"
echo cp -f daemon.json /etc/docker
cp -f daemon.json /etc/docker
echo systemctl daemon-reload
systemctl daemon-reload
echo systemctl restart docker
systemctl restart docker
echo "loading k8s images"
docker load -i ./images/coredns_1.7.0.tar
docker load -i ./images/etcd_3.4.13-0.tar
docker load -i ./images/kube-apiserver_v1.19.16.tar
docker load -i ./images/kube-controller-manager_v1.19.16.tar
docker load -i ./images/kube-proxy_v1.19.16.tar
docker load -i ./images/kube-scheduler_v1.19.16.tar
docker load -i ./images/pause_3.2.tar
echo "loading calico images"
docker load -i ./images/calico-cni-v3.17.6.tar
docker load -i ./images/calico-kube-controllers-v3.17.6.tar
docker load -i ./images/calico-node-v3.17.6.tar
docker load -i ./images/calico-pod2daemon-flexvol-v3.17.6.tar

echo "\n\n6/7 ..........running kubeadm init...........\n"
# turn off the swap partition
echo swapoff -a
swapoff -a
# disable http proxies
echo unset http_proxy
echo unset https_proxy
unset http_proxy
unset https_proxy
echo "kubeadm init"
kubeadm init --kubernetes-version=v1.19.16 --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address="$hostip"
kubectl taint nodes --all node-role.kubernetes.io/master-
# set up the k8s config path
echo mkdir -p $HOME/.kube
mkdir -p $HOME/.kube
echo cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
cp -f /etc/kubernetes/admin.conf $HOME/.kube/config
echo chown $(id -u):$(id -g) $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

echo "\n\n7/7 ..........installing calico..........\n"
kubectl apply -f calico.yaml
echo "Install script finished; check the k8s master node status with kubectl"
```
The cloudcore install script:
```shell
#!/bin/bash
read -p "Enter the local IP address: " hostip

echo ..........cleaning up leftover cloudcore..........
kubectl delete ns kubeedge
echo
echo ..........applying the cloudcore CRD device-management yaml files..........
kubectl apply -f ./CRD-config/cluster_objectsync_v1alpha1.yaml
kubectl apply -f ./CRD-config/devices_v1alpha2_device.yaml
kubectl apply -f ./CRD-config/devices_v1alpha2_devicemodel.yaml
kubectl apply -f ./CRD-config/objectsync_v1alpha1.yaml
kubectl apply -f ./CRD-config/router_v1_rule.yaml
kubectl apply -f ./CRD-config/router_v1_ruleEndpoint.yaml
echo
echo ..........installing the cloudcore binary into /usr/local/bin..........
chmod +x cloudcore
cp -f cloudcore /usr/local/bin
echo
echo ..........preparing the kubeedge config and cert paths..........
mkdir -p /etc/kubeedge/config
rm -rf /etc/kubeedge/ca/*
rm -rf /etc/kubeedge/certs/*
echo
echo ..........preparing the kubeedge config file..........
sed -i "13c\ - $hostip" cloudcore.yaml
echo
echo ..........copying the kubeedge config file..........
cp -f cloudcore.yaml /etc/kubeedge/config/
#echo
#echo ..........generating the kubeedge certificates..........
#chmod +x ./certgen.sh
#bash certgen.sh genCertAndKey server /etc/kubeedge/certs $hostip
echo
echo ..........configuring and starting the cloudcore service..........
cp -f cloudcore.service /lib/systemd/system
systemctl enable --now cloudcore
systemctl restart cloudcore
sleep 5
#systemctl status cloudcore
echo "Check the cloudcore service status with 'systemctl status cloudcore'"
```
CRD-config/cluster_objectsync_v1alpha1.yaml:
```yaml
---
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.6.2
  creationTimestamp: null
  name: clusterobjectsyncs.reliablesyncs.kubeedge.io
spec:
  group: reliablesyncs.kubeedge.io
  names:
    kind: ClusterObjectSync
    listKind: ClusterObjectSyncList
    plural: clusterobjectsyncs
    singular: clusterobjectsync
  scope: Cluster
  versions:
  - name: v1alpha1
    schema:
      openAPIV3Schema:
        description: ClusterObjectSync stores the state of the cluster level, nonNamespaced
          object that was successfully persisted to the edge node. ClusterObjectSync
          name is a concatenation of the node name which receiving the object and
          the object UUID.
        properties:
          apiVersion:
            description: 'APIVersion defines the versioned schema of this representation
              of an object. Servers should convert recognized schemas to the latest
              internal value, and may reject unrecognized values. More info:
              https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources'
            type: string
          kind:
            description: 'Kind is a string value representing the REST resource this
              object represents. Servers may infer this from the endpoint the client
              submits requests to. Cannot be updated. In CamelCase. More info:
              https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds'
            type: string
          metadata:
            type: object
          spec:
            description: ObjectSyncSpec stores the details of objects that persist
              to the edge.
            properties:
              objectAPIVersion:
                description: ObjectAPIVersion is the APIVersion of the object that
                  was successfully persist to the edge node.
                type: string
              objectKind:
                description: ObjectType is the kind of the object that was successfully
                  persist to the edge node.
                type: string
              objectName:
                description: ObjectName is the name of the object that was successfully
                  persist to the edge node.
                type: string
            type: object
          status:
            description: ObjectSyncSpec stores the resourceversion of objects that
              persist to the edge.
            properties:
              objectResourceVersion:
                description: ObjectResourceVersion is the resourceversion of the
                  object that was successfully persist to the edge node.
                type: string
            type: object
        type: object
    served: true
    storage: true
    subresources:
      status: {}
status:
  acceptedNames:
    kind: ""
    plural: ""
  conditions: []
  storedVersions: []
```
Edge-side deployment
The steps below deploy the open-source kubeedge edge component edgecore. Use this procedure only if you want to run open-source edgecore natively, or if MEF is only being used to debug node management and application management. To set up a complete MEF cloud-edge system, use the MEF installer to deploy the edge side instead.
If you choose another kubeedge version, use the cloud-side cloudcore and edge-side edgecore from the same version package; see the official kubeedge compatibility documentation: https://github.com/kubeedge/kubeedge#kubernetes-compatibility
```shell
# create the kubeedge working directory; if it already exists,
# clean out its ./ca/ and ./certs/ subdirectories
mkdir -p /etc/kubeedge/
```
Put the edgecore binary from the package into /usr/local/bin/ and make it executable.
```shell
# generate the minimal edgecore configuration
edgecore --minconfig > /etc/kubeedge/config/edgecore.yaml
```
Edit the edgecore configuration file:
- In the edgehub section:
  - set the enable item to true
  - set httpserver and websocket to the cloud-side IP address
  - set token to the token obtained from cloudcore
- In the edged section:
  - hostnameOverride sets the edge node name; note that k8s node names must be unique, or registration fails
  - podSandboxImage must point to a usable pause image; the image loaded from pause_3.5.tar during the k8s setup can be used
- Configure the remaining options as needed.
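The fields listed above can be sketched as the stand-in file below. The field layout and the default ports (10000 for the websocket, 10002 for the https server) are assumptions based on kubeedge v1.12 defaults, so verify them against your generated edgecore.yaml; the node name is a made-up example and the token placeholder must be replaced with the value fetched from the cloud side.

```shell
CLOUD_IP=51.38.66.67            # example cloud-side IP from this guide
TOKEN="<token from cloudcore>"  # paste the token obtained in the cloud-side step

# Stand-in showing the edgehub/edged fields described above.
cat > edgecore.sample.yaml <<EOF
modules:
  edgeHub:
    enable: true
    httpServer: https://${CLOUD_IP}:10002
    websocket:
      server: ${CLOUD_IP}:10000
    token: ${TOKEN}
  edged:
    hostnameOverride: edge-node-01         # must be unique in the cluster
    podSandboxImage: k8s.gcr.io/pause:3.5  # image loaded from pause_3.5.tar
EOF
```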
```shell
# the config path and log path can be changed as needed
edgecore --config /etc/kubeedge/config/edgecore.yaml >> edgecore.log 2>&1 &
```
If the log file contains the following output, the edge side has connected to the cloud normally.
The cloud-side k8s can now see the node information; if the node status is Ready, the installation succeeded.
```shell
kubectl get node
```
Supplement: if the k8s master node is to be used as the MEF cloud side, the following preparation is required while the cloud-side installation feature is still under development. Run the following commands:
```shell
# label the master node
kubectl label nodes ${MASTER_NODE_NAME} masterselector=dls-master-node

# create the mindx-edge namespace
kubectl create ns mindx-edge

# create the cloud-side user
useradd -d /home/MEFCenter -u 8000 -m MEFCenter

# create the log directory
mkdir -p /var/log/mindx-edge/edge-manager
chmod -R 750 /var/log/mindx-edge/edge-manager
chown -R MEFCenter:MEFCenter /var/log/mindx-edge/edge-manager
```
Unverified items, to be filled in later:
Once the edge node has joined the cluster, the cloud side can deploy applications to the edge normally. The impact of the following unverified items is still to be determined:
- k8s automatically pushes the kube-proxy daemonset and the previously deployed calico daemonset to the edge node; for now these two pods can be left not running.
- kubeedge 1.12.1 deploys a cloudcore application on the cloud side; how it takes effect and what it does are currently unknown.
To be added:
- The software above can be set up as system services according to your needs.
This article is from 博客园 (cnblogs), author 易先讯. When reposting, please cite the original link: https://www.cnblogs.com/gongxianjin/p/17931143.html
From the comments: open the apiserver port in the firewall (note that without --permanent the runtime rule is discarded by the reload):
```shell
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --reload
```