Setting up a local k8s environment with minikube
First, Docker has to be installed; this part you set up yourself. The steps are as follows:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo    # add the Docker repo
yum list docker-ce --showduplicates | sort -r    # list the available Docker versions
#yum install docker-ce    # only the stable repo is enabled by default, so this would install the latest stable version
#yum install <FQPN>       # e.g. yum install docker-ce-18.06.0.ce -y
# The version below has been verified and is the recommended one
yum install docker-ce-18.06.0.ce -y
systemctl start docker
systemctl enable docker
docker version    # since Kubernetes 1.13.4 is being installed, Docker 18.06 is recommended
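Optionally, the Docker daemon itself can be pointed at a registry mirror so that plain docker pull also avoids the blocked registry. This is not part of the original steps, just a minimal sketch; it assumes there is no existing /etc/docker/daemon.json (the file is overwritten) and reuses the mirror that the minikube start command further down uses:

# Point the Docker daemon at a registry mirror (overwrites any existing daemon.json)
cat >/etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker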
Then install minikube using the build mirrored by Aliyun; otherwise minikube initialization will hang trying to download images that are blocked by the firewall.
curl -Lo minikube http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v0.35.0/minikube-linux-amd64
chmod +x minikube
mv minikube /usr/bin/minikube
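A quick sanity check that the binary is on the PATH and is the expected release:

minikube version    # should report v0.35.0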
Note that swap must be disabled; the command is swapoff -a
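swapoff -a only disables swap until the next reboot. To keep it off permanently, the swap entry in /etc/fstab can be commented out as well; a small sketch, assuming the standard whitespace-separated fstab layout:

sed -i '/\sswap\s/ s/^\([^#]\)/#\1/' /etc/fstab    # comment out the swap line so swap stays off after reboot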
Add the Aliyun Kubernetes yum repository and install the required components:
cd /etc/yum.repos.d/
cat >>kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
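Before installing, it is worth confirming that the new repo is actually picked up (optional check):

yum repolist enabled | grep -i kubernetes    # the kubernetes repo should appear in the list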
yum install kubectl kubelet kubeadm -y
systemctl start kubelet && systemctl enable kubelet
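The command above installs whatever versions are newest in the Aliyun repo. Since this walkthrough targets Kubernetes 1.13.4 (to match Docker 18.06), you may prefer to pin the versions explicitly; this is only a sketch, and the exact package version strings should be checked against the repo first:

yum list kubelet --showduplicates | sort -r | grep 1.13.4      # confirm that 1.13.4 packages exist in the repo
yum install kubectl-1.13.4 kubelet-1.13.4 kubeadm-1.13.4 -y    # pinned install; the suffix may differ (e.g. 1.13.4-0)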
Use the default VirtualBox driver to create the local Kubernetes environment:
minikube start --registry-mirror=https://registry.docker-cn.com
When output like the following appears:
- Verifying component health .....
+ kubectl is now configured to use "minikube"
= Done! Thank you for using minikube!
the local minikube installation is complete. By default it is not reachable from outside the host; either install an ingress controller separately or use port forwarding (see the sketch right below).
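For the port-forwarding route, a hedged example (the service name my-service and the ports are placeholders, not something created in this post; --address 0.0.0.0 makes the forward reachable from other machines and needs kubectl 1.13+):

kubectl port-forward --address 0.0.0.0 svc/my-service 8080:80    # host port 8080 -> service port 80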
############################################################################
How to install ingress:
Creating the ingress controller:
Create deployment.yaml:
apiVersion: v1
kind: Namespace
metadata:
  name: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: kube-system
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---
Then create svc.yaml:
Service:
apiVersion: v1
kind: Service
metadata:
  annotations:
    #service.beta.kubernetes.io/alicloud-loadbalancer-id: "lb-wz9du18pa4e7f93vetzww"
  labels:
    app: nginx-ingress
  name: nginx-ingress
  namespace: kube-system
spec:
  ports:
    - name: http
      nodePort: 30468
      port: 80
      protocol: TCP
      targetPort: 80
    - name: https
      nodePort: 30471
      port: 443
      protocol: TCP
      targetPort: 443
  selector:
    #app: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  #type: LoadBalancer
  type: NodePort
status:
  loadBalancer:
    ingress:
      - ip: 39.108.26.119    # change this to your own host IP
The command to create the resources from the YAML files above is:
kubectl apply -f xxxx.yaml
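After applying the two files, a few checks that the controller actually came up (the resource names are the ones from the manifests above; the curl target is the NodePort from svc.yaml and, on a --vm-driver=none host, should answer with the controller's built-in default-backend 404):

kubectl -n kube-system rollout status deployment/nginx-ingress-controller
kubectl -n kube-system get pods -l app.kubernetes.io/name=ingress-nginx
kubectl -n kube-system get svc nginx-ingress
curl -i http://127.0.0.1:30468/    # expect an HTTP 404 from the nginx default backend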
Application images can be pulled from GitLab. No ConfigMap is set up here, so configure that yourself; you also need to write your own YAML for the application workloads (a hedged example follows below).
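As an illustration of what such an application YAML might look like (everything here is hypothetical: the name demo-app, the image nginx:1.15, the host demo.local and the ports are placeholders, not part of the original post), a minimal Deployment, Service and Ingress wired to the controller above could be applied like this:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.15
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 80
---
# extensions/v1beta1 is the Ingress API group available on Kubernetes 1.13
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: demo.local
      http:
        paths:
          - path: /
            backend:
              serviceName: demo-app
              servicePort: 80
EOF

Requests sent to the NodePort with the Host header demo.local would then be routed by the ingress controller to the demo-app pods.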
Below is a simple installation script.
#!/bin/bash
# Install Docker, which is used to pull the images needed locally; docker-ce 18.06 is used, which supports Kubernetes 1.13
# Check whether the NIC is configured with a static IP
grep -rE "dhcp" /etc/sysconfig/network-scripts/ifcfg-*
if [ $? -eq 0 ]; then
    echo "The NIC is in DHCP mode; please change it to a static IP"
    exit
else
    echo "NIC configuration is fine."
fi
yum clean all && yum repolist
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.0.ce -y
systemctl start docker
systemctl enable docker
VERSION=`docker version`
if [ $? -eq 0 ]; then
    echo "Docker version info: $VERSION"
else
    echo "Docker installation failed; please check the error log"
    exit
fi
echo "1" >/proc/sys/net/bridge/bridge-nf-call-iptables    # make sure iptables forwards traffic correctly when pulling images, otherwise DNS resolution errors occur

######## Download the minikube binary and put it on the PATH ########
cd /data
curl -Lo minikube http://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/releases/v0.35.0/minikube-linux-amd64
chmod +x minikube
mv minikube /usr/bin/minikube
swapoff -a    # swap must be off, otherwise initialization reports an error

cd /etc/yum.repos.d/
cat>>kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install kubectl kubelet kubeadm -y
systemctl start kubelet && systemctl enable kubelet

######## Start minikube ########
minikube start --vm-driver=none
if [ $? -eq 0 ]; then
    echo "minikube initialized successfully"
else
    echo "minikube initialization failed. Check the error output and rerun 'minikube start --vm-driver=none'; if it still fails, run 'minikube delete' to clean up the cluster and initialize again!"
    minikube delete
    exit
fi
# By default minikube uses the VirtualBox driver to create the local Kubernetes environment
#minikube start --registry-mirror=https://registry.docker-cn.com
STATUS=`kubectl get node | awk '{print$2}' | sed -n '2p'`
if [ "$STATUS" = "Ready" ]; then
    echo "Cluster status: $STATUS"
else
    echo "Cluster status is not Ready; please contact ops."
fi
#echo "Cluster status: $STATUS"
#echo "Cluster status is not Ready; please contact ops."
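To run it, save the script under any name (minikube-install.sh below is just an example), make it executable, and execute it as root on a CentOS 7 host, since both yum and minikube's --vm-driver=none mode require root:

chmod +x minikube-install.sh
./minikube-install.sh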