Deploying a Kubernetes Cluster with kubeadm

kubeadm is a tool released by the official community for quickly deploying a Kubernetes cluster.

With this tool, a Kubernetes cluster can be stood up with just two commands:

# Create a Master node
$ kubeadm init

# Join a Node to the existing cluster
$ kubeadm join <Master IP:port>

Host         IP
k8s-master   192.168.200.147
k8s-node1    192.168.200.148
k8s-node2    192.168.200.149

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
setenforce 0                                         # temporary

# Disable swap
swapoff -a                                           # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab                  # permanent
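As an optional sanity check, free should report zero swap on every node once both commands have run:

free -h   # the Swap line should show 0B total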

# Set the hostnames according to the plan above
hostnamectl set-hostname <hostname>   # k8s-master, k8s-node1, and k8s-node2 respectively
hostname                              # confirm the change took effect

# Add hosts entries on the master
cat >> /etc/hosts << EOF
192.168.200.147 k8s-master
192.168.200.148 k8s-node1
192.168.200.149 k8s-node2
EOF

ping k8s-node1   # confirm the entries resolve

# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system   # apply
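An optional check that the new kernel parameters are active:

sysctl net.bridge.bridge-nf-call-iptables   # should print "net.bridge.bridge-nf-call-iptables = 1"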

# Synchronize time on all nodes
yum install ntpdate -y
ntpdate ntp1.aliyun.com

Install Docker/kubeadm/kubelet on All Nodes

Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

yum -y install docker-ce

systemctl enable docker && systemctl start docker

cat > /etc/docker/daemon.json <<EOF

{

  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"]

}

EOF

systemctl daemon-reload && systemctl restart docker && systemctl status docker
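An optional check that Docker picked up the mirror configuration:

docker info | grep -A1 'Registry Mirrors'   # should list the Aliyun mirror URL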

Add the Aliyun YUM Repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

repo_gpgcheck=0

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

EOF

Install kubeadm, kubelet, and kubectl

yum install -y kubelet-1.18.0 kubeadm-1.18.0 kubectl-1.18.0

systemctl enable kubelet
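An optional check that the pinned 1.18.0 versions were installed:

kubeadm version -o short   # should print v1.18.0
kubelet --version          # should print Kubernetes v1.18.0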

Deploy the Kubernetes Master

kubeadm init \
  --apiserver-advertise-address=192.168.200.147 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --token-ttl=0

While initialization is running, you can clone the terminal session and run docker images in a second window to watch the images being pulled.
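For example, assuming the watch utility is installed, the pull progress can be followed live:

watch -n 2 docker images   # refreshes the local image list every 2 seconds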

When the output shows "Your Kubernetes control-plane has initialized successfully!", initialization has succeeded.

--apiserver-advertise-address: the address the cluster is advertised on
--image-repository: the default registry k8s.gcr.io is unreachable from mainland China, so point to the Aliyun registry instead
--kubernetes-version: the K8s version, matching what was installed above
--service-cidr: the cluster-internal virtual (Service) network, the unified access entry for Pods
--pod-network-cidr: the Pod network; must match the CNI network plugin YAML deployed below
--token-ttl: the default token is valid for 24 hours; pass --token-ttl=0 if you don't want it to expire
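If --token-ttl=0 was used, an optional check that the bootstrap token never expires:

kubeadm token list   # the TTL column should show <forever>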

Configure the kubectl tool:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

kubectl get nodes
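An optional check that the copied kubeconfig works:

kubectl cluster-info   # should print the API server address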

Join the Kubernetes Nodes

To add new nodes to the cluster, run the kubeadm join command printed in the kubeadm init output:

kubeadm join 192.168.200.147:6443 --token oslxvk.o4511j5kfhamf5ct \
    --discovery-token-ca-cert-hash sha256:9e8066adee832cb648fdd062a049d795b8a48428bdd7f31bed97e8c829c7cf74

After k8s-node1 and k8s-node2 have joined successfully, check the cluster with:

kubectl get nodes

NAME         STATUS     ROLES    AGE   VERSION

k8s-master   NotReady   master   17m   v1.18.0

k8s-node1    NotReady   <none>   6s    v1.18.0

k8s-node2    NotReady   <none>   85s   v1.18.0

 

At this point all nodes are still NotReady; they will only turn Ready after the CNI network plugin is deployed (see "Deploy the CNI Network Plugin" below).
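To see why a node reports NotReady (optional), describe it; before the CNI plugin is installed, the Ready condition typically mentions an uninitialized CNI config:

kubectl describe node k8s-node1   # check the message of the Ready condition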

The default token is valid for 24 hours. Once it expires it can no longer be used, and a new one must be created:

kubeadm token create --print-join-command

Deploy the CNI Network Plugin

# raw.githubusercontent.com is often unreachable, so the download below may fail:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

If it does, create the kube-flannel.yml file manually with the following content:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.22.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.22.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

$ kubectl apply -f kube-flannel.yml   # deploy the CNI network plugin

namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

 

$ kubectl get pods -A   # this can take up to 20 minutes; wait until every pod is Running
NAMESPACE      NAME                          READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-jvxfw         1/1     Running   0          26m
kube-flannel   kube-flannel-ds-ljdnf         1/1     Running   0          26m
kube-flannel   kube-flannel-ds-n9rhm         1/1     Running   0          28s
kube-system    coredns-7ff77c879f-5kjnf      1/1     Running   0          75m
kube-system    coredns-7ff77c879f-jj46z      1/1     Running   0          75m
kube-system    kube-proxy-fbznb              1/1     Running   0          75m
kube-system    kube-proxy-j7jw7              1/1     Running   0          59m
kube-system    kube-proxy-t4m4t              1/1     Running   0          60m

 

$ kubectl get nodes   # all nodes should now be in the Ready state

NAME         STATUS   ROLES    AGE   VERSION

k8s-master   Ready    master   77m   v1.18.0

k8s-node1    Ready    <none>   60m   v1.18.0

k8s-node2    Ready    <none>   61m   v1.18.0

Test the Kubernetes Cluster

Create a pod in the Kubernetes cluster and verify that it runs correctly:

$ kubectl create deployment nginx --image=nginx

deployment.apps/nginx created

 

$ kubectl expose deployment nginx --port=80 --type=NodePort

service/nginx exposed

 

$ kubectl get pod,svc

NAME                        READY   STATUS    RESTARTS   AGE

pod/nginx-f89759699-zc6h8   1/1     Running   0          2m3s

 

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE

service/kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        82m

service/nginx        NodePort    10.96.13.85   <none>        80:30329/TCP   9s

Browse to port 30329 on any node's address, e.g. http://192.168.200.147:30329. If the nginx welcome page appears, the service is reachable.
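The same check can also be run from the command line on any host that can reach the nodes:

curl http://192.168.200.147:30329   # should return the "Welcome to nginx!" HTML page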

 

 
