Detailed k8s installation steps

Building a Kubernetes cluster on the linux/arm64 architecture


• Machines: three linux/arm64 hosts
• OS version: CentOS 7.6

Hostname   IP Address       Role          Components
Master01   192.168.100.21   master node   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy
Node01     192.168.100.22   worker node   kubelet, kube-proxy
Node02     192.168.100.23   worker node   kubelet, kube-proxy

Installation

Environment setup

Set hostnames

hostnamectl set-hostname k8s-master01   # on the master
hostnamectl set-hostname k8s-node01     # on node01
hostnamectl set-hostname k8s-node02     # on node02

Install required tools

yum -y install wget jq psmisc vim net-tools nfs-utils telnet yum-utils device-mapper-persistent-data lvm2 git network-scripts tar curl

Disable the firewall

systemctl disable --now firewalld

Disable SELinux

setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

Disable swap

sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a && sysctl -w vm.swappiness=0

cat /etc/fstab
# /dev/mapper/centos-swap swap swap defaults 0 0

Network configuration

# Option 1
# systemctl disable --now NetworkManager
# systemctl start network && systemctl enable network

# Option 2
cat > /etc/NetworkManager/conf.d/calico.conf << EOF
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF
systemctl restart NetworkManager

Time synchronization

Set the time zone

mv /etc/localtime /etc/localtime.bak
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Configure time synchronization

# Server side
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 10.0.0.0/24
allow 192.168.0.0/16
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd
systemctl enable chronyd

# Client side
yum install chrony -y
cat > /etc/chrony.conf << EOF
pool 192.168.100.21 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
allow 192.168.100.21 # IP address of the server
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF

systemctl restart chronyd
systemctl enable chronyd

# Verify from a client
chronyc sources -v

Force an immediate manual sync

chronyc -a makestep

Configure ulimit

ulimit -SHn 65535

cat >> /etc/security/limits.conf <<EOF
* soft nofile 655360
* hard nofile 131072
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

Install ipvsadm

yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

systemctl restart systemd-modules-load.service

lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 180224  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          176128  1 ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,xfs,ip_vs

Tune kernel parameters

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384

net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1
EOF

sysctl --system

Configure local /etc/hosts resolution on all nodes

cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.100.21 k8s-master01
192.168.100.22 k8s-node01
192.168.100.23 k8s-node02
EOF

Generate certificates

1. Download the cfssl binaries

GitHub binary releases: https://github.com/cloudflare/cfssl/releases (no linux/arm64 build was published, so I compiled it myself)

git clone git@github.com:cloudflare/cfssl.git
cd cfssl
make
mv bin/* /usr/local/bin/
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

2. Create a directory for the certificate signing requests

mkdir pki
cd pki

cat > admin-csr.json << EOF
{ "CN": "admin", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:masters", "OU": "Kubernetes-manual" } ] }
EOF

cat > ca-config.json << EOF
{ "signing": { "default": { "expiry": "876000h" }, "profiles": { "kubernetes": { "usages": [ "signing", "key encipherment", "server auth", "client auth" ], "expiry": "876000h" } } } }
EOF

cat > etcd-ca-csr.json << EOF
{ "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ], "ca": { "expiry": "876000h" } }
EOF

cat > front-proxy-ca-csr.json << EOF
{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "ca": { "expiry": "876000h" } }
EOF

cat > kubelet-csr.json << EOF
{ "CN": "system:node:\$NODE", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:nodes", "OU": "Kubernetes-manual" } ] }
EOF

cat > manager-csr.json << EOF
{ "CN": "system:kube-controller-manager", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-controller-manager", "OU": "Kubernetes-manual" } ] }
EOF

cat > apiserver-csr.json << EOF
{ "CN": "kube-apiserver", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ] }
EOF

cat > ca-csr.json << EOF
{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "Kubernetes", "OU": "Kubernetes-manual" } ], "ca": { "expiry": "876000h" } }
EOF

cat > etcd-csr.json << EOF
{ "CN": "etcd", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security" } ] }
EOF

cat > front-proxy-client-csr.json << EOF
{ "CN": "front-proxy-client", "key": { "algo": "rsa", "size": 2048 } }
EOF

cat > kube-proxy-csr.json << EOF
{ "CN": "system:kube-proxy", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-proxy", "OU": "Kubernetes-manual" } ] }
EOF

cat > scheduler-csr.json << EOF
{ "CN": "system:kube-scheduler", "key": { "algo": "rsa", "size": 2048 }, "names": [ { "C": "CN", "ST": "Beijing", "L": "Beijing", "O": "system:kube-scheduler", "OU": "Kubernetes-manual" } ] }
EOF

3. Create the bootstrap definitions


cd ..
mkdir bootstrap
cd bootstrap
cat > bootstrap.secret.yaml << EOF
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF

4. Configure CoreDNS

cd ..
mkdir coredns
cd coredns
cat > coredns.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-beijing.aliyuncs.com/dotbalo/coredns:1.8.6
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF

5. Configure metrics-server



cd ..
mkdir metrics-server
cd metrics-server
cat > metrics-server.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-beijing.aliyuncs.com/dotbalo/metrics-server:0.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

6. Generate etcd certificates

# Create the directory
mkdir /etc/etcd/ssl -p

# Generate the certificates
cd pki

# Generate the etcd CA and the etcd certificate/key
# (if you expect to scale out later, add a few spare IPs to -hostname)
# The IPv6 entries can be removed or kept if you don't use IPv6
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

cfssl gencert \
  -ca=/etc/etcd/ssl/etcd-ca.pem \
  -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
  -config=ca-config.json \
  -hostname=127.0.0.1,k8s-master01,192.168.100.21,192.168.100.22,192.168.100.23 \
  -profile=kubernetes \
  etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

If you have multiple master nodes, copy the certificates to them:

Master='k8s-master02 k8s-master03'
for NODE in $Master; do
  ssh $NODE "mkdir -p /etc/etcd/ssl"
  for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
    scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}
  done
done

7. Generate Kubernetes certificates

mkdir -p /etc/kubernetes/pki

cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# Generate the apiserver certificate; extra IPs are included as spares for adding nodes later
# 10.96.0.1 is the first address of the service CIDR (see the note on computing it below);
# 192.168.1.69 is the HA VIP address
# The IPv6 entries can be removed or kept if you don't use IPv6
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -hostname=10.96.0.1,192.168.1.69,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.1.61,192.168.1.62,192.168.1.63,192.168.1.64,192.168.1.65,192.168.1.66,192.168.1.67,192.168.1.68,192.168.1.75,10.0.0.40,10.0.0.41 \
  -profile=kubernetes apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver
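Note on the service IP: 10.96.0.1 is simply the first usable address of the service CIDR 10.96.0.0/12. If you use a different service network, you can recompute that first address; a minimal sketch, assuming python3 is available on the host:

python3 -c 'import ipaddress; print(next(ipaddress.ip_network("10.96.0.0/12").hosts()))'
# prints 10.96.0.1 -- this is the value to include in the apiserver certificate -hostname list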

8. Generate the front-proxy certificates

cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca

# A warning is printed here; it can be ignored
cfssl gencert \
  -ca=/etc/kubernetes/pki/front-proxy-ca.pem \
  -ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

9. Generate the controller-manager, scheduler and admin certificates and kubeconfigs

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set the cluster entry
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.69:8443 \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set a context
kubectl config set-context system:kube-controller-manager@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the user entry
kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
  --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# Set the default context
kubectl config use-context system:kube-controller-manager@kubernetes \
  --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

# scheduler
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.69:8443 \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=/etc/kubernetes/pki/scheduler.pem \
  --client-key=/etc/kubernetes/pki/scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config set-context system:kube-scheduler@kubernetes \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

kubectl config use-context system:kube-scheduler@kubernetes \
  --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

# admin
cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.69:8443 \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-credentials kubernetes-admin \
  --client-certificate=/etc/kubernetes/pki/admin.pem \
  --client-key=/etc/kubernetes/pki/admin-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config set-context kubernetes-admin@kubernetes \
  --cluster=kubernetes \
  --user=kubernetes-admin \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig

kubectl config use-context kubernetes-admin@kubernetes --kubeconfig=/etc/kubernetes/admin.kubeconfig

10. Generate the kube-proxy certificate

cfssl gencert \
  -ca=/etc/kubernetes/pki/ca.pem \
  -ca-key=/etc/kubernetes/pki/ca-key.pem \
  -config=ca-config.json \
  -profile=kubernetes \
  kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.69:8443 \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config set-context kube-proxy@kubernetes \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

kubectl config use-context kube-proxy@kubernetes --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

11. Create the ServiceAccount key pair

openssl genrsa -out /etc/kubernetes/pki/sa.key 2048
openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

12. Check the certificates

ls /etc/kubernetes/pki/
# 26 files in total
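To spot-check that a certificate was issued with the intended validity (the CSRs above request 876000h, roughly 100 years), you can inspect any of them with openssl, for example:

openssl x509 -in /etc/kubernetes/pki/apiserver.pem -noout -subject -dates
# notBefore/notAfter show the validity window; -subject shows the CN/O used for RBAC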

13. Copy the certificates to the other nodes

cd /etc/kubernetes/
for NODE in k8s-node01 k8s-node02; do
  ssh $NODE mkdir -p /etc/kubernetes/pki
  for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do
    scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}
  done
done
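The ssh/scp loops in this guide assume passwordless root SSH from the master to the nodes. If that is not already set up, a minimal sketch (run once on k8s-master01; ssh-copy-id prompts for each node's password):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for NODE in k8s-node01 k8s-node02; do ssh-copy-id root@$NODE; done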

Install etcd

1. Download

GitHub binary releases: https://github.com/etcd-io/etcd/releases

https://github.com/etcd-io/etcd/releases/download/v3.4.22/etcd-v3.4.22-linux-arm64.tar.gz

2. Extract

tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/

3. Check the version

etcdctl version

4. Configure etcd

# To use IPv6, just replace the IPv4 addresses with IPv6 addresses
cat > /etc/etcd/etcd.config.yml << EOF
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.100.21:2380'
listen-client-urls: 'https://192.168.100.21:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.100.21:2380'
advertise-client-urls: 'https://192.168.100.21:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.100.21:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

5. Create the systemd service

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service
EOF

6. Link the certificates and start etcd

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd
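Once etcd is up, it is worth checking it over TLS before moving on. A sketch using the certificates generated earlier (should report the endpoint as healthy):

export ETCDCTL_API=3
etcdctl --endpoints=https://192.168.100.21:2379 \
  --cacert=/etc/etcd/ssl/etcd-ca.pem \
  --cert=/etc/etcd/ssl/etcd.pem \
  --key=/etc/etcd/ssl/etcd-key.pem \
  endpoint health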

Install containerd

1. Download the etcdctl binary package

GitHub binary releases: https://github.com/etcd-io/etcd/releases

wget https://github.com/etcd-io/etcd/releases/download/v3.4.22/etcd-v3.4.22-linux-arm64.tar.gz

2. Download the containerd binary package

GitHub download: https://github.com/containerd/containerd/releases

wget https://github.com/containerd/containerd/releases/download/v1.7.0-beta.0/containerd-1.7.0-beta.0-linux-arm64.tar.gz

3. Download the containerd packages that bundle the CNI plugins.

wget https://github.com/containerd/containerd/releases/download/v1.7.0-beta.0/cri-containerd-1.7.0-beta.0-linux-arm64.tar.gz
wget https://github.com/containerd/containerd/releases/download/v1.7.0-beta.0/cri-containerd-cni-1.7.0-beta.0-linux-arm64.tar.gz

4. Download the CNI plugins

GitHub download: https://github.com/containernetworking/plugins/releases

wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz

5. Download the crictl client binary

GitHub download: https://github.com/kubernetes-sigs/cri-tools/releases

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.25.0/crictl-v1.25.0-linux-arm64.tar.gz

6. Extract to the target directories

# Create the directories the CNI plugins need
mkdir -p /etc/cni/net.d /opt/cni/bin

# Extract the CNI binary packages
tar xf cni-plugins-linux-arm64-v1.1.1.tgz -C /opt/cni/bin/
tar -xzf cri-containerd-cni-1.7.0-beta.0-linux-arm64.tar.gz -C /

7. Create the service unit file

cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

8. Configure the kernel modules containerd needs

cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

9. Load the modules

systemctl restart systemd-modules-load.service
systemctl status systemd-modules-load.service

10. Configure the kernel parameters containerd needs

cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the settings
sysctl --system

11. Create the containerd configuration file

# Create the default configuration file
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml

# Adjust the containerd configuration
sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep SystemdCgroup

sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
cat /etc/containerd/config.toml | grep sandbox_image

12. Start containerd and enable it at boot

systemctl daemon-reload
systemctl enable --now containerd

13. Point the crictl client at the containerd runtime

# Extract
tar xf crictl-v1.25.0-linux-arm64.tar.gz -C /usr/bin/

# Generate the configuration file
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

# Test
systemctl restart containerd
crictl info

Install Kubernetes

1. Download Kubernetes

GitHub binary releases: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.25.md

wget https://dl.k8s.io/v1.25.3/kubernetes-server-linux-arm64.tar.gz

2. Extract the archive

tar -xf kubernetes-server-linux-arm64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

3. Check the version

kubelet --version

4. Create the required directories

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

5. Create the kube-apiserver service

This is only needed on the master node.

cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2 \\
      --logtostderr=true \\
      --allow-privileged=true \\
      --bind-address=0.0.0.0 \\
      --secure-port=6443 \\
      --advertise-address=192.168.100.21 \\
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\
      --service-node-port-range=30000-32767 \\
      --etcd-servers=https://192.168.100.21:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
      --authorization-mode=Node,RBAC \\
      --enable-bootstrap-token-auth=true \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
      --requestheader-allowed-names=aggregator \\
      --requestheader-group-headers=X-Remote-Group \\
      --requestheader-extra-headers-prefix=X-Remote-Extra- \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/etc/kubernetes/token.csv

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

Start kube-apiserver

systemctl daemon-reload && systemctl enable --now kube-apiserver

# Check that it started correctly
systemctl status kube-apiserver

6. Create the kube-controller-manager service

# Configure on all master nodes; the configuration is identical
# 172.16.0.0/12 is the pod CIDR; change it to your own network as needed
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
      --v=2 \\
      --logtostderr=true \\
      --bind-address=127.0.0.1 \\
      --root-ca-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
      --leader-elect=true \\
      --use-service-account-credentials=true \\
      --node-monitor-grace-period=40s \\
      --node-monitor-period=5s \\
      --pod-eviction-timeout=2m0s \\
      --controllers=*,bootstrapsigner,tokencleaner \\
      --allocate-node-cidrs=true \\
      --service-cluster-ip-range=10.96.0.0/12,fd00::/108 \\
      --cluster-cidr=172.16.0.0/12,fc00::/48 \\
      --node-cidr-mask-size-ipv4=24 \\
      --node-cidr-mask-size-ipv6=64 \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem
      # --feature-gates=IPv6DualStack=true

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

Start the service

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager

7. Create the kube-scheduler service

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
      --v=2 \\
      --logtostderr=true \\
      --bind-address=127.0.0.1 \\
      --leader-elect=true \\
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

Start the service

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler

8. Configure TLS bootstrapping

Configure this on the master node.

cd bootstrap

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/pki/ca.pem \
  --embed-certs=true \
  --server=https://192.168.1.69:8443 \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-credentials tls-bootstrap-token-user \
  --token=c8ad9c.2e4d610cf3e7426e \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config set-context tls-bootstrap-token-user@kubernetes \
  --cluster=kubernetes \
  --user=tls-bootstrap-token-user \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

kubectl config use-context tls-bootstrap-token-user@kubernetes \
  --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig

# The token is defined in bootstrap.secret.yaml; if you want a different one, change it there too
mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config

Check the cluster status

kubectl get cs

Create the bootstrap token and RBAC objects

kubectl create -f bootstrap.secret.yaml

9. Create the kubelet service

Create the service

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
# Configure the kubelet service on all k8s nodes
cat > /usr/lib/systemd/system/kubelet.service << EOF

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig \\
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
    --config=/etc/kubernetes/kubelet-conf.yml \\
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock \\
    --node-labels=node.kubernetes.io/node=
    # --feature-gates=IPv6DualStack=true
    # --container-runtime=remote
    # --runtime-request-timeout=15m
    # --cgroup-driver=systemd

[Install]
WantedBy=multi-user.target
EOF

Create the kubelet configuration file

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

Start kubelet

systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet

Check the node status

kubectl  get node

10. Create the kube-proxy service

Send the kubeconfig to the other nodes

for NODE in k8s-node01 k8s-node02; do
  scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig
done

Create the service unit file

cat > /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

Create the configuration file

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12,fc00::/48
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
EOF

Start the service

systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable --now kube-proxy
systemctl status kube-proxy
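Since kube-proxy is configured with mode: "ipvs", you can confirm that IPVS virtual servers are actually being programmed once the service is running (ipvsadm was installed earlier); something along these lines:

ipvsadm -Ln
# the service cluster IPs should appear as virtual servers, e.g. 10.96.0.1:443 pointing at the apiserver endpoint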

Install Calico

11.1 Fetch the manifest

curl https://projectcalico.docs.tigera.io/manifests/calico-typha.yaml -o calico.yaml

11.2 Apply it

kubectl apply -f calico.yaml

11.3 Check the status

kubectl  get pod -A

Install CoreDNS

Run this only on the master node.

Get the manifest

https://github.com/coredns/deployment/[1]

https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed

Adjust the configuration

cd coredns/
cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10
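Note that the upstream file linked above is actually a template (coredns.yaml.sed) with placeholders for the cluster DNS IP and domain. One possible way to render it is the deploy.sh script from the same coredns/deployment repository; a hedged sketch, assuming jq (installed earlier) and kubectl are available:

git clone https://github.com/coredns/deployment.git
cd deployment/kubernetes
./deploy.sh -i 10.96.0.10 -d cluster.local > coredns.yaml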

Install CoreDNS

kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

Install Metrics Server

Install

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

https://github.com/kubernetes-sigs/metrics-server

Check the status

kubectl  top node

Install kubectl command completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

Verify that the cluster works

Deploy a test pod

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox:1.28
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

# Check it
kubectl get pod
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          17s

Resolve the kubernetes service in the default namespace from the pod

kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   17h

kubectl exec busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

Test cross-namespace resolution

kubectl exec busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Every node must be able to reach the kubernetes service on port 443 and the kube-dns service on port 53

telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

curl 10.96.0.10:53
curl: (52) Empty reply from server

Pods must be able to reach each other

kubectl get po -owide
NAME      READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
busybox   1/1     Running   0          17m   172.27.14.193   k8s-node02   <none>           <none>

kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS        AGE   IP               NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-59697b644f-zsj62   1/1     Running   2 (3h37m ago)   20h   172.17.58.194    k8s-node02     <none>           <none>
calico-node-8pn4f                          0/1     Running   2 (23s ago)     30h   192.168.100.21   k8s-master01   <none>           <none>
calico-node-l4fkz                          1/1     Running   7 (3h37m ago)   30h   192.168.100.23   k8s-node02     <none>           <none>
calico-node-mg92w                          1/1     Running   7 (3h37m ago)   30h   192.168.100.22   k8s-node01     <none>           <none>
calico-typha-6944f58589-qkm94              1/1     Running   1 (3h37m ago)   20h   192.168.100.22   k8s-node01     <none>           <none>
coredns-6795856f79-p7jg9                   1/1     Running   1 (6h2m ago)    11h   172.17.32.139    k8s-master01   <none>           <none>
metrics-server-c7d4c4dd5-skch4             1/1     Running   1 (6h2m ago)    18h   172.17.32.138    k8s-master01   <none>           <none>

# Exec into busybox and ping a pod on another node
kubectl exec -it busybox -- sh
/ # ping 192.168.100.22
PING 192.168.100.22 (192.168.100.22): 56 data bytes
64 bytes from 192.168.100.22: seq=0 ttl=63 time=0.358 ms
64 bytes from 192.168.100.22: seq=1 ttl=63 time=0.668 ms

# Successful replies show that the pod can communicate across namespaces and across hosts

Create three replicas and observe that they are spread across different nodes

cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
EOF

kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

kubectl get pod
NAME                               READY   STATUS    RESTARTS   AGE
busybox                            1/1     Running   0          6m25s
nginx-deployment-9456bbbf9-4bmvk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-9rcdk   1/1     Running   0          8s
nginx-deployment-9456bbbf9-dqv8s   1/1     Running   0          8s

# Delete nginx
kubectl delete -f deployments.yaml

Install the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Save the following as dashboard-user.yml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Create the ServiceAccount

kubectl apply -f dashboard-user.yml

Change the dashboard service type to NodePort

kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
  type: NodePort
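If you prefer a non-interactive change instead of kubectl edit, the same result can be achieved with a patch, roughly:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'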

Check the port number

kubectl get svc kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.108.37.26   <none>        443:30647/TCP   11h

Create a login token

kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IlVfX0ZyLVE5UnlMYk94QU9YZy1tdTJlbDNlVkNNa2hPQm9KcndlR1pSc3cifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjY3NjEyNjAzLCJpYXQiOjE2Njc2MDkwMDMsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMjY1YjQ4NDYtNzg2Ni00OWYxLWFkMmYtNWE0NGU3MDkxNDgzIn19LCJuYmYiOjE2Njc2MDkwMDMsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.Y-DG1hpb9mk06nhm6ZnIhFaPBj6AAnlzbg4ngPSiWfEpBOj4c_TVpBS7a9eJqDVisWczHerT5K_2cgzmIeLxUdDffIyU8UcijlSM8Df3PQMMTvbMCCpFZC8x9l6T7rRIhI8-xGL5eFBqt6YRf2xRTBKgpRsdqetX5_zZ7552wv5GhDnRXYo1BJ6IQuxWUi39du3mZdPJJSyvKZXTqx8GfEBX01zprNTIJ4DXUQPM7z4cqiKIFkt32KhyTK55GhuIyh9y3cRBtHVhcybRPQ27KnFTg1n4joGdrJP7Q7fxJsiKFmJFbR2K_ljqGQpss-v3QhJrI5SeowAaqebhCumfrg

Access the host on the NodePort shown above

https://192.168.100.21:30647

Install ingress-nginx

https://github.com/kubernetes/ingress-nginx

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.4.0/deploy/static/provider/cloud/deploy.yaml

Local verification

kubectl get service ingress-nginx-controller --namespace=ingress-nginx

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s

kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo

kubectl create ingress demo-localhost --class=nginx \
  --rule="demo.localdev.me/*=demo:80"

kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80

curl http://demo.localdev.me:8080

External verification

# Point DNS at the cluster IP
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.98.13.147   <pending>     80:32522/TCP,443:31226/TCP   71m

# /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.98.13.147 www.demo.io
192.168.100.21 k8s-master01
192.168.100.22 k8s-node01
192.168.100.23 k8s-node02

kubectl create ingress demo --class=nginx \
  --rule="www.demo.io/*=demo:80"
curl http://www.demo.io/

Install the Helm package manager

The package manager for Kubernetes.

Pick the build for your platform and download it from: https://github.com/helm/helm/releases

wget https://get.helm.sh/helm-v3.10.1-linux-arm64.tar.gz
tar -zxvf helm-v3.10.1-linux-arm64.tar.gz
mv linux-arm64/helm /usr/local/bin/helm

# Check the version
helm version
version.BuildInfo{Version:"v3.10.1", GitCommit:"9f88ccb6aee40b9a0535fcc7efea6055e1ef72c9", GitTreeState:"clean", GoVersion:"go1.18.7"}

Usage guide: https://helm.sh/docs/intro/using_helm/

Install Harbor

An enterprise-grade registry server for storing images.
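The Harbor docs cover several install paths; since Helm was installed above, one possible sketch is deploying Harbor from its official chart. The values below (NodePort exposure, disabled TLS, external URL) are assumptions to adapt to your own environment:

helm repo add harbor https://helm.goharbor.io
helm repo update
helm install harbor harbor/harbor \
  --namespace harbor --create-namespace \
  --set expose.type=nodePort \
  --set expose.tls.enabled=false \
  --set externalURL=http://192.168.100.21:30002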

Troubleshooting

1. Images cannot be pulled, or no image exists for the required platform

In this case you can build the image yourself from source and package it with Docker, or search Docker Hub to see whether someone has already published a build for your platform.
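For example, if you have the Dockerfile, an arm64 image can be cross-built with Docker Buildx on another machine and pushed to your own registry; a rough sketch (the image name and registry below are placeholders):

docker buildx create --use
docker buildx build --platform linux/arm64 \
  -t registry.example.com/myorg/myapp:arm64 --push .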

2.exec /xxx: exec format error

This usually means you pulled an image built for the wrong platform; pull the one matching your platform, in my case linux/arm64.
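With containerd you can request a specific platform explicitly when pulling, which makes a mismatch obvious; for example:

ctr -n k8s.io images pull --platform linux/arm64 docker.io/library/httpd:latest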

3. Exporting an image from Docker into containerd

docker save httpd > httpd.tar
ctr -n=k8s.io image import httpd.tar

4. crictl cannot find the imported image

containerd has the concept of namespaces; an image must be imported into the k8s.io namespace before Kubernetes can use it.
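You can confirm which namespace the image landed in, and that crictl (i.e. the kubelet's view) can see it, with something like:

ctr -n k8s.io images ls | grep httpd
crictl images | grep httpd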

5. Ingress admission webhook certificate expired

error: failed to create ingress: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": failed to call webhook: Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1/ingresses?timeout=10s": x509: certificate has expired or is not yet valid: current time 2022-11-04T23:34:29-04:00 is before 2022-11-05T10:40:35Z
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

Resources

helm:https://helm.sh/docs[2] 

harbor:https://goharbor.io/docs[3]

References

[1] https://github.com/coredns/deployment/: https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed
[2] https://helm.sh/docs: https://helm.sh/docs/intro/install/
[3] https://goharbor.io/docs: https://goharbor.io/docs/2.6.0/administration/

 
