Kubernetes Installation with kubeadm on CentOS 7


1. Preparation

1.1 Environment Preparation

  • Resources

    1 master + 2 worker nodes

    4 CPU / 4 GB RAM each

    CentOS 7.2

    kernel 5.13.0-1.el7.elrepo.x86_64

  • Environment setup

    - Upgrade the kernel
    rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
    rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
    # List the latest kernels available: yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
    # Install the latest version:
    yum --enablerepo=elrepo-kernel install kernel-ml kernel-ml-devel -y
    # List the kernel versions currently available to boot:
    awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
    # Select the new kernel; 0 is the left-hand index from the list above
    grub2-set-default 0
    # Regenerate the grub config
    grub2-mkconfig -o /boot/grub2/grub.cfg
    # Reboot
    reboot
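Once the node is back up, it is worth confirming that the machine really booted the new kernel. A minimal sketch (the 4.19 minimum used here is an assumption for illustration, not from this article; adjust it to your own target):

```shell
#!/bin/sh
# Verify (after the reboot) that the running kernel meets a minimum version.
# version_ge succeeds when $1 >= $2, using sort -V for version-aware compare.
version_ge() {
  [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="4.19"                      # assumed minimum; adjust to your target
current="$(uname -r | cut -d- -f1)"  # e.g. 5.13.0 from 5.13.0-1.el7.elrepo.x86_64
if version_ge "$current" "$required"; then
  echo "kernel $current OK"
else
  echo "kernel $current is older than $required" >&2
fi
```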
    
    - Time synchronization
    
    yum install -y chrony
    
    systemctl restart chronyd
    
    systemctl enable chronyd
    
    
    - Disable the firewall
    
    systemctl stop firewalld
    
    systemctl disable firewalld
    
    - Disable SELinux
    sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
    setenforce 0
    
    - Let layer-2 bridged traffic be filtered by iptables FORWARD rules
    
    Load the br_netfilter module (the bridge-nf sysctls do not exist without it), then create a new file under /etc/sysctl.d/:
    modprobe br_netfilter
    cat >/etc/sysctl.d/k8s.conf <<EOF
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF
    sysctl --system
    
    
    - Disable the swap partition
    # Permanently disable swap in fstab
    sed -ri 's/.*swap.*/#&/' /etc/fstab
    mount -a
    swapoff -a
    # Verify with free -m:
    # if Swap shows 0, it worked
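The `sed` line above comments out every fstab entry that mentions swap. A safe way to see exactly what it does is to run it against a throwaway copy first (the UUIDs below are made up for illustration):

```shell
#!/bin/sh
# Demonstrate the swap-disabling sed on a scratch copy of fstab.
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 /     xfs   defaults 0 0
UUID=ef56-7890 swap  swap  defaults 0 0
EOF

sed -ri 's/.*swap.*/#&/' /tmp/fstab.demo

cat /tmp/fstab.demo
# The root entry is untouched; the swap entry is now commented out:
# UUID=abcd-1234 /     xfs   defaults 0 0
# #UUID=ef56-7890 swap  swap  defaults 0 0
```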
    
    
    - Install Docker
    wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
    yum clean all
    yum makecache
    yum list | grep docker-ce
    yum install -y docker-ce
    # Check whether Docker's Cgroup Driver is systemd:
    docker info | grep "Cgroup"
    Cgroup Driver: systemd
    # If it is not, change the daemon's cgroup driver to systemd:
    vim /etc/docker/daemon.json
    
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
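Editing the file by hand works, but the same change can be scripted and sanity-checked before restarting Docker. A sketch, written to a scratch path so it is safe to try anywhere (on a real node the target is /etc/docker/daemon.json; using python3 for the JSON check is an assumption, any JSON validator will do):

```shell
#!/bin/sh
# Write the cgroup-driver setting and validate the JSON before touching docker.
conf=/tmp/daemon.json.demo   # use /etc/docker/daemon.json on a real node
cat > "$conf" <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Refuse to continue if the file is not valid JSON (a typo here would stop
# the docker daemon from starting at all).
python3 -m json.tool "$conf" >/dev/null && echo "daemon.json OK"
# then: systemctl daemon-reload && systemctl restart docker
```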
    
    
    Start Docker and enable it at boot (reload first so any daemon.json change takes effect):
    systemctl daemon-reload
    systemctl restart docker
    systemctl enable docker
    
    Verify the installation:
    docker version
    
    If the output shows both a Client and a Server section, Docker is installed and running.
    
    - Without an internal DNS server, hostname resolution must be added by hand to /etc/hosts on every node:
    cat /etc/hosts
    192.168.0.107 k8smasterserver-node107 kubeapi1.liaoxz.com
    192.168.0.108 k8sworkserver-node108
    192.168.0.110 k8sworkserver-node110
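Since these entries must be repeated on every node, a small idempotent helper avoids duplicate lines when the setup script is re-run. A sketch (the add_host helper and the scratch HOSTS_FILE path are illustrative, not from this article; point HOSTS_FILE at /etc/hosts on a real node):

```shell
#!/bin/sh
# Append a hosts entry only if the hostname is not already present.
HOSTS_FILE=/tmp/hosts.demo   # use /etc/hosts on a real node
add_host() {
  grep -qw "$2" "$HOSTS_FILE" 2>/dev/null || echo "$1 $2${3:+ $3}" >> "$HOSTS_FILE"
}

: > "$HOSTS_FILE"
add_host 192.168.0.107 k8smasterserver-node107 kubeapi1.liaoxz.com
add_host 192.168.0.108 k8sworkserver-node108
add_host 192.168.0.108 k8sworkserver-node108   # re-run: no duplicate added
cat "$HOSTS_FILE"
```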
    
    

2. Deploying Kubernetes

2.1 Master Deployment

We deploy with kubeadm, which runs the control-plane components as pods, so the master must run all of the following:

  • master

docker

kube-apiserver

kube-controller-manager

kube-scheduler

etcd

  • work node

kubelet

kube-proxy

docker

Key point: every node, control plane or worker, must have docker, kubelet, kubeadm, and kubectl installed.

  • Add the Aliyun Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

  • Install the packages
yum install -y kubelet kubeadm kubectl
kubelet and docker must both be running before initializing the Kubernetes master:
systemctl start kubelet
systemctl enable kubelet
systemctl restart docker
systemctl enable docker
  • Initialize the cluster's control plane (master node)

Nodes reach each other by hostname, so every node needs a resolvable name.

docker and kubelet are installed as host daemons; everything else runs as a pod:

etcd pod

kube-proxy pod

kube-controller-manager pod

kube-scheduler pod

kube-apiserver pod

flannel pod

# kubeadm init can only be run on the master node, to bootstrap the first control-plane node. Usage:

kubeadm init --help


2.2 Manually Deploying a K8s Master Node

  • By default kubeadm pulls its images from Google's registry, which is often unreachable; work around this by using the Aliyun mirror instead: registry.aliyuncs.com/google_containers
 kubeadm init \
   --apiserver-advertise-address=192.168.0.107 \
   --image-repository registry.aliyuncs.com/google_containers \
   --kubernetes-version v1.21.2 \
   --control-plane-endpoint kubeapi1.liaoxz.com \
   --pod-network-cidr=10.244.0.0/16

kubeadm init

--image-repository registry.aliyuncs.com/google_containers

--kubernetes-version (must match the installed kubeadm; check with rpm -q kubeadm; the format is e.g. v1.21.2)

--control-plane-endpoint kubeapi1.liaoxz.com (a stable hostname for the control plane; Kubernetes components talk to each other through internal DNS)

--apiserver-advertise-address 192.168.0.107 (the address the apiserver advertises)

--pod-network-cidr 10.244.0.0/16 (the address range pods are assigned from; must match the network plugin's configuration)
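These flags can be collected into variables so the same invocation is reusable across clusters. A sketch with the values from this article; the `echo` prints the command instead of running it (drop it on a real master), and the fallback version is only there so the sketch runs off-node:

```shell
#!/bin/sh
# Build the kubeadm init command from variables (values from this article).
API_ADDR=192.168.0.107
ENDPOINT=kubeapi1.liaoxz.com
POD_CIDR=10.244.0.0/16

# The version must match the installed kubeadm; fall back to the article's
# v1.21.2 when rpm/kubeadm are unavailable (e.g. trying this sketch off-node).
if command -v rpm >/dev/null 2>&1 && rpm -q kubeadm >/dev/null 2>&1; then
  K8S_VERSION="v$(rpm -q --qf '%{VERSION}' kubeadm)"
else
  K8S_VERSION=v1.21.2
fi

echo kubeadm init \
  --apiserver-advertise-address="$API_ADDR" \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version "$K8S_VERSION" \
  --control-plane-endpoint "$ENDPOINT" \
  --pod-network-cidr="$POD_CIDR"
```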

  • Troubleshooting

    If kubeadm reports that it cannot pull the coredns image, it is because the Aliyun mirror tags it coredns:1.8.0 while kubeadm expects coredns:v1.8.0. Pull the image manually, retag it, then reset and re-run the init:

    1. Pull the image
    docker pull registry.aliyuncs.com/google_containers/coredns:1.8.0
    2. Change the tag
    docker tag registry.aliyuncs.com/google_containers/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns:v1.8.0
    3. Reset the install
    kubeadm reset
    4. Install again
    kubeadm init \
      --apiserver-advertise-address=192.168.0.107 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.21.2 \
      --control-plane-endpoint kubeapi1.liaoxz.com \
      --pod-network-cidr=10.244.0.0/16
    
  • Troubleshooting

    If coredns stays stuck in ContainerCreating, inspect the pod events:

    [root@k8sworkserver-node110 ~]# kubectl describe po coredns-7f6cbbb7b8-8crvj -n kube-system
    Error message:
      Warning  FailedCreatePodSandBox  14m (x9352 over 3h14m)    kubelet  (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "d9961e746420f6146d18883a63e4790181045067a4cea431ce39bc7b30e2af15" network for pod "coredns-7f6cbbb7b8-8crvj": networkPlugin cni failed to set up pod "coredns-7f6cbbb7b8-8crvj_kube-system" network: failed to set bridge addr: "cni0" already has an IP address different from 10.244.0.1/24
      Normal   SandboxChanged          4m12s (x9879 over 3h14m)  kubelet  Pod sandbox changed, it will be killed and re-created.
    
    
    [root@k8sworkserver-node110 ~]# ifconfig
    cni0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
            inet 10.244.2.1  netmask 255.255.255.0  broadcast 10.244.2.255
            inet6 fe80::942a:15ff:fee3:ad8f  prefixlen 64  scopeid 0x20<link>
            ether 02:40:b2:5a:37:bb  txqueuelen 1000  (Ethernet)
            RX packets 47759  bytes 2508468 (2.3 MiB)
            RX errors 0  dropped 0  overruns 0  frame 0
            TX packets 21059  bytes 2313796 (2.2 MiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    Fix: bring the stale cni0 interface down and delete it; the pod will be recreated automatically:
    [root@k8sworkserver-node110 ~]# ifconfig cni0 down
    [root@k8sworkserver-node110 ~]# ip link delete cni0
    
    
    

After a successful install, kubeadm prints instructions for adding further master and worker nodes:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join kubeapi1.liaoxz.com:6443 --token l915od.tcnna68lbv6ynru9 \
	--discovery-token-ca-cert-hash sha256:5adddf86e4d4c757b391a9c7a5b6218356928bf4cce60abb99d7269145b1f740 \
	--control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kubeapi1.liaoxz.com:6443 --token l915od.tcnna68lbv6ynru9 \
	--discovery-token-ca-cert-hash sha256:5adddf86e4d4c757b391a9c7a5b6218356928bf4cce60abb99d7269145b1f740
  • Copy the admin kubeconfig into place

    mkdir -p /root/.kube && cp /etc/kubernetes/admin.conf /root/.kube/config
    
  • Download and install the flannel network plugin

    wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    kubectl apply -f kube-flannel.yml
    
  • Check node status

    kubectl get nodes
    NAME                      STATUS   ROLES                  AGE   VERSION
    k8smasterserver-node107   Ready    control-plane,master   20h   v1.21.2
    k8smasterserver-node108   Ready    <none>                 20h   v1.21.2
    k8sworkserver-node110     Ready    <none>                 18h   v1.21.2
    
  • List the running pods

    kubectl get pods -n kube-system -owide
    NAME                                              READY   STATUS    RESTARTS   AGE   IP              NODE                      NOMINATED NODE   READINESS GATES
    coredns-59d64cd4d4-4h5x2                          1/1     Running   0          20h   10.244.0.3      k8smasterserver-node107   <none>           <none>
    coredns-59d64cd4d4-jrljs                          1/1     Running   0          20h   10.244.0.2      k8smasterserver-node107   <none>           <none>
    etcd-k8smasterserver-node107                      1/1     Running   0          20h   192.168.0.107   k8smasterserver-node107   <none>           <none>
    kube-apiserver-k8smasterserver-node107            1/1     Running   0          20h   192.168.0.107   k8smasterserver-node107   <none>           <none>
    kube-controller-manager-k8smasterserver-node107   1/1     Running   0          20h   192.168.0.107   k8smasterserver-node107   <none>           <none>
    kube-proxy-9hj4n                                  1/1     Running   1          18h   192.168.0.110   k8sworkserver-node110     <none>           <none>
    kube-proxy-bzsrd                                  1/1     Running   0          20h   192.168.0.107   k8smasterserver-node107   <none>           <none>
    kube-proxy-lmrj4                                  1/1     Running   0          20h   192.168.0.108   k8smasterserver-node108   <none>           <none>
    kube-scheduler-k8smasterserver-node107            1/1     Running   0          20h   192.168.0.107   k8smasterserver-node107   <none>           <none>
    
  • List the namespaces

    [root@k8smasterserver-node107 ~]# kubectl get namespace
    NAME              STATUS   AGE
    default           Active   23h
    kube-node-lease   Active   23h
    kube-public       Active   23h
    kube-system       Active   23h
    
  • Add a worker node

    Print the join command on the master:

    kubeadm token create --print-join-command
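The `--discovery-token-ca-cert-hash` value in the join command is just the SHA-256 of the CA's public key, so it can also be recomputed by hand with the standard openssl pipeline. The sketch below runs it against a throwaway self-signed cert so it can be tried anywhere; on a real master, point it at /etc/kubernetes/pki/ca.crt instead:

```shell
#!/bin/sh
# Generate a throwaway CA cert purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-ca" -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null

# Standard formula for kubeadm's discovery hash: sha256 of the DER-encoded
# public key. Use /etc/kubernetes/pki/ca.crt on a real master.
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 | awk '{print $NF}')
echo "sha256:$hash"
```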
    

The worker nodes need the same environment preparation and the same components installed: kubelet, kubectl, kubeadm.

  • Troubleshooting

    [root@k8sworkserver-node108 ~]# kubeadm join kubeapi1.liaoxz.com:6443 --token 3xqb2q.xyblyyhpnbge4a9u --discovery-token-ca-cert-hash sha256:1155cdda27e91124955b6d95efe2ad78e80382540cff1a6bc5864e8ee08cfec3 --control-plane
    [preflight] Running pre-flight checks
    [preflight] Reading configuration from the cluster...
    [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
    error execution phase preflight: One or more conditions for hosting a new control plane instance is not satisfied.
    failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory
    Please ensure that:
    * The cluster has a stable controlPlaneEndpoint address.
    * The certificates that must be shared among control plane instances are provided.
    To see the stack trace of this error execute with --v=5 or higher
    
    Fix: reset, remove the old config, then remove and reinstall the kube packages:
    kubeadm reset
    rm -fr /etc/kubernetes/
    rpm -e --nodeps `rpm -qa|grep kube`
    yum install -y kubectl kubelet kubeadm
    systemctl restart kubelet
    
  • Run a pod

    ## 1. Create a deployment template for the pod
    kubectl create deployment nginx --image='docker.io/nginx:stable' --dry-run=client  ## test with a client-side dry run
    kubectl create deployment nginx --image='docker.io/nginx:stable'  ## create a deployment named nginx using the nginx image
    ## 2. Scale the deployment to 10 replicas
    kubectl scale deployment nginx --replicas=10
    ## scale up or down
    kubectl scale deployment/nginx --replicas=11
    ## 3. Check how the pods are running
    kubectl get pods -o wide
    [root@k8smasterserver-node107 ~]# kubectl get pods -o wide
    NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE                      NOMINATED NODE   READINESS GATES
    nginx-6d8f469586-7nx6b   1/1     Running   0          2m30s   10.244.1.17   k8smasterserver-node108   <none>           <none>
    nginx-6d8f469586-9cmb4   1/1     Running   0          2m30s   10.244.1.19   k8smasterserver-node108   <none>           <none>
    nginx-6d8f469586-cwb2t   1/1     Running   0          5m25s   10.244.2.22   k8sworkserver-node110     <none>           <none>
    ## 4. List the defined deployment templates
    kubectl get deployment
    [root@k8smasterserver-node107 ~]# kubectl get deployment
    NAME       READY   UP-TO-DATE   AVAILABLE   AGE
    nginx      10/10   10           10          8m48s
    webnginx   0/1     1            0           29m
    
  • Delete a pod

    Deleting a pod only sticks if you also delete its deployment: the deployment maintains the desired replica count, so Kubernetes immediately starts a replacement for any pod you delete.

    kubectl delete pods {podsname}
    
  • Application orchestration

    The deployment controller generates pods from a template and attaches labels to them. The main workload controllers are:

    deployment: manages stateless persistent applications, e.g. HTTP services.

    statefulset: manages stateful persistent applications, e.g. database servers.

    daemonset: ensures every node runs a replica of a given pod; used for kube-proxy, the flannel network plugin, log collectors, and similar agent-type workloads.

    job: manages workloads that run to completion and then terminate.

    kubectl create deployment {name} --image='imagename' --dry-run={none|client|server}
    kubectl create deployment nginx --image='docker.io/nginx:stable' --dry-run=client  ## test with a client-side dry run
    kubectl create deployment nginx --image='docker.io/nginx:stable'  ## once verified, drop the dry-run flag to create a deployment named nginx
    ## list the running deployments
    kubectl get deployment
    kubectl delete deployment {DeploymentName}
    
  • Create a service

    A service object is a logical group of pods, selected by a label selector. It accepts client requests on a ClusterIP address and service port, giving the workload a fixed access entry point with layer-4 proxying and load balancing. The built-in load balancing tracks pod health automatically, and as backend replicas are added or removed the service adjusts to match.

[root@k8smasterserver-node107 ~]# kubectl create service  --help
## Create a service using a specified subcommand.

Aliases:
service, svc

Available Commands:
  clusterip    Create a ClusterIP service.
  externalname Create an ExternalName service.
  loadbalancer Create a LoadBalancer service.
  nodeport     Create a NodePort service.

Usage:
  kubectl create service [flags] [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).

[root@k8smasterserver-node107 ~]# kubectl create service clusterip  webdaemon --tcp=80:80

kubectl get service  ## list the existing services
[root@k8smasterserver-node107 ~]# kubectl  get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   26h
nginx        ClusterIP   10.101.183.117   <none>        80/TCP    10m
webdaemon    ClusterIP   10.108.110.35    <none>        80/TCP    3m58s
## Test it:
[root@k8smasterserver-node107 ~]#  curl 10.108.110.35
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: webdaemon-6b4f6479d7-bqj9v, ServerIP: 10.244.1.25!
[root@k8smasterserver-node107 ~]#  curl 10.108.110.35
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: webdaemon-6b4f6479d7-5tr77, ServerIP: 10.244.2.28!
[root@k8smasterserver-node107 ~]#  curl 10.108.110.35
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: webdaemon-6b4f6479d7-tpqwv, ServerIP: 10.244.1.23!
[root@k8smasterserver-node107 ~]#  curl 10.108.110.35
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: webdaemon-6b4f6479d7-g76mv, ServerIP: 10.244.2.30!
[root@k8smasterserver-node107 ~]#  curl 10.108.110.35
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: webdaemon-6b4f6479d7-5tr77, ServerIP: 10.244.2.28!
[root@k8smasterserver-node107 ~]#

## Show the details of a deployment object
kubectl describe deployment/demoapp




  • kubectl commands
kubectl exec -it mypod -c demoapp -- /bin/bash  ## open a shell in the demoapp container of pod mypod
posted @ 2022-05-04 15:09  老实人张彡