Deploying a Kubernetes Cluster Automatically with Ansible

1. Machine Environment

1.1. Machine Information

Hostname           OS Version     IP Address   CPU      Memory   Disk
master.k8s.local   Ubuntu 22.04   10.22.4.11   2 cores  6 GB     80 GB
node01.k8s.local   Ubuntu 22.04   10.22.4.12   2 cores  6 GB     80 GB
node02.k8s.local   Ubuntu 22.04   10.22.4.13   2 cores  6 GB     80 GB

1.2. IP Address Planning

Calico is used as the CNI network plugin. Three address ranges are involved: the node network listed in 1.1, plus the pod and cluster (Service) ranges below.

Network            CIDR            Notes
Pod IP range       10.244.0.0/16   Calico pod network
Cluster IP range   10.96.0.0/16    Kubernetes Service network
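
For orientation, these two ranges map directly onto the flags that the playbook in section 3 passes to kubeadm when it initializes the control plane. Shown here only as a preview of what the playbook runs, not as a command to execute by hand:

kubeadm init \
  --kubernetes-version=1.28.5 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/16 \
  --image-repository=registry.aliyuncs.com/google_containers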

1.3. Kubernetes Installation Details

The installed Kubernetes version is 1.28.5, the Calico version is 3.26.4, and the container runtime is containerd.

If you need a different Kubernetes version, modify the playbook below as follows:

  • Change the version embedded in the Kubernetes apt repository URL
  • Change the version variables defined in the master and worker installation plays

Similarly, if you want to use a different CNI plugin or a different Calico release, modify the network-plugin part of the playbook.
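
For reference, these versions are pinned as play-level variables near the top of the master play in install-kubernetes.yml (see section 3.1); switching to another release means editing values like the ones below. Note that the Kubernetes apt repository URL also embeds the minor version (.../stable/v1.28/deb/), so it has to be updated as well for a different minor release:

  vars:
    kubernetes_version: "1.28.5"   # apt packages are pinned as <pkg>={{ kubernetes_version }}-1.1
    calico_version: "v3.26.4"      # selects which Calico manifests are downloaded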

2. Installing and Configuring Ansible

2.1. Installing Ansible

  • Install Ansible
apt update && apt install ansible -y
  • Create the Ansible configuration directory and inventory file
mkdir -p /etc/ansible && touch /etc/ansible/hosts
  • Configure the /etc/ansible/hosts inventory

[master]
10.22.4.11

[worker]
10.22.4.12
10.22.4.13
  • Generate an SSH key for passwordless login; press Enter at each prompt and do not set a passphrase
ssh-keygen -t rsa
  • Copy the public key to every node
ssh-copy-id root@10.22.4.11
ssh-copy-id root@10.22.4.12
ssh-copy-id root@10.22.4.13
  • Configure /etc/hosts
cat >> /etc/hosts <<EOF
10.22.4.11 master master.k8s.local
10.22.4.12 worker01 node01.k8s.local
10.22.4.13 worker02 node02.k8s.local
EOF
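  • Optionally verify connectivity right away with an ad-hoc ping against the default inventory (a quick extra check, not one of the original steps)
ansible all -m ping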

2.2. Testing Ansible Connectivity

  • Write a test playbook
cat >test_nodes.yml <<EOF
---
- name: test nodes
  hosts:
    - master
    - worker
  tasks:
    - name: Ping nodes
      ping:
EOF
  • Run the test playbook
root@master:~/ansible# ansible-playbook test_nodes.yml 

PLAY [test nodes] *************************************************************************************************************

TASK [Gathering Facts] ********************************************************************************************************
ok: [10.22.4.13]
ok: [10.22.4.12]
ok: [10.22.4.11]

TASK [Ping nodes] *************************************************************************************************************
ok: [10.22.4.13]
ok: [10.22.4.12]
ok: [10.22.4.11]

PLAY RECAP ********************************************************************************************************************
10.22.4.11                 : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.22.4.12                 : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
10.22.4.13                 : ok=2    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

root@master:~/ansible# 

3. Writing the Kubernetes Playbook

3.1. The Kubernetes Installation Playbook

  • The contents of install-kubernetes.yml are as follows
---
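# Play 1: common preparation on every node (master and workers): disable swap
# and UFW, load the required kernel modules, configure the Aliyun apt mirrors,
# and install and configure containerd as the container runtime.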
- name: Perform basic config
  hosts:
    - master
    - worker
  become: yes
  tasks:
    - name: Check if fstab contains swap
      shell: grep -q "swap" /etc/fstab
      register: fstab_contains_swap
      changed_when: false
      failed_when: fstab_contains_swap.rc not in [0, 1]

    - name: Temporarily disable swap
      command: swapoff -a
      when: fstab_contains_swap.rc == 0

    - name: Permanently disable swap
      shell: sed -i 's/.*swap.*/#&/g' /etc/fstab
      when: fstab_contains_swap.rc == 0

    - name: Disable Swap unit-files
      shell: |
        swap_units=$(systemctl list-unit-files --type=swap | grep swap | awk '{print $1}')
        for unit in $swap_units; do
          systemctl mask $unit
        done

    - name: Stop UFW service
      service:
        name: ufw
        state: stopped

    - name: Disable UFW at boot
      service:
        name: ufw
        enabled: no

    - name: Set timezone
      shell: TZ='Asia/Shanghai'; export TZ

    - name: Set timezone permanently
      shell: |
        cat >> /etc/profile << EOF
        TZ='Asia/Shanghai'; export TZ
        EOF

    - name: Create .hushlogin file in $HOME
      file:
        path: "{{ ansible_env.HOME }}/.hushlogin"
        state: touch

    - name: Install required packages
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg
          - lsb-release

    - name: Add Aliyun Docker GPG key
      shell: curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -

    - name: Add Aliyun Docker repository
      shell: echo "deb [arch=amd64 signed-by=/etc/apt/trusted.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker-ce.list

    - name: Add Aliyun Kubernetes GPG key
      shell: mkdir -p /etc/apt/keyrings && curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

    - name: Add Aliyun Kubernetes repository
      shell: echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.28/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list

    - name: Set apt sources to use Aliyun mirrors
      shell: sed -i 's#cn.archive.ubuntu.com#mirrors.aliyun.com#g' /etc/apt/sources.list

    - name: Update apt cache
      apt:
        update_cache: yes

    - name: Load br_netfilter on start
      shell: echo "modprobe br_netfilter" >> /etc/profile

    - name: Load br_netfilter
      shell: modprobe br_netfilter

    - name: Update sysctl settings
      sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        state: present
        reload: yes
      with_items:
        - { name: "net.bridge.bridge-nf-call-iptables", value: "1" }
        - { name: "net.bridge.bridge-nf-call-ip6tables", value: "1" }
        - { name: "net.ipv4.ip_forward", value: "1" }
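
    # The tasks below install ipset/ipvsadm and persist the IPVS and netfilter
    # kernel modules via /etc/modules-load.d/ipvs.modules so they are loaded
    # again after a reboot.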

    - name: Install IPVS
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - ipset
          - ipvsadm

    - name: Write ipvs.modules file
      copy:
        dest: /etc/modules-load.d/ipvs.modules
        mode: 0755
        content: |
          #!/bin/bash
          modprobe -- ip_vs
          modprobe -- ip_vs_rr
          modprobe -- ip_vs_wrr
          modprobe -- ip_vs_sh
          modprobe -- nf_conntrack
          modprobe -- overlay
          modprobe -- br_netfilter

    - name: Execute ipvs.modules script
      shell: sh /etc/modules-load.d/ipvs.modules

    - name: Install Containerd
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - containerd.io

    - name: Generate default containerd file
      shell: containerd config default > /etc/containerd/config.toml

    - name: Config sandbox image
      shell: sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#g' /etc/containerd/config.toml

    - name: Modify Systemd Cgroup
      shell: sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml

    - name: Restart Containerd
      shell: systemctl restart containerd

    - name: Systemctl enable containerd
      shell: systemctl enable containerd
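
# Play 2: control-plane setup on the master node: install kubelet, kubeadm and
# kubectl pinned to kubernetes_version, run kubeadm init, save the generated
# join command to /root/kubeadm_join_master.sh, and deploy Calico through the
# tigera-operator manifests.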

- name: Install Kubernetes Master
  hosts: master
  become: yes
  vars:
    kubernetes_version: "1.28.5"
    pod_network_cidr: "10.244.0.0/16"
    service_cidr: "10.96.0.0/16"
    image_repository: "registry.aliyuncs.com/google_containers"
    calico_version: "v3.26.4"
  tasks:
    - name: Install Master kubernetes packages
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - kubelet={{ kubernetes_version }}-1.1
          - kubeadm={{ kubernetes_version }}-1.1
          - kubectl={{ kubernetes_version }}-1.1

    - name: Initialize Kubernetes Master
      command: kubeadm init --kubernetes-version={{ kubernetes_version }} --pod-network-cidr={{ pod_network_cidr }} --service-cidr={{ service_cidr }} --image-repository={{ image_repository }}
      register: kubeadm_output
      changed_when: "'kubeadm join' in kubeadm_output.stdout"

    - name: Save join command
      copy:
        content: |
          {{ kubeadm_output.stdout_lines[-2] }}
          {{ kubeadm_output.stdout_lines[-1] }}
        dest: /root/kubeadm_join_master.sh
      when: kubeadm_output.changed

    - name: Strip quotes from the join script
      shell: sed -i 's/"//g' /root/kubeadm_join_master.sh

    - name: copy kubernetes config
      shell: mkdir -p {{ ansible_env.HOME }}/.kube && cp -i /etc/kubernetes/admin.conf {{ ansible_env.HOME }}/.kube/config

    - name: enable kubelet service
      command: systemctl enable kubelet

    - name: Create calico directory
      file:
        path: "{{ ansible_env.HOME }}/calico/{{ calico_version }}"
        state: directory

    - name: download calico tigera-operator.yaml
      command: wget https://ghproxy.net/https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/tigera-operator.yaml -O {{ ansible_env.HOME }}/calico/{{ calico_version }}/tigera-operator.yaml

    - name: download calico custom-resources.yaml
      command: wget https://ghproxy.net/https://raw.githubusercontent.com/projectcalico/calico/{{ calico_version }}/manifests/custom-resources.yaml -O {{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml

    - name: set calico ip pool block size
      replace:
        path: "{{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml"
        regexp: "blockSize: 26"
        replace: "blockSize: 24"

    - name: set calico ip pools
      replace:
        path: "{{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml"
        regexp: "cidr: 192.168.0.0/16"
        replace: "cidr: {{ pod_network_cidr }}"

    - name: apply calico tigera-operator.yaml
      command: kubectl create -f {{ ansible_env.HOME }}/calico/{{ calico_version }}/tigera-operator.yaml

    - name: apply calico custom-resources.yaml
      command: kubectl create -f {{ ansible_env.HOME }}/calico/{{ calico_version }}/custom-resources.yaml

    - name: set crictl config
      command: crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
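
# Play 3: worker setup: install kubelet and kubeadm, copy the join script saved
# on the control node, and join each worker to the cluster.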

- name: Install Kubernetes worker
  hosts: worker
  become: yes
  vars:
    kubernetes_version: "1.28.5"
  tasks:
    - name: Install worker kubernetes packages
      apt:
        name: "{{ packages }}"
        state: present
      vars:
        packages:
          - kubelet={{ kubernetes_version }}-1.1
          - kubeadm={{ kubernetes_version }}-1.1

    - name: copy kubeadm join script to workers
      copy:
        src: /root/kubeadm_join_master.sh
        dest: /root/kubeadm_join_master.sh
        mode: 0755

    - name: join worker to the cluster
      command: sh /root/kubeadm_join_master.sh

    - name: set crictl config
      command: crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock

    - name: enable kubelet service
      command: systemctl enable kubelet
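
  • Optionally validate the playbook before running it against the cluster (an extra sanity step, not part of the original post)
ansible-playbook --syntax-check install-kubernetes.yml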

4. Running the Kubernetes Playbook

  • Run the playbook
root@master:~/kubernetes# ansible-playbook  install-kubernetes.yml 
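  • A run can also be restricted to a subset of hosts with ansible-playbook's standard --limit option, for example to target only the workers (an optional convenience, not part of the original workflow)
ansible-playbook install-kubernetes.yml --limit worker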
  • Cluster node status
root@master:~# kubectl get node -o wide
NAME               STATUS   ROLES           AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
master.k8s.local   Ready    control-plane   19m   v1.28.5   10.22.4.11    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   containerd://1.6.27
node01.k8s.local   Ready    <none>          18m   v1.28.5   10.22.4.12    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   containerd://1.6.27
node02.k8s.local   Ready    <none>          18m   v1.28.5   10.22.4.13    <none>        Ubuntu 22.04.3 LTS   5.15.0-91-generic   containerd://1.6.27
root@master:~# 
  • Cluster Pod status
root@master:~# kubectl get pod -A
NAMESPACE          NAME                                       READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-b897f94cd-4xz87           1/1     Running   0          16m
calico-apiserver   calico-apiserver-b897f94cd-7zt28           1/1     Running   0          16m
calico-system      calico-kube-controllers-57474df497-jgkmt   1/1     Running   0          19m
calico-system      calico-node-mxmq6                          1/1     Running   0          19m
calico-system      calico-node-nqdkn                          1/1     Running   0          19m
calico-system      calico-node-wd5fm                          1/1     Running   0          19m
calico-system      calico-typha-79b8c6d4fd-tjdvm              1/1     Running   0          19m
calico-system      calico-typha-79b8c6d4fd-xddmp              1/1     Running   0          19m
calico-system      csi-node-driver-gxg2g                      2/2     Running   0          19m
calico-system      csi-node-driver-kpdxn                      2/2     Running   0          19m
calico-system      csi-node-driver-ttng2                      2/2     Running   0          19m
kube-system        coredns-66f779496c-lp8hd                   1/1     Running   0          19m
kube-system        coredns-66f779496c-qxcz5                   1/1     Running   0          19m
kube-system        etcd-master.k8s.local                      1/1     Running   3          19m
kube-system        kube-apiserver-master.k8s.local            1/1     Running   3          19m
kube-system        kube-controller-manager-master.k8s.local   1/1     Running   3          19m
kube-system        kube-proxy-7d9z4                           1/1     Running   0          19m
kube-system        kube-proxy-8gqbc                           1/1     Running   0          19m
kube-system        kube-proxy-grkdb                           1/1     Running   0          19m
kube-system        kube-scheduler-master.k8s.local            1/1     Running   3          19m
tigera-operator    tigera-operator-7f8cd97876-dg55s           1/1     Running   0          19m
root@master:~# 
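  • Optional smoke test (not in the original post): a throwaway Deployment confirms that scheduling and pod networking work; the nginx-test name is arbitrary
kubectl create deployment nginx-test --image=nginx
kubectl get pod -o wide
kubectl delete deployment nginx-test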