GitLab CI/CD + Docker + Harbor + k8s: deploying a Spring project

Environment:

Public IP       Private IP       Spec                     Role
39.98.160.204   172.22.128.38    2C/8G (Alibaba Cloud)    k8s-master
101.42.166.142  10.0.8.11        8C/16G (Tencent Cloud)   k8s-node1, gitlab, gitlab-runner
39.98.49.122    172.19.194.168   2C/8G (Alibaba Cloud)    k8s-node2, harbor

Setting up Kubernetes v1.23.1

With limited resources, I could only build the k8s cluster on public-cloud hosts over their public IPs. In practice you would normally build it on a private network, which is also easier than the public-network setup below.

 # Open everything in the cloud hosts' security groups
 # Create a virtual NIC on each of the three hosts; set IPADDR to that host's
 # own public IP (39.98.160.204 / 101.42.166.142 / 39.98.49.122)
 cat > /etc/sysconfig/network-scripts/ifcfg-eth0:1 << EOF
 BOOTPROTO=static
 DEVICE=eth0:1
 IPADDR=<this host's public IP>
 PREFIX=32
 TYPE=Ethernet
 USERCTL=no
 ONBOOT=yes
 EOF
 # Restart the network after configuring
 service network restart
 # The following were already disabled on my hosts, hence commented out;
 # if yours aren't, disable them all.
 # Disable the firewall
 # systemctl stop firewalld && systemctl disable firewalld
 # Disable SELinux
 # setenforce 0 && getenforce
 # sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
 # Disable swap
 # swapoff -a
 # sed -ri 's/.*swap.*/#&/' /etc/fstab
 # Set each host's hostname (pick the matching one)
 hostnamectl set-hostname k8s-master/k8s-node1/k8s-node2
 # Add the host entries on all three hosts
 cat >> /etc/hosts << EOF
 39.98.160.204 k8s-master
 101.42.166.142 k8s-node1
 39.98.49.122 k8s-node2
 EOF
 # Kernel parameters for bridged traffic
 cat > /etc/modules-load.d/k8s.conf << EOF
 br_netfilter
 EOF
 cat > /etc/sysctl.d/k8s.conf << EOF
 net.bridge.bridge-nf-call-ip6tables = 1
 net.bridge.bridge-nf-call-iptables = 1
 EOF
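 # modules-load.d only takes effect at boot, so load the module now as well
 modprobe br_netfilter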
 sysctl --system
 # Time synchronization is supposedly required; I skipped it and haven't been
 # bitten yet
 # yum -y install ntpdate
 # ntpdate time.windows.com
 # Install Docker
 wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
 yum -y install docker-ce-20.10.9 docker-ce-cli-20.10.9 containerd.io
 # Set Docker's cgroup driver and registry settings; replace the
 # registry-mirrors address below with your own accelerator
 mkdir -p /etc/docker
 cat > /etc/docker/daemon.json << EOF
 {
   "exec-opts": ["native.cgroupdriver=systemd"],
   "log-driver": "json-file",
   "log-opts": {
     "max-size": "100m"
   },
   "insecure-registries": ["39.98.49.122:5001"],
   "storage-driver": "overlay2",
   "storage-opts": [
     "overlay2.override_kernel_check=true"
   ],
   "registry-mirrors": ["https://xxxxxxxx.mirror.aliyuncs.com"]
 }
 EOF
 systemctl enable docker && systemctl start docker
 # Configure the Kubernetes yum repo
 cat > /etc/yum.repos.d/kubernetes.repo << EOF
 [kubernetes]
 name=Kubernetes
 baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
 enabled=1
 gpgcheck=0
 repo_gpgcheck=0
 gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
 EOF
 yum install -y kubeadm-1.23.1 kubelet-1.23.1 kubectl-1.23.1 kubernetes-cni
 systemctl enable kubelet
 # Point the kubelet at the public IP (needed for this public-network setup):
 # on each host, append " --node-ip=<that host's public IP>" to the end of the
 # ExecStart line in /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
 #   k8s-master: --node-ip=39.98.160.204
 #   k8s-node1:  --node-ip=101.42.166.142
 #   k8s-node2:  --node-ip=39.98.49.122
 systemctl daemon-reload
 # Initialize the cluster (on the master)
 kubeadm init \
 --apiserver-advertise-address=39.98.160.204 \
 --image-repository registry.aliyuncs.com/google_containers \
 --kubernetes-version v1.23.1 \
 --control-plane-endpoint=39.98.160.204 \
 --service-cidr=10.96.0.0/16 \
 --pod-network-cidr=10.244.0.0/16 \
 --v=5
 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config
 # If the join token expires, regenerate the command with:
 # kubeadm token create --print-join-command
 # Join the worker nodes to the cluster
 kubeadm join 39.98.160.204:6443 --token xxxxxx.xxxxxxxxxxxxxxxxxxxx \
         --discovery-token-ca-cert-hash sha256:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 # Install the flannel network plugin on the master
 wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
 # Two changes are needed (full file in the appendix):
 #       args:
 #       - --public-ip=$(PUBLIC_IP)
 #       - --iface=eth0
 #       - --ip-masq
 #       - --kube-subnet-mgr
 #       env:
 #       - name: PUBLIC_IP
 #         valueFrom:
 #           fieldRef:
 #             fieldPath: status.podIP
 # Apply the manifest
 kubectl apply -f kube-flannel.yml
 
 # Create the image-pull secret in k8s
 kubectl create secret docker-registry \
  regsecret \
   --docker-server=39.98.49.122:5001 \
   --docker-username=admin \
   --docker-password=Harbor12345
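
To sanity-check the cluster before wiring up CI, a quick sketch; run it on the master once the flannel pods have settled:

 # all three nodes should report Ready
 kubectl get nodes -o wide
 # one flannel pod per node, and the coredns pods, should be Running
 kubectl get pods -n kube-flannel
 kubectl get pods -n kube-system
 # the pull secret created above should exist
 kubectl get secret regsecret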
   

Installing GitLab and gitlab-runner

 # One option is a dedicated VM per runner with the toolchain (JDK, Maven)
 # baked in; to recover from a failure you simply restore a snapshot.
 # Here the runner is installed on the host itself, with the shell executor.
 # A third option is to run gitlab-runner in a container with the docker
 # executor: build an image with JDK and Maven, and have the runner pull it
 # for each build. That isolates the environment better and recovers faster;
 # I'll try that approach when I get the time.
 # GitLab itself:
 docker run -d \
   --hostname 101.42.166.142 \
   --name gitlab \
   --restart always \
   --publish 8443:443 --publish 8081:80 -p 2222:22 \
   -v /usr/local/gitlab/config:/etc/gitlab \
   -v /usr/local/gitlab/logs:/var/log/gitlab \
   -v /usr/local/gitlab/data:/var/opt/gitlab \
   -v /etc/localtime:/etc/localtime \
  gitlab/gitlab-ce
 # Inside the container, edit /etc/gitlab/gitlab.rb, change the settings below,
 # then apply them with `gitlab-ctl reconfigure`
 external_url 'http://101.42.166.142:8081'
 # Fix the SSH clone address that GitLab displays
 gitlab_rails['gitlab_shell_ssh_port'] = 2222
 
 # Install gitlab-runner via yum
 curl -L "https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.rpm.sh" | sudo bash
 yum install -y gitlab-runner
 # Download and unpack the JDK and Maven on the host
 tar xzf apache-maven-3.8.7-bin.tar.gz -C /usr/local
 tar xzf jdk-8u202-linux-i586.tar.gz -C /usr/local
 # Set the JDK and Maven environment variables
 # (quote the delimiter so $JAVA_HOME and $PATH aren't expanded at write time)
 cat >> /etc/profile << 'EOF'
 # set JDK environment
 export JAVA_HOME=/usr/local/jdk1.8.0_202
 export PATH=$JAVA_HOME/bin:$PATH
 # set Maven environment
 export MAVEN_HOME=/usr/local/apache-maven-3.8.7
 export PATH=$MAVEN_HOME/bin:$PATH
 EOF
 # Reload the profile
 source /etc/profile
 # Point Maven at a mirror
 # vim /usr/local/apache-maven-3.8.7/conf/settings.xml
 <mirror>
  <id>alimaven</id>
  <name>aliyun maven</name>
  <url>https://maven.aliyun.com/repository/public/</url>
  <mirrorOf>central</mirrorOf>
 </mirror>
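 # Quick toolchain sanity check
 java -version && mvn -v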
 # Add the gitlab-runner user to the docker group
 gpasswd -a gitlab-runner docker
 # Pick up the new group membership
 newgrp docker
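
Installing the runner isn't enough: it still has to be registered with GitLab so that the run1 tag used in .gitlab-ci.yml below resolves to it. A non-interactive sketch; the token is a placeholder you read from your project's Settings > CI/CD > Runners page:

 gitlab-runner register \
   --non-interactive \
   --url "http://101.42.166.142:8081/" \
   --registration-token "<REGISTRATION_TOKEN>" \
   --executor "shell" \
   --tag-list "run1" \
   --description "shell runner on k8s-node1"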

Installing Harbor

 # Install docker-compose
 curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
 # (if you downloaded the binary separately, move and rename it instead)
 # mv docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
 chmod +x /usr/local/bin/docker-compose
 docker-compose --version
 # Download and unpack the Harbor offline installer
 # wget https://storage.googleapis.com/harbor-releases/release-2.5.6/harbor-offline-installer-v2.5.6.tgz
 tar zxf harbor-offline-installer-v2.5.6.tgz -C /usr/local/
 # Generate self-signed certificates
 mkdir -p /usr/local/harbor/certs && cd /usr/local/harbor/certs
 # Generate the CA
 openssl genrsa -out ca.key 3072
 openssl req -new -x509 -days 3650 -key ca.key -out ca.crt
 # Generate the server key and CSR
 openssl genrsa -out harbor.key 3072
 openssl req -new -key harbor.key -out harbor.csr
 # Sign the server certificate with the CA
 openssl x509 -req -in harbor.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out harbor.crt -days 3650
 cd ..
 # Configure Harbor
 cp harbor.yml.tmpl harbor.yml
 # Edit harbor.yml as follows:
 # hostname: 39.98.49.122
 # http:
 #   port: 5000
 # https:
 #   port: 5001
 #   certificate: /usr/local/harbor/certs/harbor.crt
 #   private_key: /usr/local/harbor/certs/harbor.key
 # Run the installer
 /usr/local/harbor/install.sh
 # Log in to Harbor
 docker login -u admin -p Harbor12345 39.98.49.122:5001
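
Before the pipeline can push images, the test project referenced by the CI file has to exist (create it in the Harbor UI). A quick push smoke test, assuming that project is in place:

 docker pull busybox
 docker tag busybox 39.98.49.122:5001/test/busybox
 docker push 39.98.49.122:5001/test/busybox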

End-to-end test

 # Create a project in GitLab from the Spring template.
 # Modify the Dockerfile (a sketch follows below).
 # Add an appname.yaml for the pod deployment (see the appendix).
 # When all three pipeline stages pass, congratulations: you've made it through!
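
The Dockerfile itself didn't make it into the appendix, so here is a minimal sketch for the Spring Boot jar produced by the mvn-pkg stage. The base image, jar name, and exposed port are assumptions; adjust them to your project (9501 matches the containerPort in appname.yaml):

 # write a minimal Dockerfile at the project root
 cat > Dockerfile << 'EOF'
 FROM openjdk:8-jre-alpine
 WORKDIR /app
 # the CI job builds exactly one jar into target/
 COPY target/*.jar app.jar
 EXPOSE 9501
 ENTRYPOINT ["java", "-jar", "app.jar"]
 EOF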

Common errors

 1. # The DNS pods won't start:
 # open /run/flannel/subnet.env: no such file or directory
 # Check each node for /run/flannel/subnet.env and create it if missing
 # (FLANNEL_SUBNET must match the subnet flannel allocated to that node)
 cat > /run/flannel/subnet.env << EOF
 FLANNEL_NETWORK=10.244.0.0/16
 FLANNEL_SUBNET=10.244.1.0/24
 FLANNEL_MTU=1450
 FLANNEL_IPMASQ=true
 EOF
 2. # Unable to connect to the server: x509: certificate signed by unknown authority
 # Remove the stale kubeconfig, then re-copy /etc/kubernetes/admin.conf as above
 rm -rf $HOME/.kube
 3、# k8s"failed to set bridge addr: "cni0" already has an IP address different from 10.244.1.1/24"
 # 将错误的网卡删掉,它会自己重建
 ifconfig cni0 down    
 ip link delete cni0
 4. # The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed
 # Docker's cgroup driver defaults to cgroupfs while kubelet expects systemd,
 # so switch Docker over:
 cat > /etc/docker/daemon.json << EOF
 {
   "exec-opts": ["native.cgroupdriver=systemd"]
 }
 EOF
 systemctl daemon-reload && systemctl restart docker

Handy commands

 # Kubernetes
 # Cluster logs
 journalctl -xefu kubelet
 # Delete a pod
 # For good: delete its deployment
 kubectl get deployment -n <namespace>
 kubectl delete deployment <deployment name> -n <namespace>
 # Restart a pod (its controller recreates it)
 kubectl delete pod <podname> -n <namespace>
 # List nodes
 kubectl get nodes
 # Remove a node from the cluster once its pods are cleaned up
 kubectl delete node <node-name>
 # Regenerate the join command on the master
 kubeadm token create --print-join-command
 # Show pod events
 kubectl describe pods -n <namespace> <podname>
 # Tail pod logs
 kubectl logs -f -n <namespace> <podname>
 # Pod status across all namespaces
 kubectl get pods -A
 # Run a test pod
 kubectl create deploy my-nginx --image=nginx
 kubectl expose deploy my-nginx --port=80 --type=NodePort
 # Uninstall the k8s packages
 yum remove -y kubelet kubeadm kubectl
 # Wipe the k8s state
 kubeadm reset -f
 rm -rf /root/.kube/
 rm -rf /etc/cni/net.d
 rm -rf /var/lib/cni
 rm -rf /var/lib/etcd
 rm -rf /etc/kubernetes/
 rm -rf /var/lib/kubelet/
 rm -rf /var/lib/dockershim
 rm -rf /var/run/kubernetes
 
 
 # GitLab
 # Show the initial password (the default account is root)
 docker exec -it gitlab grep 'Password:' /etc/gitlab/initial_root_password
 # Stop/start/check GitLab (inside the container)
 gitlab-ctl stop/start/status
 # List container names
 sudo docker inspect -f='{{.Name}}' $(sudo docker ps -a -q)
 # List container IPs
 docker inspect -f='{{.NetworkSettings.IPAddress}}' $(sudo docker ps -a -q)
 # Names, IPs, and port bindings of all containers
 docker inspect -f='{{.Name}} {{.NetworkSettings.IPAddress}} {{.HostConfig.PortBindings}}' $(docker ps -aq)
 # Copy a file out of a container, then rename it
 docker cp gitlab-runner:/etc/profile /usr/local/mapping-file
 mv profile gitlab-runner-etc-profile
 
 
 # Harbor
 # Stop the Harbor containers
 cd /usr/local/harbor
 docker-compose stop
 # Create and start the Harbor containers; -d runs them in the background
 docker-compose up -d
 # Log in to Harbor
 docker login -u admin -p Harbor12345 39.98.49.122
 # Log out
 docker logout 39.98.49.122
 # Copy the Harbor directory / docker-compose binary to another host
 scp -r harbor root@39.98.49.122:/usr/local
 scp docker-compose root@39.98.49.122:/usr/local/bin

Appendix

kube-flannel.yml

 ---
 kind: Namespace
 apiVersion: v1
 metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
 ---
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
  name: flannel
 rules:
 - apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
 - apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
 - apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
 - apiGroups:
  - "networking.k8s.io"
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
 ---
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
  name: flannel
 roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
 subjects:
 - kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
 ---
 apiVersion: v1
 kind: ServiceAccount
 metadata:
  name: flannel
  namespace: kube-flannel
 ---
 kind: ConfigMap
 apiVersion: v1
 metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
 data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
 ---
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
 spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.1.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.21.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.21.2
        #image: docker.io/rancher/mirrored-flannelcni-flannel:v0.21.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --public-ip=$(PUBLIC_IP) # added: advertise the public IP
        - --iface=eth0 # added: bind to the eth0 interface
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name 
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: PUBLIC_IP # added environment variable
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

.gitlab-ci.yml

 # image: docker:latest
 stages:
  - mvn-pkg
  - docker-pkg
  - k8s-deploy
 variables:
  APP_NAME: app-name # GitLab project name
  REGISTRY_URL: 39.98.49.122:5001 # registry URL
 
 cache:
  paths:
    - .m2/repository
 mvn-package:
  image: maven:3.5-jdk-8-alpine # ignored by the shell executor; relevant only for a docker executor
  tags:
    - run1 # the gitlab-runner used must carry this tag
  stage: mvn-pkg
  script:
    - mvn clean package -Dmaven.test.skip=true
  artifacts:
    paths:
      - target/*.jar
 
 docker-pkg:
  stage: docker-pkg
  before_script:
  - docker login -u admin -p Harbor12345 39.98.49.122:5001
  script:
  - docker build -t $REGISTRY_URL/test/$APP_NAME .
  - docker push $REGISTRY_URL/test/$APP_NAME
  allow_failure: false
  only:
  - master
  tags:
  - run1
 
 k8s-deploy:
  stage: k8s-deploy
  script:
  - kubectl apply -f ./appname.yaml
  allow_failure: false
  dependencies:
  - docker-pkg
  only:
  - master
  tags:
  - run1
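
One thing the CI file takes for granted: the k8s-deploy stage runs kubectl as the gitlab-runner user on the runner host, so that user needs kubectl installed plus a kubeconfig for the cluster. A sketch of one way to provide it (the admin.conf path is a placeholder for wherever you copied it from the master):

 mkdir -p /home/gitlab-runner/.kube
 cp /path/to/admin.conf /home/gitlab-runner/.kube/config
 chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.kube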

appname.yaml

 apiVersion: v1
 kind: Service
 metadata:
  name: transport
  labels:
    app: transport
 spec:
  type: NodePort
  ports:
  - name: http
    port: 9501                      # service port
    targetPort: 9501
    nodePort: 30018                 # exposed via NodePort
  selector:
    app: transport
 
 ---
 
 apiVersion: apps/v1
 # Resource type/role; Deployment is a replica controller (other kinds include Job, Ingress, Service, etc.)
 kind: Deployment
 # Resource metadata: name, namespace, labels, etc.
 metadata:
   # Resource name; must be unique within its namespace
  name: transport
 # Spec for the deployment, e.g. whether containers restart on failure
 spec:
   # Label selector
  selector:
     # Labels to match
    matchLabels:
       # Must match the app label on the pod template below
      app: transport
  replicas: 3
   # Pod template; every replica is stamped from this template
  template:
    metadata:
      labels:
        app: transport
    spec:
      imagePullSecrets:
        - name: regsecret
       # Container definitions
      containers:
       # Each "- name:" entry defines one container
      - name: transport
         # Container image (and tag)
         # image: hub.sx.com/hyperf/hyperf
        image: 39.98.49.122:5001/test/app-name
        imagePullPolicy: Always
        securityContext:
          runAsUser: 0                      # run the container as root
          privileged: true                  # privileged mode
        ports:
        - name: http
           # port the container exposes
          containerPort: 9501
        resources:
          limits:
            memory: 2Gi
            cpu: "1000m"
          requests:
            memory: 500Mi
            cpu: "500m"
        volumeMounts:
        - mountPath:  /opt/www/runtime      
          name: test-volume  
      volumes:
      - name: test-volume  
        hostPath:  
          path: /data/transport/logs # directory location on host          
          type: DirectoryOrCreate # this field is optional
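
Once the pipeline has applied this manifest, a quick way to watch the rollout and find the NodePort, using the names defined above:

 kubectl rollout status deployment/transport
 kubectl get pods -l app=transport -o wide
 kubectl get svc transport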


Closing words

Finally finished, hahaha. You probably picture me sitting here relaxed and triumphant; the reality is I'm completely wrung out.

All the back-and-forth debugging really does drain you, but don't shy away from the hassle in engineering work. Reading about something a hundred times is no match for trying it once, so practice, practice, practice!

And once the experiment is done, relax with a movie. My recommendation: Confessions, directed by Tetsuya Nakashima.

That's it. If this content helped you, don't forget to subscribe! Bye!
