Building a Jenkins Continuous Integration Platform on Kubernetes (K8S)

Author: 运维人在路上

Blog: https://www.cnblogs.com/hujinzhong

WeChat official account: 运维人在路上

Bilibili: https://space.bilibili.com/409609392

About the author: I am an operations engineer at a large e-commerce platform, familiar with both development and operations. I share hands-on technical material here; feedback and discussion are welcome!

1. Introduction to Kubernetes

1.1 Overview

Kubernetes (K8S for short) is an open-source container cluster management system from Google. Building on Docker, it provides containerized applications with a complete set of capabilities, including deployment, resource scheduling, service discovery, and dynamic scaling, making large container clusters far easier to manage. Its main functions are:

  • Package, instantiate, and run applications with Docker.
  • Run and manage containers across machines as a single cluster.
  • Solve cross-machine container-to-container communication.
  • Self-heal, so that the container cluster always converges to the state the user declared.

1.2 Kubernetes Architecture

(Figure: Kubernetes architecture diagram)

Master node

  • API Server: exposes the Kubernetes API; every request to create or modify a resource goes through the interfaces provided by kube-apiserver.
  • Etcd: the default storage backend for Kubernetes, holding all cluster data; plan for regular etcd backups when using it.
  • Controller-Manager: the cluster's internal management and control center, responsible for Nodes, Pod replicas, service Endpoints, Namespaces, ServiceAccounts, and ResourceQuotas. When a Node goes down unexpectedly, the Controller Manager detects it promptly and runs automated recovery, keeping the cluster in its desired state.
  • Scheduler: watches for newly created Pods that have no Node assigned and selects a Node for each of them.

Node (worker)

  • Kubelet: maintains the container lifecycle on its node, and also manages Volumes and networking.
  • Kube-proxy: a core Kubernetes component deployed on every Node; it implements the communication and load-balancing machinery behind Kubernetes Services.

2. Kubernetes Cluster Installation

2.1 Architecture

(Figures: CI/CD platform architecture and build workflow)

The overall workflow: a build is triggered manually or automatically -> Jenkins calls the Kubernetes API -> a Jenkins Slave Pod is created dynamically -> the Slave Pod pulls the code from Git, compiles it, and builds the image -> the image is pushed to the Harbor registry -> when the Slave finishes its work, the Pod is destroyed automatically -> the application is deployed to the test or production Kubernetes cluster. (Fully automated, with no manual intervention.)

2.2 Benefits of This Deployment Model

1) High availability: if the Jenkins Master fails, Kubernetes automatically creates a new Jenkins Master container and reattaches the Volume to it, so no data is lost and the service stays highly available.

2) Dynamic scaling and efficient resource usage: each Job run automatically creates a Jenkins Slave; when the Job finishes, the Slave deregisters and its container is deleted, releasing the resources. Kubernetes also schedules Slaves onto idle nodes based on current resource usage, which avoids builds queueing up on a single overloaded node.

3) Good scalability: when the Kubernetes cluster runs short of resources and Jobs start queueing, it is easy to add another Kubernetes Node to the cluster.

2.3 Server Plan

Hostname                   IP address   Installed software
Code hosting server        10.0.0.101   GitLab
Docker registry server     10.0.0.102   Harbor
k8s-master                 10.0.0.103   kube-apiserver, kube-controller-manager, kube-scheduler, docker, etcd, calico, NFS
k8s-node1                  10.0.0.104   kubelet, kube-proxy, Docker
k8s-node2                  10.0.0.105   kubelet, kube-proxy, Docker

2.4 Kubernetes Deployment

2.4.1 Prerequisites

Complete the following on all three Kubernetes servers.

1) Set the hostname and hosts file on the three machines

# Set the hostname (run the matching line on each machine)
hostnamectl set-hostname k8s-master 
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

# Append to the hosts file
cat >>/etc/hosts <<EOF
10.0.0.103 k8s-master
10.0.0.104 k8s-node1
10.0.0.105 k8s-node2
EOF

2) Disable the firewall and SELinux (not needed on cloud servers)

# Firewall
systemctl stop firewalld
systemctl disable firewalld

# SELinux
setenforce 0 	# disable temporarily
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux # disable permanently

3) Configure kernel parameters

# Allow IP forwarding and make bridged traffic subject to iptables rules
cat >>/etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

# Apply the settings
sysctl -p /etc/sysctl.d/k8s.conf

4) Prerequisites for enabling IPVS in kube-proxy

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
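
Loading these modules only satisfies the prerequisite; kube-proxy still runs in its default iptables mode. A hedged sketch of actually switching to IPVS, to be done after the cluster is initialized, assuming the standard kubeadm-managed kube-proxy ConfigMap:

# Set mode: "ipvs" in the kube-proxy ConfigMap, then recreate the kube-proxy Pods
kubectl edit configmap kube-proxy -n kube-system
kubectl delete pod -n kube-system -l k8s-app=kube-proxy   # the DaemonSet recreates them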

5) Disable swap on all nodes (not needed on cloud servers)

# Disable temporarily
[root@k8s-master ~]# swapoff -a

# Disable permanently (comment out the swap line in /etc/fstab)
[root@k8s-master ~]# vim /etc/fstab
#UUID=70e8d1ea-c8a2-4cc4-8146-fe2ef7229b4b swap                    swap    defaults        0 0

6) Install Docker on all nodes

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
sed -i 's#download.docker.com#mirrors.tuna.tsinghua.edu.cn/docker-ce#g' /etc/yum.repos.d/docker-ce.repo
yum install docker-ce lrzsz -y
docker version
mkdir -p /etc/docker/ && touch /etc/docker/daemon.json
cat >/etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://t09ww3n9.mirror.aliyuncs.com"],
  "insecure-registries": ["10.0.0.101:85"]
}
EOF
systemctl start docker
systemctl enable docker

7) Install kubelet, kubeadm, and kubectl on all nodes

  • kubeadm: the command that bootstraps the cluster.
  • kubelet: runs on every node in the cluster and starts Pods and containers.
  • kubectl: the command-line tool for talking to the cluster.

# Clear the yum cache
yum clean all

# Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install the pinned versions
yum install -y kubelet-1.17.0 kubeadm-1.17.0 kubectl-1.17.0

# Enable kubelet at boot (note: do not start it yet; it will fail until the cluster is initialized)
systemctl enable kubelet

# Check the version
kubelet --version

2.4.2 Master Node Initialization

# 1. Run the init command (note: apiserver-advertise-address must be the master's IP)
kubeadm init --kubernetes-version=1.17.0 \
--apiserver-advertise-address=10.0.0.103 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

# Record the join command printed at the end of the output; the worker nodes will need it later
kubeadm join 10.0.0.103:6443 --token 8ihlea.z732b48fmwxtow6v \
    --discovery-token-ca-cert-hash sha256:a54882ca516cb6615651a4cb5743d9e6733fa9335e46b1d030f6e3f16538e203


# 2. Configure the kubectl client
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 3. Start kubelet
[root@k8s-master k8s]# systemctl start kubelet

# 4. Install Calico
[root@k8s-master ~]# mkdir k8s
[root@k8s-master ~]# cd k8s
[root@k8s-master k8s]# wget https://docs.projectcalico.org/v3.10/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml
[root@k8s-master k8s]# sed -i 's/192.168.0.0/10.244.0.0/g' calico.yaml 
[root@k8s-master k8s]# kubectl apply -f calico.yaml

# 5. Wait a few minutes, then confirm the node is Ready and every Pod is Running
[root@k8s-master k8s]# kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   10m   v1.17.0
[root@k8s-master k8s]# kubectl get pod --all-namespaces -o wide


2.4.3 Joining Worker Nodes to the Cluster

# 1. Join each worker node to the cluster, using the command generated earlier on the master
kubeadm join 10.0.0.103:6443 --token 8ihlea.z732b48fmwxtow6v \
    --discovery-token-ca-cert-hash sha256:a54882ca516cb6615651a4cb5743d9e6733fa9335e46b1d030f6e3f16538e203
    
# 2. Start kubelet
[root@k8s-node1 ~]# systemctl start kubelet
[root@k8s-node1 ~]# systemctl status kubelet

# 3. Back on the master, check the nodes; if every STATUS is Ready, the cluster is up
[root@k8s-master k8s]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   23m     v1.17.0
k8s-node1    Ready    <none>   2m45s   v1.17.0
k8s-node2    Ready    <none>   67s     v1.17.0
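
The token embedded in the join command expires after 24 hours by default, so a node added later (for example, when scaling the cluster as described in 2.2) may need a fresh command. A quick sketch, run on the master:

# Creates a new token and prints a complete kubeadm join command
kubeadm token create --print-join-command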

2.4.4 Common kubectl Commands

kubectl get nodes 	# list all master and worker nodes and their status
kubectl get ns 		# list all namespaces
kubectl get pods -n {$nameSpace}  # list the Pods in a given namespace
kubectl describe pod <pod-name> -n {$nameSpace}  # show a Pod's details and events
kubectl logs --tail=1000 <pod-name> | less # view a Pod's logs
kubectl create -f xxx.yml  # create cluster resources from a manifest
kubectl delete -f xxx.yml  # delete cluster resources from a manifest
kubectl delete pod <pod-name> -n {$nameSpace} # delete a Pod
kubectl get service -n {$nameSpace} # list the Services in a namespace

3. Kubernetes/Jenkins Continuous Integration Deployment

3.1 NFS Installation and Configuration

3.1.1 NFS Overview

NFS (Network File System) lets different machines, even running different operating systems, share files with each other over the network. We will use NFS to share the Jenkins runtime configuration and the Maven dependency repository.

3.1.2 Installing NFS

We install the NFS server on the master node.

# 1. Install the NFS utilities (required on every Kubernetes node)
[root@k8s-master ~]# yum install -y nfs-utils

# 2. Create the shared directory
[root@k8s-master ~]# mkdir -p /opt/nfs/jenkins
[root@k8s-master ~]# vim /etc/exports
/opt/nfs/jenkins *(rw,no_root_squash)	# * exports the directory to all IPs; rw = read-write

# 3. Enable and start the service
[root@k8s-master ~]# systemctl enable nfs
[root@k8s-master ~]# systemctl start nfs

# 4. Check the NFS export from another node
[root@k8s-node1 ~]# showmount -e 10.0.0.103
Export list for 10.0.0.103:
/opt/nfs/jenkins *
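
Before handing the export to Kubernetes, it can be worth sanity-checking it with a manual mount from a node; a minimal sketch (the /mnt mount point is arbitrary):

[root@k8s-node1 ~]# mount -t nfs 10.0.0.103:/opt/nfs/jenkins /mnt
[root@k8s-node1 ~]# touch /mnt/test && rm -f /mnt/test   # confirms read-write access
[root@k8s-node1 ~]# umount /mnt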

3.2 Installing Jenkins Master on Kubernetes

3.2.1 Creating the NFS Client Provisioner

nfs-client-provisioner is a simple external NFS provisioner for Kubernetes. It does not provide NFS itself; it relies on an existing NFS server for the actual storage.

1) Upload the nfs-client-provisioner manifests

[root@k8s-master nfs-client]# ll
total 12
-rw-r--r-- 1 root root  225 Nov 27  2019 class.yaml
-rw-r--r-- 1 root root  985 Dec  9  2019 deployment.yaml
-rw-r--r-- 1 root root 1526 Nov 27  2019 rbac.yaml

# Note: edit deployment.yaml to use the NFS server and directory configured earlier

StorageClass manifest: class.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name; must match the deployment's PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "true"

RBAC manifest: rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Deployment manifest: deployment.yaml (make sure it uses the NFS server and directory configured earlier)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: lizhenliang/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.103
            - name: NFS_PATH
              value: /opt/nfs/jenkins/
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.103
            path: /opt/nfs/jenkins/

2) Create the nfs-client-provisioner Pod

[root@k8s-master nfs-client]# pwd
/root/k8s/nfs-client
[root@k8s-master nfs-client]# kubectl create -f .
[root@k8s-master nfs-client]# kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-989bf44d-gtn4k   1/1     Running   0          110m
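
It is also worth confirming that the StorageClass from class.yaml was registered and that the provisioner started cleanly; a quick check:

[root@k8s-master nfs-client]# kubectl get storageclass
[root@k8s-master nfs-client]# kubectl logs deployment/nfs-client-provisioner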

3.2.2 Installing Jenkins Master

1) Upload the Jenkins Master manifests

[root@k8s-master jenkins-master]# pwd
/root/k8s/jenkins-master
[root@k8s-master jenkins-master]# ll
total 16
-rw-r--r-- 1 root root 1874 Dec  8  2019 rbac.yaml
-rw-r--r-- 1 root root   87 Dec  7  2019 ServiceAcount.yaml
-rw-r--r-- 1 root root  284 Dec  7  2019 Service.yaml
-rw-r--r-- 1 root root 2116 Dec  7  2019 StatefulSet.yaml

RBAC manifest: rbac.yaml

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
  namespace: kube-ops
rules:
  - apiGroups: ["extensions", "apps"]
    resources: ["deployments"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["services"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: kube-ops
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: kube-ops
    
---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkinsClusterRole
  namespace: kube-ops
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get","list","watch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
 
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkinsClusterRuleBinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkinsClusterRole
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: kube-ops

ServiceAccount manifest: ServiceAcount.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: kube-ops

Service manifest: Service.yaml

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: kube-ops
  labels:
    app: jenkins
spec:
  selector:
    app: jenkins
  type: NodePort
  ports:
  - name: web
    port: 8080
    targetPort: web
  - name: agent
    port: 50000
    targetPort: agent

StatefulSet manifest: StatefulSet.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  labels:
    name: jenkins
  namespace: kube-ops
spec:
  serviceName: jenkins
  selector:
    matchLabels:
      app: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        app: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts-alpine
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 8080
            name: web
            protocol: TCP
          - containerPort: 50000
            name: agent
            protocol: TCP
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: jenkins-home
    spec:
      storageClassName: "managed-nfs-storage"
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Note the following two points:

  • StatefulSet.yaml declares that Jenkins Master's files are stored through nfs-client-provisioner.
  • The Service is published as NodePort, which assigns a random node port for access.

2) Create the kube-ops namespace

# Put the Jenkins Master Pod in the kube-ops namespace
[root@k8s-master jenkins-master]# kubectl create namespace kube-ops
namespace/kube-ops created

3) Create the Jenkins Master Pod

[root@k8s-master jenkins-master]# pwd
/root/k8s/jenkins-master
[root@k8s-master jenkins-master]# kubectl create -f .
[root@k8s-master jenkins-master]# kubectl get pods -n kube-ops
NAME        READY   STATUS    RESTARTS   AGE
jenkins-0   0/1     Running   0          77s

4) Inspect the Pod

# Check which Node the Pod is running on
[root@k8s-master jenkins-master]# kubectl describe pods -n kube-ops

# Check the assigned NodePorts
[root@k8s-master jenkins-master]# kubectl get service -n kube-ops
NAME      TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                          AGE
jenkins   NodePort   10.1.228.200   <none>        8080:30707/TCP,50000:32495/TCP   2m45s


5) Access Jenkins from a browser

URL: http://10.0.0.105:30707/ (this IP is node2's address; since the Service is a NodePort, any node's IP works)


[root@k8s-master secrets]# pwd
/opt/nfs/jenkins/kube-ops-jenkins-home-jenkins-0-pvc-c64f6c53-01a3-4b5d-bc76-e61f3cb0116a/secrets
[root@k8s-master secrets]# cat initialAdminPassword 
1b0ea53845fa48ef9d5a8e5017a052ca

The rest of the setup wizard is the same as a standard Jenkins installation.
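
If you would rather not dig through the NFS directory (the PVC path contains a generated UID), the same password can be read from inside the Pod; an equivalent alternative:

[root@k8s-master ~]# kubectl exec -n kube-ops jenkins-0 -- cat /var/jenkins_home/secrets/initialAdminPassword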

6) Switch the Jenkins plugin download source

[root@k8s-master updates]# pwd
/opt/nfs/jenkins/kube-ops-jenkins-home-jenkins-0-pvc-c64f6c53-01a3-4b5d-bc76-e61f3cb0116a/updates
[root@k8s-master updates]# sed -i 's/https:\/\/updates.jenkins.io\/download/https:\/\/mirrors.tuna.tsinghua.edu.cn\/jenkins/g' default.json
[root@k8s-master updates]# sed -i 's/http:\/\/www.google.com/https:\/\/www.baidu.com/g' default.json

Finally, open Manage Plugins -> Advanced, change the Update Site to the mirror URL below, and restart Jenkins:

https://mirrors.tuna.tsinghua.edu.cn/jenkins/updates/update-center.json


7) Install the basic plugins

# Basic plugins:
Localization: Chinese
Git
Pipeline
Extended Choice Parameter

3.3 Integrating Jenkins with Kubernetes

1) Install the Kubernetes plugin


2) Connect Jenkins to Kubernetes

Manage Jenkins -> Configure System -> Cloud -> Add a new cloud -> Kubernetes


Notes:

The Kubernetes URL uses in-cluster service discovery: https://kubernetes.default.svc.cluster.local

Set the namespace to kube-ops, then click Test Connection; a success message means Jenkins can communicate with Kubernetes.

Jenkins URL: http://jenkins.kube-ops.svc.cluster.local:8080
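
If Test Connection fails, a useful first check is whether those in-cluster DNS names resolve at all. A throwaway Pod works for this; busybox:1.28 is suggested because newer busybox builds have known nslookup quirks:

[root@k8s-master ~]# kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup jenkins.kube-ops.svc.cluster.local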

3.4 Building a Custom Jenkins Slave Image

When Jenkins Master runs a Job, Kubernetes creates a Jenkins Slave Pod to do the build. We base the Slave on the officially recommended image jenkins/jnlp-slave:latest, but that image has no Maven environment, so for convenience we build a custom image on top of it:

[root@k8s-master jenkins-slave]# ll
total 8948
-rw-r--r-- 1 root root 9142315 Nov  8  2019 apache-maven-3.6.2-bin.tar.gz
-rw-r--r-- 1 root root     556 Dec 20  2019 Dockerfile
-rw-r--r-- 1 root root   10475 Nov 25  2019 settings.xml	# adds the Aliyun Maven mirror

The Dockerfile:

FROM jenkins/jnlp-slave:latest

MAINTAINER itcast

# Switch to root for the installation steps
USER root

# Install Maven
COPY apache-maven-3.6.2-bin.tar.gz .

RUN tar -zxf apache-maven-3.6.2-bin.tar.gz && \
    mv apache-maven-3.6.2 /usr/local && \
    rm -f apache-maven-3.6.2-bin.tar.gz && \
    ln -s /usr/local/apache-maven-3.6.2/bin/mvn /usr/bin/mvn && \
    ln -s /usr/local/apache-maven-3.6.2 /usr/local/apache-maven && \
    mkdir -p /usr/local/apache-maven/repo

COPY settings.xml /usr/local/apache-maven/conf/settings.xml

USER jenkins

Build the new image jenkins-slave-maven:latest and push it to Harbor's public library project:

# Build the image
[root@k8s-master jenkins-slave]# docker build -t jenkins-slave-maven:latest .
[root@k8s-master jenkins-slave]# docker images|grep "jenkins-slave-maven"

# Log in to Harbor
[root@k8s-master jenkins-slave]# docker login -u admin -p Harbor12345 10.0.0.101:85

# Tag and push the image (into the public library project)
[root@k8s-master jenkins-slave]# docker tag jenkins-slave-maven:latest 10.0.0.101:85/library/jenkins-slave-maven:latest
[root@k8s-master jenkins-slave]# docker push 10.0.0.101:85/library/jenkins-slave-maven:latest
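
Before wiring the image into Jenkins, a quick smoke test confirms that Maven works inside it; a minimal sketch (the entrypoint is overridden because the jnlp-slave base image starts the agent by default):

[root@k8s-master jenkins-slave]# docker run --rm --entrypoint mvn 10.0.0.101:85/library/jenkins-slave-maven:latest -version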

3.5 Testing Jenkins Slave Creation

1) Create GitLab credentials in Jenkins


2) Create a pipeline project


3) Write a pipeline that pulls code from GitLab

def git_address = "http://10.0.0.101:82/dianchou_group/tensqure_back.git"
def git_auth = "af309a92-668d-4ba9-bb14-af09d398f24b"

//Define a Pod template with the label jenkins-slave
podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "10.0.0.101:85/library/jenkins-slave-maven:latest"
    )
  ]
)
{
  //Use the jenkins-slave Pod template to create the Slave Pod
  node("jenkins-slave"){
      // Step 1
      stage('Pull code'){
         checkout([$class: 'GitSCM', branches: [[name: 'master']], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
      }
  }
}


4) Console output


3.6 Microservice Continuous Integration

3.6.1 Pulling Code and Building Images

1) Create an NFS share for Maven so that every Jenkins Slave build uses the same shared repository

[root@k8s-master ~]# vim /etc/exports
/opt/nfs/jenkins *(rw,no_root_squash)
/opt/nfs/maven *(rw,no_root_squash)
[root@k8s-master ~]# systemctl restart nfs

2) Create Harbor credentials


3) Create the pipeline project and write the pipeline


Add a string parameter:


Add a multi-select (Extended Choice) parameter:


Pipeline script:

def git_address = "http://10.0.0.101:82/dianchou_group/tensqure_back.git"
def git_auth = "af309a92-668d-4ba9-bb14-af09d398f24b"
//Version tag for the build
def tag = "latest"
//Harbor registry address
def harbor_url = "10.0.0.101:85"
//Harbor project name
def harbor_project_name = "dianchou"
//Harbor credentials ID
def harbor_auth = "23a9742f-e72d-4541-aa63-6389c3a30828"

podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "10.0.0.101:85/library/jenkins-slave-maven:latest"
    ),
    containerTemplate(
        name: 'docker',
        image: "docker:stable",
        ttyEnabled: true,
        command: 'cat'
    ),
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    nfsVolume(mountPath: '/usr/local/apache-maven/repo', serverAddress: '10.0.0.103', serverPath: '/opt/nfs/maven'),
  ],
)
{
  node("jenkins-slave"){
      // Step 1
      stage('Pull code'){
         checkout([$class: 'GitSCM', branches: [[name: '${branch}']], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
      }
      // Step 2
      stage('Compile code'){
          //Build and install the common module
          sh "mvn -f tensquare_common clean install"
      }
      // Step 3
      stage('Build images and deploy'){
          //Turn the selected projects into an array
          def selectedProjects = "${project_name}".split(',')

          for(int i=0;i<selectedProjects.size();i++){
              //Extract each project's name and port
              def currentProject = selectedProjects[i];
              //Project name
              def currentProjectName = currentProject.split('@')[0]
              //Project port
              def currentProjectPort = currentProject.split('@')[1]

              //Image name
              def imageName = "${currentProjectName}:${tag}"

              //Compile and build the local image
              sh "mvn -f ${currentProjectName} clean package dockerfile:build"
              container('docker') {

                  //Tag the image
                  sh "docker tag ${imageName} ${harbor_url}/${harbor_project_name}/${imageName}"

                  //Log in to Harbor and push the image
                  withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                      //Log in
                      sh "docker login -u ${username} -p ${password} ${harbor_url}"
                      //Push the image
                      sh "docker push ${harbor_url}/${harbor_project_name}/${imageName}"
                  }

                  //Remove the local images
                  sh "docker rmi -f ${imageName}"
                  sh "docker rmi -f ${harbor_url}/${harbor_project_name}/${imageName}"
              }

          }
      }
  }
}


Pitfalls to watch out for

a) During the build, repository directories may fail to be created because the NFS share's permissions are insufficient; change them:

chown -R jenkins:jenkins /opt/nfs/maven
chmod -R 777 /opt/nfs/maven

b) Error: java.util.concurrent.ExecutionException: com.spotify.docker.client.shaded.javax.ws.rs.ProcessingException: java.io.IOException: Permission denied

# The Slave lacks permission on the Docker socket
chmod 777 /var/run/docker.sock

c) The parent project's dependencies must be uploaded manually into the shared Maven repository on NFS (see the sketch after the listing below):

[root@k8s-master tensquare]# pwd
/opt/nfs/maven/com/tensquare
[root@k8s-master tensquare]# ll
total 0
drwxrwxrwx 3 jenkins jenkins 58 Mar 25 17:50 tensquare_common
drwxrwxrwx 3 jenkins jenkins 58 Mar 25 18:05 tensquare_parent	# added manually
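
A hedged sketch of that manual upload, assuming the parent POM has been built once on the master so its artifacts exist in the local ~/.m2 repository:

# Copy the parent project's artifacts into the NFS Maven share and fix ownership
cp -r ~/.m2/repository/com/tensquare/tensquare_parent /opt/nfs/maven/com/tensquare/
chown -R jenkins:jenkins /opt/nfs/maven/com/tensquare/tensquare_parent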

3.6.2 Deploying Microservices to Kubernetes

1) Update each microservice's application.yml

Using Eureka as an example:

server:
  port: ${PORT:10086}
spring:
  application:
    name: eureka

eureka:
  server:
    # interval between scans for expired services, i.e. the eviction interval (default 60*1000 ms)
    eviction-interval-timer-in-ms: 5000
    enable-self-preservation: false
    use-read-only-response-cache: false
  client:
    # how often the Eureka client fetches registry info (default 30 s)
    registry-fetch-interval-seconds: 5
    serviceUrl:
      defaultZone: ${EUREKA_SERVER:http://127.0.0.1:${server.port}/eureka/}
  instance:
    # heartbeat interval: how long after one heartbeat the next is sent (default 30 s)
    lease-renewal-interval-in-seconds: 5
    # lease expiry: how long to wait for the next heartbeat after receiving one; must exceed the heartbeat interval (default 90 s)
    lease-expiration-duration-in-seconds: 10
    instance-id: ${EUREKA_INSTANCE_HOSTNAME:${spring.application.name}}:${server.port}@${random.long(1000000,9999999)}
    hostname: ${EUREKA_INSTANCE_HOSTNAME:${spring.application.name}}

The other services must register with every Eureka replica:

# Eureka configuration
eureka:
  client:
    service-url:
      defaultZone: http://eureka-0.eureka:10086/eureka/,http://eureka-1.eureka:10086/eureka/ # Eureka endpoints
  instance:
    prefer-ip-address: true

2) Install the Kubernetes Continuous Deploy plugin


3) Extend the pipeline with a deployment step

def git_address = "http://10.0.0.101:82/dianchou_group/tensqure_back.git"
def git_auth = "af309a92-668d-4ba9-bb14-af09d398f24b"
//Version tag for the build
def tag = "latest"
//Harbor registry address
def harbor_url = "10.0.0.101:85"
//Harbor project name
def harbor_project_name = "dianchou"
//Harbor credentials ID
def harbor_auth = "23a9742f-e72d-4541-aa63-6389c3a30828"
def secret_name = "registry-auth-secret"
//Kubernetes kubeconfig credentials ID
def k8s_auth = "fb395f34-03a6-4e40-b0b4-349d8cd45a34";

podTemplate(label: 'jenkins-slave', cloud: 'kubernetes', containers: [
    containerTemplate(
        name: 'jnlp',
        image: "10.0.0.101:85/library/jenkins-slave-maven:latest"
    ),
    containerTemplate(
        name: 'docker',
        image: "docker:stable",
        ttyEnabled: true,
        command: 'cat'
    ),
  ],
  volumes: [
    hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock'),
    nfsVolume(mountPath: '/usr/local/apache-maven/repo', serverAddress: '10.0.0.103', serverPath: '/opt/nfs/maven'),
  ],
)
{
  node("jenkins-slave"){
      // Step 1
      stage('Pull code'){
         checkout([$class: 'GitSCM', branches: [[name: '${branch}']], userRemoteConfigs: [[credentialsId: "${git_auth}", url: "${git_address}"]]])
      }
      // Step 2
      stage('Compile code'){
          //Build and install the common module
          sh "mvn -f tensquare_common clean install"
      }
      // Step 3
      stage('Build images and deploy'){
          //Turn the selected projects into an array
          def selectedProjects = "${project_name}".split(',')

          for(int i=0;i<selectedProjects.size();i++){
              //Extract each project's name and port
              def currentProject = selectedProjects[i];
              //Project name
              def currentProjectName = currentProject.split('@')[0]
              //Project port
              def currentProjectPort = currentProject.split('@')[1]

              //Image name
              def imageName = "${currentProjectName}:${tag}"

              //Compile and build the local image
              sh "mvn -f ${currentProjectName} clean package dockerfile:build"
              container('docker') {

                  //Tag the image
                  sh "docker tag ${imageName} ${harbor_url}/${harbor_project_name}/${imageName}"

                  //Log in to Harbor and push the image
                  withCredentials([usernamePassword(credentialsId: "${harbor_auth}", passwordVariable: 'password', usernameVariable: 'username')]) {
                      //Log in
                      sh "docker login -u ${username} -p ${password} ${harbor_url}"
                      //Push the image
                      sh "docker push ${harbor_url}/${harbor_project_name}/${imageName}"
                  }

                  //Remove the local images
                  sh "docker rmi -f ${imageName}"
                  sh "docker rmi -f ${harbor_url}/${harbor_project_name}/${imageName}"
              }

              def deploy_image_name = "${harbor_url}/${harbor_project_name}/${imageName}"

              //Deploy to Kubernetes
              sh """
                 sed -i 's#\$IMAGE_NAME#${deploy_image_name}#' ${currentProjectName}/deploy.yml
                 sed -i 's#\$SECRET_NAME#${secret_name}#' ${currentProjectName}/deploy.yml
              """

              kubernetesDeploy configs: "${currentProjectName}/deploy.yml", kubeconfigId: "${k8s_auth}"

          }
      }
  }
}

4) Create the Kubernetes credentials in Jenkins

# Copy the whole file's contents into the Jenkins kubeconfig credential
[root@k8s-master ~]# cat /root/.kube/config


5) Create the Docker registry secret

This secret is used by Kubernetes to pull images from the private Docker registry (Harbor).

# Log in to Harbor
[root@k8s-master ~]# docker login -u admin -p Harbor12345 10.0.0.101:85

# Create the docker-registry secret
[root@k8s-master ~]# kubectl create secret docker-registry registry-auth-secret --docker-server=10.0.0.101:85 --docker-username=admin --docker-password=Harbor12345 --docker-email=lawrence@qq.com
secret/registry-auth-secret created

# List the secrets
[root@k8s-master ~]# kubectl get secret
NAME                                 TYPE                                  DATA   AGE
default-token-kvdv8                  kubernetes.io/service-account-token   3      8h
nfs-client-provisioner-token-gx7gf   kubernetes.io/service-account-token   3      7h19m
registry-auth-secret                 kubernetes.io/dockerconfigjson        1      13s
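
If image pulls later fail with ErrImagePull, it helps to confirm what the secret actually contains; the embedded Docker config can be decoded like this:

[root@k8s-master ~]# kubectl get secret registry-auth-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d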

6) Create a deploy.yml for each project

Eureka's deploy.yml:

---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  type: NodePort
  ports:
    - port: 10086
      name: eureka
      targetPort: 10086
  selector:
    app: eureka
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      imagePullSecrets:
        - name: $SECRET_NAME
      containers:
        - name: eureka
          image: $IMAGE_NAME
          ports:
            - containerPort: 10086
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: EUREKA_SERVER
              value: "http://eureka-0.eureka:10086/eureka/,http://eureka-1.eureka:10086/eureka/"
            - name: EUREKA_INSTANCE_HOSTNAME
              value: ${MY_POD_NAME}.eureka
  podManagementPolicy: "Parallel"

For the other projects, deploy.yml mostly just changes the name and port:

Gateway service:

---
apiVersion: v1
kind: Service
metadata:
  name: zuul
  labels:
    app: zuul
spec:
  type: NodePort
  ports:
    - port: 10020
      name: zuul
      targetPort: 10020
  selector:
    app: zuul
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zuul
spec:
  serviceName: "zuul"
  replicas: 2
  selector:
    matchLabels:
      app: zuul
  template:
    metadata:
      labels:
        app: zuul
    spec:
      imagePullSecrets:
        - name: $SECRET_NAME
      containers:
        - name: zuul
          image: $IMAGE_NAME
          ports:
            - containerPort: 10020
  podManagementPolicy: "Parallel"

Admin (permission) service. Do not name it admin_service; Kubernetes resource names cannot contain underscores, so that name would be rejected:

---
apiVersion: v1
kind: Service
metadata:
  name: admin
  labels:
    app: admin
spec:
  type: NodePort
  ports:
    - port: 9001
      name: admin
      targetPort: 9001
  selector:
    app: admin
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: admin
spec:
  serviceName: "admin"
  replicas: 2
  selector:
    matchLabels:
      app: admin
  template:
    metadata:
      labels:
        app: admin
    spec:
      imagePullSecrets:
        - name: $SECRET_NAME
      containers:
        - name: admin
          image: $IMAGE_NAME
          ports:
            - containerPort: 9001
  podManagementPolicy: "Parallel"

7) After the build, check that the services were created

[root@k8s-master ~]# kubectl get pods
NAME                                    READY   STATUS    RESTARTS   AGE
admin-0                                 1/1     Running   0          5m26s
admin-1                                 1/1     Running   0          5m26s
eureka-0                                1/1     Running   0          34m
eureka-1                                1/1     Running   0          34m
gathering-0                             1/1     Running   0          15s
gathering-1                             1/1     Running   0          15s
nfs-client-provisioner-989bf44d-gtn4k   1/1     Running   0          7h58m
zuul-0                                  1/1     Running   0          22m
zuul-1                                  1/1     Running   0          22m
[root@k8s-master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
admin        NodePort    10.1.59.3      <none>        9001:30178/TCP    5m30s
eureka       NodePort    10.1.243.67    <none>        10086:32625/TCP   34m
gathering    NodePort    10.1.193.246   <none>        9002:31407/TCP    20s
kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP           9h
zuul         NodePort    10.1.37.187    <none>        10020:31959/TCP   22m
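
As a final smoke test, each service can be reached from outside the cluster via any node's IP plus the mapped NodePort shown above; for example, for Eureka (32625 is the randomly assigned port from this run):

[root@k8s-master ~]# curl -I http://10.0.0.104:32625/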

