Using Jenkins with Gogs and SonarQube to test, deploy, and roll back project code, with keepalived+haproxy scheduling requests to backend Tomcat servers

0. Environment

Primary Tomcat: 192.168.0.112
Backup Tomcat: 192.168.0.183

haproxy+keepalived-1: 192.168.0.156
haproxy+keepalived-2: 192.168.0.157

git: not yet deployed
sonar-scanner: not yet deployed

Software:
jdk-8u144-linux-x64.tar.gz
apache-tomcat-8.5.43.tar.gz
haproxy-1.5.18-8.el7.x86_64.rpm
keepalived-1.3.5-8.el7_6.5.x86_64.rpm

Part 1. Configure the Java environment on both backend Tomcat servers

1. Prepare the JDK 8 tarball

[root@bogon src]# pwd
/usr/local/src
[root@bogon src]# ls
jdk-8u144-linux-x64.tar.gz

2. Extract the JDK tarball to the target directory

[root@bogon src]# tar -zxv -f jdk-8u144-linux-x64.tar.gz -C /usr/local/
[root@bogon src]# cd /usr/local/
[root@bogon local]# ls
bin  etc  games  include  jdk1.8.0_144  lib  lib64  libexec  sbin  share  src

3. Configure the Java environment variables and apply them

[root@bogon local]# cd /etc/profile.d/
[root@bogon profile.d]# vim java.sh
export JAVA_HOME=/usr/local/jdk1.8.0_144
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=$JAVA_HOME/lib/:$JRE_HOME/lib
export TOMCAT_HOME=/usr/local/apache-tomcat-8.5.43
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin:$TOMCAT_HOME/bin
[root@bogon profile.d]# source java.sh 

4. Verify the Java environment

[root@bogon profile.d]# echo ${JAVA_HOME}
/usr/local/jdk1.8.0_144
[root@bogon profile.d]# echo ${CLASSPATH}
/usr/local/jdk1.8.0_144/lib/:/usr/local/jdk1.8.0_144/jre/lib
[root@bogon profile.d]# echo ${PATH}
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin:/usr/local/jdk1.8.0_144/bin:/usr/local/jdk1.8.0_144/jre/bin:/usr/local/apache-tomcat-8.5.43/bin
[root@bogon profile.d]# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

Part 2. Install and configure Tomcat on both servers

1. Prepare the Tomcat binary tarball

[root@bogon src]# pwd
/usr/local/src
[root@bogon src]# ls
apache-tomcat-8.5.43.tar.gz  jdk-8u144-linux-x64.tar.gz

2. Extract the Tomcat tarball to the target directory

[root@bogon src]# tar -zxv -f apache-tomcat-8.5.43.tar.gz -C /usr/local/
[root@bogon src]# cd /usr/local/
[root@bogon local]# ls
apache-tomcat-8.5.43  bin  etc  games  include  jdk1.8.0_144  lib  lib64  libexec  sbin  share  src

3. Start the Tomcat service

[root@bogon apache-tomcat-8.5.43]# /usr/local/apache-tomcat-8.5.43/bin/startup.sh 
Using CATALINA_BASE:   /usr/local/apache-tomcat-8.5.43
Using CATALINA_HOME:   /usr/local/apache-tomcat-8.5.43
Using CATALINA_TMPDIR: /usr/local/apache-tomcat-8.5.43/temp
Using JRE_HOME:        /usr/local/jdk1.8.0_144/jre
Using CLASSPATH:       /usr/local/apache-tomcat-8.5.43/bin/bootstrap.jar:/usr/local/apache-tomcat-8.5.43/bin/tomcat-juli.jar
Tomcat started.

4. Check the listening ports

[root@bogon apache-tomcat-8.5.43]# ss -tlnp
State       Recv-Q Send-Q                   Local Address:Port                                  Peer Address:Port              
LISTEN      0      128                                  *:22                                               *:*                   users:(("sshd",pid=965,fd=3))
LISTEN      0      100                          127.0.0.1:25                                               *:*                   users:(("master",pid=1048,fd=13))
LISTEN      0      1                     ::ffff:127.0.0.1:8005                                            :::*                   users:(("java",pid=1349,fd=70))
LISTEN      0      100                                 :::8009                                            :::*                   users:(("java",pid=1349,fd=55))
LISTEN      0      100                                 :::8080                                            :::*                   users:(("java",pid=1349,fd=50))
LISTEN      0      128                                 :::22                                              :::*                   users:(("sshd",pid=965,fd=4))
LISTEN      0      100                                ::1:25                                              :::*                   users:(("master",pid=1048,fd=14))

5. Test the "primary Tomcat" service in a browser

Open http://192.168.0.112:8080/ in a browser.
To tell the two nodes apart, add the following line to /usr/local/apache-tomcat-8.5.43/webapps/ROOT/index.jsp, just above the closing </body> tag:

<h2>主</h2>
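
A quick way to make this edit (a sketch, assuming GNU sed; on the backup node use its marker from step 6 instead):

sed -i 's#</body>#<h2>主</h2>\n</body>#' /usr/local/apache-tomcat-8.5.43/webapps/ROOT/index.jsp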

6. Test the "backup Tomcat" service in a browser

Open http://192.168.0.183:8080/ in a browser.
To tell the two nodes apart, add the following line to /usr/local/apache-tomcat-8.5.43/webapps/ROOT/index.jsp, just above the closing </body> tag:

<h2>备</h2>

Part 3. Configure the keepalived+haproxy high-availability load balancers on both nodes

1. Install keepalived

yum -y install keepalived

2. Edit the keepalived configuration file (/etc/keepalived/keepalived.conf)

! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 192.168.200.1
   smtp_connect_timeout 30
   router_id haproxy # on the backup node use a different router_id, e.g. "haproxy-1"; the two must not match
   vrrp_skip_check_adv_addr
   # vrrp_strict # keep vrrp_strict disabled; with it enabled only multicast works, not unicast mode
   vrrp_iptables # do not auto-add iptables rules, which would make this host unreachable
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state MASTER  # this node is the master; set "BACKUP" on the backup node
    interface ens33 # network interface the VRRP instance binds to
    virtual_router_id 51 # VRRP instance ID; master and backup of the same instance must use the same value, while different instances on the same network segment must use different values
    priority 100 # priority; the backup node must use a value lower than 100 (e.g. 80)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        #192.168.200.16
        #192.168.200.17
        #192.168.200.18
        192.168.0.220/ dev ens33 label ens33:0 # bind the VIP to ens33, labeled ens33:0; the backup node needs the same entry (note: the prefix length after "/" is missing here, so keepalived assigns the VIP with a /0 prefix, as seen below; write 192.168.0.220/24 for a proper /24 mask)
    }
    unicast_src_ip 192.168.0.156 # unicast source IP (this node's own address); on the backup node use 192.168.0.157
    unicast_peer {
        192.168.0.157 # unicast peer IP (the other node's address); on the backup node use 192.168.0.156
    }

}
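
For reference, a minimal sketch of what changes on the backup node (192.168.0.157), per the comments above; everything else, including virtual_ipaddress, stays the same:

   router_id haproxy-1

vrrp_instance VI_1 {
    state BACKUP
    virtual_router_id 51    # must match the master
    priority 80             # lower than the master's 100
    unicast_src_ip 192.168.0.157
    unicast_peer {
        192.168.0.156
    }
}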

3. Start keepalived on both nodes

# Primary keepalived:
[root@bogon keepalived]# systemctl start keepalived.service

[root@bogon keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-08-13 15:51:22 CST; 6min ago
  Process: 1452 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1453 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─1453 /usr/sbin/keepalived -D
           ├─1454 /usr/sbin/keepalived -D
           └─1455 /usr/sbin/keepalived -D

Aug 13 15:51:48 bogon Keepalived_healthcheckers[1454]: Adding sorry server [192.168.200.200]:1358 to VS [10.10.10.2]:1358
Aug 13 15:51:48 bogon Keepalived_healthcheckers[1454]: Removing alive servers from the pool for VS [10.10.10.2]:1358
Aug 13 15:51:48 bogon Keepalived_healthcheckers[1454]: Remote SMTP server [192.168.200.1]:25 connected.
Aug 13 15:51:48 bogon Keepalived_healthcheckers[1454]: Error reading data from remote SMTP server [192.168.200.1]:25.
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Timeout connecting server [192.168.201.100]:443.
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Check on service [192.168.201.100]:443 failed after 3 retry.
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Removing service [192.168.201.100]:443 from VS [192.168.200.100]:443
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Lost quorum 1-0=1 > 0 for VS [192.168.200.100]:443
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Remote SMTP server [192.168.200.1]:25 connected.
Aug 13 15:51:49 bogon Keepalived_healthcheckers[1454]: Error reading data from remote SMTP server [192.168.200.1]:25.

[root@bogon keepalived]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:ae:fb:8c brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.156/24 brd 192.168.0.255 scope global dynamic ens33
       valid_lft 5299sec preferred_lft 5299sec
    inet 192.168.0.220/0 scope global ens33:0 # the bound VIP (the /0 prefix comes from the missing prefix length in virtual_ipaddress)
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:feae:fb8c/64 scope link 
       valid_lft forever preferred_lft forever
# Backup keepalived:
[root@bogon keepalived]# systemctl start keepalived.service

[root@bogon keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-08-13 16:14:20 CST; 8min ago
  Process: 1386 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 1387 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─1387 /usr/sbin/keepalived -D
           ├─1388 /usr/sbin/keepalived -D
           └─1389 /usr/sbin/keepalived -D

Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Adding sorry server [192.168.200.200]:1358 to VS [10.10.10.2]:1358
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Removing alive servers from the pool for VS [10.10.10.2]:1358
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Remote SMTP server [192.168.200.1]:25 connected.
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Error reading data from remote SMTP server [192.168.200.1]:25.
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Timeout connecting server [192.168.201.100]:443.
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Check on service [192.168.201.100]:443 failed after 3 retry.
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Removing service [192.168.201.100]:443 from VS [192.168.200.100]:443
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Lost quorum 1-0=1 > 0 for VS [192.168.200.100]:443
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Remote SMTP server [192.168.200.1]:25 connected.
Aug 13 16:14:47 bogon Keepalived_healthcheckers[1388]: Error reading data from remote SMTP server [192.168.200.1]:25.

[root@bogon keepalived]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:c5:6b:34 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.157/24 brd 192.168.0.255 scope global dynamic ens33
       valid_lft 6058sec preferred_lft 6058sec
    inet 192.168.0.220/0 scope global ens33:0 # the bound VIP; this differs from what the documentation describes and needs further investigation
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fec5:6b34/64 scope link 
       valid_lft forever preferred_lft forever
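
To confirm failover, stop keepalived on the primary and check that the VIP moves to the backup, then restart it and watch the higher-priority node reclaim the address (a quick sketch):

# on the primary (192.168.0.156)
systemctl stop keepalived.service
# on the backup (192.168.0.157) the VIP should now appear
ip addr show ens33 | grep 192.168.0.220
# restart the primary; with priority 100 it takes the VIP back
systemctl start keepalived.service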

4. Configure kernel parameters on both load balancer nodes

[root@bogon keepalived]# vim /etc/sysctl.conf 
net.ipv4.ip_nonlocal_bind = 1   # allow binding to non-local IPs, so haproxy can bind the VIP even when this node does not hold it
net.ipv4.ip_forward = 1  # enable IP forwarding

[root@bogon keepalived]# sysctl -p
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1

5. Install and configure haproxy on both nodes

[root@bogon ~]# yum -y install haproxy
[root@bogon ~]# cd /etc/haproxy
[root@bogon haproxy]# cp haproxy.cfg haproxy.cfg.bak
[root@bogon haproxy]# vim haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     100000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats level admin  # admin level is required so the deploy script can disable/enable servers through this socket
    
    #nbproc 2     # number of worker processes to start
    #cpu-map 1 0  # pin process 1 to CPU core 0
    #cpu-map 2 1  # pin process 2 to CPU core 1



#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
#defaults
#    mode                    http
#    log                     global
#    option                  httplog
#    option                  dontlognull
#    option http-server-close
#    option forwardfor       except 127.0.0.0/8
#    option                  redispatch
#    retries                 3
#    timeout http-request    10s
#    timeout queue           1m
#    timeout connect         10s
#    timeout client          1m
#    timeout server          1m
#    timeout http-keep-alive 10s
#    timeout check           10s
#    maxconn                 100000

defaults     # defaults applied to all frontend, backend, and listen sections
    option http-keep-alive
    option  forwardfor  # pass the real client IP to the backends via X-Forwarded-For
    maxconn 100000
    mode http
    timeout connect 300000ms
    timeout client  300000ms
    timeout server  300000ms


#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
#frontend  main *:5000
#    acl url_static       path_beg       -i /static /images /javascript /stylesheets
#    acl url_static       path_end       -i .jpg .gif .png .css .js

#    use_backend static          if url_static
#    default_backend             app

#---------------------------------------------------------------------
# static backend for serving up images, stylesheets and such
#---------------------------------------------------------------------
#backend static
#    balance     roundrobin
#    server      static 127.0.0.1:4331 check

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
#backend app
#    balance     roundrobin
#    server  app1 127.0.0.1:5001 check
#    server  app2 127.0.0.1:5002 check
#    server  app3 127.0.0.1:5003 check
#    server  app4 127.0.0.1:5004 check

listen stats   # enable the status page
    mode http   # HTTP mode
    bind 0.0.0.0:8000   # port the status page binds to
    stats enable   # turn on the status page
    log global    # use the global log settings
    stats uri     /haproxy-status   # status page path
    stats auth    admin:123456   # username and password for the status page

listen  web_port      # the proxied service
    bind 192.168.0.220:80  # bind the VIP and port; requests to the VIP are scheduled to the backends
    mode http    # HTTP mode
    balance roundrobin  # scheduling algorithm: round robin
    log global   # use the global log settings
    server 192.168.0.112  192.168.0.112:8080  check inter 3000 fall 2 rise 5     # backend server
    server 192.168.0.183  192.168.0.183:8080  check inter 3000 fall 2 rise 5     # backend server

[root@bogon haproxy]# systemctl start haproxy.service
[root@bogon haproxy]# systemctl status haproxy.service
# If the service fails to start, check /var/log/messages for an error like:
# haproxy-systemd-wrapper: [ALERT] 224/170040 (15627) : Starting proxy stats: cannot bind socket
# Fix: run the following command, then restart the service:
[root@bogon haproxy]# setsebool -P haproxy_connect_any=1
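
The deployment script in Part 4 drives haproxy through the stats socket using socat, so it is worth verifying socket access on both nodes now. A sketch, assuming socat is not yet installed (the enable/disable commands require the socket to be configured with level admin, as above):

yum -y install socat
echo "show stat" | socat stdio /var/lib/haproxy/stats
echo "disable server web_port/192.168.0.112" | socat stdio /var/lib/haproxy/stats
echo "enable server web_port/192.168.0.112" | socat stdio /var/lib/haproxy/stats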

6. Check the haproxy status page

Browse to http://192.168.0.156:8000/haproxy-status or http://192.168.0.157:8000/haproxy-status.
The username is admin and the password is 123456.

7. Access the load balancer in a browser; requests are scheduled to the backends

Browse to http://192.168.0.220. The first request happens to land on the backup node; since the algorithm is round robin, a forced refresh shows the request scheduled to the primary.
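
The round-robin behavior can also be verified from the command line: with the markers added in Part 2, repeated requests to the VIP should alternate between the two pages (a sketch):

for i in 1 2 3 4; do curl -s http://192.168.0.220/ | grep '<h2>'; done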

Part 4. Create the Jenkins script that tests, deploys, and rolls back code via the job's choice parameters

Note: Jenkins, Gogs/GitLab, and SonarQube must be set up beforehand, and the sonar-scanner must be installed on the Jenkins host.

1. Create the dedicated Jenkins working directory

mkdir -pv /data/jenkins/worker

2. Where the script lives on the Jenkins server

# pwd
/data/jenkins

3. Create the script on the Jenkins server

Note: the parameters inside must be adapted to your environment, and the services marked "not yet deployed" above must be deployed first.

# vim project.sh
#!/bin/bash

# Jenkins parameter options
time=`date +%Y-%m-%d_%H-%M-%S`
# e.g. 2019-08-14_00-36-41
method=$1
group=$2
branch=$3

# backend Tomcat IP address groups
function ip_value(){
    if [[ "${group}" == "group1" ]];then
        ip_list="192.168.0.112"
        /usr/bin/echo ${ip_list}
    elif [[ "${group}" == "group2" ]];then
        ip_list="192.168.0.183"
        /usr/bin/echo ${ip_list}
    elif [[ "${group}" == "group3" ]];then
        ip_list="192.168.0.112 192.168.0.183"
        /usr/bin/echo ${ip_list}
    fi
}

# pull the code from git to the Jenkins server first
function code_deploy(){
    cd /data/jenkins/worker
    /usr/bin/rm -rf ./*
    /usr/bin/git clone -b ${branch} git@192.168.0.168:3000/sandu/web-page.git
}

# code test: check code quality with SonarQube
function code_test(){
    cd /data/jenkins/worker/web-page
    /usr/bin/cat > sonar-project.properties <<eof
sonar.projectKey=one123456
sonar.projectName=code-test
sonar.projectVersion=1.0
sonar.sources=./
sonar.language=python
sonar.sourceEncoding=UTF-8
eof
    /data/scanner/sonar-scanner/bin/sonar-scanner
}

# package and compress the code
function code_compress(){
    cd /data/jenkins/worker/
    /usr/bin/rm -f web-page/sonar-project.properties
    /usr/bin/tar -czv -f code.tar.gz web-page
}

# detach the backends from the load balancers
function haproxy_down(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        /usr/bin/ssh root@192.168.0.156 "echo 'disable server web_port/${ip}' | socat stdio /var/lib/haproxy/stats"
        /usr/bin/ssh root@192.168.0.157 "echo 'disable server web_port/${ip}' | socat stdio /var/lib/haproxy/stats"
    done
}

# take the backend services offline
function backend_stop(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        /usr/bin/ssh root@$ip "/usr/local/apache-tomcat-8.5.43/bin/shutdown.sh"
        # back up the current backend code; -C keeps the archive paths relative, so the rollback can extract straight into the Tomcat directory
        /usr/bin/ssh root@${ip} "tar -zcv -f /usr/local/apache-tomcat-8.5.43/back_code/${time}-backcode.tar.gz -C /usr/local/apache-tomcat-8.5.43 webapps"
    done
}

# deploy the code to the backend web root
function scp_backend(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        /usr/bin/scp /data/jenkins/worker/code.tar.gz root@${ip}:/usr/local/apache-tomcat-8.5.43/web_code/${time}-code.tar.gz
        /usr/bin/ssh root@${ip} "tar -xv -f /usr/local/apache-tomcat-8.5.43/web_code/${time}-code.tar.gz -C /usr/local/apache-tomcat-8.5.43/webapps"
    done
}

# start the backend services
function backend_start(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        /usr/bin/ssh root@$ip "/usr/local/apache-tomcat-8.5.43/bin/startup.sh"
        /usr/bin/sleep 6
    done
}

# test access to the backend services
function backend_test(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        status_code=`curl -I -s -m 6 -o /dev/null -w %{http_code} http://${ip}:8080`
        if [ ${status_code} -eq 200 ];then
            /usr/bin/echo "access test passed; backend code deployed successfully"
            # re-attach this backend to both load balancers once it passes the test
            /usr/bin/ssh root@192.168.0.156 "echo 'enable server web_port/${ip}' | socat stdio /var/lib/haproxy/stats"
            /usr/bin/ssh root@192.168.0.157 "echo 'enable server web_port/${ip}' | socat stdio /var/lib/haproxy/stats"
        else
            /usr/bin/echo "access test failed; redeploy the code to this backend"
        fi
    done
}

# roll the code back
function code_rollback(){
    for ip in ${ip_list};do
        /usr/bin/echo ${ip}
        # backend_stop has just archived the current (faulty) release, so the
        # previous release is the second-newest backup on each node
        /usr/bin/ssh root@${ip} 'prev=$(ls -1t /usr/local/apache-tomcat-8.5.43/back_code/*-backcode.tar.gz | sed -n 2p); tar -zxv -f ${prev} -C /usr/local/apache-tomcat-8.5.43'
    done
    /usr/bin/echo "tomcat code rolled back to the previous release; run the access test next"
}

# main dispatcher
function main(){
    case $1 in
        "deploy")
            ip_value;
            code_deploy;
            code_test;
            code_compress;
            haproxy_down;
            backend_stop;
            scp_backend;
            backend_start;
            backend_test;
        ;;
        "rollback")
            ip_value;
            haproxy_down;
            backend_stop;
            code_rollback;
            backend_start;
            backend_test;
        ;;
    esac
}
main "$1" "$2" "$3"
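
The script takes the same three arguments Jenkins passes in, for example (a sketch):

bash project.sh deploy group3 develop     # test and deploy the develop branch to both backends
bash project.sh rollback group1 master    # roll the primary backend back to the previous release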

4. Create the directories for code archives and backups on each backend

Primary Tomcat: mkdir -p /usr/local/apache-tomcat-8.5.43/{web_code,back_code}
Backup Tomcat: mkdir -p /usr/local/apache-tomcat-8.5.43/{web_code,back_code}

5. Set up passwordless SSH key logins from the Jenkins server

The Jenkins host needs passwordless key-based logins to both Tomcat servers and both keepalived/haproxy servers:
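
If the Jenkins host has no SSH key pair yet, generate one before running ssh-copy-id (a sketch; -N "" sets an empty passphrase so the script can log in non-interactively):

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa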

ssh-copy-id 192.168.0.112
ssh-copy-id 192.168.0.183
ssh-copy-id 192.168.0.156
ssh-copy-id 192.168.0.157

Part 5. Clone and push code from the GitLab server --- needs adapting (the 192.168.1.30 host in this part is from a different environment)

1) Clone the develop branch

root@ubuntu1804:~# git clone -b develop http://192.168.1.30/jie/web-page.git
Cloning into 'web-page'...
Username for 'http://192.168.1.30': jie
Password for 'http://jie@192.168.1.30': 
remote: Enumerating objects: 39, done.
remote: Counting objects: 100% (39/39), done.
remote: Compressing objects: 100% (22/22), done.
remote: Total 39 (delta 4), reused 27 (delta 4)
Unpacking objects: 100% (39/39), done.

2) List the files in the clone

root@ubuntu1804:~# ls web-page/
index.html  Math.php

3) Edit the code file

root@ubuntu1804:~/web-page# cat index.html 
<h1>welcome to tomcat page</h1>
<h3>simple-version v1</h3>

4) Push the v1 code to the GitLab repository

root@ubuntu1804:~/web-page# git add ./*
root@ubuntu1804:~/web-page# git commit -m 'v1'
[develop d0dd713] v1
 1 file changed, 2 insertions(+), 2 deletions(-)

root@ubuntu1804:~/web-page# git push
Username for 'http://192.168.1.30': jie
Password for 'http://jie@192.168.1.30': 
Counting objects: 3, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 316 bytes | 316.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
remote: 
remote: To create a merge request for develop, visit:
remote:   http://192.168.1.30/jie/web-page/merge_requests/new?merge_request%5Bsource_branch%5D=develop
remote: 
To http://192.168.1.30/jie/web-page.git
     c10f5bf..d0dd713  develop -> develop

Part 6. Jenkins job configuration and parameterized build

1. Create a project named code-test

2. On the project's Configure page, add choice/string parameters that correspond to the options in the script

General → This project is parameterized → Choice Parameter / String Parameter

  1. method
  • deploy   # deploy the code
  • rollback # roll the code back
  2. group
  • group1
  • group2
  • group3
  3. branch
  • master  # main branch
  • develop # development branch

3. Configure the Jenkins shell build step, which runs the script to test, deploy, and roll back the code

Build → Execute shell

cd /data/jenkins
bash project.sh $method $group $branch
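
A slightly more defensive variant of the build step (a sketch): set -e aborts the step if the cd fails rather than running the script from the wrong directory, and quoting guards against empty parameter values:

#!/bin/bash
set -e
cd /data/jenkins
bash project.sh "$method" "$group" "$branch"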

4. Save the configuration, then deploy to the first backend group (the primary Tomcat)

5. Console output

6. Browse directly to the primary Tomcat to verify the deployment

7. Deploy to the second backend group (the backup Tomcat) and verify success from the console output

8. Check the deployed code files on each backend to confirm the code reached the backend services

9. Browse directly to the backup Tomcat to verify the deployment

10. Finally, access the haproxy load balancer through a browser; requests are successfully scheduled to the backend Tomcats

11. Code analysis results (SonarQube)

12. Rollback test
