1. SaltStack in Practice for Automated Operations


1.1 Environment
linux-node1 (master, server side)  192.168.0.15
linux-node2 (minion, client side)  192.168.0.16
1.2 The three SaltStack run modes
Local: run states locally on a single machine
Master/Minion: the traditional mode (a server side and an agent side)
Salt SSH: agentless execution over SSH
1.3 The three core SaltStack capabilities
● Remote execution
● Configuration management
● Cloud management
1.4 Preparing the environment for the SaltStack installation
[root@linux-node1 ~]# cat /etc/redhat-release   ## check the OS release
CentOS release 6.7 (Final)
[root@linux-node1 ~]# uname -r   ## check the kernel version
2.6.32-573.el6.x86_64
[root@linux-node1 ~]# getenforce   ## check the SELinux status
Enforcing
[root@linux-node1 ~]# setenforce 0   ## switch SELinux to permissive mode
[root@linux-node1 ~]# getenforce
Permissive
[root@linux-node1 ~]# /etc/init.d/iptables stop   ## stop the firewall
[root@linux-node1 ~]# ifconfig eth0|awk -F '[: ]+' 'NR==2{print $4}'   ## extract the IP address
192.168.0.15
[root@linux-node1 ~]# hostname   ## check the hostname
linux-node1.zhurui.com
[root@linux-node1 yum.repos.d]# wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-6.repo   ## Salt is installed from the EPEL repository
1.5 Installing Salt
Master side:
[root@linux-node1 yum.repos.d]# yum install -y salt-master salt-minion   ## install both the salt-master and salt-minion packages
[root@linux-node1 yum.repos.d]# chkconfig salt-master on   ## start salt-master at boot
[root@linux-node1 yum.repos.d]# chkconfig salt-minion on   ## start salt-minion at boot
[root@linux-node1 yum.repos.d]# /etc/init.d/salt-master start   ## start salt-master
Starting salt-master daemon:                               [  OK  ]
The minion configuration file must be edited before the salt-minion service can be started:
[root@linux-node1 yum.repos.d]# grep '^[a-z]' /etc/salt/minion
master: 192.168.0.15   ## point the minion at the master
[root@linux-node1 yum.repos.d]# cat /etc/hosts
192.168.0.15 linux-node1.zhurui.com linux-node1   ## make sure both hostnames resolve
192.168.0.16 linux-node2.zhurui.com linux-node2
Name resolution check:
[root@linux-node1 yum.repos.d]# ping linux-node1.zhurui.com
PING linux-node1.zhurui.com (192.168.0.15) 56(84) bytes of data.
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=1 ttl=64 time=0.087 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=3 ttl=64 time=0.053 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=4 ttl=64 time=0.060 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=5 ttl=64 time=0.053 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=6 ttl=64 time=0.052 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=7 ttl=64 time=0.214 ms
64 bytes from linux-node1.zhurui.com (192.168.0.15): icmp_seq=8 ttl=64 time=0.061 ms
[root@linux-node1 yum.repos.d]# /etc/init.d/salt-minion start   ## start the minion on the master host as well
Starting salt-minion daemon:                               [  OK  ]
[root@linux-node1 yum.repos.d]#
Minion side:
[root@linux-node2 ~]# yum install -y salt-minion   ## only the salt-minion (client) package is needed here
[root@linux-node2 ~]# chkconfig salt-minion on   ## start salt-minion at boot
[root@linux-node2 ~]# grep '^[a-z]' /etc/salt/minion   ## point this minion at the master as well
master: 192.168.0.15
[root@linux-node2 ~]# /etc/init.d/salt-minion start   ## then start the minion
Starting salt-minion daemon:                               [  OK  ]
1.6 Salt key authentication
1.6.1 Before salt-key -a linux* is run, the /etc/salt/pki/master directory structure looks as follows (pending minion keys sit under minions_pre):

1.6.2 Running salt-key -a linux* accepts the keys, after which the files under minions_pre are moved into the minions directory:
[root@linux-node1 minion]# salt-key -a linux*
The following keys are going to be accepted:
Unaccepted Keys:
linux-node1.zhurui.com
linux-node2.zhurui.com
Proceed? [n/Y] Y
Key for minion linux-node1.zhurui.com accepted.
Key for minion linux-node2.zhurui.com accepted.
[root@linux-node1 minion]# salt-key
Accepted Keys:
linux-node1.zhurui.com
linux-node2.zhurui.com
Denied Keys:
Unaccepted Keys:
Rejected Keys:
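A few other salt-key options are handy for day-to-day key management; they are not part of the original walkthrough, so treat the lines below as an optional aside:
[root@linux-node1 minion]# salt-key -L                          ## list keys in every state (accepted, unaccepted, rejected)
[root@linux-node1 minion]# salt-key -A                          ## accept all pending keys at once
[root@linux-node1 minion]# salt-key -d linux-node2.zhurui.com   ## delete a single minion's key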

1.6.3 After the keys are accepted, the directory structure changes as follows:

1.6.4 At the same time, a copy of the master's public key appears on the minion side under /etc/salt/pki/minion/:

1.7 Salt remote execution in detail
1.7.1 The salt '*' test.ping command
[root@linux-node1 master]# salt '*' test.ping   ## test is a module and ping is a function inside that module
linux-node2.zhurui.com:
    True
linux-node1.zhurui.com:
    True
[root@linux-node1 master]# 
1.7.2 The salt '*' cmd.run 'uptime' command
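cmd.run executes an arbitrary shell command on every targeted minion and returns the output per minion. The screenshot is missing from the original post, so the following is only a sketch of the call and its typical output (the uptime figures are illustrative):
[root@linux-node1 master]# salt '*' cmd.run 'uptime'   ## run uptime on all accepted minions
linux-node1.zhurui.com:
     05:10:01 up 1 day, 22:40,  3 users,  load average: 0.00, 0.01, 0.05
linux-node2.zhurui.com:
     22:41:33 up 2 days, 16:21,  2 users,  load average: 0.00, 0.00, 0.00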

1.8 SaltStack configuration management
1.8.1 Edit the /etc/salt/master configuration file and uncomment the file_roots section.
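The screenshot of the edited file is missing from the original post; the uncommented block is normally the stock default shown below (a sketch, matching the /srv/salt directory created in the next step):
file_roots:
  base:
    - /srv/salt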

1.8.2 Next, prepare the state tree on the master and restart salt-master:
[root@linux-node1 master]# ls /srv/
[root@linux-node1 master]# mkdir /srv/salt
[root@linux-node1 master]# /etc/init.d/salt-master restart
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                                 [  OK  ]
[root@linux-node1 salt]# cat apache.sls   ## create apache.sls under /srv/salt/ (at this point it is still empty)
[root@linux-node1 salt]# salt '*' state.sls apache   ## then apply the state
Because the state file is still empty, the run fails with an error:

Edit apache.sls and add the following:
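The file itself only appeared as a screenshot in the original post. Reconstructed from the state output that follows (IDs apache-install and apache-service, pkg.installed for httpd and httpd-devel, service.running with enable), it would look roughly like this:
apache-install:
  pkg.installed:
    - names:
      - httpd
      - httpd-devel

apache-service:
  service.running:
    - name: httpd
    - enable: True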
The run then succeeds:
[root@linux-node1 salt]# salt '*' state.sls apache
linux-node2.zhurui.com:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: Package httpd is already installed.
     Started: 22:38:52.954973
    Duration: 1102.909 ms
     Changes:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd-devel
      Result: True
     Comment: Package httpd-devel is already installed.
     Started: 22:38:54.058190
    Duration: 0.629 ms
     Changes:
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: Service httpd has been enabled, and is running
     Started: 22:38:54.059569
    Duration: 1630.938 ms
     Changes:
              ----------
              httpd:
                  True
Summary
------------
Succeeded: 3 (changed=1)
Failed:    0
------------
Total states run:     3
linux-node1.zhurui.com:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: Package httpd is already installed.
     Started: 05:01:17.491217
    Duration: 1305.282 ms
     Changes:
----------
          ID: apache-install
    Function: pkg.installed
        Name: httpd-devel
      Result: True
     Comment: Package httpd-devel is already installed.
     Started: 05:01:18.796746
    Duration: 0.64 ms
     Changes:
----------
          ID: apache-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: Service httpd has been enabled, and is running
     Started: 05:01:18.798131
    Duration: 1719.618 ms
     Changes:
              ----------
              httpd:
                  True
Summary
------------
Succeeded: 3 (changed=1)
Failed:    0
------------
Total states run:     3
[root@linux-node1 salt]#
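Before applying a state for real, it can also be previewed with the test=True keyword argument, which reports what would change without changing anything; a quick sketch:
[root@linux-node1 salt]# salt '*' state.sls apache test=True   ## dry run: show pending changes only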
1.8.3 Verify that SaltStack installed and started httpd successfully
linux-node1:
[root@linux-node1 salt]# lsof -i:80   ## httpd is up and listening
COMMAND  PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
httpd   7397   root    4u  IPv6  46164      0t0  TCP *:http (LISTEN)
httpd   7399 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)
httpd   7400 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)
httpd   7401 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)
httpd   7403 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)
httpd   7404 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)
httpd   7405 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)
httpd   7406 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)
httpd   7407 apache    4u  IPv6  46164      0t0  TCP *:http (LISTEN)
linux-node2:
[root@linux-node2 pki]# lsof -i:80
COMMAND   PID   USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
httpd   12895   root    4u  IPv6  47532      0t0  TCP *:http (LISTEN)
httpd   12897 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)
httpd   12898 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)
httpd   12899 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)
httpd   12901 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)
httpd   12902 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)
httpd   12906 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)
httpd   12908 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)
httpd   12909 apache    4u  IPv6  47532      0t0  TCP *:http (LISTEN)
[root@linux-node2 pki]# 
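As an optional extra check not shown in the original walkthrough, the listening service can be probed over HTTP:
[root@linux-node1 salt]# curl -I http://192.168.0.16/   ## expect an HTTP response header from httpd on node2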
1.8.4 SaltStack state management with highstate

[root@linux-node1 salt]# salt '*' state.highstate   ## apply the states assigned in top.sls to every targeted minion
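state.highstate applies whatever /srv/salt/top.sls assigns to each minion, so that file must exist first. A minimal sketch that simply assigns the apache state from above to every minion (the grain-based variant appears in section 2.1.7):
base:
  '*':
    - apache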
2.1 The SaltStack Grains data system
SaltStack has two data systems:
● Grains
● Pillar
2.1.1 Use salt to list the grains available on a minion:
[root@linux-node1 salt]# salt 'linux-node1*' grains.ls
linux-node1.zhurui.com:
    - SSDs
    - biosreleasedate
    - biosversion
    - cpu_flags
    - cpu_model
    - cpuarch
    - domain
    - fqdn
    - fqdn_ip4
    - fqdn_ip6
    - gpus
    - host
    - hwaddr_interfaces
    - id
    - init
    - ip4_interfaces
    - ip6_interfaces
    - ip_interfaces
    - ipv4
    - ipv6
    - kernel
    - kernelrelease
    - locale_info
    - localhost
    - lsb_distrib_codename
    - lsb_distrib_id
    - lsb_distrib_release
    - machine_id
    - manufacturer
    - master
    - mdadm
    - mem_total
    - nodename
    - num_cpus
    - num_gpus
    - os
    - os_family
    - osarch
    - oscodename
    - osfinger
    - osfullname
    - osmajorrelease
    - osrelease
    - osrelease_info
    - path
    - productname
    - ps
    - pythonexecutable
    - pythonpath
    - pythonversion
    - saltpath
    - saltversion
    - saltversioninfo
    - selinux
    - serialnumber
    - server_id
    - shell
    - virtual
    - zmqversion
[root@linux-node1 salt]#
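Individual grains can also be fetched by key with grains.item; a sketch, with values taken from the grains.items dump in the next section:
[root@linux-node1 salt]# salt 'linux-node1*' grains.item os osrelease   ## fetch only the listed grains
linux-node1.zhurui.com:
    ----------
    os:
        CentOS
    osrelease:
        6.7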
2.1.2 Dump all grains and their values with grains.items:
[root@linux-node1 salt]# salt 'linux-node1*' grains.items
linux-node1.zhurui.com:
    ----------
    SSDs:
    biosreleasedate:
        07/31/2013
    biosversion:
        6.00
    cpu_flags:
        - fpu
        - vme
        - de
        - pse
        - tsc
        - msr
        - pae
        - mce
        - cx8
        - apic
        - sep
        - mtrr
        - pge
        - mca
        - cmov
        - pat
        - pse36
        - clflush
        - dts
        - mmx
        - fxsr
        - sse
        - sse2
        - ss
        - syscall
        - nx
        - rdtscp
        - lm
        - constant_tsc
        - up
        - arch_perfmon
        - pebs
        - bts
        - xtopology
        - tsc_reliable
        - nonstop_tsc
        - aperfmperf
        - unfair_spinlock
        - pni
        - ssse3
        - cx16
        - sse4_1
        - sse4_2
        - x2apic
        - popcnt
        - hypervisor
        - lahf_lm
        - arat
        - dts
    cpu_model:
        Intel(R) Core(TM) i3 CPU M 380 @ 2.53GHz
    cpuarch:
        x86_64
    domain:
        zhurui.com
    fqdn:
        linux-node1.zhurui.com
    fqdn_ip4:
        - 192.168.0.15
    fqdn_ip6:
    gpus:
        |_
          ----------
          model:
              SVGA II Adapter
          vendor:
              unknown
    host:
        linux-node1
    hwaddr_interfaces:
        ----------
        eth0:
            00:0c:29:fc:ba:90
        lo:
            00:00:00:00:00:00
    id:
        linux-node1.zhurui.com
    init:
        upstart
    ip4_interfaces:
        ----------
        eth0:
            - 192.168.0.15
        lo:
            - 127.0.0.1
    ip6_interfaces:
        ----------
        eth0:
            - fe80::20c:29ff:fefc:ba90
        lo:
            - ::1
    ip_interfaces:
        ----------
        eth0:
            - 192.168.0.15
            - fe80::20c:29ff:fefc:ba90
        lo:
            - 127.0.0.1
            - ::1
    ipv4:
        - 127.0.0.1
        - 192.168.0.15
    ipv6:
        - ::1
        - fe80::20c:29ff:fefc:ba90
    kernel:
        Linux
    kernelrelease:
        2.6.32-573.el6.x86_64
    locale_info:
        ----------
        defaultencoding:
            UTF8
        defaultlanguage:
            en_US
        detectedencoding:
            UTF-8
    localhost:
        linux-node1.zhurui.com
    lsb_distrib_codename:
        Final
    lsb_distrib_id:
        CentOS
    lsb_distrib_release:
        6.7
    machine_id:
        da5383e82ce4b8d8a76b5a3e00000010
    manufacturer:
        VMware, Inc.
    master:
        192.168.0.15
    mdadm:
    mem_total:
        556
    nodename:
        linux-node1.zhurui.com
    num_cpus:
        1
    num_gpus:
        1
    os:
        CentOS
    os_family:
        RedHat
    osarch:
        x86_64
    oscodename:
        Final
    osfinger:
        CentOS-6
    osfullname:
        CentOS
    osmajorrelease:
        6
    osrelease:
        6.7
    osrelease_info:
        - 6
        - 7
    path:
        /sbin:/usr/sbin:/bin:/usr/bin
    productname:
        VMware Virtual Platform
    ps:
        ps -efH
    pythonexecutable:
        /usr/bin/python2.6
    pythonpath:
        - /usr/bin
        - /usr/lib64/python26.zip
        - /usr/lib64/python2.6
        - /usr/lib64/python2.6/plat-linux2
        - /usr/lib64/python2.6/lib-tk
        - /usr/lib64/python2.6/lib-old
        - /usr/lib64/python2.6/lib-dynload
        - /usr/lib64/python2.6/site-packages
        - /usr/lib64/python2.6/site-packages/gtk-2.0
        - /usr/lib/python2.6/site-packages
    pythonversion:
        - 2
        - 6
        - 6
        - final
        - 0
    saltpath:
        /usr/lib/python2.6/site-packages/salt
    saltversion:
        2015.5.10
    saltversioninfo:
        - 2015
        - 5
        - 10
        - 0
    selinux:
        ----------
        enabled:
            True
        enforced:
            Permissive
    serialnumber:
        VMware-564d8f43912d3a99-eb c4 3b a9 34 fc ba 90
    server_id:
        295577080
    shell:
        /bin/bash
    virtual:
        VMware
    zmqversion:
        3.2.5
2.1.3 Get every IP address on node1:
[root@linux-node1 salt]# salt 'linux-node1*' grains.get ip_interfaces:eth0   ## grains make it easy to collect this kind of information
linux-node1.zhurui.com:
    - 192.168.0.15
    - fe80::20c:29ff:fefc:ba90
2.1.4 Use Grains to collect system information and to target minions:
[root@linux-node1 salt]# salt 'linux-node1*' grains.get os 
linux-node1.zhurui.com:
    CentOS
[root@linux-node1 salt]# salt -G os:CentOS cmd.run 'w'   ## -G targets minions by grain; w shows who is logged in on each match
linux-node2.zhurui.com:
     20:29:40 up 2 days, 16:09,  2 users,  load average: 0.00, 0.00, 0.00
    USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
    root     tty1     -                Sun14   29:07m  0.32s  0.32s -bash
    root     pts/0    192.168.0.101    Sun20   21:41m  0.46s  0.46s -bash
linux-node1.zhurui.com:
     02:52:01 up 1 day, 22:31,  3 users,  load average: 4.00, 4.01, 4.00
    USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
    root     tty1     -                Sat20   24:31m  0.19s  0.19s -bash
    root     pts/0    192.168.0.101    Sun02    1.00s  1.33s  0.68s /usr/bin/python
    root     pts/1    192.168.0.101    Sun04   21:36m  0.13s  0.13s -bash
[root@linux-node1 salt]# 
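Grain matches can also be combined with other matchers through the compound matcher -C, where G@ introduces a grain expression; a sketch:
[root@linux-node1 salt]# salt -C 'G@os:CentOS and linux-node1*' test.ping   ## grain match AND hostname glob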
2.1.5 Define a roles grain and run a command on the hosts whose roles include memcache
[root@linux-node1 salt]# vim /etc/salt/minion   ## edit the minion configuration file and uncomment the following lines
grains:
  roles:
    - webserver
    - memcache
[root@linux-node1 salt]# /etc/init.d/salt-minion restart   ## restart the minion so the new grains take effect
Stopping salt-minion daemon:                               [  OK  ]
Starting salt-minion daemon:                               [  OK  ]
[root@linux-node1 salt]# 
[root@linux-node1 salt]# salt -G 'roles:memcache' cmd.run 'echo zhurui'   ## target minions whose roles grain contains memcache and run the command
linux-node1.zhurui.com:
    zhurui
[root@linux-node1 salt]#

2.1.6 Custom grains can also be defined in a dedicated /etc/salt/grains file
[root@linux-node1 salt]# cat /etc/salt/grains 
web: nginx
[root@linux-node1 salt]# /etc/init.d/salt-minion restart   ## restart the service after changing the grains file
Stopping salt-minion daemon:                               [  OK  ]
Starting salt-minion daemon:                               [  OK  ]
[root@linux-node1 salt]# 
[root@linux-node1 salt]# salt -G web:nginx cmd.run 'w'   ## run w on minions whose web grain is nginx
linux-node1.zhurui.com:
     03:31:07 up 1 day, 23:11,  3 users,  load average: 4.11, 4.03, 4.01
    USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
    root     tty1     -                Sat20   25:10m  0.19s  0.19s -bash
    root     pts/0    192.168.0.101    Sun02    0.00s  1.41s  0.63s /usr/bin/python
    root     pts/1    192.168.0.101    Sun04   22:15m  0.13s  0.13s -bash
 
Grains are used to:
1. Collect low-level system information
2. Match minions in remote execution targets
3. Match minions in top.sls
 
2.1.7 Minions can also be matched by grain in /srv/salt/top.sls
 
[root@linux-node1 salt]# cat /srv/salt/top.sls 
base:
  'web:nginx':
    - match: grain
    - apache
[root@linux-node1 salt]# 
2.2 The SaltStack Pillar data system
2.2.1 First, enable the built-in pillar data (pillar_opts, around line 552 of the master configuration file)
 
[root@linux-node1 salt]# grep '^[a-z]' /etc/salt/master 
file_roots:
pillar_opts: True
[root@linux-node1 salt]# /etc/init.d/salt-master restart   ## restart the master
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                                 [  OK  ]
[root@linux-node1 salt]# salt '*' pillar.items   ## verify that the built-in pillar data is now visible
[root@linux-node1 salt]# vim /etc/salt/master   ## uncomment the following lines (around line 529)
pillar_roots:
  base:
    - /srv/pillar
[root@linux-node1 salt]# mkdir /srv/pillar
[root@linux-node1 salt]# /etc/init.d/salt-master restart   ## restart the master
Stopping salt-master daemon:                               [  OK  ]
Starting salt-master daemon:                                 [  OK  ]
[root@linux-node1 salt]# vim /srv/pillar/apache.sls
[root@linux-node1 salt]# cat /srv/pillar/apache.sls
{% if grains['os'] == 'CentOS' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}
[root@linux-node1 salt]# 
Next, specify which minions can see this pillar:
[root@linux-node1 salt]# cat /srv/pillar/top.sls 
base:
  '*':
    - apache

 
[root@linux-node1 salt]# salt '*' pillar.items   ## verify after the top.sls and apache.sls changes
linux-node1.zhurui.com:
    ----------
    apache:
        httpd
linux-node2.zhurui.com:
    ----------
    apache:
        httpd
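The point of defining a pillar value is to consume it from states. A sketch of how the apache.sls state from section 1.8 could use it instead of hard-coding the package name (an illustrative variant, not part of the original post):
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}

apache-service:
  service.running:
    - name: {{ pillar['apache'] }}
    - enable: True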
2.2.2 Targeting hosts with Pillar data

Error handling: if the pillar match does not work right away, refresh the pillar data first:
[root@linux-node1 salt]# salt '*' saltutil.refresh_pillar   ## push the refreshed pillar data to all minions
linux-node2.zhurui.com:
    True
linux-node1.zhurui.com:
    True
[root@linux-node1 salt]# 

[root@linux-node1 salt]# salt -I 'apache:httpd' test.ping   ## -I targets minions by pillar value
linux-node1.zhurui.com:
    True
linux-node2.zhurui.com:
    True
[root@linux-node1 salt]# 
 
2.3 Differences between the two SaltStack data systems
Name   | Stored on   | Data type    | Collection / refresh                                                                            | Typical use
Grains | minion side | static data  | collected when the minion starts; can be refreshed with saltutil.sync_grains                   | basic minion facts, e.g. for matching minions; also useful for asset management
Pillar | master side | dynamic data | defined on the master and assigned to specific minions; refreshed with saltutil.refresh_pillar | master-assigned data visible only to the targeted minions; suited to sensitive data