SaltStack Chapter 1

1-1 Configuration Management and SaltStack Overview

https://docs.saltstack.com/en/getstarted/
https://docs.saltstack.com/en/latest/

Open the SaltStack repository pages:

https://repo.saltstack.com
https://repo.saltstack.com/#rhel

Add the SaltStack package repository:

node1 ~]# yum install https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm -y
node2 ~]# yum install https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm -y
Or use a mirror (e.g. Aliyun) as the repo source: download the mirrored repo file into /etc/yum.repos.d/:
~]# cd /etc/yum.repos.d/ && wget <URL of the mirrored SaltStack repo file>
[root@linux-node1 yum.repos.d]# ll
-rw-r--r--  1 root root  243 May 10 07:25 salt-latest.repo

Install Salt:

[root@linux-node1 ~]# yum install salt-master salt-minion -y
[root@linux-node2 ~]# yum install salt-minion -y

1-2 SaltStack Basics: Remote Execution

[root@linux-node1 ~]# salt --version
salt 2017.7.1 (Nitrogen)
[root@linux-node1 ~]# systemctl start salt-master
[root@linux-node1 ~]# vim /etc/salt/minion
master: 192.168.56.11
[root@linux-node1 ~]# systemctl start salt-minion
[root@linux-node2 ~]# vim /etc/salt/minion
master: 192.168.56.11
[root@linux-node2 ~]# systemctl start salt-minion
[root@linux-node1 master]# salt-key 
Accepted Keys:
Denied Keys:
Unaccepted Keys:
linux-node1
linux-node2
Rejected Keys:
[root@linux-node1 master]# salt-key -a linux*
The following keys are going to be accepted:
Unaccepted Keys:
linux-node1
linux-node2
Proceed? [n/Y] y
Key for minion linux-node1 accepted.
Key for minion linux-node2 accepted.
[root@linux-node1 master]# salt-key
Accepted Keys:
linux-node1
linux-node2
Denied Keys:
Unaccepted Keys:
Rejected Keys:
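
If a key ever needs to be removed, for example after a minion is reinstalled, salt-key can delete it so the new key can be accepted again; a quick sketch (using linux-node2 as the example minion ID):
[root@linux-node1 master]# salt-key -d linux-node2
[root@linux-node1 master]# salt-key -a linux-node2
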
[root@linux-node1 ~]# salt '*' test.ping
linux-node2:
    True
linux-node1:
    True
[root@linux-node1 ~]# salt '*' cmd.run 'uptime'
[root@linux-node1 ~]# salt '*' cmd.run 'mkdir /tmp/hehe'
linux-node2:
linux-node1:
# ZeroMQ publish/subscribe: every minion connects to port 4505 on the master
[root@linux-node1 ~]# lsof -ni:4505
COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
salt-mast 2871 root   16u  IPv4  21307      0t0  TCP *:4505 (LISTEN)
salt-mast 2871 root   18u  IPv4  26899      0t0  TCP 192.168.56.11:4505->192.168.56.11:52262 (ESTABLISHED)
salt-mast 2871 root   19u  IPv4  27038      0t0  TCP 192.168.56.11:4505->192.168.56.12:54354 (ESTABLISHED)
salt-mini 4520 root   27u  IPv4  26898      0t0  TCP 192.168.56.11:52262->192.168.56.11:4505 (ESTABLISHED)
[root@linux-node1 ~]# salt '*' cmd.run 'date'
linux-node2:
    Wed Sep 20 08:28:40 CST 2017
linux-node1:
    Wed Sep 20 08:28:41 CST 2017
# Commands are published on port 4505; return data from the minions comes back on port 4506
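
The return path can be checked the same way: port 4506 is the ZeroMQ request/reply channel the minions use to fetch files and deliver results (output will differ per host):
[root@linux-node1 ~]# lsof -ni:4506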

1-3 SaltStack Basics: Configuration Management (1)

https://docs.saltstack.com/en/latest/topics/yaml

# Edit the master configuration file
[root@linux-node1 ~]# vim /etc/salt/master
file_roots:
  base:
    - /srv/salt/base
  dev:
    - /srv/salt/dev
  test:
    - /srv/salt/test
  prod:
    - /srv/salt/prod

[root@linux-node1 ~]# mkdir -p /srv/salt/{base,dev,test,prod}
[root@linux-node1 ~]# systemctl restart salt-master
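
Each key under file_roots (base, dev, test, prod) is a separate environment; base is used by default, and a state can be pulled from another environment with the saltenv argument. A sketch, assuming a matching web/apache.sls also exists under /srv/salt/dev:
[root@linux-node1 ~]# salt 'linux-node2' state.sls web.apache saltenv=dev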

Write a YAML state (SLS) file:

[root@linux-node1 ~]# cd /srv/salt/base/
[root@linux-node1 base]# mkdir web
[root@linux-node1 base]# cd web/
[root@linux-node1 web]# vim  apache.sls
apache-install:
  pkg.installed:
    - name: httpd

apache-service:
  service.running:
    - name: httpd
    - enable: True
# Apply the state:
[root@linux-node1 web]# salt 'linux-node2' state.sls web.apache
[root@linux-node2 ~]# lsof -i:80
httpd   6529   root    4u  IPv6  26156      0t0
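
The same SLS can also manage the service's configuration file so that a config change restarts httpd automatically. A minimal sketch, assuming an httpd.conf has been placed under /srv/salt/base/web/files/ first (the apache-config ID and the files/ path are illustrative):
apache-config:
  file.managed:
    - name: /etc/httpd/conf/httpd.conf
    - source: salt://web/files/httpd.conf
    - user: root
    - group: root
    - mode: 644

apache-service:
  service.running:
    - name: httpd
    - enable: True
    - watch:
      - file: apache-config

With the watch requisite, Salt restarts httpd whenever the managed config file changes.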

1-4 SaltStack Basics: Configuration Management (2)

[root@linux-node1 ~]# cd /srv/salt/base/
[root@linux-node1 base]# vim top.sls
base:
  '*':
    - web.apache
[root@linux-node1 ~]# salt '*' state.highstate
[root@linux-node1 ~]# salt '*' state.highstate test=True
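
Before applying, the rendered highstate for a minion can be previewed without changing anything; a sketch:
[root@linux-node1 ~]# salt 'linux-node2' state.show_highstate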

1-5 SaltStack Data System: Grains

[root@linux-node1 ~]# salt '*' grains.items
[root@linux-node1 ~]# salt '*' grains.ls
[root@linux-node1 ~]# salt '*' grains.get saltversion
linux-node1:
    2017.7.1
linux-node2:
    2017.7.1
[root@linux-node1 ~]# salt '*' grains.get ip4_interfaces:eth0
linux-node1:
    - 192.168.56.11
linux-node2:
    - 192.168.56.12
[root@linux-node1 ~]# salt -G 'os:CentOS' cmd.run 'uptime'
linux-node2:
     07:40:15 up 45 min,  1 user,  load average: 0.00, 0.01, 0.02
linux-node1:
     07:40:15 up 46 min,  1 user,  load average: 0.00, 0.01, 0.06
[root@linux-node1 ~]# salt -G 'init:systemd' cmd.run 'uptime'
linux-node2:
     07:43:03 up 48 min,  1 user,  load average: 0.00, 0.01, 0.02
linux-node1:
     07:43:03 up 48 min,  1 user,  load average: 0.08, 0.03, 0.05
# Grains can also be used as a matcher in top.sls:
[root@linux-node1 ~]# vim /srv/salt/base/top.sls
base:
  'os:CentOS':
    - match: grain
    - web.apache
[root@linux-node1 ~]# salt '*' state.highstate

https://docs.saltstack.com/en/latest/topics/pillar/

Define a custom grain:

# Custom grains are configured on the minion side
[root@linux-node1 ~]# vim /etc/salt/grains
test-grains: linux-node2
[root@linux-node1 ~]# systemctl restart salt-minion
[root@linux-node1 ~]# salt '*' grains.get test-grains
linux-node1:
    linux-node2
linux-node2:
[root@linux-node1 ~]# vim /etc/salt/grains           
test-grains: linux-node2
hehe: haha
[root@linux-node1 ~]# salt '*' saltutil.sync_grains
linux-node1:
linux-node2:
[root@linux-node1 ~]# salt '*' grains.get hehe
linux-node1:
    haha
linux-node2:
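
Custom grains can also be set in the minion configuration file itself, under a grains: key, instead of /etc/salt/grains; a sketch (the roles value is illustrative):
[root@linux-node1 ~]# vim /etc/salt/minion
grains:
  roles:
    - webserver
[root@linux-node1 ~]# systemctl restart salt-minion
[root@linux-node1 ~]# salt -G 'roles:webserver' test.ping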

1-6 SaltStack Data System: Pillar

https://docs.saltstack.com/en/latest/topics/pillar/

# Pillar data is secure: only the minions it is assigned to can see its key-value data
To expose the master's own configuration options as pillar data, set pillar_opts on the master:
[root@linux-node1 ~]# vim /etc/salt/master
pillar_opts: True
# I will leave it disabled here

Define custom pillar data:

[root@linux-node1 ~]# vim /etc/salt/master
pillar_roots:
  base:
    - /srv/pillar/base
  prod:
    - /srv/pillar/prod
[root@linux-node1 ~]# mkdir -p /srv/pillar/{base,prod}
[root@linux-node1 ~]# systemctl restart salt-master
[root@linux-node1 ~]# vim /srv/pillar/base/apache.sls
{% if grains['os'] == 'CentOS' %}
apache: httpd
{% elif grains['os'] == 'Debian' %}
apache: apache2
{% endif %}

Next, specify which minions get this pillar:

[root@linux-node1 ~]# vim /srv/pillar/base/top.sls
base:
  '*':
   - apache
# In the base environment, all minions can access this pillar
[root@linux-node1 base]# ll
total 8
-rw-r--r-- 1 root root 113 Sep 21 08:39 apache.sls
-rw-r--r-- 1 root root  25 Sep 21 08:45 top.sls
[root@linux-node1 base]# pwd
/srv/pillar/base
[root@linux-node1 ~]# salt '*' pillar.items
linux-node2:
    ----------
    apache:
        httpd
linux-node1:
    ----------
    apache:
        httpd
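
To fetch a single key instead of everything, pillar.item (or pillar.get) can be used; a sketch:
[root@linux-node1 ~]# salt '*' pillar.item apache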

Modify the earlier apache.sls into a generic form that reads the package name from pillar:

[root@linux-node1 ~]# vim /srv/salt/base/web/apache.sls
apache-install:
  pkg.installed:
    - name: {{ pillar['apache'] }}

apache-service:
  service.running:
    - name: {{ pillar['apache'] }}
    - enable: True
[root@linux-node1 ~]# salt '*' state.highstate
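
A common variation is to look the value up with pillar.get and supply a default, so the state still renders on a minion where the pillar key is missing; a sketch of the same apache-install state:
apache-install:
  pkg.installed:
    - name: {{ salt['pillar.get']('apache', 'httpd') }}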

http://www.runoob.com

1-7 SaltStack Data System: Grains vs. Pillar

Comparison:

Grains
  - Stored on: the Minion
  - Data type: static
  - Collection/update: gathered when the minion starts; can be refreshed with saltutil.sync_grains
  - Typical use: stores the minion's own basic data, e.g. for targeting minions or as asset-management information

Pillar
  - Stored on: the Master
  - Data type: dynamic
  - Collection/update: defined on the master and assigned to specific minions; refreshed with saltutil.refresh_pillar
  - Typical use: stores data the master assigns to particular minions; only those minions can see it, so it is suitable for sensitive data
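
The corresponding refresh commands, for reference:
[root@linux-node1 ~]# salt '*' saltutil.sync_grains
[root@linux-node1 ~]# salt '*' saltutil.refresh_pillar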

1-8 SaltStack Remote Execution: Targeting

[root@linux-node1 ~]# salt '*' service.status sshd
linux-node1:
    True
linux-node2:
    True

Targets

Matching by minion ID:
1. Minion ID

[root@linux-node1 ~]# salt 'linux-node1' service.status sshd
linux-node1:
    True

2. Wildcards (globbing)

[root@linux-node1 ~]# salt 'linux-node[1-2]' service.status sshd  
linux-node1:
    True
linux-node2:
    True

3. List

[root@linux-node1 ~]# salt -L 'linux-node1,linux-node2' test.ping
linux-node1:
    True
linux-node2:
    True

4. Regular expression

[root@linux-node1 ~]# salt -E 'linux-(node1|node2)' test.ping
linux-node1:
    True
linux-node2:
    True

Not based on the minion ID:

1. Grains

[root@linux-node1 ~]# salt -G 'os:CentOS' test.ping
linux-node1:
    True
linux-node2:
    True

2. Subnet / IP address

[root@linux-node1 ~]# salt -S '192.168.56.11' test.ping
linux-node1:
    True
[root@linux-node1 ~]# salt -S '192.168.56.0/24' test.ping  
linux-node1:
    True
linux-node2:
    True

3. Pillar

[root@linux-node1 ~]# salt -I 'apache:httpd' test.ping
linux-node1:
    True
linux-node2:
    True

Node groups (compound matching)

https://docs.saltstack.com/en/latest/topics/targeting/nodegroups.html

[root@linux-node1 ~]# vim /etc/salt/master
nodegroups:
  web-group:  'L@linux-node1,linux-node2'
[root@linux-node1 ~]# salt -N web-group test.ping
linux-node2:
    True
linux-node1:
    True
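
Node groups are written in compound matcher syntax, which can also be used directly on the command line with -C to combine grains, lists, globs and so on; a sketch:
[root@linux-node1 ~]# salt -C 'G@os:CentOS and L@linux-node1,linux-node2' test.ping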

Batch execution

https://docs.saltstack.com/en/latest/topics/targeting/batch.html

# Instead of hitting every minion at once, run in batches, e.g. 10 machines at a time. With only 2 minions here the effect is hard to see; in production you might, for example, restart servers 5 at a time.
[root@linux-node1 ~]# salt '*' -b 10 test.ping

Executing run on ['linux-node1', 'linux-node2']

jid:
    20170924063750485185
linux-node2:
    True
retcode:
    0
jid:
    20170924063750485185
linux-node1:
    True
retcode:
    0
# With a batch size of 1, one minion runs and returns its result before the next one starts, so the run is effectively serial rather than concurrent.
[root@linux-node1 ~]# salt '*' -b 1 test.ping 

Executing run on ['linux-node1']

jid:
    20170924064029877583
linux-node1:
    True
retcode:
    0

Executing run on ['linux-node2']

jid:
    20170924064030026240
linux-node2:
    True
retcode:
    0
[root@linux-node1 ~]# salt -G 'os:CentOS' --batch-size 50% test.ping

Executing run on ['linux-node1']

jid:
    20170924064335701544
linux-node1:
    True
retcode:
    0

Executing run on ['linux-node2']

jid:
    20170924064335850545
linux-node2:
    True
retcode:
    0

1-9 SaltStack Remote Execution: Execution Modules

https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.network.html
Return the active TCP connections:

[root@linux-node1 ~]# salt '*' network.active_tcp

Check the Salt version:
[root@linux-node1 ~]# salt-call --version
salt-call 2017.7.1 (Nitrogen)
Test whether the minions can reach baidu.com on port 80:

[root@linux-node1 ~]# salt '*' network.connect baidu.com 80
linux-node1:
    ----------
    comment:
        Successfully connected to baidu.com (111.13.101.208) on tcp port 80
    result:
        True
linux-node2:
    ----------
    comment:
        Successfully connected to baidu.com (111.13.101.208) on tcp port 80
    result:
        True

Service module:
https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.service.html#module-salt.modules.service

[root@linux-node1 ~]# salt '*' service.available sshd
linux-node2:
    True
linux-node1:
    True
# List all services on the minions
[root@linux-node1 ~]# salt '*' service.get_all
[root@linux-node1 ~]# salt '*' service.get_all|grep sshd
    - sshd
    - sshd-keygen
    - sshd.socket
    - sshd@
    - sshd
    - sshd-keygen
    - sshd.socket
    - sshd@

State module:

https://docs.saltstack.com/en/latest/ref/modules/all/salt.modules.state.html#module-salt.modules.state

[root@linux-node1 ~]# salt '*' state.show_top
linux-node1:
    ----------
    base:
        - web.apache
linux-node2:
    ----------
    base:
        - web.apache
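
The state module can also apply a single state function ad hoc, without writing an SLS file; a sketch:
[root@linux-node1 ~]# salt '*' state.single pkg.installed name=httpd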

Copying files (salt-cp):

[root@linux-node1 ~]# salt-cp '*' /etc/passwd /tmp/papa
[root@linux-node1 ~]# salt '*' cmd.run 'ls /tmp/papa'
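
salt-cp pushes a local file out to the minions; to make minions pull a file from the master's file_roots instead, cp.get_file can be used; a sketch reusing the web/apache.sls from the base environment:
[root@linux-node1 ~]# salt '*' cp.get_file salt://web/apache.sls /tmp/apache.sls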

1-10 SaltStack Remote Execution: Returners

https://docs.saltstack.com/en/latest/ref/returners/index.html
Note: return data is sent by the Minion itself, so each minion needs access to the return destination
https://docs.saltstack.com/en/latest/ref/returners/all/salt.returners.mysql.html

Install the MySQL-python package on all Minions:

[root@linux-node1 ~]# salt '*' cmd.run 'yum install MySQL-python -y'
# Alternatively, install it with pkg.install:
[root@linux-node1 ~]# salt '*' pkg.install MySQL-python

Install the database (MariaDB, on the master):

[root@linux-node1 ~]# yum install mariadb-server -y
[root@linux-node1 ~]# systemctl start mariadb
[root@linux-node1 ~]# systemctl enable mariadb.service
# Log in to MySQL
[root@linux-node1 ~]# mysql
# Create the salt database
CREATE DATABASE  `salt`
  DEFAULT CHARACTER SET utf8
  DEFAULT COLLATE utf8_general_ci;
# Create the three tables
MariaDB [(none)]> USE `salt`;
CREATE TABLE `jids` (
  `jid` varchar(255) NOT NULL,
  `load` mediumtext NOT NULL,
  UNIQUE KEY `jid` (`jid`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
CREATE INDEX jid ON jids(jid) USING BTREE;

CREATE TABLE `salt_returns` (
  `fun` varchar(50) NOT NULL,
  `jid` varchar(255) NOT NULL,
  `return` mediumtext NOT NULL,
  `id` varchar(255) NOT NULL,
  `success` varchar(10) NOT NULL,
  `full_ret` mediumtext NOT NULL,
  `alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  KEY `id` (`id`),
  KEY `jid` (`jid`),
  KEY `fun` (`fun`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

CREATE TABLE `salt_events` (
`id` BIGINT NOT NULL AUTO_INCREMENT,
`tag` varchar(255) NOT NULL,
`data` mediumtext NOT NULL,
`alter_time` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
`master_id` varchar(255) NOT NULL,
PRIMARY KEY (`id`),
KEY `tag` (`tag`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Grant privileges:

MariaDB [salt]> grant all on salt.* to salt@'%' identified by 'salt';
Query OK, 0 rows affected (0.00 sec)

Configure the MySQL connection on the Minions:

# Add the following to every Minion's configuration file
[root@linux-node2 ~]# vim /etc/salt/minion
mysql.host: '192.168.56.11'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
[root@linux-node2 ~]# systemctl restart salt-minion
[root@linux-node1 ~]# vim /etc/salt/minion
mysql.host: '192.168.56.11'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
[root@linux-node1 ~]# systemctl restart salt-minion

Test it:

[root@linux-node1 ~]# salt '*' test.ping --return mysql
linux-node2:
    True
linux-node1:
    True
[root@linux-node1 ~]# salt '*' cmd.run 'df -h' --return mysql

Check on the database that the data was written:

+-----------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
| fun       | jid                  | return                                                                                                                                                                                                                                                                                                                                                                                                                                                            | id          | success | full_ret                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                  | alter_time          |
+-----------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
| test.ping | 20170924080508951387 | true                                                                                                                                                                                                                                                                                                                                                                                                                                                              | linux-node1 | 1       | {"fun_args": [], "jid": "20170924080508951387", "return": true, "retcode": 0, "success": true, "fun": "test.ping", "id": "linux-node1"}                                                                                                                                                                                                                                                                                                                                                                                                                                                                   | 2017-09-24 08:05:14 |
| test.ping | 20170924080508951387 | true                                                                                                                                                                                                                                                                                                                                                                                                                                                              | linux-node2 | 1       | {"fun_args": [], "jid": "20170924080508951387", "return": true, "retcode": 0, "success": true, "fun": "test.ping", "id": "linux-node2"}                                                                                                                                                                                                                                                                                                                                                                                                                                                                   | 2017-09-24 08:05:19 |
| cmd.run   | 20170924080851190949 | "Filesystem               Size  Used Avail Use% Mounted on\n/dev/mapper/centos-root   20G  1.3G   19G   7% /\ndevtmpfs                 479M     0  479M   0% /dev\ntmpfs                    489M   28K  489M   1% /dev/shm\ntmpfs                    489M  6.7M  483M   2% /run\ntmpfs                    489M     0  489M   0% /sys/fs/cgroup\n/dev/sda1                253M  111M  143M  44% /boot\ntmpfs                     98M     0   98M   0% /run/user/0" | linux-node1 | 1       | {"fun_args": ["df -h"], "jid": "20170924080851190949", "return": "Filesystem               Size  Used Avail Use% Mounted on\n/dev/mapper/centos-root   20G  1.3G   19G   7% /\ndevtmpfs                 479M     0  479M   0% /dev\ntmpfs                    489M   28K  489M   1% /dev/shm\ntmpfs                    489M  6.7M  483M   2% /run\ntmpfs                    489M     0  489M   0% /sys/fs/cgroup\n/dev/sda1                253M  111M  143M  44% /boot\ntmpfs                     98M     0   98M   0% /run/user/0", "retcode": 0, "success": true, "fun": "cmd.run", "id": "linux-node1"} | 2017-09-24 08:08:51 |
| cmd.run   | 20170924080851190949 | "Filesystem               Size  Used Avail Use% Mounted on\n/dev/mapper/centos-root   20G  1.1G   19G   6% /\ndevtmpfs                 479M     0  479M   0% /dev\ntmpfs                    489M   12K  489M   1% /dev/shm\ntmpfs                    489M  6.7M  483M   2% /run\ntmpfs                    489M     0  489M   0% /sys/fs/cgroup\n/dev/sda1                253M  111M  143M  44% /boot\ntmpfs                     98M     0   98M   0% /run/user/0" | linux-node2 | 1       | {"fun_args": ["df -h"], "jid": "20170924080851190949", "return": "Filesystem               Size  Used Avail Use% Mounted on\n/dev/mapper/centos-root   20G  1.1G   19G   6% /\ndevtmpfs                 479M     0  479M   0% /dev\ntmpfs                    489M   12K  489M   1% /dev/shm\ntmpfs                    489M  6.7M  483M   2% /run\ntmpfs                    489M     0  489M   0% /sys/fs/cgroup\n/dev/sda1                253M  111M  143M  44% /boot\ntmpfs                     98M     0   98M   0% /run/user/0", "retcode": 0, "success": true, "fun": "cmd.run", "id": "linux-node2"} | 2017-09-24 08:08:51 |
+-----------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+-------------+---------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+---------------------+
4 rows in set (0.00 sec)
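
The rows above live in the salt_returns table; a query along these lines will display them:
MariaDB [salt]> select * from salt_returns;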

Next, configure the Master to cache job results in MySQL itself:

# Add to the master config
[root@linux-node1 ~]# vim /etc/salt/master
master_job_cache: mysql
# Also add the MySQL connection settings
mysql.host: '192.168.56.11'
mysql.user: 'salt'
mysql.pass: 'salt'
mysql.db: 'salt'
mysql.port: 3306
[root@linux-node1 ~]# systemctl restart salt-master
# From now on every command that is executed is written to the database by the master. This no longer depends on --return; it is the master_job_cache mechanism, which stores the job cache in MySQL.
[root@linux-node1 ~]# salt '*' cmd.run 'mkdir -p /server/scripts'
linux-node2:
linux-node1:

Show a job's JID:

[root@linux-node1 ~]# salt '*' cmd.run 'uptime' -v
Executing job with jid 20170924082930524038
-------------------------------------------

linux-node2:
     08:29:30 up  2:49,  1 user,  load average: 0.00, 0.01, 0.05
linux-node1:
     08:29:30 up  2:49,  2 users,  load average: 0.01, 0.07, 0.11
[root@linux-node1 ~]# salt-run jobs.lookup_jid 20170924082930524038
linux-node1:
     08:29:30 up  2:49,  2 users,  load average: 0.01, 0.07, 0.11
linux-node2:
     08:29:30 up  2:49,  1 user,  load average: 0.00, 0.01, 0.05
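
Past jobs can also be listed from the job cache without knowing a JID in advance; a sketch:
[root@linux-node1 ~]# salt-run jobs.list_jobs
[root@linux-node1 ~]# salt-run jobs.active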