NoSQL: Building a Redis Cluster

 

I. Case Overview

1. Problems with a single-node Redis server

· Single point of failure: if the node goes down, the service is unavailable

· Unable to handle large volumes of concurrent requests

· Data loss would be a disaster

2. Solution

Build a Redis cluster.

 

II. Prerequisite Knowledge

1. Introduction to Redis Cluster

· Redis Cluster is Redis's built-in facility for sharing (sharding) data across multiple Redis nodes

· Redis Cluster does not support multi-key commands whose keys live on different nodes, because executing them would require moving data between nodes; this cannot match single-instance Redis performance and may lead to unpredictable errors under heavy load (a short example follows this list)

· Redis Cluster provides a degree of availability through partitioning: in practice, it can keep serving commands even when some nodes crash or become unreachable
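
As a short illustration (forward-referencing the cluster built in section III; the key names here are hypothetical, and the error only appears when the keys hash to different slots, which is the usual case for unrelated keys unless they share a {hash tag}):

[root@master1 ~]# redis-cli -c -h 20.0.0.10 -p 6379 mget key1 key2
# expected reply when key1 and key2 land in different slots:
# (error) CROSSSLOT Keys in request don't hash to the same slot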

2. Advantages of Redis Cluster

· Data is automatically split across the different nodes

· The cluster can keep processing commands even when a subset of its nodes fails or becomes unreachable

3. Ways to implement Redis sharding

· Client-side sharding

· Proxy-based sharding

· Server-side sharding

4. Redis Cluster data sharding

· Redis Cluster does not use consistent hashing; instead it introduces the concept of hash slots

· A Redis cluster has 16384 hash slots

· Each key is run through CRC16 and the result is taken modulo 16384 to decide which slot the key is placed in (see the quick check after this list)

· Each node in the cluster is responsible for a portion of the hash slots

· Taking a cluster of 3 nodes as an example:

① Node A holds hash slots 0 through 5500

② Node B holds hash slots 5501 through 11000

③ Node C holds hash slots 11001 through 16383

· Nodes can be added or removed

① Adding or removing nodes does not require stopping the service

② For example:

1) To add a new node D, part of the slots on nodes A, B, and C are moved to D

2) To remove node A, its slots are moved to B and C first, and then the now-empty node A is removed from the cluster

· The Redis Cluster master-slave replication model

① With only the three nodes A, B, and C, if node B fails the whole cluster becomes unavailable because the slot range 5501-11000 is no longer covered

② Adding one slave to each node (A1, B1, C1) gives the cluster three master nodes and three slave nodes; after node B fails, the cluster elects B1 as the new master and keeps serving

③ If both B and B1 fail, the cluster becomes unavailable
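
Once the cluster built in section III below is up, the slot a given key hashes to can be checked directly with the CLUSTER KEYSLOT command; a quick sketch (the key "centos" is the one used in the test in section III.7, where it is redirected to slot 467):

[root@master1 ~]# redis-cli -c -h 20.0.0.10 -p 6379 cluster keyslot centos
(integer) 467          # CRC16("centos") mod 16384 = 467, served by the 20.0.0.10 master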

 

III. Building the Redis Cluster

1. Case topology diagram

2. Environment

 

Master1 server: 20.0.0.10
Master2 server: 20.0.0.20
Master3 server: 20.0.0.30
Slave1 server: 20.0.0.40
Slave2 server: 20.0.0.50
Slave3 server: 20.0.0.60

3. Install Redis

Redis must be installed on every server; only master1 is shown here, and the procedure is identical on the other servers.

[root@master1 ~]# tar zxf redis-5.0.7.tar.gz
[root@master1 ~]# cd redis-5.0.7/
[root@master1 redis-5.0.7]# make -j2
[root@master1 redis-5.0.7]# make PREFIX=/usr/local/redis install
[root@master1 redis-5.0.7]# ln -s /usr/local/redis/bin/* /usr/local/bin/
[root@master1 redis-5.0.7]# cd utils/
[root@master1 utils]# ./install_server.sh
Welcome to the redis service installer
This script will help you easily set up a running redis server
Please select the redis port for this instance: [6379]
Selecting default: 6379
Please select the redis config file name [/etc/redis/6379.conf]
Selected default - /etc/redis/6379.conf
Please select the redis log file name [/var/log/redis_6379.log]
Selected default - /var/log/redis_6379.log
Please select the data directory for this instance [/var/lib/redis/6379]
Selected default - /var/lib/redis/6379
Please select the redis executable path [/usr/local/bin/redis-server]
Selected config:
Port           : 6379
Config file    : /etc/redis/6379.conf
Log file       : /var/log/redis_6379.log
Data dir       : /var/lib/redis/6379
Executable     : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
Is this ok? Then press ENTER to go on or Ctrl-C to abort.
Copied /tmp/6379.conf => /etc/init.d/redis_6379
Installing service...
Successfully added to chkconfig!
Successfully added to runlevels 345!
Starting Redis server...
Installation successful!
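
(Optional) A quick sanity check that the installed binaries are on the PATH and at the expected version; redis-cli should report 5.0.7 here:

[root@master1 utils]# redis-cli -v
redis-cli 5.0.7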
4. Modify the configuration file

The configuration must be changed on every server; only master1 is shown here (on the other servers, bind their own IP addresses instead).

[root@localhost utils]# vi /etc/redis/6379.conf
bind 20.0.0.10                          # remove the original 127.0.0.1 and bind this server's own IP
cluster-enabled yes                     # uncomment this line
appendonly yes                          # enable AOF persistence
cluster-config-file nodes-6379.conf     # uncomment this line
cluster-node-timeout 15000              # uncomment this line
cluster-require-full-coverage no        # uncomment and change yes to no, so the cluster stays available even if the master for a slot range goes down and no slave can take over
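
If editing each file by hand is tedious, the same changes can be scripted with sed; a minimal sketch, assuming the stock commented defaults are still present in /etc/redis/6379.conf and that MYIP (a placeholder variable) is set to each server's own address:

MYIP=20.0.0.10                                   # placeholder: use each server's own IP
sed -i "s/^bind 127.0.0.1/bind $MYIP/" /etc/redis/6379.conf
sed -i "s/^appendonly no/appendonly yes/" /etc/redis/6379.conf
sed -i "s/^# cluster-enabled yes/cluster-enabled yes/" /etc/redis/6379.conf
sed -i "s/^# cluster-config-file nodes-6379.conf/cluster-config-file nodes-6379.conf/" /etc/redis/6379.conf
sed -i "s/^# cluster-node-timeout 15000/cluster-node-timeout 15000/" /etc/redis/6379.conf
sed -i "s/^# cluster-require-full-coverage yes/cluster-require-full-coverage no/" /etc/redis/6379.conf
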
5. Start the Redis service

Redis must be restarted on every server; only master1 is shown here. Note in the netstat output below that, besides the client port 6379, the cluster bus port 16379 (client port + 10000) is also listening.

[root@master1 utils]# /etc/init.d/redis_6379 restart
Stopping ...
Waiting for Redis to shutdown ...
Redis stopped
Starting Redis server...
[root@master1 utils]# netstat -anpt | grep 6379
tcp        0      0 20.0.0.10:6379          0.0.0.0:*               LISTEN      18943/redis-server
tcp        0      0 20.0.0.10:16379         0.0.0.0:*               LISTEN      18943/redis-server

6. Create the cluster on master1

gem packages are software packages written in Ruby, and RubyGems is the tool used to package, download, install, and use gem packages. Older Redis releases shipped the cluster-creation tool as a Ruby script (redis-trib.rb) inside the source tree, so running it required a Ruby runtime, much like Java code needs a JVM; that is why Ruby is installed here. Note that since Redis 5.0 this functionality is built into redis-cli itself (the --cluster subcommand used below), so the Ruby and gem steps are only strictly required for the legacy redis-trib.rb script.

[root@master1 utils]# yum -y install ruby rubygems
[root@master1 utils]# cd
[root@master1 ~]# gem install redis-3.2.0.gem
Successfully installed redis-3.2.0
Parsing documentation for redis-3.2.0
Installing ri documentation for redis-3.2.0
1 gem installed
[root@master1 ~]# cd redis-5.0.7/src/
[root@master1 src]# redis-cli --cluster create --cluster-replicas 1 20.0.0.10:6379 20.0.0.20:6379 20.0.0.30:6379 20.0.0.40:6379 20.0.0.50:6379 20.0.0.60:6379
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 20.0.0.50:6379 to 20.0.0.10:6379
Adding replica 20.0.0.60:6379 to 20.0.0.20:6379
Adding replica 20.0.0.40:6379 to 20.0.0.30:6379
M: 7ae810725eb6ff5d3c8b222dff08bed993f7738f 20.0.0.10:6379
    slots:[0-5460] (5461 slots) master
M: 0229fcffb856fac03854aebcc053ff4115a8b248 20.0.0.20:6379
    slots:[5461-10922] (5462 slots) master
M: d29fc5dcf1765ff01adc89aae5ec27131d05d311 20.0.0.30:6379
    slots:[10923-16383] (5461 slots) master
S: bb00f5e1da389a397580abdeec8bfab15cf2b404 20.0.0.40:6379
    replicates d29fc5dcf1765ff01adc89aae5ec27131d05d311
S: f1843f0b57222c396f8c72acbbe5a31bffdfe790 20.0.0.50:6379
    replicates 7ae810725eb6ff5d3c8b222dff08bed993f7738f
S: 7316d95a643a9ffd439e37d248ff354c69cdea0b 20.0.0.60:6379
    replicates 0229fcffb856fac03854aebcc053ff4115a8b248
Can I set the above configuration? (type 'yes' to accept): yes     # type yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
....
>>> Performing Cluster Check (using node 20.0.0.10:6379)
M: 7ae810725eb6ff5d3c8b222dff08bed993f7738f 20.0.0.10:6379
    slots:[0-5460] (5461 slots) master
    1 additional replica(s)
M: 0229fcffb856fac03854aebcc053ff4115a8b248 20.0.0.20:6379
    slots:[5461-10922] (5462 slots) master
    1 additional replica(s)
M: d29fc5dcf1765ff01adc89aae5ec27131d05d311 20.0.0.30:6379
    slots:[10923-16383] (5461 slots) master
    1 additional replica(s)
S: 7316d95a643a9ffd439e37d248ff354c69cdea0b 20.0.0.60:6379
    slots: (0 slots) slave
    replicates 0229fcffb856fac03854aebcc053ff4115a8b248
S: f1843f0b57222c396f8c72acbbe5a31bffdfe790 20.0.0.50:6379
    slots: (0 slots) slave
    replicates 7ae810725eb6ff5d3c8b222dff08bed993f7738f
S: bb00f5e1da389a397580abdeec8bfab15cf2b404 20.0.0.40:6379
    slots: (0 slots) slave
    replicates d29fc5dcf1765ff01adc89aae5ec27131d05d311
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
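
The same consistency check can be repeated at any time, against any node, with redis-cli's built-in --cluster check subcommand, for example:

[root@master1 src]# redis-cli --cluster check 20.0.0.10:6379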

7. Test the cluster

[root@master1 src]# redis-cli -h 20.0.0.10 -p 6379 -c
20.0.0.10:6379> set centos 7.6
OK
20.0.0.10:6379> quit
[root@master1 src]# redis-cli -h 20.0.0.20 -p 6379 -c
20.0.0.20:6379> get centos
-> Redirected to slot [467] located at 20.0.0.10:6379
"7.6"
20.0.0.10:6379> quit
[root@master1 src]# redis-cli -h 20.0.0.50 -p 6379 -c
20.0.0.50:6379> get centos
-> Redirected to slot [467] located at 20.0.0.10:6379
"7.6"
20.0.0.10:6379> cluster info          # check cluster status
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:383
cluster_stats_messages_pong_sent:373
cluster_stats_messages_sent:756
cluster_stats_messages_ping_received:368
cluster_stats_messages_pong_received:383
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:756
20.0.0.10:6379> cluster nodes                   # view node information
0229fcffb856fac03854aebcc053ff4115a8b248 20.0.0.20:6379@16379 master - 0 1605018468897 2 connected 5461-10922
d29fc5dcf1765ff01adc89aae5ec27131d05d311 20.0.0.30:6379@16379 master - 0 1605018466000 3 connected 10923-16383
7316d95a643a9ffd439e37d248ff354c69cdea0b 20.0.0.60:6379@16379 slave 0229fcffb856fac03854aebcc053ff4115a8b248 0 1605018467000 6 connected
7ae810725eb6ff5d3c8b222dff08bed993f7738f 20.0.0.10:6379@16379 myself,master - 0 1605018465000 1 connected 0-5460
f1843f0b57222c396f8c72acbbe5a31bffdfe790 20.0.0.50:6379@16379 slave 7ae810725eb6ff5d3c8b222dff08bed993f7738f 0 1605018467876 5 connected
bb00f5e1da389a397580abdeec8bfab15cf2b404 20.0.0.40:6379@16379 slave d29fc5dcf1765ff01adc89aae5ec27131d05d311 0 1605018468000 4 connected
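
As an optional extra step, the failover behaviour described in section II.4 can be verified; a rough sketch (do this only in a test environment, and note that promotion happens after cluster-node-timeout expires):

[root@master2 ~]# /etc/init.d/redis_6379 stop      # stop the Redis instance on master2 (20.0.0.20)
[root@master1 src]# redis-cli -h 20.0.0.10 -p 6379 cluster nodes
# the slave of 20.0.0.20 (20.0.0.60 in the output above) should now be listed as a
# master serving slots 5461-10922; restarting 20.0.0.20 brings it back as a slave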

 
