Redis Study Notes

I. Redis Overview

1. Introduction to Redis
Redis (REmote DIctionary Server) is an open-source, networked, in-memory, optionally persistent, log-structured key-value database written in ANSI C. It is currently the most popular key-value store, with development sponsored by VMware. It can hold strings, hashes, lists, and sets, and is therefore often used as a data-structure server.
Appendix: a data structure is the way a computer stores and organizes data — a collection of data elements that stand in one or more specific relationships to each other. A carefully chosen data structure usually brings higher runtime or storage efficiency, and data structures go hand in hand with efficient retrieval algorithms and indexing techniques.
2. Redis features
1) Storage model
Every object stored in Redis has a unique key, and all reads and writes go through that key. For example, you can take a SELECT from the backend database, hash the SQL text to derive a key, and look that key up in Redis. If the data is absent it has not been cached yet: fetch it from the database, write it into the cache with an expiration time (say, one hour), and within that window serve the data from the cache — which effectively relieves pressure on the database. Redis currently supports five value types:

  • String
  • List
  • Set
  • Sorted set
  • Hash
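The cache-aside flow described above — hash the SQL text into a key, try the cache, and fall back to the database on a miss — can be sketched in Python. This is a toy in-process model: plain dicts stand in for the backend database and for Redis, and the names `database`, `cache`, `cache_key` and `query` are illustrative (a real version would use a Redis client and pass an expiration such as `ex=3600` when populating the cache).

```python
import hashlib

# Hypothetical stand-ins for the backend database and for Redis.
database = {"SELECT name FROM users WHERE id=1": "alice"}
cache = {}

def cache_key(sql: str) -> str:
    # Derive a fixed-length key from the SQL text, as described above.
    return "sql:" + hashlib.sha1(sql.encode("utf-8")).hexdigest()

def query(sql: str) -> str:
    key = cache_key(sql)
    if key in cache:          # cache hit: skip the database
        return cache[key]
    value = database[sql]     # cache miss: hit the database...
    cache[key] = value        # ...and populate the cache
    return value
```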
2) In-memory storage with persistence
All data in a Redis database lives in memory, where reads and writes are far faster than on disk, so Redis clearly outperforms disk-based stores; on an ordinary laptop it can read and write more than 100,000 keys per second.
Keeping data in memory has the same drawback as memcached: the data is lost when the process exits. Redis, however, supports persistence — it can asynchronously write the in-memory data to disk without interrupting service.
3) Rich functionality
Although Redis was developed as a database, its rich feature set means more and more people use it as a cache, a queue system, and so on.
4) Simple and stable
Redis's intuitive storage model makes interacting with it from a program very simple.
3. Differences between Redis and memcached
1) In performance, Redis uses a single-threaded model while Memcached is multi-threaded, so on multi-core servers the latter can perform better.
2) In functionality, Redis supports richer data structures than Memcached and adds persistence, making it a good replacement for Memcached.
3) As a cache, Redis can cap the maximum memory its data may occupy; once the limit is reached it evicts unneeded keys according to configurable rules (by access frequency or age).
4) Redis lists can implement queues, with support for blocking reads, so a high-performance priority queue is easy to build. At a higher level, Redis also supports the publish/subscribe messaging pattern, on which systems such as chat rooms can be built.
5) Redis supports master-slave replication to avoid single points of failure. Memcached lacks this, though redundancy can be added with Repcached (a memcached replication patch; one-to-one only, for the 1.2.x branch) and clustering with the open-source Magent.

Redis official website: www.redis.io

References:
http://zh.wikipedia.org/zh/Redis
http://baike.baidu.com/view/4595959.htm?fr=aladdin
http://redis.cn/topics/introduction.html
http://www.redis.io

II. Redis Installation

2.1 Install dependencies
[root@localhost ~]# yum -y install tcl
2.2 Unpack the source and enter the directory
[root@localhost ~]# tar xf redis-3.2.8.tar.gz -C /usr/src
[root@localhost ~]# cd /usr/src/redis-3.2.8/
2.3 Run make directly — no configure step
[root@localhost redis-3.2.8]# make    (on a 32-bit machine use: make 32bit)
Note: a common pitfall here is a wrong system clock.
Reason: the source ships already configured by upstream, and the files generated by configure carry timestamps. make must happen after configure, so if your virtual machine's clock is wrong the build fails.
Fix: date -s 'yyyy-mm-dd hh:mm:ss' to reset the time, then clock -w to write it to CMOS.
2.4 Install to a chosen directory
[root@localhost redis-3.2.8]# make PREFIX=/usr/local/redis install
Note: PREFIX must be uppercase.
2.5 Files under redis/bin
redis-benchmark    performance-testing tool
redis-check-aof    AOF log checker (detects and repairs a log corrupted by, e.g., power loss)
redis-check-rdb    RDB snapshot checker, similar to the above
redis-cli          client used to connect
redis-server       the Redis server process
2.6 Copy the configuration file
[root@localhost redis-3.2.8]# cp redis.conf /usr/local/redis/
2.7 Start and connect
[root@localhost redis-3.2.8]# /usr/local/redis/bin/redis-server /usr/local/redis/redis.conf

Open another terminal and connect with bin/redis-cli:
[root@localhost ~]# /usr/local/redis/bin/redis-cli -h localhost -p 6379
localhost:6379>
2.8 Run Redis as a background daemon
Edit the configuration file as follows:
[root@localhost redis-3.2.8]# vim /usr/local/redis/redis.conf
128 daemonize yes    // change no to yes

III. Redis key-value basics

3.1 Basic operations
3.1.1 Key commands
Redis is an advanced key-value store. A few notes on keys:
1. Don't make keys too long — stay under 1024 bytes if possible. Overly long keys waste memory and slow lookups down.
2. Don't make them too short either, or readability suffers.
3. Within one project, use a consistent key-naming scheme.
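A consistent naming scheme usually means colon-separated namespaces such as `user:1000:followers`. A tiny illustrative helper (the `make_key` name is ours, not a Redis API):

```python
def make_key(*parts) -> str:
    """Join name parts with ':' to form a namespaced key, e.g. user:1000:followers."""
    return ":".join(str(p) for p in parts)
```

Keeping key construction in one helper like this makes it easy to enforce the scheme across a project.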

set — set a key to a value
set key value
localhost:6379> set site www.crushlinux.com    // site is the key, the URL is the value
OK
localhost:6379> set age 20      // set a key
OK
localhost:6379> get site        // read a key's value
"www.crushlinux.com"
localhost:6379> get age
"20"

del — delete one or more keys
del key1 key2 ... keyN
localhost:6379> del site
(integer) 1
localhost:6379> del age
(integer) 1
Return value: nonexistent keys are ignored; the count of keys actually deleted is returned.

rename — rename key to newkey
rename key newkey
localhost:6379> set site www.crushlinux.com
OK
localhost:6379> rename site wangzhi
OK
Note: if newkey already exists, its old value is overwritten.
renamenx key newkey
localhost:6379> set wangzhi www.baidu.com
OK
localhost:6379> renamenx wangzhi site
(integer) 0
Return value: 1 if the rename happened, 0 if it did not.
Note: nx means "not exists" — the rename only happens if newkey does not yet exist.

keys * — list all keys in the current database
localhost:6379> keys *
1) "site"
2) "age"
3) "wangzhi"
Return value: all keys in the current database.

move key db — move a key to another database (the default database is 0)
move key db
localhost:6379> keys *
1) "site"        // only the current database has these keys
2) "age"
3) "wangzhi"
localhost:6379> select 1          // switch databases
OK
localhost:6379[1]> keys *
(empty list or set)               // after switching, this database holds no data
localhost:6379[1]> select 0
OK
localhost:6379> keys *
1) "site"
2) "age"
3) "wangzhi"
localhost:6379> move site 2
(integer) 1
localhost:6379> select 2
OK
localhost:6379[2]> keys *
1) "site"
localhost:6379[2]> get site
"www.crushlinux.com"
(Note: one Redis process opens more than one database — 16 by default, numbered 0 through 15. To open more, change the configuration file.)
Use info keyspace to see which databases hold keys.

flushdb — remove all keys from the current database
localhost:6379[2]> select 0
OK
localhost:6379> keys *
1) "age"
2) "wangzhi"
localhost:6379> flushdb
OK
localhost:6379> keys *
(empty list or set)

Redis allows pattern matching on keys.
There are three wildcards: *, ?, []
*: matches any number of characters
?: matches a single character
[]: matches any one of the characters inside the brackets
keys pattern — list the keys matching pattern
localhost:6379> mset one 1 two 2 three 3 four 4    // set several keys at once
OK
localhost:6379> keys o*
1) "one"
localhost:6379> keys *o
1) "two"
localhost:6379> keys ???
1) "two"
2) "one"
localhost:6379> keys on?
1) "one"
localhost:6379> set ons yes
OK
localhost:6379> keys on[eaw]
1) "one"

randomkey — return a random key
randomkey
localhost:6379> randomkey
"four"
localhost:6379> randomkey
"three"
localhost:6379> randomkey
"three"

exists — check whether a key exists; returns 1/0 (no pattern matching)
exists key
localhost:6379> exists one
(integer) 1
localhost:6379> exists wolf
(integer) 0

type — return the type of the value stored at key
Possible types: string, list, set, sorted set, hash
type key
localhost:6379> type one
string

ttl — query a key's remaining time to live
ttl key
localhost:6379> ttl one
(integer) -1
Return value: seconds remaining.
Note: -1 means the key has no expiration set.
Since Redis 2.8, nonexistent (or expired) keys return -2.

expire — set a key's time to live in seconds (by default keys never expire)
expire key seconds
localhost:6379> expire one 10
(integer) 1
localhost:6379> ttl one
(integer) 8
localhost:6379> ttl one
(integer) 6
localhost:6379> ttl one
(integer) 2
localhost:6379> ttl one
(integer) 1
localhost:6379> ttl one
(integer) 0
localhost:6379> get one
(nil)
Similarly:
pexpire key milliseconds — set the time to live in milliseconds
pttl key — return the time to live in milliseconds
pexpire key milliseconds
localhost:6379> pexpire two 8000
(integer) 1
localhost:6379> pttl two
(integer) 3340
localhost:6379> pttl two
(integer) 1239
localhost:6379> pttl two
(integer) 662
localhost:6379> pttl two
(integer) 165
localhost:6379> pttl two
(integer) -2
A return value of -2 means the key has already expired.

persist — make a key permanent
persist key
localhost:6379> expire three 60
(integer) 1
localhost:6379> ttl three
(integer) 55
localhost:6379> persist three
(integer) 1
localhost:6379> ttl three
(integer) -1
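The ttl/expire/persist semantics above can be modelled in a few lines of Python. This is a simplified in-process sketch (lazy expiry only; the class and method names are ours) using the Redis >= 2.8 convention that -1 means "no expiry" and -2 means "missing or expired":

```python
import time

class ExpiringDict:
    """Toy model of Redis key expiry semantics."""
    def __init__(self):
        self.data = {}        # key -> value
        self.expire_at = {}   # key -> absolute deadline (monotonic seconds)

    def set(self, key, value):
        self.data[key] = value
        self.expire_at.pop(key, None)    # SET clears any previous TTL

    def expire(self, key, seconds):
        if key not in self.data:
            return 0
        self.expire_at[key] = time.monotonic() + seconds
        return 1

    def _alive(self, key):
        if key not in self.data:
            return False
        deadline = self.expire_at.get(key)
        if deadline is not None and time.monotonic() >= deadline:
            del self.data[key]           # lazily drop the expired key
            del self.expire_at[key]
            return False
        return True

    def ttl(self, key):
        if not self._alive(key):
            return -2                    # missing or expired
        if key not in self.expire_at:
            return -1                    # exists, no expiry
        return int(self.expire_at[key] - time.monotonic())

    def get(self, key):
        return self.data[key] if self._alive(key) else None

    def persist(self, key):
        return 1 if self.expire_at.pop(key, None) is not None else 0
```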

IV. Redis data types

4.1 String operations
set key value [ex seconds] — set a key and its time to live in one command
set key value [ex seconds] / [px milliseconds] [nx] / [xx]
e.g. set a 1 ex 10      valid for 10 seconds
     set a 1 px 9000    valid for 9 seconds
Note: if ex and px are both given, the one written later wins —
e.g. set a 1 ex 100 px 9000 actually expires after 9000 milliseconds.

nx: only perform the operation if the key does not exist
xx: only perform the operation if the key exists

mset — set several keys at once
mset (multi set)
e.g. mset key1 v1 key2 v2 ...

get key — get the value of key
get key

mget — get the values of several keys
mget key1 key2 ... keyN

setrange key offset value — overwrite the string starting at byte offset with value
setrange key offset value
localhost:6379> set greet hello
OK
localhost:6379> setrange greet 2 x
(integer) 5
localhost:6379> get greet
"hexlo"
localhost:6379> setrange greet 3 w
(integer) 5
localhost:6379> get greet
"hexwo"
localhost:6379> setrange greet 6 t
(integer) 7
localhost:6379> get greet
"hexwo\x00t"
Note: if the offset is beyond the string's length, the gap is padded with 0x00 bytes.
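The zero-padding behaviour is easy to model: if the offset lies past the end of the string, the gap is filled with 0x00 bytes before the overwrite. A sketch (the helper name is ours):

```python
def setrange(s: bytes, offset: int, value: bytes) -> bytes:
    """Overwrite s at byte offset with value, zero-padding any gap,
    mirroring the \\x00 padding shown in the transcript above."""
    if offset > len(s):
        s = s + b"\x00" * (offset - len(s))   # pad the gap with 0x00
    return s[:offset] + value + s[offset + len(value):]
```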

append key value — append value to the key's existing value
append key value
localhost:6379> get three
"3"
localhost:6379> append three 6
(integer) 2
localhost:6379> get three
"36"

getrange key start stop — return the substring in the inclusive range [start, stop]
getrange key start stop
localhost:6379> set title 'chinese'
OK
localhost:6379> getrange title 0 3
"chin"
localhost:6379> getrange title 1 -2
"hines"
Note: string indexes count from 0 on the left and from -1 on the right.
Also:
1. If start >= length, an empty string is returned.
2. If stop >= length, the range is truncated at the end of the string.
3. If start lies to the right of stop, an empty string is returned.
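Those three index rules can be captured with ordinary slicing — the twists are that both ends are inclusive and that negatives count from the right. A sketch (the helper name is ours):

```python
def getrange(s: str, start: int, stop: int) -> str:
    """GETRANGE-style substring: inclusive [start, stop], negatives from the end."""
    n = len(s)
    if start < 0:
        start += n
    if stop < 0:
        stop += n
    if start >= n or start > stop:    # rules 1 and 3: empty string
        return ""
    return s[start:min(stop, n - 1) + 1]   # rule 2: truncate at the end
```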

getset key newvalue — return the old value and set the new one
getset key newvalue
localhost:6379> set cnt 0
OK
localhost:6379> getset cnt 1
"0"
localhost:6379> getset cnt 2
"1"
localhost:6379> get cnt
"2"

incr key — increment the value of key by 1 and return the result
incr key
localhost:6379> incr cnt
(integer) 3
localhost:6379> get cnt
"3"
Note:
1. A nonexistent key is treated as 0, then incremented.
2. The value range is that of a signed 64-bit integer.

incrby key number    // add an integer
localhost:6379> set age 2
OK
localhost:6379> get age
"2"
localhost:6379> incrby age 90
(integer) 92
incrbyfloat key floatnumber    // add a floating-point number
localhost:6379> incrbyfloat age 3.5
"95.5"
Note:
1. A nonexistent key is treated as 0, then incremented.

decr key — decrement the value of key by 1 and return the result
decr key
localhost:6379> set age 20
OK
localhost:6379> decr age
(integer) 19
decrby key number    // subtract an integer
localhost:6379> decrby age 3
(integer) 16

getbit key offset — return the bit at offset in the value's binary representation (bits numbered 0 to 7 from the left)
getbit key offset
localhost:6379> set char A
OK
localhost:6379> getbit char 1
(integer) 1
localhost:6379> getbit char 2
(integer) 0
localhost:6379> getbit char 7
(integer) 1
ASCII: 'A' = 65 = 0100 0001
       'a' = 97 = 0110 0001

setbit — set the bit at offset
setbit key offset value
localhost:6379> get char
"A"
localhost:6379> setbit char 2 1
(integer) 0
localhost:6379> get char
"a"
Return value: the old bit at that position.
Note:
1. If offset is beyond the current length, the gap is filled with 0 bits.
2. How large can offset get? At most 2^32-1, which implies a maximum string size of 512 MB.
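The bit addressing above — offset 0 is the most significant bit of the first byte — can be reproduced in Python to check the 'A' → 'a' example (the helper names are ours):

```python
def getbit(value: bytes, offset: int) -> int:
    """Bit at offset, counting from the most significant bit of byte 0."""
    byte, bit = divmod(offset, 8)
    if byte >= len(value):
        return 0
    return (value[byte] >> (7 - bit)) & 1

def setbit(value: bytes, offset: int, bit: int) -> bytes:
    """Set the bit at offset, zero-filling any gap as Redis does."""
    byte, pos = divmod(offset, 8)
    if byte >= len(value):                       # zero-fill up to the offset
        value = value + b"\x00" * (byte + 1 - len(value))
    b = bytearray(value)
    mask = 1 << (7 - pos)
    b[byte] = (b[byte] | mask) if bit else (b[byte] & ~mask)
    return bytes(b)
```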

bitop operation destkey key1 [key2 ...] — apply operation to key1 ... keyN and store the result in destkey
bitop operation destkey key1 [key2 ...]
operation can be AND, OR, NOT or XOR
localhost:6379> setbit lower 2 1
(integer) 0
localhost:6379> get lower
" "
localhost:6379> set char Q
OK
localhost:6379> bitop or char char lower
(integer) 1
localhost:6379> get char
"q"
Note: NOT takes only a single source key.
4.2 Lists (link)
4.2.1 Basic operations
lpush key value — insert a value at the head of the list
lpush key value

rpush key value — insert a value at the tail of the list
rpush key value

lrange key start stop — return the elements of the list in [start, stop]
lrange key start stop
Rule: indexes count from 0 at the left and from -1 at the right.

Experiments
localhost:6379> lpush character a
(integer) 1
localhost:6379> rpush character b
(integer) 2
localhost:6379> rpush character c
(integer) 3
localhost:6379> lpush character 0
(integer) 4
localhost:6379> lrange character 1 2
1) "a"
2) "b"
localhost:6379> lrange character 0 3
1) "0"
2) "a"
3) "b"
4) "c"
localhost:6379> lrange character 0 -1    // return every element in the list
1) "0"
2) "a"
3) "b"
4) "c"

rpop key — return and remove the tail element of the list
rpop key

lpop key — return and remove the head element of the list
lpop key

Experiments
localhost:6379> lpop character
"0"
localhost:6379> lrange character 0 -1
1) "a"
2) "b"
3) "c"
localhost:6379> rpop character
"c"
localhost:6379> lrange character 0 -1
1) "a"
2) "b"

lrem key count value — remove occurrences of value from the list key
lrem key count value
localhost:6379> flushdb
OK
localhost:6379> rpush answer a b c a b d a
(integer) 7
localhost:6379> lrange answer 0 -1
1) "a"
2) "b"
3) "c"
4) "a"
5) "b"
6) "d"
7) "a"
localhost:6379> lrem answer 1 b
(integer) 1
localhost:6379> lrange answer 0 -1
1) "a"
2) "c"
3) "a"
4) "b"
5) "d"
6) "a"
localhost:6379> lrem answer -2 a
(integer) 2
localhost:6379> lrange answer 0 -1
1) "a"
2) "c"
3) "b"
4) "d"
Note: removal stops after |count| occurrences of value have been deleted.
count > 0: delete from the head
count < 0: delete from the tail
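The count semantics can be mirrored on a Python list (the helper name is ours); the test below replays the transcript above:

```python
def lrem(lst: list, count: int, value) -> int:
    """Remove up to |count| occurrences of value: from the head when
    count > 0, from the tail when count < 0, all of them when count == 0."""
    indices = [i for i, v in enumerate(lst) if v == value]
    if count > 0:
        remove = indices[:count]
    elif count < 0:
        remove = indices[count:]
    else:
        remove = indices
    for i in reversed(remove):   # delete back-to-front so indices stay valid
        del lst[i]
    return len(remove)
```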

ltrim key start stop — trim the list to the range [start, stop] and reassign it to key
ltrim key start stop
localhost:6379> flushdb
OK
localhost:6379> rpush character a b c d e f
(integer) 6
localhost:6379> lrange character 0 -1
1) "a"
2) "b"
3) "c"
4) "d"
5) "e"
6) "f"
localhost:6379> ltrim character 2 5
OK
localhost:6379> lrange character 0 -1
1) "c"
2) "d"
3) "e"
4) "f"
localhost:6379> ltrim character 1 -2
OK
localhost:6379> lrange character 0 -1
1) "d"
2) "e"

lindex key index — return the value at index
lindex key index
e.g. lindex key 2

llen key — return the number of elements in the list
llen key
localhost:6379> lindex character 1
"e"
localhost:6379> lindex character 2
(nil)
localhost:6379> lindex character 0
"d"
localhost:6379> llen character
(integer) 2
localhost:6379> rpush character b c d
(integer) 5
localhost:6379> llen character
(integer) 5

linsert key after|before search value — find 'search' in the list key and insert value before or after it
linsert key after|before search value
localhost:6379> rpush num 1 3 6 8 9
(integer) 5
localhost:6379> lrange num 0 -1
1) "1"
2) "3"
3) "6"
4) "8"
5) "9"
localhost:6379> linsert num before 3 2
(integer) 6
localhost:6379> lrange num 0 -1
1) "1"
2) "2"
3) "3"
4) "6"
5) "8"
6) "9"
localhost:6379> linsert num after 9 10
(integer) 7
localhost:6379> lrange num 0 -1
1) "1"
2) "2"
3) "3"
4) "6"
5) "8"
6) "9"
7) "10"
localhost:6379> linsert num after 99 10
(integer) -1
localhost:6379> linsert num after 9 10
(integer) 8
localhost:6379> lrange num 0 -1
1) "1"
2) "2"
3) "3"
4) "6"
5) "8"
6) "9"
7) "10"
8) "10"
localhost:6379> linsert num after 10 11
(integer) 9
localhost:6379> lrange num 0 -1
1) "1"
2) "2"
3) "3"
4) "6"
5) "8"
6) "9"
7) "10"
8) "11"
9) "10"
Note: once one occurrence of search is found the command finishes, so value is never inserted more than once. If search is not found, -1 is returned.

rpoplpush source dest — pop the tail of source and push it onto the head of dest
rpoplpush source dest
localhost:6379> flushdb
OK
localhost:6379> rpush task a b c d
(integer) 4
localhost:6379> rpoplpush task job
"d"
localhost:6379> lrange task 0 -1
1) "a"
2) "b"
3) "c"
localhost:6379> lrange job 0 -1
1) "d"
Return value: the element that was moved.

4.2.2 Use cases
A safe queue built from two lists: task + bak
task list    bak list

Business logic:
1. rpoplpush task bak
2. Receive the return value and process the job
3. On success, rpop bak to clear the job; on failure, the next run picks the job up from bak again
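A toy in-process model of this task+bak pattern (the class and method names are ours; in Redis the two lists would live on the server and step 1 would be the atomic RPOPLPUSH command):

```python
from collections import deque

class SafeQueue:
    """Toy model of the task+bak safe queue: rpoplpush() moves a job
    from the tail of task to bak; ack() removes it from bak once
    processing succeeds, so a crashed worker leaves the job in bak."""
    def __init__(self, jobs):
        self.task = deque(jobs)   # left = head, right = tail
        self.bak = deque()

    def rpoplpush(self):
        if not self.task:
            return None
        job = self.task.pop()         # RPOP from task...
        self.bak.appendleft(job)      # ...LPUSH onto bak
        return job

    def ack(self):
        # On success, clear the finished job from bak (step 3 above).
        return self.bak.pop() if self.bak else None
```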

brpop, blpop key timeout — block until the tail/head element of key can be popped
brpop, blpop key timeout
localhost:6379> lrange job 0 -1
1) "d"
localhost:6379> rpop job
"d"
localhost:6379> rpop job
(nil)

Open another terminal and enter redis:
localhost:6379> lpush job e
(integer) 1

Switch back to the original terminal:
localhost:6379> brpop job 20    // 20 is the timeout
1) "job"
2) "e"
(5.24s)
timeout is how many seconds to wait before giving up;
a timeout of 0 waits forever.

Use cases: long-polling Ajax, online chat, and the like.

Counting active users with bitmaps
A practical application of setbit.
100 million users, some logging in frequently, some rarely.
How do we record their login activity?
How do we query for active users (say, users who logged in 3 times within a week)?

Scenario:
100 million users; a user who logs in or performs any action counts as active that day, otherwise inactive.
Every week, pick the prize-winning active users: active 7 days in a row.
Every month, and so on...
A bit can represent this: 1 = logged in, 0 = did not log in.

Approach:
userid    dt           active
1         2013-07-27   1
1         2013-07-26   1
Stored in a table like this, 1) the table grows rapidly, and 2) queries need GROUP and SUM operations, which are slow.

Instead, use a bitmap:
log0721: '011001...............0'
......
log0726: '011001...............0'
log0727: '0110000.............1'

1. Recording a login:
Generate one bitmap per day; when a user logs in, set the bit at position user_id to 1.

2. AND the week's bitmaps together:
the positions still set to 1 are exactly the users who logged in every day.
localhost:6379> setbit mon 100000000 0    // pre-size the bitmap
(integer) 0
(0.51s)
localhost:6379> setbit mon 3 1
(integer) 0
localhost:6379> setbit mon 5 1
(integer) 0
localhost:6379> setbit mon 7 1
(integer) 0
localhost:6379> setbit thur 100000000 0
(integer) 0
(0.89s)
localhost:6379> setbit thur 3 1
(integer) 0
localhost:6379> setbit thur 5 1
(integer) 0
localhost:6379> setbit thur 8 1
(integer) 0
localhost:6379> setbit wen 100000000 0
(integer) 0
(0.51s)
localhost:6379> setbit wen 3 1
(integer) 0
localhost:6379> setbit wen 4 1
(integer) 0
localhost:6379> setbit wen 6 1
(integer) 0
localhost:6379> bitop and res mon thur wen
(integer) 12500001
As the example shows, the advantages are:
1. Space-efficient: one day of login state for 100 million users takes 100 million bits, about 12,000,000 bytes — roughly a 12 MB string.
2. Easy to compute.
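The whole scheme can be prototyped with Python's arbitrary-precision integers acting as bitmaps (the names are ours): the AND of the daily bitmaps picks out the users active every day, and the space estimate from the text checks out.

```python
# Each day's logins as one big bitmap, user_id -> bit; Python ints
# stand in for Redis SETBIT/BITOP here.
def setbit(bitmap: int, user_id: int) -> int:
    return bitmap | (1 << user_id)

def getbit(bitmap: int, user_id: int) -> int:
    return (bitmap >> user_id) & 1

mon = thur = wen = 0
for uid in (3, 5, 7):
    mon = setbit(mon, uid)
for uid in (3, 5, 8):
    thur = setbit(thur, uid)
for uid in (3, 4, 6):
    wen = setbit(wen, uid)

# BITOP AND: a user whose bit is 1 in every daily bitmap logged in every day.
res = mon & thur & wen

# Space check from the text: 10^8 users need 10^8 bits = 12,500,000 bytes (~12 MB).
BYTES_FOR_100M_USERS = 10**8 // 8
```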
4.3 Set commands
Set properties: uniqueness, unorderedness, determinacy.
Note: for strings and lists, range commands can access a few characters or elements,
but because sets are unordered, elements cannot be addressed by index or range —
to inspect elements you either pick one at random or fetch them all.

sadd key value1 value2 — add members to the set key
sadd key value1 value2
localhost:6379> sadd gender male female
(integer) 2
localhost:6379> sadd gender yao yao
(integer) 1
localhost:6379> sadd gender yao
(integer) 0

smembers key — return all members of the set
smembers key
localhost:6379> smembers gender
1) "yao"
2) "female"
3) "male"

srem key value1 value2 — remove the members value1 and value2 from the set
srem key value1 value2
localhost:6379> srem gender yao
(integer) 1
localhost:6379> smembers gender
1) "female"
2) "male"
localhost:6379> srem gender x c
(integer) 0
localhost:6379> srem gender male x c
(integer) 1
Return value: the number of members actually removed, ignoring those that did not exist.

spop key — remove and return one random member of the set
spop key
localhost:6379> sadd gender a b c d e f
(integer) 6
localhost:6379> smembers gender
1) "b"
2) "a"
3) "e"
4) "d"
5) "c"
6) "f"
7) "female"
localhost:6379> spop gender
"c"
localhost:6379> smembers gender
1) "b"
2) "a"
3) "e"
4) "d"
5) "f"
6) "female"
localhost:6379> spop gender
"d"
localhost:6379> smembers gender
1) "b"
2) "a"
3) "e"
4) "f"
5) "female"
localhost:6379> spop gender
"f"
localhost:6379> smembers gender
1) "b"
2) "a"
3) "e"
4) "female"
Return value: the randomly chosen member that was removed.

srandmember key — a random member, showing the set's unorderedness
srandmember key
localhost:6379> srandmember gender
"f"
localhost:6379> srandmember gender
"a"
localhost:6379> srandmember gender
"c"
localhost:6379> srandmember gender
"e"
localhost:6379> srandmember gender
"d"
localhost:6379> srandmember gender
"f"
localhost:6379> srandmember gender
"a"
localhost:6379> srandmember gender
"female"
Return value: one random member of the set (without removing it).

sismember key value — test whether value is a member of the set
sismember key value
localhost:6379> sismember gender Q
(integer) 0
localhost:6379> sismember gender f
(integer) 1
Returns 1 if it is, 0 if it is not.

scard key — return the number of members in the set
scard key
localhost:6379> scard gender
(integer) 7

smove source dest value — remove value from source and add it to the set dest
smove source dest value
localhost:6379> sadd upper A B C
(integer) 3
localhost:6379> sadd lower a b c
(integer) 3
localhost:6379> smembers upper
1) "C"
2) "B"
3) "A"
localhost:6379> smembers lower
1) "c"
2) "b"
3) "a"
localhost:6379> smove upper lower A
(integer) 1
localhost:6379> smembers upper
1) "C"
2) "B"
localhost:6379> smembers lower
1) "A"
2) "c"
3) "b"
4) "a"
localhost:6379> smove lower upper A
(integer) 1
localhost:6379> smembers upper
1) "C"
2) "B"
3) "A"

sinter key1 key2 key3 — return the intersection of the sets key1, key2 and key3
sinter key1 key2 key3
redis 127.0.0.1:6379> sadd s1 0 2 4 6
(integer) 4
redis 127.0.0.1:6379> sadd s2 1 2 3 4
(integer) 4
redis 127.0.0.1:6379> sadd s3 4 8 9 12
(integer) 4
redis 127.0.0.1:6379> sinter s1 s2 s3
1) "4"
redis 127.0.0.1:6379> sinter s3 s1 s2
1) "4"

sinterstore dest key1 key2 key3 — compute the intersection of key1, key2 and key3 and store it in dest
sinterstore dest key1 key2 key3
localhost:6379> sinter lisi wang poly
1) "d"
2) "c"
3) "a"
localhost:6379> sinterstore result lisi wang poly
(integer) 3
localhost:6379> smembers resulut
(empty list or set)
localhost:6379> smembers result
1) "d"
2) "c"
3) "a"

sunion key1 key2 ... keyN — return the union of key1 through keyN
sunion key1 key2 ... keyN
localhost:6379> sunion lisi wang poly
1) "b"
2) "a"
3) "e"
4) "d"
5) "c"
6) "g"
7) "f"

sdiff key1 key2 key3 — return the difference of key1 against key2 and key3
sdiff key1 key2 key3
localhost:6379> sdiff lisi wang
1) "b"
i.e. key1 - key2 - key3
4.4 Sorted sets (order set)
zadd — add members
zadd key score1 value1 score2 value2 ...
localhost:6379> zadd class 12 lily 13 lucy 18 lilei 6 poly
(integer) 4

zrank key member — return member's rank (ascending, starting at rank 0)
zrank key member
localhost:6379> zrank class poly
(integer) 0
localhost:6379> zrank class lily
(integer) 1
localhost:6379> zrank class lucy
(integer) 2
localhost:6379> zrank class lilei
(integer) 3

zrevrank key member — return member's rank (descending, starting at rank 0)
zrevrank key member
localhost:6379> zrevrank class poly
(integer) 3
localhost:6379> zrevrank class lily
(integer) 2
localhost:6379> zrevrank class lucy
(integer) 1
localhost:6379> zrevrank class lilei
(integer) 0

zrange key start stop [withscores] — sort the set and return the elements ranked [start, stop]
zrange key start stop [WITHSCORES]
localhost:6379> zrange class 0 -1 withscores
1) "poly"
2) "6"
3) "lily"
4) "12"
5) "lucy"
6) "13"
7) "lilei"
8) "18"
The order is ascending by default;
withscores prints the scores as well.

zrem key value1 value2 — remove members from the set
zrem key value1 value2 ...
localhost:6379> zrem class lilei lucy lily poly
(integer) 4
localhost:6379> zrange class 0 -1 withscores
(empty list or set)

zremrangebyscore key min max — remove by score: delete members whose score lies in [min, max]
zremrangebyscore key min max
localhost:6379> zadd class 12 lily 13 lucy 18 lilei 6 poly
(integer) 4
localhost:6379> zremrangebyscore class 10 15
(integer) 2
localhost:6379> zrange class 0 -1 withscores
1) "poly"
2) "6"
3) "lilei"
4) "18"

zremrangebyrank key start end — remove by rank: delete members ranked in [start, end]
zremrangebyrank key start end
localhost:6379> zadd class 12 lily 13 lucy
(integer) 2
localhost:6379> zrange class 0 -1 withscores
1) "poly"
2) "6"
3) "lily"
4) "12"
5) "lucy"
6) "13"
7) "lilei"
8) "18"
localhost:6379> zremrangebyrank class 0 1
(integer) 2
localhost:6379> zrange class 0 -1 withscores
1) "lucy"
2) "13"
3) "lilei"
4) "18"

zrevrange key start stop — sort the set descending and return the elements ranked [start, stop]
zrevrange key start stop
localhost:6379> zrevrange class 0 -1 withscores
1) "lilei"
2) "18"
3) "lucy"
4) "13"
5) "lily"
6) "12"
7) "poly"
8) "6"

zrangebyscore key min max [withscores] limit offset N — sort ascending and return members with scores in [min, max]
zrangebyscore key min max [withscores] limit offset N
localhost:6379> zrangebyscore class 1 20 withscores limit 1 2
1) "lily"
2) "12"
3) "lucy"
4) "13"
limit offset N skips the first offset matches, then returns N of them.

zcard key — return the number of members
zcard key
localhost:6379> zadd ty 25 zhang 27 guan 28 liubei
(integer) 3
localhost:6379> zcard ty
(integer) 3
localhost:6379> zadd ty 23 zhao
(integer) 1
localhost:6379> zcard ty
(integer) 4
localhost:6379> zrange ty 0 -1 withscores
1) "zhao"
2) "23"
3) "zhang"
4) "25"
5) "guan"
6) "27"
7) "liubei"
8) "28"

zcount key min max — count the members with scores in [min, max]
zcount key min max
localhost:6379> zcount ty 25 28
(integer) 3

zinterstore destination numkeys key1 [key2 ...] — intersect key1, key2, ... and store the result
zinterstore destination numkeys key1 [key2 ...]
[WEIGHTS weight [weight ...]]
[AGGREGATE SUM|MIN|MAX]
key1, key2, ... carry the weights weight1, weight2, ...
The aggregation method is sum, min or max,
and the aggregated result is stored in the destination set.

Note: how should weights and aggregate be understood?
Answer: if the sets intersect, each common member has a score in every set — what happens to those scores?
aggregate sum adds them; min keeps the smallest score; max keeps the largest.

Also: weights assigns each key a weight; during the intersection, each score is multiplied by its key's weight.
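A dict-based sketch of this weights/aggregate behaviour (the function name is ours; each sorted set is modelled as a member -> score dict):

```python
def zinterstore(keys, weights=None, aggregate=sum):
    """Toy ZINTERSTORE: intersect the members of the given dicts,
    multiply each key's scores by its weight, then combine the
    weighted scores with the aggregate function (sum, min or max)."""
    if weights is None:
        weights = [1] * len(keys)
    members = set(keys[0])
    for d in keys[1:]:
        members &= set(d)        # intersection of the member sets
    return {m: aggregate(d[m] * w for d, w in zip(keys, weights))
            for m in members}
```

The test replays the lisi/wang data used in the experiments that follow.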

Experiments
localhost:6379> zadd lisi 3 cat 5 dog 6 horse
(integer) 3
localhost:6379> zadd wang 2 cat 6 dog 8 horse 1 donkey
(integer) 4
localhost:6379> zinterstore result 2 lisi wang
(integer) 3
localhost:6379> zrange result 0 -1
1) "cat"
2) "dog"
3) "horse"
localhost:6379> zrange result 0 -1 withscores
1) "cat"
2) "5"
3) "dog"
4) "11"
5) "horse"
6) "14"
localhost:6379> zinterstore result 2 lisi wang aggregate min
(integer) 3
localhost:6379> zrange result 0 -1 withscores
1) "cat"
2) "2"
3) "dog"
4) "5"
5) "horse"
6) "6"
localhost:6379> zinterstore result 2 lisi wang aggregate max
(integer) 3
localhost:6379> zrange result 0 -1 withscores
1) "cat"
2) "3"
3) "dog"
4) "6"
5) "horse"
6) "8"
localhost:6379> zinterstore result 2 lisi wang weights 2 1 aggregate max
(integer) 3
localhost:6379> zrange result 0 -1 withscores
1) "cat"
2) "6"
3) "dog"
4) "10"
5) "horse"
6) "12"
localhost:6379> zinterstore result 2 lisi wang weights 2 1 aggregate sum
(integer) 3
localhost:6379> zrange result 0 -1 withscores
1) "cat"
2) "8"
3) "dog"
4) "16"
5) "horse"
6) "20"

4.5 Hash commands
hset — set field in the hash key to value
hset key field value
Note: if the field does not exist it is added; if it does, its value is overwritten.

hmset — set fields field1..fieldN to value1..valueN
hmset key field1 value1 [field2 value2 field3 value3 ... fieldN valueN]
(in PHP terms: $key = array(field1=>value1, field2=>value2 ... fieldN=>valueN))

hget key field — return the value of field in key
hget key field

hmget key field1 field2 fieldN — return the values of fields field1 field2 fieldN in key
hmget key field1 field2 fieldN

hgetall key — return every field of key together with its value
hgetall key

hdel key field — delete field from key
hdel key field

hlen key — return the number of fields in key
hlen key

hexists key field — test whether key has the field
hexists key field

hincrby key field value — add the integer value to field in key
hincrby key field value

hincrbyfloat key field value — add the floating-point value to field in key
hincrbyfloat key field value

hkeys key — return all fields of key
hkeys key

hvals key — return all values of key
hvals key
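A Redis hash is essentially a small field -> value map stored under one key, so the commands above map naturally onto dict operations. A sketch of hincrby (Redis stores hash values as strings, hence the str/int round-trip; the helper name is ours):

```python
def hincrby(h: dict, field: str, n: int) -> int:
    """HINCRBY-style increment: treat h[field] as an integer and add n
    (a missing field counts as 0); store the result back as a string."""
    h[field] = str(int(h.get(field, "0")) + n)
    return int(h[field])
```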

Experiments
localhost:6379> flushdb
OK
localhost:6379> hset user1 name lisi
(integer) 1
localhost:6379> hset user1 age 28
(integer) 1
localhost:6379> hset user1 height 175
(integer) 1
localhost:6379> hget user1 height
"175"
localhost:6379> hgetall user1
1) "name"
2) "lisi"
3) "age"
4) "28"
5) "height"
6) "175"
localhost:6379> hmset user2 name wang age 10 height 100
OK
localhost:6379> hgetall user2
1) "name"
2) "wang"
3) "age"
4) "10"
5) "height"
6) "100"
localhost:6379> hget user1 name
"lisi"
localhost:6379> hget user2 age
"10"
localhost:6379> hmget user1 name height
1) "lisi"
2) "175"
localhost:6379> hgetall user1
1) "name"
2) "lisi"
3) "age"
4) "28"
5) "height"
6) "175"
localhost:6379> hdel user1 age
(integer) 1
localhost:6379> hgetall user1
1) "name"
2) "lisi"
3) "height"
4) "175"
localhost:6379> hlen user1
(integer) 2
localhost:6379> hlen user2
(integer) 3
localhost:6379> hexists user1 age
(integer) 0
localhost:6379> hexists user2 age
(integer) 1
localhost:6379> hincrby user2 age 1
(integer) 11
localhost:6379> hgetall user2
1) "name"
2) "wang"
3) "age"
4) "11"
5) "height"
6) "100"
localhost:6379> hget user2 age
"11"
localhost:6379> hincrbyfloat user2 age 0.5
"11.5"
localhost:6379> hkeys user2
1) "name"
2) "age"
3) "height"
localhost:6379> hvals user2
1) "wang"
2) "11.5"
3) "100"

V. Transactions in Redis

5.1 Redis supports simple transactions
Redis vs. MySQL transactions:
              MySQL                 Redis
begin         start transaction     multi
statements    ordinary SQL          ordinary commands
failure       rollback              discard
success       commit                exec
Note the difference between rollback and discard:
if two statements have already run successfully and the third fails,
rollback undoes the effects of the first two,
while discard merely ends the transaction — the effects of the first two statements remain.

Note:
commands issued after multi can fail in two ways:
1. The syntax itself is wrong —
then exec reports an error and none of the commands run.

2. The syntax is fine but the command is applied to the wrong kind of value, e.g. zadd on a list —
then exec runs the correct commands and skips the inappropriate ones.
(How do you avoid things like zadd on a list? That is the programmer's responsibility.)

Consider:
I am buying a ticket:
ticket -1, money -100.
Only one ticket is left, and between my multi and my exec someone else buys it — ticket becomes 0.
How do I observe this situation and refrain from committing?

The pessimistic view:
the world is full of danger, someone is surely competing with me — lock ticket so that only I can touch it. [pessimistic locking]

The optimistic view:
hardly anyone is competing with me, so I only need to check
whether anyone has changed the value of ticket. [optimistic locking]

Redis transactions use optimistic locking: they merely watch that a key has not been modified.
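The optimistic-lock idea can be modelled with a per-key version counter: WATCH records the versions, and EXEC commits only if no watched key has changed since. This is a toy single-process model (the class and method names are ours), not how Redis is implemented internally:

```python
class OptimisticStore:
    """Toy model of WATCH/MULTI/EXEC via per-key version counters."""
    def __init__(self):
        self.data = {}
        self.version = {}

    def set(self, key, value):
        self.data[key] = value
        self.version[key] = self.version.get(key, 0) + 1

    def watch(self, *keys):
        # WATCH: snapshot the current version of each watched key.
        return {k: self.version.get(k, 0) for k in keys}

    def exec(self, snapshot, writes):
        # EXEC: abort (return None, like Redis's nil) if any watched
        # key was modified after the snapshot was taken.
        if any(self.version.get(k, 0) != v for k, v in snapshot.items()):
            return None
        for key, value in writes.items():
            self.set(key, value)
        return True
```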

5.2 Experiments
localhost:6379> flushdb
OK
localhost:6379> set wang 200
OK
localhost:6379> set zhao 700
OK
localhost:6379> multi
OK
localhost:6379> decrby zhao 100
QUEUED
localhost:6379> incrby wang 100
QUEUED
localhost:6379> mget wang zhao
QUEUED
localhost:6379> exec
1) (integer) 600
2) (integer) 300
3) 1) "300"
   2) "600"
localhost:6379> multi
OK
localhost:6379> decrby zhao 100
QUEUED
localhost:6379> dfsf
(error) ERR unknown command 'dfsf'
localhost:6379> exec
(error) EXECABORT Transaction discarded because of previous errors.
localhost:6379> mget zhao wang
1) "600"
2) "300"
localhost:6379> multi
OK
localhost:6379> decrby zhao 100
QUEUED
localhost:6379> sadd wang king
QUEUED
localhost:6379> exec
1) (integer) 500
2) (error) WRONGTYPE Operation against a key holding the wrong kind of value
localhost:6379> mget zhao wang
1) "500"
2) "300"
localhost:6379> multi
OK
localhost:6379> decrby zhao 100
QUEUED
localhost:6379> incrby wang 100
QUEUED
localhost:6379> discard
OK
localhost:6379> mget zhao wang
1) "500"
2) "300"
localhost:6379> multi
OK
localhost:6379> decrby zhao 100
QUEUED
localhost:6379> sadd wang pig
QUEUED
localhost:6379> exec
1) (integer) 400
2) (error) WRONGTYPE Operation against a key holding the wrong kind of value
localhost:6379> discard
(error) ERR DISCARD without MULTI

localhost:6379> flushdb
OK
localhost:6379> set ticket 1
OK
localhost:6379> set lisi 300
OK
localhost:6379> set wang 300
OK
localhost:6379> multi
OK
localhost:6379> decr ticket
QUEUED
localhost:6379> decrby lisi 100
QUEUED
In another terminal:
localhost:6379> decr ticket
(integer) 0
localhost:6379> get ticket
"0"
Back in the original terminal, run exec:
localhost:6379> exec
1) (integer) -1
2) (integer) 200
The specific command for this is watch:
watch key1 key2 ... keyN
Purpose: watch key1, key2 ... keyN for changes; if any of them changes, the transaction is cancelled.

unwatch
Purpose: cancel all watches.
Original terminal:
localhost:6379> set ticket 1
OK
localhost:6379> watch ticket
OK
localhost:6379> multi
OK
localhost:6379> decr ticket
QUEUED
localhost:6379> decrby lisi 100
QUEUED
In another terminal:
localhost:6379> decr ticket
(integer) 0
Back in the original terminal:
localhost:6379> exec
(nil)
localhost:6379> mget ticket lisi
1) "0"
2) "200"

VI. Message subscription

Usage:
Subscriber: subscribe channel-name
Publisher: publish channel-name message
Publisher example:
Publishing terminal:
localhost:6379> publish news 'good good study'
(integer) 2
Subscriber examples:
First terminal:
localhost:6379> subscribe news
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "news"
3) (integer) 1
1) "message"
2) "news"
3) "good good study"
Second terminal:
localhost:6379> subscribe news
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "news"
3) (integer) 1
1) "message"
2) "news"
3) "good good study"
Third terminal:
localhost:6379>

VII. Redis persistence

Redis offers two persistence mechanisms: 1) RDB snapshots and 2) the AOF log.
7.1 RDB snapshot options
vim /usr/local/redis/redis.conf
save 900 1      // snapshot if there is at least 1 write within 900 seconds
save 300 1000   // snapshot if there are at least 1000 writes within 300 seconds
save 60 10000   // snapshot if there are at least 10000 writes within 60 seconds
(comment out all three lines to disable rdb)
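The three save lines combine with OR: a snapshot is triggered as soon as any rule's time window and write count are both satisfied. A sketch of the decision (the function name is ours; Redis actually evaluates this periodically against the elapsed time and change count since the last save):

```python
def should_snapshot(seconds_elapsed: int, changes: int,
                    rules=((900, 1), (300, 1000), (60, 10000))) -> bool:
    """A snapshot triggers when, for any 'save <sec> <n>' rule, at least
    sec seconds have passed and at least n writes have happened since
    the last save (rules default to the configuration shown above)."""
    return any(seconds_elapsed >= sec and changes >= n for sec, n in rules)
```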

stop-writes-on-bgsave-error yes   // stop accepting writes if the background save fails?
rdbcompression yes                // compress the exported rdb file?
rdbchecksum yes                   // verify the rdb checksum when loading it to restore data?
dbfilename dump.rdb               // name of the exported rdb file
dir ./                            // directory the rdb file is written to
[root@localhost redis]# mkdir -p /var/rdb
[root@localhost redis]# vim redis.conf
204 save 60 10000    change to: save 60 3000
247 dir ./           change to: dir /var/rdb
[root@localhost redis]# ls
bin dump.rdb redis.conf
[root@localhost redis]# rm -rf dump.rdb
[root@localhost redis]# ls
bin redis.conf
[root@localhost redis]# pkill -9 redis
[root@localhost redis]# ./bin/redis-server redis.conf
[root@localhost redis]# ls /var/rdb

Open another terminal:
[root@localhost ~]# /usr/local/redis/bin/redis-cli
127.0.0.1:6379> set site www.crushlinux.it

Back in the original terminal:
[root@localhost redis]# /usr/local/redis/bin/redis-benchmark -n 10000
====== PING_INLINE ======
10000 requests completed in 0.11 seconds
50 parallel clients
3 bytes payload
keep alive: 1

98.75% <= 1 milliseconds
99.95% <= 2 milliseconds
100.00% <= 2 milliseconds
94339.62 requests per second

====== PING_BULK ======
10000 requests completed in 0.10 seconds
50 parallel clients
3 bytes payload
keep alive: 1

99.77% <= 1 milliseconds
100.00% <= 1 milliseconds
98039.22 requests per second
…………

Back in the other terminal:
127.0.0.1:6379> set address bj
OK

Back in the original terminal:
[root@localhost redis]# ls /var/rdb
dump.rdb
[root@localhost redis]# pkill -9 redis
[root@localhost redis]# /usr/local/redis/bin/redis-server redis.conf

Back in the other terminal:
[root@localhost ~]# /usr/local/redis/bin/redis-cli
127.0.0.1:6379> get site
"www.crushlinux.it"
127.0.0.1:6379> get address
(nil)
7.2 AOF configuration
appendonly no                    # enable the AOF log?
appendfsync always               # sync after every command — safe but slow
appendfsync everysec             # compromise: sync once per second
appendfsync no                   # leave writing to the OS, which flushes its buffer when it sees fit — least frequent syncing, fastest

no-appendfsync-on-rewrite yes    # suspend AOF syncing while an rdb snapshot is being dumped?
auto-aof-rewrite-percentage 100  # rewrite the aof once it has grown 100% past its size at the last rewrite
auto-aof-rewrite-min-size 64mb   # ...but only once the aof exceeds 64 MB
In the original terminal:
[root@localhost redis]# vim redis.conf
Change:
593 appendonly yes
645 no-appendfsync-on-rewrite yes
[root@localhost redis]# pkill -9 redis
[root@localhost redis]# ./bin/redis-server redis.conf
[root@localhost redis]# vim appendonly.aof
The file is empty so far; open another terminal and write some data:
[root@localhost redis]# ./bin/redis-cli
127.0.0.1:6379> set site www.crushlinux.com
OK
127.0.0.1:6379> set name lily
OK

Back in the original terminal:
[root@localhost redis]# vim appendonly.aof
*2
$6
SELECT
$1
0
*3
$3
set
$4
site
$18
www.crushlinux.com
*3
$3
set
$4

Q: While an rdb dump is in progress and AOF syncing is suspended, can writes be lost?
A: No. All operations are buffered in an in-memory queue and written out together once the dump completes.

Q: What is an AOF rewrite?
A: Rewriting turns the in-memory data back into commands and writes them to a fresh .aof log, solving the problem of the AOF growing too large.

Q: If both an rdb file and an aof file exist, which one is used to restore the data?
A: The aof.

Q: Can both mechanisms be used at the same time?
A: Yes — and doing so is recommended.

Q: Which restores faster, rdb or aof?
A: rdb, because it is a memory image of the data and loads straight into memory, while the aof is a list of commands that must be replayed one by one.

VIII. Redis master-slave replication

8.1 Concepts
8.1.1 What a cluster provides
1. Master-slave backup, guarding against master failure
2. Read/write splitting, offloading work from the master
3. Task separation — e.g. slaves can take over backup and computation duties
8.1.2 Features of Redis replication

1. One master can have multiple slaves.
2. A slave can itself accept connections and sync requests from other slaves in the same architecture, forming cascading replication: Master -> Slave -> Slave.
3. The master syncs data to slaves in a non-blocking way, so it keeps serving read and write requests while one or more slaves synchronize.
4. Syncing on the slave side can also be made non-blocking: while performing a new sync, a slave can keep answering queries with its old data; otherwise, when a slave loses contact with the master, it returns an error to clients.
5. Replication scales out: multiple slaves serve read-only queries and provide data redundancy, while the master handles the writes.
6. Persistence can be disabled on the master and delegated to the slaves, sparing the master a separate process for that work.
8.1.3 How replication works

When a slave requests synchronization — whether a first connection or a reconnection — the master starts a background process that saves a data snapshot to a file, while also buffering all incoming data-modifying commands. Once the background process finishes, the master sends the data file to the slave; the slave saves it to disk and loads it into memory, after which the master streams the buffered modifying commands to the slave. If a slave crashes and later recovers, it reconnects automatically, and the master sends it the complete data file again. If the master receives sync requests from several slaves at once, it starts only one background save and then sends the resulting file to all of them, keeping every slave consistent.
8.2 Redis cluster configuration
8.2.1 Master configuration
1. Disable rdb snapshots (leave backups to a slave)
2. Optionally enable aof
8.2.2 Slave configuration
1. Declare slaveof
2. Configure the password (if the master has one: masterauth)
3. Enable rdb snapshots on one designated slave
4. Decide whether slaves are read-only (slave-read-only)

配置终端
[root@localhost redis]# cp redis.conf redis6380.conf
[root@localhost redis]# cp redis.conf redis6381.conf
[root@localhost redis]# pkill -9 redis
[root@localhost redis]# vim redis6380.conf
Changes:
150 pidfile /var/run/redis_6380.pid
84 port 6380
593 appendonly no
265 slaveof localhost 6379
272 masterauth passwd
301 slave-read-only yes
[root@localhost redis]# vim redis6381.conf
Changes:
150 pidfile /var/run/redis_6381.pid
84 port 6381
202 #save 900 1
203 #save 300 10
204 #save 60 10000
593 appendonly no
265 slaveof localhost 6379
272 masterauth passwd
301 slave-read-only yes
[root@localhost redis]# vim redis.conf
Changes:
202 #save 900 1
203 #save 300 10
204 #save 60 10000
593 appendonly yes
301 slave-read-only yes
480 requirepass passwd
[root@localhost redis]# ./bin/redis-server redis.conf
[root@localhost redis]# ./bin/redis-server redis6380.conf
[root@localhost redis]# ./bin/redis-server redis6381.conf

master
[root@localhost redis]# ./bin/redis-cli
127.0.0.1:6379> get title
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth passwd
OK
127.0.0.1:6379> set title sunshine
OK

slave1
[root@localhost ~]# cd /usr/local/redis/
[root@localhost redis]# ./bin/redis-cli -p 6380
127.0.0.1:6380> get title
"sunshine"
slave2
[root@localhost ~]# cd /usr/local/redis/
[root@localhost redis]# ./bin/redis-cli -p 6381
127.0.0.1:6381> get title
"sunshine"
8.3 Drawbacks of Redis master-slave replication
Drawback:
Every time a slave disconnects (whether deliberately or due to a network fault) and reconnects to the master,
the master must dump a full RDB and then the AOF again, i.e. the whole sync runs from scratch.
So remember: do not start many slaves at once, or the master's I/O may spike.

Redis operations

9.1 Server-side commands
redis 127.0.0.1:6380> time // shows server time: timestamp (seconds), microseconds

  1. "1375270361"
  2. "504511"

redis 127.0.0.1:6380> dbsize // number of keys in the current database
(integer) 2
redis 127.0.0.1:6380> select 2
OK
redis 127.0.0.1:6380[2]> dbsize
(integer) 0
redis 127.0.0.1:6380[2]>

BGREWRITEAOF rewrite the AOF in a background process
BGSAVE save an RDB snapshot in the background
SAVE save an RDB snapshot (blocking)
LASTSAVE time of the last successful save

SLAVEOF master-host port -- make the current instance a slave of the given master

FLUSHALL delete all keys in all databases

FLUSHDB delete all keys in the current database (use with care)

SHUTDOWN [save/nosave]
Note: if you run flushall by mistake, immediately run `shutdown nosave` to stop the server,
then hand-edit the AOF file, remove the "flushall" entry, and restart the server: the original data is loaded back.

If a bgrewriteaof happens to run after the flushall, the AOF itself is emptied and the data is lost.
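The manual fix described above (shutdown nosave, then delete the flushall entry from the AOF) can be sketched as a small script. The RESP walking mirrors the file layout shown earlier; this is an illustration, not an official tool, so test on a copy of the file first.

```python
def strip_command(aof_lines, bad="flushall"):
    """Drop every RESP entry whose command name matches `bad`
    (case-insensitive). aof_lines: list of lines without newlines."""
    out, i = [], 0
    while i < len(aof_lines):
        if aof_lines[i].startswith("*"):
            argc = int(aof_lines[i][1:])
            entry = aof_lines[i:i + 1 + 2 * argc]
            i += 1 + 2 * argc
            if entry[2].lower() == bad:   # entry[2] is the command name
                continue
            out.extend(entry)
        else:
            i += 1
    return out

lines = ["*1", "$8", "FLUSHALL", "*3", "$3", "set", "$4", "name", "$4", "lily"]
print(strip_command(lines))  # the FLUSHALL entry is gone, the set survives
```

After writing the filtered lines back to appendonly.aof and restarting the server, the pre-flushall data loads again, exactly as in the manual procedure.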

SLOWLOG show slow queries
Note: how slow counts as "slow"?
A: as set by `slowlog-log-slower-than 10000` (in microseconds).

How many slow-query records does the server keep?
A: as limited by `slowlog-max-len 128`.

INFO [Replication/CPU/Memory..]
Show information about the Redis server.

CONFIG GET <option>
CONFIG SET <option> <value> (a few special options cannot be set this way, e.g. slaveof, which needs the dedicated SLAVEOF command)
9.2 Parameters worth watching in operations
9.2.1 Memory

Memory

used_memory:859192 space used by the data structures
used_memory_rss:7634944 resident memory actually occupied
mem_fragmentation_ratio:8.89 ratio of the two; around 1.N is healthy. A much larger value means serious memory fragmentation; exporting and re-importing the data once can fix it.
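The ratio is literally the second number divided by the first, as a quick check of the figures above confirms:

```python
used_memory = 859192        # bytes held by Redis's data structures
used_memory_rss = 7634944   # resident set size reported by the OS
ratio = used_memory_rss / used_memory
print(round(ratio, 2))  # 8.89, matching mem_fragmentation_ratio above
```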
9.2.2 Replication

Replication

role:slave
master_host:192.168.1.128
master_port:6379
master_link_status:up
9.2.3 Persistence

Persistence

rdb_changes_since_last_save:0
rdb_last_save_time:1375224063
9.2.4 Fork cost

Status

latest_fork_usec:936 microseconds spent forking for the last RDB snapshot
Note: if an instance holds 10GB and a dump takes 2 minutes,
10000 writes per minute will trigger constant RDB dumps and keep the disk in a high-I/O state.
9.2.5 Slow log
config get/set slowlog-log-slower-than
config get/set slowlog-max-len
slowlog get N fetch the slow log
Changing master-slave roles at runtime
Promote one slave (call it A) to be the new master:
Tell it to stop being a slave of any other Redis instance
Command: slaveof no one
Turn off its read-only flag so it accepts writes

Point the remaining slaves at new master A:
Make each one a slave of A
Command format: slaveof IP port

AOF recovery and migrating RDB files between servers

10.1 Recovering from the AOF
Server side:
[root@localhost rdb]# rm -rf ./*
[root@localhost redis]# ./bin/redis-server ./redis.conf

Client side:
[root@localhost redis]#./bin/redis-cli
127.0.0.1:6379>set site www.google.com
127.0.0.1:6379>set address beijing
127.0.0.1:6379>set name lily

Server side:
[root@localhost redis]# ll /var/rdb

Client side:
127.0.0.1:6379>flushall
127.0.0.1:6379>get name
127.0.0.1:6379>shutdown nosave // shut down right away so the flushall is not rewritten into the AOF

Server side:
[root@localhost rdb]# vim appendonly.aof
Edit the AOF: delete the trailing flushall entry, then save.
*2
$6
SELECT
$1
0
*3
$3
set
$4
site
$14
www.google.com
*3
$3
set
$7
address
$7
beijing
*3
$3
set
$4
name
$4
lily
*1
$8
flushall

Restart the service:
[root@localhost redis]#./bin/redis-server redis.conf

Client side:
[root@localhost redis]#./bin/redis-cli
127.0.0.1:6379>get name
"lily"
10.2 Migrating RDB files between servers
[root@localhost rdb]# cp dump6379.rdb dump6380.rdb
[root@localhost redis]# ./bin/redis-cli -p 6380
127.0.0.1:6380> keys *
(empty list or set)
[root@localhost rdb]# vim /usr/local/redis/redis6380.conf
Change:
line 593: appendonly no
[root@localhost rdb]# mv appendonly.aof appendonly.aof.bak
[root@localhost rdb]# ll
total 12
-rw-r--r-- 1 root root 0 Mar 29 12:20 appendonly6380.aof
-rw-r--r-- 1 root root 100 Mar 29 11:36 appendonly.aof.bak
-rw-r--r-- 1 root root 113 Mar 29 11:55 dump6379.rdb
-rw-r--r-- 1 root root 113 Mar 29 11:55 dump6380.rdb
[root@localhost rdb]# mv appendonly6380.aof appendonly6380.aof.bak
[root@localhost rdb]# pkill -9 redis
[root@localhost rdb]# /usr/local/redis/bin/redis-server /usr/local/redis/redis.conf
[root@localhost rdb]# pkill -9 redis
[root@localhost rdb]# /usr/local/redis/bin/redis-server /usr/local/redis/redis6380.conf
[root@localhost redis]# ./bin/redis-cli
127.0.0.1:6379> keys *

  1. "name"
  2. "site"

[root@localhost redis]# ./bin/redis-cli -p 6380
127.0.0.1:6380> keys *

  1. "name"
  2. "site"
The Sentinel monitoring tool

11.1 How monitoring works
Sentinel keeps talking to the master, learns the master's slave list, and watches the state of both master and slaves. If a slave fails, it tells the master to drop that slave.

If the master fails, Sentinel picks one slave as the new master according to slave priority (configurable) and repoints the other slaves at the new master.

Concern:
If one Sentinel's check of the master happens to time out because the master is busy with I/O, declaring the master dead on that basis alone would be rash.
Solution:
Sentinel allows several instances to watch one master; the master is declared failed only when N Sentinels (N configurable) agree that it has failed.

The manual procedure Sentinel automates:
Changing master-slave roles at runtime
Promote one slave (call it A) to be the new master:
1) Tell it to stop being a slave of any other Redis instance
Command: slaveof no one
2) Turn off its read-only flag so it accepts writes

Point the remaining slaves at new master A:
1) Make each one a slave of A
11.2 Monitoring master and slave servers by hand
127.0.0.1:6379> config get appendonly

  1. "appendonly"
  2. "yes"

127.0.0.1:6379> info Replication

Replication

role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=356,lag=1
slave1:ip=127.0.0.1,port=6381,state=online,offset=356,lag=1
master_repl_offset:356
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:355

127.0.0.1:6380> info Replication

Replication

role:slave
master_host:localhost
master_port:6379
master_link_status:up
master_last_io_seconds_ago:6
master_sync_in_progress:0
slave_repl_offset:482
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

127.0.0.1:6381> info Replication

Replication

role:slave
master_host:localhost
master_port:6379
master_link_status:up
master_last_io_seconds_ago:8
master_sync_in_progress:0
slave_repl_offset:1056
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

127.0.0.1:6380> slaveof no one // promote this instance to master by hand
OK
127.0.0.1:6380> info Replication

Replication

role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

127.0.0.1:6380> config get slave-read-only

  1. "slave-read-only" //只读的
  2. "yes"

127.0.0.1:6380> config set slave-read-only no
OK
127.0.0.1:6380> config get slave-read-only

  1. "slave-read-only"
  2. "no"

127.0.0.1:6381> slaveof localhost 6380
OK
127.0.0.1:6381> info Replication

Replication

role:slave
master_host:localhost
master_port:6380
master_link_status:up
master_last_io_seconds_ago:3
master_sync_in_progress:0
slave_repl_offset:15
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

127.0.0.1:6380> info Replication

Replication

role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6381,state=online,offset=57,lag=1
master_repl_offset:57
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:56
11.3 Automatic monitoring with Sentinel
Copy the monitoring tool's config file from the source tree into the current directory:
[root@localhost redis]# cp /usr/src/redis-3.2.8/sentinel.conf ./
[root@localhost redis]# ls
appendonly.aof dump.rdb redis6381.conf sentinel.conf
bin redis6380.conf redis.conf
Edit the configuration file:
[root@localhost redis]# vim sentinel.conf
Change line 98 from
98 sentinel monitor mymaster 127.0.0.1 6380 2
to
98 sentinel monitor mymaster 127.0.0.1 6380 1
Sentinel configuration options
port 26379

The Sentinel port.

sentinel monitor mymaster 127.0.0.1 6379 2
A name for the master (any unique name will do); the master is declared failed once 2 Sentinel instances agree it is down.

sentinel down-after-milliseconds mymaster 30000
How many milliseconds without contact before the master is considered down.

sentinel can-failover mymaster yes
Whether this Sentinel may promote a slave to master. If no, it can only monitor and has no authority to make changes.

sentinel parallel-syncs mymaster 1
How many slaves are repointed to the new master at a time.

sentinel client-reconfig-script mymaster /var/redis/reconfig.sh
A script that can be triggered while reconfiguring the new master and slaves.
Restart the service and check the replication info:
[root@localhost redis]# ./bin/redis-server ./redis.conf
127.0.0.1:6379> info Replication

Replication

role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=127,lag=1
slave1:ip=127.0.0.1,port=6381,state=online,offset=127,lag=1
master_repl_offset:127
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:126
Start the Sentinel monitoring service
The main redis config contains no sentinel directives; the monitor is started with:
[root@localhost redis]# ./bin/redis-server ./sentinel.conf --sentinel
7959:X 22 Apr 23:55:08.764 # Sentinel ID is 3fe1c582aa2579d8478c7a13e9cca2ebd5af4725
7959:X 22 Apr 23:55:08.764 # +monitor master mymaster 127.0.0.1 6379 quorum 1
7959:X 22 Apr 23:55:08.766 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
7959:X 22 Apr 23:55:08.768 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379

Take 6379 down and watch the output:
7959:X 22 Apr 23:58:03.347 # +failover-end master mymaster 127.0.0.1 6379
7959:X 22 Apr 23:58:03.347 # +switch-master mymaster 127.0.0.1 6379 127.0.0.1 6380

127.0.0.1:6380> info Replication

Replication

role:master // the role changed automatically
connected_slaves:1
slave0:ip=127.0.0.1,port=6381,state=online,offset=6548,lag=0
master_repl_offset:6548
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:6547

127.0.0.1:6381> info Replication

Replication

role:slave
master_host:127.0.0.1
master_port:6380
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:9397
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
Configure priorities to choose who is preferred as master:
vim /usr/local/redis/redis6380.conf
Change line 413 from
413 slave-priority 100
to
413 slave-priority 10 // a lower value means a higher priority
Restart all the redis services again and check the info:
127.0.0.1:6380> info Replication

Replication

role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6381,state=online,offset=1671,lag=0
slave1:ip=127.0.0.1,port=6379,state=online,offset=1804,lag=0
master_repl_offset:1804
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:1803

127.0.0.1:6381> info Replication

Replication

role:slave
master_host:127.0.0.1
master_port:6380
master_link_status:up
master_last_io_seconds_ago:2
master_sync_in_progress:0
slave_repl_offset:2217
slave_priority:10
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

127.0.0.1:6379> info Replication

Replication

role:slave
master_host:127.0.0.1
master_port:6380
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:3575
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

Key design principles

12.1 Where Redis fits versus a relational database
12.1.1 A bookmark (tagging) system
In MySQL:
create table book (
bookid int,
title char(20)
)engine myisam charset utf8;

insert into book values
(5 , 'PHP Bible'),
(6 , 'Ruby in Practice'),
(7 , 'MySQL Operations'),
(8 , 'Server-side Ruby');

create table tags (
tid int,
bookid int,
content char(20)
)engine myisam charset utf8;

insert into tags values
(10 , 5 , 'PHP'),
(11 , 5 , 'WEB'),
(12 , 6 , 'WEB'),
(13 , 6 , 'ruby'),
(14 , 7 , 'database'),
(15 , 8 , 'ruby'),
(16 , 8 , 'server');

Finding books that carry both the PHP tag and the WEB tag requires a self-join:

select * from tags inner join tags as t on tags.bookid=t.bookid
where tags.content='PHP' and t.content='WEB';

The same data in key-value storage:
127.0.0.1:6379> flushall
OK
127.0.0.1:6379> set book:5:title 'PHP bible'
OK
127.0.0.1:6379> set book:6:title 'ruby practice'
OK
127.0.0.1:6379> set book:7:title 'mysql ops'
OK
127.0.0.1:6379> set book:8:title 'ruby server'
OK

127.0.0.1:6379> sadd tag:php 5
(integer) 1
127.0.0.1:6379> sadd tag:WEB 5 6
(integer) 2
127.0.0.1:6379> sadd tag:database 7
(integer) 1
127.0.0.1:6379> sadd tag:ruby 6 8
(integer) 2
127.0.0.1:6379> sadd tag:server 8
(integer) 1

Query: books tagged both PHP and WEB
sinter tag:php tag:WEB # set intersection

Query: books tagged PHP or WEB
sunion tag:php tag:WEB # set union

Query: books tagged ruby but not WEB
sdiff tag:ruby tag:WEB # set difference
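Python's built-in sets mirror the three Redis set commands used above, which makes the semantics easy to check offline:

```python
# The tag sets exactly as built with SADD above: tag name -> set of bookids.
tag = {
    "php":      {5},
    "WEB":      {5, 6},
    "database": {7},
    "ruby":     {6, 8},
    "server":   {8},
}

print(tag["php"] & tag["WEB"])   # SINTER: books tagged both PHP and WEB
print(tag["php"] | tag["WEB"])   # SUNION: books tagged PHP or WEB
print(tag["ruby"] - tag["WEB"])  # SDIFF: ruby but not WEB
```

The join query from the MySQL version collapses into a single set intersection, which is exactly why the tag data maps so naturally onto Redis sets.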
12.1.2 Redis key design technique
1. Turn the table name into the key prefix, e.g. tag:
2. Put the name of the distinguishing field (the MySQL primary-key column, e.g. userid) in the second segment
3. Put the primary-key value in the third segment, e.g. 2, 3, 4, ..., a, b, c
4. Put the column name to be stored in the fourth segment

Converting a user table to key-value storage:
userid username password email
9 lisi 111111 lisi@163.com
For example:
set user:userid:9:username lisi
set user:userid:9:password 111111
set user:userid:9:email lisi@163.com
keys user:userid:9*

This scheme also pays off in distributed storage: hashing only the leading key segments sends all of one user's keys to the same server, avoiding the "bottomless pit" effect.
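The four-segment convention is easy to wrap in a helper. This is an illustrative sketch (the function names and the toy hash are ours, not a Redis API):

```python
def make_key(table, pk_name, pk_value, column):
    """Build a key as <table>:<pk column>:<pk value>:<column>."""
    return ":".join([table, pk_name, str(pk_value), column])

def shard(key, n_servers):
    """Hash only the user-identifying prefix (first three segments),
    so all keys of one user land on the same server."""
    prefix = ":".join(key.split(":")[:3])
    return sum(prefix.encode()) % n_servers  # toy hash, for illustration only

k1 = make_key("user", "userid", 9, "username")
k2 = make_key("user", "userid", 9, "email")
print(k1)                            # user:userid:9:username
print(shard(k1, 4) == shard(k2, 4))  # True: same user, same server
```

Hashing the prefix rather than the whole key is what keeps a single user's data co-located when the keyspace is spread across servers.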
12.2 Hands-on session
127.0.0.1:6379> set user:userid:9:passwd 111111
OK
127.0.0.1:6379> set user:userid:9:email a@a.com
OK
127.0.0.1:6379> set user:userid:9:username lisi
OK
127.0.0.1:6379> get user:userid:9:username
"lisi"
127.0.0.1:6379> get user:userid:9:passwd
"111111"
127.0.0.1:6379> get user:userid:9:email
"a@a.com"
127.0.0.1:6379> keys user:userid:9*

  1. "user:userid:9:email"
  2. "user:userid:9:username"
  3. "user:userid:9:passwd"

Note:
In a relational database, columns other than the primary key are queried as well;
in the table above, username is queried very frequently, and such a column would normally carry an index.

In the key-value model, the counterpart is an extra key-value entry keyed on that column:
set user:username:lisi:uid 9

Now user:username:lisi:uid resolves to userid=9,
after which user:userid:9:password, :email and so on can be fetched:
get user:username:lisi:uid

That completes looking up a user's data by username.
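The secondary-index idea can be sketched with a plain dictionary standing in for the Redis keyspace (an illustration of the key scheme, not a client library):

```python
store = {}

def save_user(userid, username, password):
    """Write the primary keys plus one secondary-index key that maps
    the indexed column (username) back to the primary key."""
    store["user:userid:%d:username" % userid] = username
    store["user:userid:%d:password" % userid] = password
    store["user:username:%s:uid" % username] = str(userid)  # secondary index

def find_password_by_username(username):
    """Two lookups: username -> userid, then userid -> password."""
    uid = store["user:username:%s:uid" % username]
    return store["user:userid:%s:password" % uid]

save_user(9, "lisi", "111111")
print(find_password_by_username("lisi"))  # 111111
```

The cost of the index is one extra SET on every write; the payoff is an O(1) lookup by username instead of scanning keys.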

Hands-on project
Building the LEMP (LNMP) stack
Setting up nginx
[root@nginx ~]# service iptables stop
[root@nginx ~]# setenforce 0
[root@nginx ~]# service httpd stop
[root@nginx ~]# yum -y install pcre-devel zlib-devel
[root@nginx ~]# useradd -M -s /sbin/nologin nginx
[root@nginx ~]# tar xf nginx-1.6.2.tar.gz -C /usr/src/
[root@nginx ~]# cd /usr/src/nginx-1.6.2/
[root@nginx nginx-1.6.2]# ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-http_stub_status_module --with-http_ssl_module --with-http_flv_module --with-http_gzip_static_module &&make && make install
[root@nginx nginx-1.6.2]# ln -s /usr/local/nginx/sbin/nginx /usr/local/bin/
[root@nginx nginx-1.6.2]# ll /usr/local/bin/nginx
lrwxrwxrwx 1 root root 27 12-29 07:24 /usr/local/bin/nginx -> /usr/local/nginx/sbin/nginx
[root@nginx conf]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@nginx conf]# nginx
[root@nginx conf]# netstat -anpt |grep :80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 6810/nginx: master
[root@nginx ~]# elinks --dump http://localhost
Welcome to nginx!
Add a service control script:
[root@nginx ~]# vim /etc/init.d/nginx
Script 1:

#!/bin/bash
# chkconfig: 2345 99 20
# description: Nginx Server Control Script

PROG="/usr/local/nginx/sbin/nginx"
PIDF="/usr/local/nginx/logs/nginx.pid"
case "$1" in
start)
$PROG
;;
stop)
kill -s QUIT $(cat $PIDF)
;;
restart)
$0 stop
$0 start
;;
reload)
kill -s HUP $(cat $PIDF)
;;
*)
echo "Usage: $0 {start|stop|restart|reload}"
exit 1
esac
exit 0

[root@nginx ~]# chmod +x /etc/init.d/nginx
[root@nginx ~]# chkconfig --add nginx
[root@nginx ~]# chkconfig nginx on
[root@nginx ~]# chkconfig --list nginx
nginx 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Installing the MySQL database
[root@nginx ~]# rpm -q mysql-server mysql
package mysql-server is not installed
package mysql is not installed
[root@nginx ~]# yum -y install ncurses-devel
[root@nginx ~]# tar xf cmake-2.8.12.tar.gz -C /usr/src/
[root@nginx ~]# cd /usr/src/cmake-2.8.12/
[root@nginx cmake-2.8.12]# ./configure && gmake && gmake install
[root@nginx ~]# groupadd mysql
[root@nginx ~]# useradd -M -s /sbin/nologin mysql -g mysql
[root@nginx ~]# tar xf mysql-5.5.22.tar.gz -C /usr/src/
[root@nginx ~]# cd /usr/src/mysql-5.5.22/
[root@nginx mysql-5.5.22]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DDEFAULT_CHARSET=utf8 -DDEFAULT_COLLATION=utf8_general_ci -DWITH_EXTRA_CHARSETS=all -DSYSCONFDIR=/etc && make && make install
[root@ nginx mysql-5.5.22]# chown -R mysql:mysql /usr/local/mysql
[root@nginx mysql-5.5.22]# cp support-files/my-medium.cnf /etc/my.cnf
[root@nginx mysql-5.5.22]# cp support-files/mysql.server /etc/init.d/mysqld
[root@nginx mysql-5.5.22]# chmod +x /etc/init.d/mysqld
[root@nginx mysql-5.5.22]# chkconfig --add mysqld
[root@nginx mysql-5.5.22]# chkconfig --list mysqld
mysqld 0:off 1:off 2:on 3:on 4:on 5:on 6:off
[root@nginx mysql-5.5.22]# echo "PATH=$PATH:/usr/local/mysql/bin" >> /etc/profile
[root@nginx mysql-5.5.22]# . /etc/profile
[root@nginx mysql-5.5.22]# groupadd mysql
[root@nginx mysql-5.5.22]# useradd -M -s /sbin/nologin -g mysql mysql
[root@nginx mysql-5.5.22]# /usr/local/mysql/scripts/mysql_install_db --basedir=/usr/local/mysql/ --datadir=/usr/local/mysql/data --user=mysql
[root@nginx mysql-5.5.22]# service mysqld start
Starting MySQL.............. [OK]
Installing the PHP environment
[root@nginx ~]# yum -y install gd libxml2-devel libjpeg-devel libpng-devel
[root@nginx ~]# tar xf php-5.3.6.tar.gz -C /usr/src/
[root@nginx ~]# cd /usr/src/php-5.3.6/
[root@nginx php-5.3.6]# ./configure --prefix=/usr/local/php5 --with-gd --with-zlib --with-mysql=/usr/local/mysql --with-config-file-path=/usr/local/php5 --enable-mbstring --enable-fpm --with-jpeg-dir=/usr/lib && make && make install

Handling build errors
Note: on 64-bit RHEL6 the configure options above may report the errors below; if so, resolve them as follows:
If configure fails try --with-jpeg-dir=

configure: error: libpng.(a|so) not found.
Fix:
The error says libpng.so and libpng.a cannot be found, yet the libpng packages are installed, as the query shows:
[root@nginx ~]# rpm -qa |grep libpng
So the job is to find where the installed libraries actually live:
[root@nginx ~]# updatedb
[root@nginx ~]# locate libpng.so
/usr/lib64/libpng.so
/usr/lib64/libpng.so.3
/usr/lib64/libpng.so.3.44.0
/usr/lib64/compiz/libpng.so
That reveals the cause: configure searches /usr/lib/ by default, since PHP looks for libraries there, while on x86_64 machines they live in /usr/lib64. Copy the needed libraries from /usr/lib64 to /usr/lib:
[root@nginx ~]# cp -frp /usr/lib64/libpng* /usr/lib/
Then re-run ./configure.
A "configure: error: libjpeg.(a|so) not found" error is fixed the same way:
[root@nginx ~]# yum -y install libjpeg*
[root@nginx ~]# cp -frp /usr/lib64/libpng* /usr/lib/
[root@nginx ~]# cp -frp /usr/lib64/libjpeg* /usr/lib/
If make install reports:
/usr/bin/install: cannot create regular file `/usr/local/man/man1/cjpeg.1': No such file or directory
make: *** [install] Error 1
the target directory is simply missing; create it first:
mkdir -p /usr/local/man/man1
then run the install again.
Similar errors abound and so do the fixes; analyze each concrete problem as it comes.
Configuring the PHP environment
[root@nginx ~]# cd /usr/src/php-5.3.6/
[root@nginx php-5.3.6]# cp php.ini-development /usr/local/php5/php.ini
[root@nginx php-5.3.6]# ln -s /usr/local/php5/bin/* /usr/local/bin/
[root@nginx php-5.3.6]# ln -s /usr/local/php5/sbin/* /usr/local/sbin/
[root@nginx ~]# tar xf ZendGuardLoader-php-5.3-linux-glibc23-x86_64.tar.gz -C /usr/src/
[root@nginx ~]# cd /usr/src/ZendGuardLoader-php-5.3-linux-glibc23-x86_64/
[root@nginx ZendGuardLoader-php-5.3-linux-glibc23-x86_64]#cp php-5.3.x/ZendGuardLoader.so /usr/local/php5/lib/php/
[root@nginx ZendGuardLoader-php-5.3-linux-glibc23-i386]# vim /usr/local/php5/php.ini
zend_extension=/usr/local/php5/lib/php/ZendGuardLoader.so
zend_loader.enable=1
[root@nginx ~]# cd /usr/local/php5/etc/
[root@nginx etc]# cp php-fpm.conf.default php-fpm.conf
[root@nginx etc]# vim php-fpm.conf
25 pid = run/php-fpm.pid // confirm the pid file location
122 user = nginx // run as user
123 group = nginx // run as group
157 pm.start_servers = 20 // processes started at launch
162 pm.min_spare_servers = 5 // minimum idle processes
167 pm.max_spare_servers = 50 // maximum idle processes
152 pm.max_children = 50 // maximum number of child processes
[root@nginx etc]# /usr/local/sbin/php-fpm
[root@nginx etc]# netstat -anpt |grep php-fpm
tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 23027/php-fpm.conf)
Start php-fpm together with nginx:
[root@nginx etc]# vim /etc/init.d/nginx

#!/bin/bash
# chkconfig: 2345 99 20
# description: Nginx Server Control Script

PROG="/usr/local/nginx/sbin/nginx"
PIDF="/usr/local/nginx/logs/nginx.pid"
PROG_FPM="/usr/local/sbin/php-fpm"
PIDF_FPM="/usr/local/php5/var/run/php-fpm.pid"
case "$1" in
start)
$PROG
$PROG_FPM
;;
stop)
kill -s QUIT $(cat $PIDF)
kill -s QUIT $(cat $PIDF_FPM)
;;
restart)
$0 stop
$0 start
;;
reload)
kill -s HUP $(cat $PIDF)
;;
*)
echo "Usage: $0 {start|stop|restart|reload}"
exit 1
esac
exit 0

Now starting or stopping the nginx service also starts or stops php-fpm; no separate step is needed.
Configure Nginx to hand .php requests to PHP:
[root@nginx etc]# vim /usr/local/nginx/conf/nginx.conf
location ~ \.php$ {
root html;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi.conf;
}
[root@nginx etc]# killall -HUP nginx
Create a PHP test page under the benet directory and check that it is parsed and served (a standard phpinfo page; the exact content was lost in this copy of the notes):
[root@nginx etc]# vim /usr/local/nginx/html/benet/test.php
<?php
phpinfo();
?>

Then test connectivity to the MySQL database with a second page:
[root@nginx etc]# vim /usr/local/nginx/html/benet/test1.php
<?php
$link=mysql_connect('localhost','root','password'); // credentials are placeholders
if($link) echo "Congratulations, it works!!"; // message returned on a successful connection
mysql_close(); // close the database connection
?>

Installing the php-redis extension
Unpack, build, and install the php-redis extension:
Run /php/path/bin/phpize (it detects the PHP core version and generates the matching build configuration for the extension)
./configure --with-php-config=/php/path/bin/php-config
make && make install
[root@localhost redis-3.2.8]# cd /usr/src
[root@localhost src]# tar zxf redis-2.2.4.tgz
[root@localhost src]# cd redis-2.2.4
[root@localhost redis-2.2.4]# ls // the phpize command must be run to generate the build configuration
arrays.markdown library.c redis_array_impl.c
common.h library.h redis_array_impl.h
config.m4 php_redis.h redis.c
config.w32 README.markdown redis_session.c
COPYING redis_array.c redis_session.h
CREDITS redis_array.h
[root@localhost redis-2.2.4]# /usr/local/php5/bin/phpize
Configuring for:
PHP Api Version: 20090626
Zend Module Api No: 20090626
Zend Extension Api No: 220090626
[root@localhost redis-2.2.4]# ls
acinclude.m4 configure.in php_redis.h
aclocal.m4 config.w32 README.markdown
arrays.markdown COPYING redis_array.c
autom4te.cache CREDITS redis_array.h
build install-sh redis_array_impl.c
common.h library.c redis_array_impl.h
config.guess library.h redis.c
config.h.in ltmain.sh redis_session.c
config.m4 Makefile.global redis_session.h
config.sub missing run-tests.php
configure mkinstalldirs
[root@localhost redis-2.2.4]# ./configure --with-php-config=/usr/local/php5/bin/php-config
Loading the compiled redis.so extension
1. Edit php.ini
2. Add the extension line
[root@localhost redis-2.2.4]# vim /usr/local/php5/php.ini
Add around line 1000:
extension=/usr/local/php5/lib/php/extensions/no-debug-non-zts-20090626/redis.so
Restart the PHP service:
[root@localhost php5]# pkill -9 php-fpm
[root@localhost php5]# /usr/local/php5/sbin/php-fpm
[root@localhost php5]# netstat -anpt | grep php-fpm
LISTEN 31411/php-fpm
Using the redis extension
Add a redis.php test page under html/benet:
vim /usr/local/nginx/html/benet/redis.php
<?php
$redis = new Redis();
$redis->open('localhost',6379);
$redis->set('user:userid:9:username','wangwu'); // store
var_dump($redis->get('user:userid:9:username')); // query
?>
Test the result in the browser.

Weibo-clone hands-on project
Upload the weibo project template into /usr/local/nginx/html:
[root@localhost html]# unzip weibo-模板.zip
[root@localhost html]# cd weibo
Make html/weibo take effect in the main nginx config: comment out the html/benet block
[root@localhost weibo]# vim /usr/local/nginx/conf/nginx.conf
# location ~ \.php$ {
# root html/benet;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# include fastcgi.conf;
# }
and add
location ~ \.php$ {
root html/weibo;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi.conf;
}
To simplify the code, extract the shared header (the first 12 lines of index.html, whose title is "Retwis - Example Twitter clone based on the Redis Key-Value DB") and footer, and move the images under html:
[root@localhost weibo]# vim header.php
Rename the home page under html/weibo from index.html to index.php:
[root@localhost weibo]# mv index.html index.php
[root@localhost weibo]# vim index.php
Delete the first 12 lines and include header.php at the top; delete the last 4 lines and include the footer at the bottom.
Test the result in the browser.

[root@localhost weibo]# vim register.php

Write the shared function library:
[root@localhost weibo]# vim lib.php

Keys added after registering a user:
[root@localhost redis]# ./bin/redis-cli
127.0.0.1:6379> keys *

  1. "user:username🐺userid"
  2. "user:userid:9:passwd"
  3. "user:userid:9:email"
  4. "user:userid:9:username"
  5. "global:userid"
  6. "user:userid:1:password"
  7. "user:userid:1:username"
127.0.0.1:6379> get user:userid:1:username
"wolf"
127.0.0.1:6379> get user:userid:1:password
"123123"
Register again:

mv home.html home.php
vim home.php
Strip the static header and include the shared header file and function library instead.

Add a login check; the page presents a post box prompting "What's on your mind?".

127.0.0.1:6379> keys *

  1. "user:username:tiger:userid"
  2. "global:userid"
  3. "newuserlink"
  4. "user:userid:1:username"
  5. "user:userid:1:password"

127.0.0.1:6379> lrange newuserlink 0 -1

  1. "1"

Add two users, test1 and test2, then go back to redis and inspect the database:
127.0.0.1:6379> lrange newuserlink 0 -1

  1. "3"
  2. "2"
  3. "1"

What sort returns is the equivalent of the MySQL primary keys:
127.0.0.1:6379> sort newuserlink // ascending order

  1. "1"
  2. "2"
  3. "3"

127.0.0.1:6379> sort newuserlink desc // descending order

  1. "3"
  2. "2"
  3. "1"

Edit the top of timeline.php:
<?php
$newuserlist = $redis->sort('newuserlink', array('sort'=>'desc', 'get'=>'user:userid:*:username'));
print_r($newuserlist);
exit;
?>

Then adapt the user listing in timeline accordingly.

Add follow.php to implement the follow feature.

The supporting code lives in the php directory.
Key design for the weibo project
Global keys:
Table name: global
Column Operation Note
global:userid incr generates the global userid
global:postid incr generates the global postid

User keys (the user table):
Table name: user
userid username password authsecret
3 test3 111111 #U*Q(%_

In redis this becomes the following keys (prefix user):
user:userid:3:username -> test3
user:userid:3:password -> 111111
user:userid:3:authsecret -> #U*Q(%_

Table design for the posts themselves:
Table name: post
postid userid username time content
4 2 lisi 1370987654 test content

The matching key design in redis (prefix post):
post:postid:4:userid -> 2
post:postid:4:username -> lisi
post:postid:4:time -> 1370987654
post:postid:4:content -> test content

Follow table: following
following:$userid --> set of the userids this user follows

Follower table:
follower:$userid --> set of the userids following this user

Push table: receivepost
receivepost:$userid --> list of postids pushed to this user, e.g. 3 4 7

===== Pull model, improved =====

Pull table:
pull:$userid --> list of postids pulled for this user, e.g. 3 4 7

Q: Last time I pulled posts 5, 6, 7 from A; on the next refresh of home.php, pulling should start from posts with id > 7.
Solution: record a lastpull timestamp when pulling; next time, pull only posts newer than lastpull.

Q: With many followees, how do we pull?
Solution: loop over one's following list and fetch each followee's new posts.

Q: Where do the pulled posts go?
A: Into the pull:$userid list.

Q: What if the personal page keeps only the latest 1000?
A: ltrim the list to 1000 entries.

Q: If I follow both A and B and pull the 3 latest posts from each, the 3+3 posts interleave in time. How can they be ordered by time?
A: Posts are published as hash structures, which cannot be sorted by time directly.

Solution: when syncing, record the largest post id fetched this time,
and on the next sync fetch only posts with a larger id.
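The max-id bookkeeping above can be sketched in a few lines (an illustration of the pull-model idea, with plain lists standing in for the Redis structures):

```python
def pull_new(followee_posts, last_max_id):
    """Pull-model sync sketch: fetch only posts whose id is greater than
    the id recorded at the previous sync, then remember the new maximum."""
    fresh = sorted(p for p in followee_posts if p > last_max_id)
    new_max = fresh[-1] if fresh else last_max_id
    return fresh, new_max

posts = [5, 6, 7, 9, 12]          # post ids published by someone we follow
fresh, last = pull_new(posts, 7)  # the previous sync stopped at id 7
print(fresh, last)                # [9, 12] 12: only the new posts are fetched
```

Because the global postid comes from a single incr counter, larger ids are always newer, so merging pulls from several followees by id also orders them by time.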

Time taken for tests: 32.690 seconds
Complete requests: 20000
Failed requests: 0
Write errors: 0
Non-2xx responses: 20000
Total transferred: 13520000 bytes
Total POSTed: 5340000
HTML transferred: 9300000 bytes
Requests per second: 611.80 [#/sec] (mean)
Time per request: 81.726 [ms] (mean)
Time per request: 1.635 [ms] (mean, across all concurrent requests)
Transfer rate: 403.88 [Kbytes/sec] received
159.52 kb/s sent
563.41 kb/s total

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.9 0 19
Processing: 14 82 8.4 81 153
Waiting: 4 82 8.4 80 153
Total: 20 82 8.2 81 153

Percentage of the requests served within a certain time (ms)
50% 81
66% 84
75% 86
80% 88
90% 93
95% 96
98% 100
99% 103
100% 153 (longest request)

Test results:
50 concurrent clients, 20000 requests, in a VM with no special tuning,
6 redis write operations per request,
completed in a little over 30 seconds.

That is roughly 600 posts published per second (611.80 requests/s measured) and close to 4000 redis writes per second.
A background scheduled task later migrates cold data into MySQL.
Redis configuration notes
daemonize yes # run redis as a background process
requirepass <password> # password required to connect to redis
Note: once a password is configured, a client must authenticate after connecting:

auth <password>
