
RabbitMQ

How to install and use RabbitMQ on Ubuntu Linux

RabbitMQ is a widely used, free message broker that is simple and efficient. OpenStack uses RabbitMQ as its message middleware by default. The following shows how to install the RabbitMQ service on Ubuntu and how to use and monitor it.

apt install rabbitmq-server

After installation, add an openstack user to RabbitMQ in preparation for a later OpenStack deployment

rabbitmqctl add_user openstack dick

Here dick is the password the openstack user will use to log in to the RabbitMQ service

Also grant the openstack user configure, write and read permissions

rabbitmqctl set_permissions openstack ".*" ".*" ".*"

Install the RabbitMQ management plugin to monitor and manage RabbitMQ

rabbitmq-plugins enable rabbitmq_management

Once the rabbitmq_management plugin is enabled, RabbitMQ can be monitored and managed through a web page

 

Monitoring and managing RabbitMQ with the rabbitmq_management plugin

Open the following address in a browser such as Firefox:

http://localhost:15672

On the login page use the guest/guest username and password to log in to the RabbitMQ management console, where you can manage channels, queues, users and more
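
Besides the web UI, the management plugin also exposes an HTTP API, which is handy for scripting. The following is only a minimal sketch, assuming the plugin listens on localhost:15672 and the default guest/guest account is still enabled; it lists each queue and its current message count:

import base64
import json
import urllib.request

# query the management plugin's HTTP API for all queues
url = "http://localhost:15672/api/queues"
req = urllib.request.Request(url)
req.add_header("Authorization", "Basic " + base64.b64encode(b"guest:guest").decode())

with urllib.request.urlopen(req) as resp:
    for q in json.loads(resp.read().decode()):
        print(q["name"], q.get("messages", 0))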

Install pika to develop RabbitMQ clients in Python

 pip install pika

  

The simplest queue communication

 

 

producer

import pika

# establish the underlying TCP connection (BlockingConnection wraps the socket logic)
connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))

# open a channel -- the handle through which we talk to RabbitMQ
channel = connection.channel()

# declare the queue both sides will use; it is named 'hello'
channel.queue_declare(queue='hello')

# In RabbitMQ a message can never be sent directly to the queue,
# it always needs to go through an exchange.
channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()

 

consumer

#_*_coding:utf-8_*_
__author__ = 'Alex Li'
import pika
 
connection = pika.BlockingConnection(pika.ConnectionParameters(
               'localhost'))
channel = connection.channel()
 
 
#You may ask why we declare the queue again ‒ we have already declared it in our previous code.
# We could avoid that if we were sure that the queue already exists. For example if send.py program
#was run before. But we're not yet sure which program to run first. In such cases it's a good
# practice to repeat declaring the queue in both programs.
channel.queue_declare(queue='hello')

# ch is the channel object the message arrived on; method carries delivery
# information such as which exchange and routing key were used
def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

# callback is invoked whenever a message arrives on the queue
channel.basic_consume(callback, queue='hello', no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

To connect to the RabbitMQ server from a remote machine, you need to configure permissions.

First create a user on the RabbitMQ server

sudo rabbitmqctl  add_user alex alex3714  

  

You also need to configure permissions to allow access from outside

sudo rabbitmqctl set_permissions -p / alex ".*" ".*" ".*"

set_permissions [-p vhost] {user} {conf} {write} {read}

vhost

The name of the virtual host to which to grant the user access, defaulting to /.

user

The name of the user to grant access to the specified virtual host.

conf

A regular expression matching resource names for which the user is granted configure permissions.

write

A regular expression matching resource names for which the user is granted write permissions.

read

A regular expression matching resource names for which the user is granted read permissions.
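
For example (a purely illustrative command; the someuser account and the openstack- name prefix are placeholders), a user can be limited to resources whose names start with openstack- on the default vhost:

sudo rabbitmqctl set_permissions -p / someuser "^openstack-.*" "^openstack-.*" "^openstack-.*"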

 

When connecting, the client needs to supply authentication credentials

credentials = pika.PlainCredentials('alex', 'alex3714')
 
 
connection = pika.BlockingConnection(pika.ConnectionParameters(
    '10.211.55.5',5672,'/',credentials))
channel = connection.channel()

Message persistence

Pass durable=True when declaring the queue on both the producer and the consumer side

channel.queue_declare(queue='hello', durable=True)

 

When the producer publishes a message, also add the following properties option

channel.basic_publish(exchange='',
                      routing_key="task_queue",
                      body=message,
                      properties=pika.BasicProperties(
                         delivery_mode = 2, # make message persistent
                      ))

 

 

Fair dispatch of messages

If RabbitMQ simply dispatched messages to consumers in order without considering their load, a consumer on a weak machine could pile up a backlog it cannot work through while a consumer on a powerful machine stays idle. To solve this, configure prefetch_count=1 on each consumer, which tells RabbitMQ not to deliver a new message to this consumer until it has finished processing the current one.

channel.basic_qos(prefetch_count=1)

  

send side (producer)


import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))  # establish the socket connection
channel = connection.channel()  # open a channel

# declare the queue named 'hello'; durable=True makes the queue survive a broker restart
channel.queue_declare(queue='hello', durable=True)


channel.basic_publish(exchange='',
                      routing_key='hello',  # the name of the queue
                      body='Hello World!123',
                      # make the message itself persistent
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print(" [x] Sent 'Hello World!'")
connection.close()

 

receive side (consumer)


import pika,time

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()


channel.queue_declare(queue='hello', durable=True)  # durable=True: the queue survives a broker restart


def callback(ch, method, properties, body):
    # ch is the channel object the message arrived on
    print(ch, method, properties)
    # time.sleep(30)
    print(" [x] Received %r" % body)
    ch.basic_ack(delivery_tag=method.delivery_tag)  # manually acknowledge to the server that the message has been processed

channel.basic_qos(prefetch_count=1)  # fair dispatch: no new message is delivered until the current one is acknowledged
channel.basic_consume(callback,       # called whenever a message is received
                      queue='hello',
                      # with no_ack left at its default (False), an unacknowledged message is
                      # redelivered to another consumer if this one disconnects
                      )

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

 

Publish/Subscribe

 

The examples so far have essentially been one-to-one: a message is sent to one specific queue. Sometimes, however, you want a message to reach every queue, like a broadcast. That is what exchanges are for.

An exchange is a very simple thing. On one side it receives messages from producers and the other side it pushes them to queues. The exchange must know exactly what to do with a message it receives. Should it be appended to a particular queue? Should it be appended to many queues? Or should it get discarded. The rules for that are defined by the exchange type.

An exchange has a type, set when it is declared, which determines which queues are eligible to receive a message:


fanout: every queue bound to this exchange receives the message
direct: only the queue selected by the routingKey on this exchange receives the message
topic: every queue whose binding key (which may contain wildcards) matches the routingKey receives the message

   Wildcard notes: # matches zero or more words, * matches exactly one word (words are separated by dots); see the sketch after this list
      e.g. #.a matches a.a, aa.a, aaa.a, x.y.a, etc.
           *.a matches a.a, b.a, c.a, etc.
      Note: binding with key # on a topic exchange is equivalent to using a fanout exchange

headers: the message headers decide which queues receive the message
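
To make the topic wildcard rules above concrete, the snippet below mimics how a topic binding key is matched against a routing key, word by word. It is only an illustrative sketch for understanding the rules, not RabbitMQ's actual matcher:

# illustrative only: '*' matches exactly one word, '#' matches zero or more words
def topic_matches(binding_key, routing_key):
    def match(b, r):
        if not b:
            return not r
        if b[0] == '#':
            # '#' may swallow zero or more words
            return any(match(b[1:], r[i:]) for i in range(len(r) + 1))
        if not r:
            return False
        return (b[0] == '*' or b[0] == r[0]) and match(b[1:], r[1:])
    return match(binding_key.split('.'), routing_key.split('.'))

print(topic_matches('*.a', 'b.a'))          # True:  '*' matches exactly one word
print(topic_matches('*.a', 'x.y.a'))        # False: '*' cannot span two words
print(topic_matches('#.a', 'x.y.a'))        # True:  '#' matches any number of words
print(topic_matches('#', 'kern.critical'))  # True:  binding '#' behaves like fanout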

 

Broadcasting messages (fanout)

Only consumers that are running at the time will receive the message

fanout_publish

import pika


# works like a broadcast: consumers only receive messages while they are running
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))

channel = connection.channel()
# the publisher declares no queue, only the exchange

channel.exchange_declare(exchange='logs',  # every queue bound to this exchange receives the message
                         type='fanout')

message = "info: Hello World!"
channel.basic_publish(exchange='logs',
                      routing_key='',
                      body=message)
print(" [x] Sent %r" % message)
connection.close()

 

fanout_consumer


import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs',
                         type='fanout')

# no queue name is given, so RabbitMQ generates a random one; exclusive=True deletes
# the queue automatically when the consumer using it disconnects
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue  # the randomly generated queue name

channel.queue_bind(exchange='logs',  # bind the queue to the exchange
                   queue=queue_name)

print(' [*] Waiting for logs. To exit press CTRL+C')


def callback(ch, method, properties, body):
    print(" [x] %r" % body)


channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)

channel.start_consuming()

 

 

Selective message receiving (exchange type=direct)

RabbitMQ also supports routing by keyword: a queue is bound to the exchange with a keyword (routing key), the sender publishes the message to the exchange with a routing key, and the exchange uses that key to decide which queue the message should be delivered to.

publisher

 

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs',
                         type='direct')

severity = sys.argv[1] if len(sys.argv) > 1 else 'info'  # the severity level used as the routing key
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='direct_logs',
                      routing_key=severity,
                      body=message)
print(" [x] Sent %r:%r" % (severity, message))
connection.close()

subscriber 


import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs',
                         type='direct')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:]
if not severities:
    sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
    sys.exit(1)

for severity in severities:
    channel.queue_bind(exchange='direct_logs',
                       queue=queue_name,
                       routing_key=severity)

print(' [*] Waiting for logs. To exit press CTRL+C')


def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))


channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)

channel.start_consuming()

 

 

You can now start several consumers in separate terminals. Note that they should be run with Python 3, otherwise errors may occur.

Start a consumer that only receives info messages

python3 subscriber.py info

 

Start a consumer that only receives info and warning messages

python3 subscriber.py info warning

 

Start a consumer that only receives error and warning messages

python3 subscriber.py warning error

 

Start a producer that publishes an info message

python3 publisher.py info  

Only the first two consumers receive the message

 

Start a producer that publishes an error message

python3 publisher.py error   

This time only the last consumer receives the message

 

Finer-grained message filtering (topic)

 

Although using the direct exchange improved our system, it still has limitations - it can't do routing based on multiple criteria.

In our logging system we might want to subscribe to not only logs based on severity, but also based on the source which emitted the log. You might know this concept from the syslog unix tool, which routes logs based on both severity (info/warn/crit...) and facility (auth/cron/kern...).

That would give us a lot of flexibility - we may want to listen to just critical errors coming from 'cron' but also all logs from 'kern'.

 

publisher

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='topic_logs',
                         type='topic')

routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='topic_logs',
                      routing_key=routing_key,
                      body=message)
print(" [x] Sent %r:%r" % (routing_key, message))
connection.close()

 

 subscriber

import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='topic_logs',
                         type='topic')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)

for binding_key in binding_keys:
    channel.queue_bind(exchange='topic_logs',
                       queue=queue_name,
                       routing_key=binding_key)

print(' [*] Waiting for logs. To exit press CTRL+C')


def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))


channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)

channel.start_consuming()

 

Consumers

Only receive messages whose routing key ends in .info

python3 subscriber.py "*.info"

 

 

Only receive messages ending in .error or starting with mysql.

python3 subscriber.py "*.error" "mysql.*"

 

 

Producer

With no argument the default routing key anonymous.info is used, so only the first consumer receives the message

python3 publisher.py

 

 

Only the second consumer receives the message

python3 publisher.py test.error

 

 

Both consumers receive the message

python3 publisher.py mysql.info

 

 

The # wildcard only has meaning in binding keys, not in the routing key of a published message, so publishing with routing key "#" would not match the bindings above. To receive every message regardless of routing key, start a consumer bound with "#":

python3 subscriber.py "#"

 

Remote procedure call (RPC)

RPC lets you call a method remotely; the client is both a producer and a consumer.

To illustrate how an RPC service could be used we're going to create a simple client class. It's going to expose a method named call which sends an RPC request and blocks until the answer is received:

fibonacci_rpc = FibonacciRpcClient()
result = fibonacci_rpc.call(4)
print("fib(4) is %r" % result)

RPC server

 

import pika
import time
connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
 
channel = connection.channel()
 
channel.queue_declare(queue='rpc_queue')
 
def fib(n):
    if n == 0:
        return 0
    elif n == 1:
        return 1
    else:
        return fib(n-1) + fib(n-2)
 
def on_request(ch, method, props, body):
    n = int(body)
 
    print(" [.] fib(%s)" % n)
    response = fib(n)
 
    ch.basic_publish(exchange='',
                     routing_key=props.reply_to,
                     properties=pika.BasicProperties(correlation_id = \
                                                         props.correlation_id),
                     body=str(response))
    ch.basic_ack(delivery_tag = method.delivery_tag)
 
channel.basic_qos(prefetch_count=1)
channel.basic_consume(on_request, queue='rpc_queue')
 
print(" [x] Awaiting RPC requests")
channel.start_consuming()

RPC client

 

 

import pika
import uuid
 
class FibonacciRpcClient(object):
    def __init__(self):
        self.connection = pika.BlockingConnection(pika.ConnectionParameters(
                host='localhost'))
 
        self.channel = self.connection.channel()
 
        result = self.channel.queue_declare(exclusive=True)
        self.callback_queue = result.method.queue
 
        self.channel.basic_consume(self.on_response,  # called whenever a reply arrives
                                   no_ack=True,
                                   queue=self.callback_queue)
 
    def on_response(self, ch, method, props, body):
        if self.corr_id == props.correlation_id:
            self.response = body
 
    def call(self, n):
        self.response = None
        self.corr_id = str(uuid.uuid4())
        self.channel.basic_publish(exchange='',
                                   routing_key='rpc_queue',
                                   properties=pika.BasicProperties(
                                         reply_to = self.callback_queue,
                                         correlation_id = self.corr_id,
                                         ),
                                   body=str(n))
        while self.response is None:
            self.connection.process_data_events()  # non-blocking counterpart of start_consuming(); dispatches on_response when the reply arrives
        return int(self.response)
 
fibonacci_rpc = FibonacciRpcClient()
 
print(" [x] Requesting fib(30)")
response = fibonacci_rpc.call(30)
print(" [.] Got %r" % response)

 
