day12: pika, RabbitMQ, MySQL, SQLAlchemy, Paramiko

contextlib

Tracing the execution order of a contextlib-based context manager (the step comments mark the order in which lines run):

import contextlib  #step1


@contextlib.contextmanager #step2;step 6
def worker_state(state_list, worker_thread):
	"""
	Record worker_thread in state_list while the with-block is running
	"""
	state_list.append(worker_thread) #step7
	try: #step8
		yield #step9;step12
	finally:
		state_list.remove(worker_thread) #step13

free_list = [] #step3
current_thread = "alex" #step4
with worker_state(free_list, current_thread):#step5;step10
	print(123) #step11
	print(456) #step12;step14

Using the contextlib module to wrap a server socket so that it is always closed:

import contextlib
import socket

@contextlib.contextmanager
def context_socket(host, port):
    sk = socket.socket()
    sk.bind((host,port))
    sk.listen(5)
    try:
        yield sk
    finally:
        sk.close()

with context_socket('127.0.0.1', 8888) as sock:
    print(sock)
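For objects that only need close() called on exit, the standard library also provides contextlib.closing, which gives the same guarantee without writing a generator. A minimal sketch (port 0 asks the OS for any free port):

```python
import contextlib
import socket

# contextlib.closing(obj) returns a context manager that calls
# obj.close() on exit, whether or not an exception occurred.
sk = socket.socket()
with contextlib.closing(sk):
    sk.bind(('127.0.0.1', 0))  # port 0: let the OS pick a free port
    sk.listen(5)
# here the socket is already closed
print(sk.fileno())  # -1 once closed
```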

redis

Redis is an open-source (BSD-licensed), in-memory data structure server that can be used as a database, a cache, or a message broker.

Data model:

At its core, Redis exposes a dictionary mapping keys to values. The main difference from other NoSQL stores is that Redis values are not limited to strings; the following abstract data types are also supported:

Lists of strings

Unordered sets of unique strings

Sorted sets of unique strings

Hashes, with string keys and string values

The type of a value determines which operations it supports. Redis also provides high-level server-side atomic operations, such as intersections and unions across lists and across unordered or sorted sets.
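These value types map naturally onto Python's own containers. The toy sketch below (plain Python, not real Redis calls; the keys are made up) only illustrates the shape of the data model:

```python
# One dict of key -> typed value, like Redis's keyspace.
store = {}
store['page:title'] = "hello"                    # string
store['recent:ids'] = ["a", "b", "c"]            # list of strings
store['tags'] = {"python", "redis"}              # unordered set, no duplicates
store['rank'] = {"alex": 1.0, "bob": 2.5}        # sorted set: member -> score
store['user:1'] = {"name": "alex", "age": "30"}  # hash: string fields and values

# a sorted set hands members back ordered by score
ordered = sorted(store['rank'], key=store['rank'].get)
print(ordered)  # ['alex', 'bob']
```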

Persistence:

Redis normally holds the entire dataset in memory. Older versions could be configured to use virtual memory, keeping part of the dataset on disk, but that feature has been deprecated.

Persistence is currently implemented in two ways:

  • Snapshotting, a semi-durable mode: from time to time the dataset is written asynchronously from memory to disk in RDB format.
  • Since version 1.1 the safer AOF (append-only file) format is available as an alternative: every operation that modifies the dataset is logged. Redis can rewrite the append-only log in the background to keep it from growing without bound.
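In redis.conf the two mechanisms are controlled by directives like these (the values shown are illustrative, not recommendations):

```conf
# RDB snapshotting: save after <seconds> if at least <changes> keys changed
save 900 1
save 300 10

# AOF: log every write, fsync once per second,
# and rewrite the log in the background once it doubles in size
appendonly yes
appendfsync everysec
auto-aof-rewrite-percentage 100
```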

Replication:

Redis supports master-slave replication. Data can be replicated from a master to any number of slaves, and a slave can in turn act as a master for further slaves, so Redis can form a single-rooted replication tree. Slaves may be allowed to accept writes. Because publish/subscribe is fully implemented, a slave anywhere in the tree can subscribe to a channel and receive the master's complete stream of published messages. Replication helps with read scalability and data redundancy.

Performance:

When every change does not have to be durable, Redis's in-memory design makes it far faster than database systems that write each change to disk inside a transaction, and there is no significant difference between read and write speed.

A Redis publish/subscribe example:

Publish/subscribe helper

import redis

class RedisHelper:

    def __init__(self):
        self.__conn = redis.Redis(host='192.168.11.87')

    def publish(self, msg, chan):  # publish
        self.__conn.publish(chan, msg)
        return True

    def subscribe(self, chan):  # subscribe
        pub = self.__conn.pubsub()
        pub.subscribe(chan)
        pub.parse_response()  # consume the subscribe confirmation reply
        return pub
        return pub

Publisher

import s3

obj = s3.RedisHelper()
obj.publish('alex db', 'fm111.7')

Subscriber

import s3

obj = s3.RedisHelper()
data = obj.subscribe('fm111.7')
print(data.parse_response())

RabbitMQ

RabbitMQ is a popular open-source message queue system written in Erlang. It is a standard implementation of AMQP (the Advanced Message Queuing Protocol) and is released under the Mozilla Public License.

MQ stands for Message Queue. A message queue is a method of application-to-application communication: applications communicate by writing and reading messages (application-specific data) to and from queues, with no dedicated connection linking them. Messaging means programs communicate by sending data in messages rather than by calling each other directly, as in techniques such as remote procedure calls. Queuing means applications communicate through queues, which removes the requirement that sender and receiver run at the same time.

RabbitMQ architecture diagram:

Installing RabbitMQ:

Install and configure the EPEL repository
   $ rpm -ivh http://dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm

Install Erlang
   $ yum -y install erlang

Install RabbitMQ
   $ yum -y install rabbitmq-server

Note: start and stop the service with: service rabbitmq-server start/stop

Installing the Python API

Note: the pika library is used to operate RabbitMQ from Python.

pip install pika 
or
easy_install pika 
or
from source:
  
https://pypi.python.org/pypi/pika

Comparing the producer/consumer model built on Python's queue module with the one built on RabbitMQ:

Producer/consumer model based on the queue module

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import queue
import threading


message = queue.Queue(10)


def producer(i):
    while True:
        message.put(i)


def consumer(i):
    while True:
        msg = message.get()


for i in range(12):
    t = threading.Thread(target=producer, args=(i,))
    t.start()

for i in range(10):
    t = threading.Thread(target=consumer, args=(i,))
    t.start()

Producer/consumer model based on RabbitMQ

With RabbitMQ, production and consumption no longer target an in-memory Queue object, but a message queue implemented by a RabbitMQ server on some host.

#!/usr/bin/env python
import pika

# ######################### Producer #########################

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!')
print(" [x] Sent 'Hello World!'")
connection.close()

#!/usr/bin/env python
import pika

# ########################## Consumer ##########################

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=True)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()

1. Acknowledgments: no message loss on consumer failure

With no-ack = false, if a consumer dies (its channel is closed, connection is closed, or TCP connection is lost) before acknowledging a message, RabbitMQ re-queues that task.

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='10.211.55.4'))
channel = connection.channel()

channel.queue_declare(queue='hello')

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    import time
    time.sleep(10)
    print('ok')
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=False)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Consumer

2. Durability: no message loss on broker restart

Declaring the queue durable and publishing with delivery_mode=2 makes RabbitMQ persist both the queue and its messages to disk, so they survive a broker restart.

#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='10.211.55.4'))
channel = connection.channel()

# make message persistent
channel.queue_declare(queue='hello', durable=True)

channel.basic_publish(exchange='',
                      routing_key='hello',
                      body='Hello World!',
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print(" [x] Sent 'Hello World!'")
connection.close()
Producer
#!/usr/bin/env python
# -*- coding:utf-8 -*-
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='10.211.55.4'))
channel = connection.channel()

# make message persistent
channel.queue_declare(queue='hello', durable=True)


def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    import time
    time.sleep(10)
    print('ok')
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=False)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Consumer

3. Message dispatch order

By default, messages are dispatched to consumers in round-robin order: for example, consumer 1 takes the odd-numbered tasks from the queue and consumer 2 the even-numbered ones.

channel.basic_qos(prefetch_count=1) switches to fair dispatch: whichever consumer is free takes the next message, rather than the fixed round-robin split.

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='10.211.55.4'))
channel = connection.channel()

channel.queue_declare(queue='hello')


def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    import time
    time.sleep(10)
    print('ok')
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_qos(prefetch_count=1)

channel.basic_consume(callback,
                      queue='hello',
                      no_ack=False)

print(' [*] Waiting for messages. To exit press CTRL+C')
channel.start_consuming()
Consumer

4. Publish/subscribe

Publish/subscribe differs from a plain message queue in that a published message goes to every subscriber, whereas a queued message disappears once it is consumed. To implement publish/subscribe, RabbitMQ creates one queue per subscriber; when a publisher publishes, the message is placed into every bound queue.

 exchange type = fanout

#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs',
                         type='fanout')

message = ' '.join(sys.argv[1:]) or "info: Hello World!"
channel.basic_publish(exchange='logs',
                      routing_key='',
                      body=message)
print(" [x] Sent %r" % message)
connection.close()
Publisher
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs',
                         type='fanout')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

channel.queue_bind(exchange='logs',
                   queue=queue_name)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r" % body)

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)

channel.start_consuming()
Subscriber

5. Routing with keywords

exchange type = direct

In the previous examples a message was sent straight to an explicitly named queue. RabbitMQ also supports keyword routing: queues are bound to an exchange with routing keys, the sender publishes messages to the exchange with a routing key, and the exchange uses that key to decide which queue(s) receive the message.

#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs',
                         type='direct')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

severities = sys.argv[1:]
if not severities:
    sys.stderr.write("Usage: %s [info] [warning] [error]\n" % sys.argv[0])
    sys.exit(1)

for severity in severities:
    channel.queue_bind(exchange='direct_logs',
                       queue=queue_name,
                       routing_key=severity)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)

channel.start_consuming()
Consumer
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='direct_logs',
                         type='direct')

severity = sys.argv[1] if len(sys.argv) > 1 else 'info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='direct_logs',
                      routing_key=severity,
                      body=message)
print(" [x] Sent %r:%r" % (severity, message))
connection.close()
Producer

6. Pattern matching (topic)

exchange type = topic

With a topic exchange, a queue can be bound with wildcard binding keys. When the sender publishes to the exchange, the message's routing key is matched against each binding key; on a match, the message is delivered to that queue.

  • # matches zero or more words
  • * matches exactly one word
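To build intuition for these rules, here is a toy re-implementation of topic matching in plain Python (for illustration only; the real matching happens inside the broker):

```python
def topic_match(binding, routing):
    """'*' matches exactly one word, '#' zero or more words."""
    b, r = binding.split('.'), routing.split('.')

    def match(i, j):
        if i == len(b):
            return j == len(r)
        if b[i] == '#':
            # '#' may absorb zero or more of the remaining words
            return any(match(i + 1, k) for k in range(j, len(r) + 1))
        if j < len(r) and b[i] in ('*', r[j]):
            return match(i + 1, j + 1)
        return False

    return match(0, 0)

print(topic_match('old.#', 'old'))             # True: '#' matches zero words
print(topic_match('old.#', 'old.boy.python'))  # True: '#' matches two words
print(topic_match('old.*', 'old.boy'))         # True: '*' matches one word
print(topic_match('old.*', 'old.boy.python'))  # False: '*' cannot match two
```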
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='topic_logs',
                         type='topic')

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)

for binding_key in binding_keys:
    channel.queue_bind(exchange='topic_logs',
                       queue=queue_name,
                       routing_key=binding_key)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)

channel.start_consuming()
Consumer
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(
        host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='topic_logs',
                         type='topic')

routing_key = sys.argv[1] if len(sys.argv) > 1 else 'anonymous.info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='topic_logs',
                      routing_key=routing_key,
                      body=message)
print(" [x] Sent %r:%r" % (routing_key, message))
connection.close()
Producer

SQLAlchemy

SQLAlchemy is an ORM framework for Python. It is built on top of the database API and performs database operations through object-relational mapping; in short, it converts objects into SQL, executes the SQL through the database API, and returns the results.

The Dialect talks to the database API; depending on configuration it calls a different database driver to carry out the operations, e.g.:

MySQL-Python 
    mysql+mysqldb://<user>:<password>@<host>[:<port>]/<dbname> 
  
pymysql 
    mysql+pymysql://<username>:<password>@<host>/<dbname>[?<options>] 
  
MySQL-Connector 
    mysql+mysqlconnector://<user>:<password>@<host>[:<port>]/<dbname> 
  
cx_Oracle 
    oracle+cx_oracle://user:pass@host:port/dbname[?key=value&key=value...] 
  
More details: http://docs.sqlalchemy.org/en/latest/dialects/index.html

Step 1:

Use Engine/ConnectionPooling/Dialect to operate on the database: the Engine obtains a connection via ConnectionPooling and then executes SQL statements through the Dialect.

#!/usr/bin/env python 
# -*- coding:utf-8 -*- 
  
from sqlalchemy import create_engine 
  
  
engine = create_engine("mysql+mysqldb://root:123@127.0.0.1:3306/s11", max_overflow=5) 
  
engine.execute( 
    "INSERT INTO ts_test (a, b) VALUES ('2', 'v1')"
) 
  
engine.execute( 
     "INSERT INTO ts_test (a, b) VALUES (%s, %s)", 
    ((555, "v1"),(666, "v1"),) 
) 
engine.execute( 
    "INSERT INTO ts_test (a, b) VALUES (%(id)s, %(name)s)", 
    id=999, name="v1"
) 
  
result = engine.execute('select * from ts_test') 
result.fetchall()
#!/usr/bin/env python
# -*- coding:utf-8 -*-

from sqlalchemy import create_engine


engine = create_engine("mysql+mysqldb://root:123@127.0.0.1:3306/s11", max_overflow=5)


# transaction
with engine.begin() as conn:
    conn.execute("insert into table (x, y, z) values (1, 2, 3)")
    conn.execute("my_special_procedure(5)")


conn = engine.connect()
# transaction
with conn.begin():
    conn.execute("some statement", {'x': 5, 'y': 10})
Transactions

Note: to check database connections in MySQL: show status like 'Threads%';

Step 2

Use Schema Type/SQL Expression Language/Engine/ConnectionPooling/Dialect to operate on the database: the Engine uses Schema Type to build a specific table object, the SQL Expression Language converts that object into SQL, ConnectionPooling supplies the database connection, and the Dialect executes the SQL and returns the result.

#!/usr/bin/env python
# -*- coding:utf-8 -*-

from sqlalchemy import create_engine, Table, Column, Integer, String, MetaData, ForeignKey

metadata = MetaData()

user = Table('user', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(20)),
)

color = Table('color', metadata,
    Column('id', Integer, primary_key=True),
    Column('name', String(20)),
)
engine = create_engine("mysql+mysqldb://root:123@127.0.0.1:3306/s11", max_overflow=5)

metadata.create_all(engine)
# metadata.clear()
# metadata.remove()

More details:
http://www.jianshu.com/p/e6bba189fcbd
http://docs.sqlalchemy.org/en/latest/core/expression_api.html

Step 3

Use all the components (ORM/Schema Type/SQL Expression Language/Engine/ConnectionPooling/Dialect) to operate on the data: define classes, create objects from them, convert the objects into SQL, and execute it.

#!/usr/bin/env python
# -*- coding:utf-8 -*-

from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import sessionmaker
from sqlalchemy import create_engine

engine = create_engine("mysql+mysqldb://root:123@127.0.0.1:3306/s11", max_overflow=5)

Base = declarative_base()


class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))

# Find every subclass of Base and create the corresponding tables in the database
# Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)
session = Session()


# ########## Insert ##########
# u = User(id=2, name='sb')
# session.add(u)
# session.add_all([
#     User(id=3, name='sb'),
#     User(id=4, name='sb')
# ])
# session.commit()

# ########## Delete ##########
# session.query(User).filter(User.id > 2).delete()
# session.commit()

# ########## Update ##########
# session.query(User).filter(User.id > 2).update({'cluster_id': 0})
# session.commit()
# ########## Query ##########
# ret = session.query(User).filter_by(name='sb').first()

# ret = session.query(User).filter_by(name='sb').all()
# print(ret)

# ret = session.query(User).filter(User.name.in_(['sb','bb'])).all()
# print(ret)

# ret = session.query(User.name.label('name_label')).all()
# print(ret, type(ret))

# ret = session.query(User).order_by(User.id).all()
# print(ret)

# ret = session.query(User).order_by(User.id)[1:3]
# print(ret)
# session.commit()

 

posted @ 2016-07-29 11:25 梁怀军