Typical Applications of ZooKeeper, the Distributed Coordination Service



Refer to the official documentation:

http://zookeeper.apache.org/doc/current/recipes.html


Barriers

Distributed systems use barriers to block processing of a set of nodes until a condition is met at which time all the nodes are allowed to proceed.
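A minimal sketch of this recipe with the plain ZooKeeper Java client, following the convention in the recipes doc that the barrier is "up" while a designated znode exists and clients proceed once it is deleted. The class name and barrier path are illustrative, not part of any official API.

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.ZooKeeper;

public final class BarrierSketch {
    /** Blocks the caller while the barrier znode exists; returns once it is deleted. */
    public static void waitForBarrier(ZooKeeper zk, String barrierPath) throws Exception {
        while (true) {
            CountDownLatch changed = new CountDownLatch(1);
            // exists() sets a one-shot watch and returns null if the node is absent.
            if (zk.exists(barrierPath, event -> changed.countDown()) == null) {
                return;            // no barrier node: proceed immediately
            }
            changed.await();       // any event on the node wakes us; the loop re-checks existence
        }
    }
}
```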

Double Barriers

Double barriers enable clients to synchronize the beginning and the end of a computation. When enough processes have joined the barrier, processes start their computation and leave the barrier once they have finished. 
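Apache Curator (covered later in this post) ships a ready-made DistributedDoubleBarrier recipe. A hedged sketch, assuming an already-started CuratorFramework client; the path and member count are illustrative:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.barriers.DistributedDoubleBarrier;

public class DoubleBarrierSketch {
    public static void compute(CuratorFramework client) throws Exception {
        // All members use the same path; the barrier opens once 5 members have entered.
        DistributedDoubleBarrier barrier =
                new DistributedDoubleBarrier(client, "/examples/double-barrier", 5);

        barrier.enter();       // blocks until enough processes have joined
        try {
            doWork();          // the synchronized computation (illustrative)
        } finally {
            barrier.leave();   // blocks until all members are ready to leave
        }
    }

    private static void doWork() { /* ... */ }
}
```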


Distributed Queues


Distributed Locks (exclusive locks)

At any snapshot in time no two clients think they hold the same lock.
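A hedged sketch of an exclusive lock using Curator's InterProcessMutex recipe (listed later in this post); the lock path and timeout are illustrative:

```java
import java.util.concurrent.TimeUnit;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;

public class ExclusiveLockSketch {
    public static void updateSharedResource(CuratorFramework client) throws Exception {
        InterProcessMutex lock = new InterProcessMutex(client, "/examples/locks/resource-x");

        // Try to acquire the lock, giving up after 10 seconds.
        if (lock.acquire(10, TimeUnit.SECONDS)) {
            try {
                // critical section: only one client across all JVMs runs this at a time
            } finally {
                lock.release();
            }
        }
    }
}
```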

Shared Locks (also called read locks: data under the lock can be read, but cannot be modified or deleted)
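To make the shared (read) versus exclusive (write) distinction concrete, here is a minimal sketch using Curator's InterProcessReadWriteLock recipe (also listed later in this post); the base path is illustrative:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.InterProcessReadWriteLock;

public class ReadWriteLockSketch {
    public static void readThenWrite(CuratorFramework client) throws Exception {
        InterProcessReadWriteLock rwLock =
                new InterProcessReadWriteLock(client, "/examples/locks/shared-data");

        rwLock.readLock().acquire();      // many readers may hold this at once
        try {
            // read the protected data
        } finally {
            rwLock.readLock().release();
        }

        rwLock.writeLock().acquire();     // exclusive: blocks until all readers are gone
        try {
            // modify the protected data
        } finally {
            rwLock.writeLock().release();
        }
    }
}
```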

Revocable Shared Locks (shared locks that can be revoked)


Two-phased Commit (two-phase commit protocol, 2PC)

A two-phase commit protocol is an algorithm that lets all clients in a distributed system agree either to commit a transaction or abort.

The two-phase commit protocol guarantees strong data consistency, and many distributed relational database management systems use it to implement distributed transactions. It is a distributed algorithm that coordinates all the participants of a distributed atomic transaction and decides whether to commit or abort (roll back).

In the two-phase commit protocol, the system generally contains two kinds of machines (nodes): a coordinator, of which there is usually only one per system, and the transaction participants (also called cohorts or workers), of which there are usually several (in a data storage system, think of them as the data replicas). The protocol assumes that every node writes a write-ahead log to durable storage, so the log survives node failures. It also assumes that no node fails permanently and that any two nodes can communicate with each other.


When the last step of the transaction completes, the coordinator starts the protocol, and each participant replies, according to whether its local part of the transaction can finish successfully, with either agreement to commit or a vote to roll back.


As the name suggests, the two-phase commit protocol consists of two phases. Under normal execution, the two phases proceed as follows:


Phase 1: commit-request phase (also called the voting phase)


In the commit-request phase, the coordinator notifies the transaction participants to prepare to commit or abort the transaction, and the voting process begins. During voting, each participant tells the coordinator its decision: agree (its local work completed successfully) or abort (its local work failed).


Phase 2: commit phase


In this phase, the coordinator makes its decision based on the votes from the first phase: commit or abort. The coordinator notifies all participants to commit the transaction if and only if every participant voted to commit; otherwise it notifies all participants to abort the transaction. On receiving the coordinator's message, each participant performs the corresponding action.
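A coordinator-side sketch of this recipe over plain ZooKeeper, loosely following the outline in the ZooKeeper recipes doc (a transaction znode with one child per participant). The paths, vote encoding, and polling loop are illustrative simplifications, not a production implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.List;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Coordinator-side sketch of 2PC over ZooKeeper. Each participant watches its own
// child znode, writes its vote there, and watches the transaction node for the decision.
public class TwoPhaseCommitCoordinator {

    public static void runTransaction(ZooKeeper zk, List<String> sites) throws Exception {
        // Phase 1 (commit-request / voting): create the transaction node and one empty
        // child per participant; each participant writes "commit" or "abort" into its child.
        String tx = "/app/tx-0001";
        zk.create(tx, "pending".getBytes(StandardCharsets.UTF_8),
                  ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        for (String site : sites) {
            zk.create(tx + "/" + site, new byte[0],
                      ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        }

        // Wait for every participant's vote (polling here for brevity; watches on the
        // child nodes are the usual approach).
        boolean allCommit = true;
        for (String site : sites) {
            byte[] vote;
            do {
                Thread.sleep(100);
                vote = zk.getData(tx + "/" + site, false, null);
            } while (vote.length == 0);
            allCommit &= "commit".equals(new String(vote, StandardCharsets.UTF_8));
        }

        // Phase 2 (commit): write the decision to the transaction node, which every
        // participant watches and then applies or rolls back locally.
        String decision = allCommit ? "commit" : "abort";
        zk.setData(tx, decision.getBytes(StandardCharsets.UTF_8), -1);
    }
}
```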


Leader Election


For concrete applications you can use the implementation provided by Apache Curator.

For usage details, see the official documentation:

http://curator.apache.org/curator-recipes/index.html
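A minimal leader-election sketch using Curator's LeaderLatch recipe; the connection string, latch path, and instance id are placeholders:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.leader.LeaderLatch;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class LeaderElectionSketch {
    public static void main(String[] args) throws Exception {
        // Connection string and paths are illustrative.
        CuratorFramework client = CuratorFrameworkFactory.newClient(
                "localhost:2181", new ExponentialBackoffRetry(1000, 3));
        client.start();

        LeaderLatch latch = new LeaderLatch(client, "/examples/leader", "instance-1");
        latch.start();
        latch.await();                 // blocks until this instance becomes leader

        try {
            // ... do leader-only work while latch.hasLeadership() remains true ...
        } finally {
            latch.close();             // gives up leadership; another instance takes over
            client.close();
        }
    }
}
```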

All of its recipes are listed below:

Curator implements all of the recipes listed on the ZooKeeper recipes doc (except two phase commit). Click on the recipe name below for detailed documentation. NOTE: Most Curator recipes will autocreate parent nodes of paths given to the recipe as CreateMode.CONTAINER. Also, see Tech Note 7 regarding "Curator Recipes Own Their ZNode/Paths".

Elections
Leader Latch - In distributed computing, leader election is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task is begun, all network nodes are unaware which node will serve as the "leader," or coordinator, of the task. After a leader election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task leader.
Leader Election - Initial Curator leader election recipe.
Locks
Shared Reentrant Lock - Fully distributed locks that are globally synchronous, meaning at any snapshot in time no two clients think they hold the same lock.
Shared Lock - Similar to Shared Reentrant Lock but not reentrant.
Shared Reentrant Read Write Lock - A re-entrant read/write mutex that works across JVMs. A read write lock maintains a pair of associated locks, one for read-only operations and one for writing. The read lock may be held simultaneously by multiple reader processes, so long as there are no writers. The write lock is exclusive.
Shared Semaphore - A counting semaphore that works across JVMs. All processes in all JVMs that use the same lock path will achieve an inter-process limited set of leases. Further, this semaphore is mostly "fair" - each user will get a lease in the order requested (from ZK's point of view).
Multi Shared Lock - A container that manages multiple locks as a single entity. When acquire() is called, all the locks are acquired. If that fails, any paths that were acquired are released. Similarly, when release() is called, all locks are released (failures are ignored).
Barriers
Barrier - Distributed systems use barriers to block processing of a set of nodes until a condition is met at which time all the nodes are allowed to proceed.
Double Barrier - Double barriers enable clients to synchronize the beginning and the end of a computation. When enough processes have joined the barrier, processes start their computation and leave the barrier once they have finished.
Counters
Shared Counter - Manages a shared integer. All clients watching the same path will have the up-to-date value of the shared integer (considering ZK's normal consistency guarantees).
Distributed Atomic Long - A counter that attempts atomic increments. It first tries using optimistic locking. If that fails, an optional InterProcessMutex is taken. For both optimistic and mutex, a retry policy is used to retry the increment.
Caches
Path Cache - A Path Cache is used to watch a ZNode. Whenever a child is added, updated or removed, the Path Cache will change its state to contain the current set of children, the children's data and the children's state. Path caches in the Curator Framework are provided by the PathChildrenCache class. Changes to the path are passed to registered PathChildrenCacheListener instances. (A usage sketch follows this list.)
Node Cache - A utility that attempts to keep the data from a node locally cached. This class will watch the node, respond to update/create/delete events, pull down the data, etc. You can register a listener that will get notified when changes occur.
Tree Cache - A utility that attempts to keep all data from all children of a ZK path locally cached. This class will watch the ZK path, respond to update/create/delete events, pull down the data, etc. You can register a listener that will get notified when changes occur.
Nodes
Persistent Ephemeral Node - An ephemeral node that attempts to stay present in ZooKeeper, even through connection and session interruptions.
Group Member - Group membership management. Adds this instance into a group and keeps a cache of members in the group.
Queues
Distributed Queue - An implementation of the Distributed Queue ZK recipe. Items put into the queue are guaranteed to be ordered (by means of ZK's PERSISTENTSEQUENTIAL node). If a single consumer takes items out of the queue, they will be ordered FIFO. If ordering is important, use a LeaderSelector to nominate a single consumer.
Distributed Id Queue - A version of DistributedQueue that allows IDs to be associated with queue items. Items can then be removed from the queue if needed.
Distributed Priority Queue - An implementation of the Distributed Priority Queue ZK recipe.
Distributed Delay Queue - An implementation of a Distributed Delay Queue.
Simple Distributed Queue - A drop-in replacement for the DistributedQueue that comes with the ZK distribution.
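As a usage example for the Path Cache recipe above, here is a hedged sketch based on the PathChildrenCache class as it exists in Curator 2.x/4.x (newer Curator releases favor CuratorCache); the watched path is illustrative:

```java
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.cache.PathChildrenCache;
import org.apache.curator.framework.recipes.cache.PathChildrenCacheEvent;

public class PathCacheSketch {
    public static PathChildrenCache watchChildren(CuratorFramework client) throws Exception {
        // Cache the children of /examples/services and their data (path is illustrative).
        PathChildrenCache cache = new PathChildrenCache(client, "/examples/services", true);

        cache.getListenable().addListener((CuratorFramework c, PathChildrenCacheEvent event) -> {
            switch (event.getType()) {
                case CHILD_ADDED:
                case CHILD_UPDATED:
                case CHILD_REMOVED:
                    System.out.println(event.getType() + ": " + event.getData().getPath());
                    break;
                default:
                    break;            // connection state events, etc.
            }
        });

        cache.start();                // begins watching; the listener fires on changes
        return cache;                 // call close() when no longer needed
    }
}
```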

