Akka Source Code Analysis - Cluster Singleton
The basic implementation of Akka Cluster has been analyzed before: it is essentially remote plus the gossip protocol, which synchronizes node information so that the nodes in a cluster can recognize one another. A cluster may contain a special kind of node, the so-called singleton node: there may be only one node with a given role in the cluster, and if that node goes down, its work has to be moved to another node.
There are several scenarios for using the singleton pattern:
- A single point of responsibility for cluster-wide consistent decisions and coordination across the cluster, e.g. cluster-wide transactions.
- A single entry point to an external system.
- A single master with multiple workers.
- A centralized naming service or routing logic.
The official docs say that a singleton should never be your first design choice, because it brings problems such as a single-point performance bottleneck and a single point of failure. I agree with that. These days distributed is king; when you find yourself reaching for a single point, there are usually two possibilities: a design mistake, or a genuinely special business requirement. Still, for all its drawbacks the singleton has its uses, so the source analysis is worth doing; we want to know not only the what but also the why.
Akka's singleton pattern is implemented by akka.cluster.singleton.ClusterSingletonManager, which manages a single singleton actor instance across the cluster, or across a group of nodes with the same role. ClusterSingletonManager is one of the first actors to be started on every node in the cluster (or every node with the specified role). The singleton it manages runs on the node that has been running the longest; if that node goes down, the next longest-running node is chosen. If you ask where the "longest running" information comes from: I don't know for sure, haha, but it must be in the Member information spread via the gossip protocol.
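That guess is essentially right. "Oldest" here means cluster membership age rather than wall-clock uptime: the scaladoc quoted further down points to akka.cluster.Member#isOlderThan, which compares the order in which members became Up, and that information does travel with the gossiped member data. Here is a minimal sketch of my own (not the verbatim Akka source) of ordering members by age that way:

```scala
import akka.cluster.Member

object AgeOrdering {
  // Order members by cluster "age" using Member.isOlderThan, which compares the order
  // in which members became Up rather than how long the JVM has been running.
  // The age information is part of the gossiped membership state.
  val byAge: Ordering[Member] = Ordering.fromLessThan[Member]((a, b) ⇒ a.isOlderThan(b))
}
```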
So here's the question: since the singleton managed by ClusterSingletonManager can drift from node to node, how do I communicate with it? Akka is quite considerate here and provides akka.cluster.singleton.ClusterSingletonProxy, an actor that routes messages to the singleton currently managed by ClusterSingletonManager. Put simply, ClusterSingletonProxy sends an akka.actor.Identify message to the longest-running node to look up the ActorRef of the singleton actor. While that lookup is in progress, messages are buffered, and once the buffer size is exceeded they are simply dropped. Note that Akka only tries to ensure that at most one singleton is running at any point in time; it does not guarantee that there is always exactly one.
```scala
system.actorOf(
  ClusterSingletonManager.props(
    singletonProps = Props(classOf[Consumer], queue, testActor),
    terminationMessage = End,
    settings = ClusterSingletonManagerSettings(system).withRole("worker")),
  name = "consumer")
```
As usual, let's start from the official demo. The snippet above creates the ClusterSingletonManager. Clearly this just creates an actor, and that actor is given another Props from which it creates a child actor: Consumer. ClusterSingletonManager guarantees that only one Consumer instance exists in the cluster.
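The demo does not show the Consumer itself, so here is a hypothetical sketch of what such a singleton might look like (Consumer, queue, reporter and the registration message are my own assumptions, not the actual Akka test code). The important part is the terminationMessage contract, which we will meet again during hand-over:

```scala
import akka.actor.{ Actor, ActorRef }

// End is the terminationMessage configured in the demo above.
case object End

class Consumer(queue: ActorRef, reporter: ActorRef) extends Actor {

  // assumption: the singleton registers itself with some queue when it starts
  override def preStart(): Unit = queue ! "register"

  def receive: Receive = {
    case End ⇒
      // contract with ClusterSingletonManager: after receiving the terminationMessage
      // the singleton must stop itself, so that hand-over to the new oldest node can finish
      context.stop(self)
    case msg ⇒
      // hand whatever we receive to some reporter, purely for illustration
      reporter ! msg
  }
}
```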
```scala
val proxy = system.actorOf(
  ClusterSingletonProxy.props(
    singletonManagerPath = "/user/consumer",
    settings = ClusterSingletonProxySettings(system).withRole("worker")),
  name = "consumerProxy")
```
The snippet above creates a ClusterSingletonProxy, which routes messages to the singleton whose manager lives at path /user/consumer in the cluster; and of course there can be more than one ClusterSingletonProxy.
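As a quick usage note (my own example, not from the demo): callers simply send to the proxy as if it were the singleton itself.

```scala
// Hypothetical usage: if the singleton is currently unreachable (during startup or
// hand-over), the proxy buffers the message and delivers it once the singleton
// has been identified; otherwise it forwards it straight away.
proxy ! "some work item"
```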
Now let's get to the point and look at the source of ClusterSingletonManager and ClusterSingletonProxy. Both of them live in the cluster-tools module.
```scala
/**
 * Manages singleton actor instance among all cluster nodes or a group
 * of nodes tagged with a specific role. At most one singleton instance
 * is running at any point in time.
 *
 * The ClusterSingletonManager is supposed to be started on all nodes,
 * or all nodes with specified role, in the cluster with `actorOf`.
 * The actual singleton is started on the oldest node by creating a child
 * actor from the supplied `singletonProps`.
 *
 * The singleton actor is always running on the oldest member with specified role.
 * The oldest member is determined by [[akka.cluster.Member#isOlderThan]].
 * This can change when removing members. A graceful hand over can normally
 * be performed when current oldest node is leaving the cluster. Be aware that
 * there is a short time period when there is no active singleton during the
 * hand-over process.
 */
@DoNotInherit
class ClusterSingletonManager(
  singletonProps:     Props,
  terminationMessage: Any,
  settings:           ClusterSingletonManagerSettings)
  extends Actor with FSM[ClusterSingletonManager.State, ClusterSingletonManager.Data]
```
Above is the definition of ClusterSingletonManager. The official comments are extensive; I kept only the three most important paragraphs. The first explains what the singleton is; the second says that ClusterSingletonManager is supposed to be started on all nodes and that the singleton it manages exists only once in the cluster; the third says the singleton actor always runs on the oldest (longest-running) node and is briefly unavailable while being moved. The second point deserves a closer look: if ClusterSingletonManager has to be started on all nodes, how is the singleton kept single? Well, as the name suggests, it is only a manager; the real singleton actor is created from singletonProps. In other words, the ClusterSingletonManager instances coordinate among the nodes to decide which one of them actually creates the singleton actor. At this point I'm almost tempted to implement a singleton actor myself.
Also note that this manager mixes in the FSM trait, presumably to coordinate the singleton's state. Since it is an actor, let's see what initialization it performs when it starts.
```scala
override def preStart(): Unit = {
  super.preStart()
  require(!cluster.isTerminated, "Cluster node must not be terminated")

  // subscribe to cluster changes, re-subscribe when restart
  cluster.subscribe(self, ClusterEvent.InitialStateAsEvents, classOf[MemberRemoved])

  setTimer(CleanupTimer, Cleanup, 1.minute, repeat = true)

  // defer subscription to avoid some jitter when
  // starting/joining several nodes at the same time
  cluster.registerOnMemberUp(self ! StartOldestChangedBuffer)
}
```
In short, it subscribes to MemberRemoved cluster events (using ClusterEvent.InitialStateAsEvents, so the initial state arrives as events), starts a timer named CleanupTimer that sends a Cleanup message every minute, and registers an on-member-up callback: once the current member is Up, the actor sends itself StartOldestChangedBuffer. Pretty simple initialization, isn't it? Of course it is: this is just an ordinary actor that has to coordinate with its peers about which node has been running the longest, plus the ability to let the singleton actor drift to another node.
```scala
startWith(Start, Uninitialized)
```
One more line in the primary constructor deserves attention, shown above. It is an FSM method we analyzed before: it sets the initial state, which effectively defines the actor's current receive.
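For readers who have not used Akka's classic FSM DSL, here is a minimal, self-contained sketch (my own toy example, not Akka source) of how startWith and when fit together; ClusterSingletonManager uses exactly this mechanism, just with richer states and state data:

```scala
import akka.actor.{ Actor, FSM }

object TrafficLight {
  sealed trait State
  case object Red   extends State
  case object Green extends State

  case object Switch // the only message this toy FSM handles
}

class TrafficLight extends Actor with FSM[TrafficLight.State, Unit] {
  import TrafficLight._

  startWith(Red, ()) // same role as startWith(Start, Uninitialized) above

  when(Red) {
    case Event(Switch, _) ⇒ goto(Green) // each when(...) block is the receive for that state
  }

  when(Green) {
    case Event(Switch, _) ⇒ goto(Red)
  }

  initialize()
}
```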
```scala
when(Start) {
  case Event(StartOldestChangedBuffer, _) ⇒
    oldestChangedBuffer = context.actorOf(
      Props(classOf[OldestChangedBuffer], role).withDispatcher(context.props.dispatcher))
    getNextOldestChanged()
    stay

  case Event(InitialOldestState(oldestOption, safeToBeOldest), _) ⇒
    oldestChangedReceived = true
    if (oldestOption == selfUniqueAddressOption && safeToBeOldest)
      // oldest immediately
      gotoOldest()
    else if (oldestOption == selfUniqueAddressOption)
      goto(BecomingOldest) using BecomingOldestData(None)
    else
      goto(Younger) using YoungerData(oldestOption)
}
```
Clearly, in the Start state the StartOldestChangedBuffer message will be received.
```scala
// started when when self member is Up
var oldestChangedBuffer: ActorRef = _
```
First an OldestChangedBuffer is created; as the comment above notes, this happens once the current member is Up. Its purpose is not obvious yet. Then getNextOldestChanged is called.
```scala
/**
 * Notifications of member events that track oldest member is tunneled
 * via this actor (child of ClusterSingletonManager) to be able to deliver
 * one change at a time. Avoiding simultaneous changes simplifies
 * the process in ClusterSingletonManager. ClusterSingletonManager requests
 * next event with `GetNext` when it is ready for it. Only one outstanding
 * `GetNext` request is allowed. Incoming events are buffered and delivered
 * upon `GetNext` request.
 */
class OldestChangedBuffer(role: Option[String]) extends Actor
```
The official comment on OldestChangedBuffer is clear as well: this actor listens to cluster membership events and is used to determine which node has been running the longest, delivering one change at a time.
```scala
def getNextOldestChanged(): Unit =
  if (oldestChangedReceived) {
    oldestChangedReceived = false
    oldestChangedBuffer ! GetNext
  }
```
getNextOldestChanged is also simple: it just sends a GetNext message to oldestChangedBuffer.
```scala
def receive = {
  case state: CurrentClusterState ⇒ handleInitial(state)
  case MemberUp(m)                ⇒ add(m)
  case MemberRemoved(m, _)        ⇒ remove(m)
  case MemberExited(m) if m.uniqueAddress != cluster.selfUniqueAddress ⇒
    remove(m)
  case SelfExiting ⇒
    remove(cluster.readView.self)
    sender() ! Done // reply to ask
  case GetNext if changes.isEmpty ⇒
    context.become(deliverNext, discardOld = false)
  case GetNext ⇒
    sendFirstChange()
}
```
When oldestChangedBuffer receives GetNext, it checks whether changes is empty. If it is, it simply does a become (switching to deliverNext so the next change is delivered as soon as it arrives); otherwise it calls sendFirstChange. So when does changes get populated?
```scala
def trackChange(block: () ⇒ Unit): Unit = {
  val before = membersByAge.headOption
  block()
  val after = membersByAge.headOption
  if (before != after)
    changes :+= OldestChanged(after.map(_.uniqueAddress))
}

def handleInitial(state: CurrentClusterState): Unit = {
  membersByAge = immutable.SortedSet.empty(ageOrdering) union state.members.filter(m ⇒
    (m.status == MemberStatus.Up || m.status == MemberStatus.Leaving) && matchingRole(m))
  val safeToBeOldest = !state.members.exists { m ⇒
    (m.status == MemberStatus.Down || m.status == MemberStatus.Exiting)
  }
  val initial = InitialOldestState(membersByAge.headOption.map(_.uniqueAddress), safeToBeOldest)
  changes :+= initial
}
```
A brief look shows that OldestChangedBuffer calls these two functions when it receives the relevant cluster messages, updating changes; in essence it records which member has been running the longest. Either way, the result is ultimately delivered through sendFirstChange.
```scala
def sendFirstChange(): Unit = {
  // don't send cluster change events if this node is shutting its self down, just wait for SelfExiting
  if (!cluster.isTerminated) {
    val event = changes.head
    changes = changes.tail
    context.parent ! event
  }
}
```
This function needs little explanation: it takes the head of the buffered changes, i.e. the information about the longest-running member, and sends it to context.parent, which is the ClusterSingletonManager. It does distinguish whether the current node is the very first node started, but that distinction makes little difference either way.
At this point ClusterSingletonManager is still in the Start state and will receive the InitialOldestState message. Suppose the current node is the oldest: then gotoOldest is called.
```scala
def gotoOldest(): State = {
  val singleton = context watch context.actorOf(singletonProps, singletonName)
  logInfo("Singleton manager starting singleton actor [{}]", singleton.path)
  goto(Oldest) using OldestData(singleton)
}
```
As you can see, if the current node is the oldest, the actor described by singletonProps is created, the FSM transitions into the Oldest state, and the state data is the ActorRef of that singleton. Nodes that are started later go through the same flow, except that the "am I the oldest node?" check fails, so gotoOldest is not executed, singletonProps is not instantiated, and they transition to the Younger state instead.
```scala
when(Younger) {
  case Event(OldestChanged(oldestOption), YoungerData(previousOldestOption)) ⇒
    oldestChangedReceived = true
    if (oldestOption == selfUniqueAddressOption) {
      logInfo("Younger observed OldestChanged: [{} -> myself]", previousOldestOption.map(_.address))
      previousOldestOption match {
        case None                                 ⇒ gotoOldest()
        case Some(prev) if removed.contains(prev) ⇒ gotoOldest()
        case Some(prev) ⇒
          peer(prev.address) ! HandOverToMe
          goto(BecomingOldest) using BecomingOldestData(previousOldestOption)
      }
    } else {
      logInfo(
        "Younger observed OldestChanged: [{} -> {}]",
        previousOldestOption.map(_.address), oldestOption.map(_.address))
      getNextOldestChanged()
      stay using YoungerData(oldestOption)
    }

  case Event(MemberRemoved(m, _), _) if m.uniqueAddress == cluster.selfUniqueAddress ⇒
    logInfo("Self removed, stopping ClusterSingletonManager")
    stop()

  case Event(MemberRemoved(m, _), _) ⇒
    scheduleDelayedMemberRemoved(m)
    stay

  case Event(DelayedMemberRemoved(m), YoungerData(Some(previousOldest))) if m.uniqueAddress == previousOldest ⇒
    logInfo("Previous oldest removed [{}]", m.address)
    addRemoved(m.uniqueAddress)
    // transition when OldestChanged
    stay using YoungerData(None)

  case Event(HandOverToMe, _) ⇒
    // this node was probably quickly restarted with same hostname:port,
    // confirm that the old singleton instance has been stopped
    sender() ! HandOverDone
    stay
}
```
So what happens when the oldest member changes and the current node, previously not the oldest, now becomes the oldest? It hits the first if branch inside the first case of the Younger state: it sends a HandOverToMe message to the previously oldest node and transitions itself into the BecomingOldest state.
```scala
def gotoHandingOver(singleton: ActorRef, singletonTerminated: Boolean, handOverTo: Option[ActorRef]): State = {
  if (singletonTerminated) {
    handOverDone(handOverTo)
  } else {
    handOverTo foreach { _ ! HandOverInProgress }
    singleton ! terminationMessage
    goto(HandingOver) using HandingOverData(singleton, handOverTo)
  }
}
```
At this point the previously oldest node calls the function above. In short, it sends the terminationMessage to the singleton actor. The singleton may handle that message however it likes, but it must eventually stop; only then can HandOverDone be sent to the new oldest node. Meanwhile the new oldest node is in the BecomingOldest state, and on receiving that message it hits the following case.
```scala
case Event(HandOverDone, BecomingOldestData(Some(previousOldest))) ⇒
  if (sender().path.address == previousOldest.address) gotoOldest()
  else {
    logInfo(
      "Ignoring HandOverDone in BecomingOldest from [{}]. Expected previous oldest [{}]",
      sender().path.address, previousOldest.address)
    stay
  }
```
This is simple too: it just calls gotoOldest, whose behavior was covered above.
But I noticed a rather important issue: what happens if the current node is in the Oldest state and the singleton actor it created terminates unexpectedly? Judging from the Oldest state's handling, it merely sets singletonTerminated to true in the current state data and does nothing else. Shouldn't the singleton be restarted? Strange, but let's set that aside and keep going.
```scala
/**
 * The `ClusterSingletonProxy` works together with the [[akka.cluster.singleton.ClusterSingletonManager]] to provide a
 * distributed proxy to the singleton actor.
 *
 * The proxy can be started on every node where the singleton needs to be reached and used as if it were the singleton
 * itself. It will then act as a router to the currently running singleton instance. If the singleton is not currently
 * available, e.g., during hand off or startup, the proxy will buffer the messages sent to the singleton and then deliver
 * them when the singleton is finally available. The size of the buffer is configurable and it can be disabled by using
 * a buffer size of 0. When the buffer is full old messages will be dropped when new messages are sent via the proxy.
 *
 * The proxy works by keeping track of the oldest cluster member. When a new oldest member is identified, e.g. because
 * the older one left the cluster, or at startup, the proxy will try to identify the singleton on the oldest member by
 * periodically sending an [[akka.actor.Identify]] message until the singleton responds with its
 * [[akka.actor.ActorIdentity]].
 *
 * Note that this is a best effort implementation: messages can always be lost due to the distributed nature of the
 * actors involved.
 */
final class ClusterSingletonProxy(singletonManagerPath: String, settings: ClusterSingletonProxySettings)
  extends Actor with ActorLogging
```
The official comment is again detailed. ClusterSingletonProxy also tracks the oldest member of the cluster and then uses actorSelection against singletonManagerPath to obtain the ActorRef of the singleton actor. Its handling of cluster events is structurally similar to OldestChangedBuffer: both call handleInitial, add, and remove when cluster membership changes; the difference is that in ClusterSingletonProxy these three functions also call trackChange.
```scala
def trackChange(block: () ⇒ Unit): Unit = {
  val before = membersByAge.headOption
  block()
  val after = membersByAge.headOption
  // if the head has changed, I need to find the new singleton
  if (before != after) identifySingleton()
}
```
When trackChange notices that the oldest member has changed, it calls identifySingleton.
```scala
def identifySingleton() {
  import context.dispatcher
  log.debug("Creating singleton identification timer...")
  identifyCounter += 1
  identifyId = createIdentifyId(identifyCounter)
  singleton = None
  cancelTimer()
  identifyTimer = Some(context.system.scheduler.schedule(
    0 milliseconds, singletonIdentificationInterval, self, ClusterSingletonProxy.TryToIdentifySingleton))
}
```
Essentially it starts a timer that sends a ClusterSingletonProxy.TryToIdentifySingleton message to itself every singletonIdentificationInterval. Why repeat it at an interval? Mainly because the singleton may still be in the middle of drifting to another node and unable to answer the Identify message yet.
```scala
case ClusterSingletonProxy.TryToIdentifySingleton ⇒
  identifyTimer match {
    case Some(_) ⇒
      membersByAge.headOption foreach { oldest ⇒
        val singletonAddress = RootActorPath(oldest.address) / singletonPath
        log.debug("Trying to identify singleton at [{}]", singletonAddress)
        context.actorSelection(singletonAddress) ! Identify(identifyId)
      }
    case _ ⇒ // ignore, if the timer is not present it means we have successfully identified
  }
```
Clearly, on receiving this message the proxy sends an Identify message via actorSelection.
```scala
case ActorIdentity(identifyId, Some(s)) ⇒
  // if the new singleton is defined, deliver all buffered messages
  log.info("Singleton identified at [{}]", s.path)
  singleton = Some(s)
  context.watch(s)
  cancelTimer()
  sendBuffered()
```
On receiving the ActorIdentity reply, it cancels the timer and then delivers the buffered messages.
```scala
case msg: Any ⇒
  singleton match {
    case Some(s) ⇒
      if (log.isDebugEnabled)
        log.debug(
          "Forwarding message of type [{}] to current singleton instance at [{}]: {}",
          Logging.simpleName(msg.getClass), s.path)
      s forward msg
    case None ⇒
      buffer(msg)
  }
```
If singleton is already set, an incoming message hits the case above and is simply forwarded; otherwise it is buffered.
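For completeness, the buffering and identification behaviour above can be tuned through the proxy settings. The following is only a sketch from my memory of the cluster-tools API (the withBufferSize and withSingletonIdentificationInterval names and the default values should be checked against your Akka version):

```scala
import scala.concurrent.duration._
import akka.actor.ActorSystem
import akka.cluster.singleton.{ ClusterSingletonProxy, ClusterSingletonProxySettings }

object ProxyTuning {
  // Sketch: build proxy Props with explicit buffer and identification settings.
  def proxyProps(system: ActorSystem) = {
    val settings = ClusterSingletonProxySettings(system)
      .withRole("worker")
      .withBufferSize(1000)                          // assumption: 0 disables buffering entirely
      .withSingletonIdentificationInterval(1.second) // how often Identify is retried

    ClusterSingletonProxy.props(singletonManagerPath = "/user/consumer", settings = settings)
  }
}
```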
Well, that wraps up the singleton pattern. The source really isn't that complicated, and if you don't care about generality and robustness, rolling your own might even be more efficient. What bothers me is that when ClusterSingletonManager finds that the singleton actor has stopped, it takes no compensating action at all; so much for high availability and failure recovery. Also, ClusterSingletonManager has to be created manually with actorOf; why not provide it as an extension, which would be simpler, more convenient, and easier to manage?