Reading the muduo Network Library Core Code (Channel, EventLoop, Poller) (3)
The Channel, EventLoop, and Poller classes are the key pieces of the Reactor in muduo's network architecture. In muduo (i.e. in the Reactor pattern), the Reactor acts as a central dispatcher: it monitors all file descriptors and, when a descriptor becomes ready (an event occurs), dispatches that event to the corresponding event handler. A basic single-Reactor architecture diagram (found online) illustrates this.
How the Channel, EventLoop, and Poller classes relate to each other:
The Channel class
We know that epoll monitors file descriptors: we register each descriptor with the events we care about in advance, and when a registered descriptor changes state, epoll records which descriptor changed and which events occurred on it. Channel is the event-management class for a single file descriptor; it encapsulates all of the descriptor's event-related operations and is ultimately registered with the Poller through its EventLoop.
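For reference, the bare epoll workflow that Channel and Poller wrap looks roughly like this. This is a minimal sketch using the standard Linux epoll API, not muduo code; listen_fd is assumed to be a socket that already exists:
#include <sys/epoll.h>
#include <unistd.h>
void raw_epoll_sketch(int listen_fd)
{
  int epfd = ::epoll_create1(EPOLL_CLOEXEC); // create the epoll instance
  struct epoll_event ev;
  ev.events = EPOLLIN; // interested in read events on listen_fd
  ev.data.fd = listen_fd;
  ::epoll_ctl(epfd, EPOLL_CTL_ADD, listen_fd, &ev); // register the fd and its interest set
  struct epoll_event ready[16];
  int n = ::epoll_wait(epfd, ready, 16, 10000); // block up to 10s for events
  for (int i = 0; i < n; ++i)
  {
    // ready[i].data says which fd fired and ready[i].events says how;
    // this is exactly the information a Channel keeps as fd_ and revents_.
  }
  ::close(epfd);
}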
Event types
By default the read event callback carries a Timestamp that records when the event occurred.
typedef std::function<void()> EventCallback;
typedef std::function<void(Timestamp)> ReadEventCallback;
ReadEventCallback readCallback_;
EventCallback writeCallback_;
EventCallback closeCallback_;
EventCallback errorCallback_;
static const int kNoneEvent; // event-type constant: no events of interest
static const int kReadEvent; // event-type constant: read events (POLLIN | POLLPRI)
static const int kWriteEvent; // event-type constant: write events (POLLOUT)
int events_; // the events this Channel is interested in
int revents_; // the events that actually occurred
Key methods
Setting the events of interest:
void enableReading() { events_ |= kReadEvent; update(); }
void disableReading() { events_ &= ~kReadEvent; update(); }
void enableWriting() { events_ |= kWriteEvent; update(); }
void disableWriting() { events_ &= ~kWriteEvent; update(); }
void disableAll() { events_ = kNoneEvent; update(); }
Setting the event callbacks (the callbacks are non-trivial objects, so move semantics are used to cut down on copies):
void setReadCallback(ReadEventCallback cb)
{ readCallback_ = std::move(cb); }
void setWriteCallback(EventCallback cb)
{ writeCallback_ = std::move(cb); }
void setCloseCallback(EventCallback cb)
{ closeCallback_ = std::move(cb); }
void setErrorCallback(EventCallback cb)
{ errorCallback_ = std::move(cb); }
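To make the setters concrete, here is a hedged sketch of a typical Channel owner, modelled on what muduo's Acceptor and TcpConnection do; the OwnerLike class and its handleRead method are illustrative, not part of muduo:
#include <muduo/base/Timestamp.h>
#include <muduo/net/Channel.h>
#include <muduo/net/EventLoop.h>
#include <functional>
using namespace muduo;
using namespace muduo::net;
class OwnerLike
{
 public:
  OwnerLike(EventLoop* loop, int fd)
    : channel_(loop, fd)
  {
    // register what should happen when fd becomes readable
    channel_.setReadCallback(
        std::bind(&OwnerLike::handleRead, this, std::placeholders::_1));
    channel_.enableReading(); // events_ |= kReadEvent, then update() registers with the Poller
  }
 private:
  void handleRead(Timestamp receiveTime)
  {
    // read from the fd and hand the data to the user ...
  }
  Channel channel_;
};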
Invoking the callbacks:
handleEvent is a public method through which outside code triggers the callbacks registered on the Channel; when called, it uses the tied_ flag to protect the Channel's lifetime. handleEventWithGuard is a private method invoked from handleEvent; it looks at the events that actually occurred, together with the events the Channel is interested in, and calls the matching callbacks to handle them.
public:
  void handleEvent(Timestamp receiveTime);
private:
  void handleEventWithGuard(Timestamp receiveTime);
void Channel::handleEvent(Timestamp receiveTime)
{
std::shared_ptr<void> guard;
if (tied_)
{
guard = tie_.lock();
if (guard)
{
handleEventWithGuard(receiveTime);
}
}
else
{
handleEventWithGuard(receiveTime);
}
}
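For reference, the tied_/tie_ pair that handleEvent consults is set up through Channel::tie(); in the muduo source it looks roughly like this (tie_ is a std::weak_ptr<void>, and TcpConnection calls tie(shared_from_this()) once the connection is established, so the guard shared_ptr above keeps the owner alive while the callbacks run):
// Channel.h
std::weak_ptr<void> tie_; // weak reference to the owner, e.g. a TcpConnection
bool tied_;
// Channel.cc
void Channel::tie(const std::shared_ptr<void>& obj)
{
  tie_ = obj; // remember the owner without extending its lifetime
  tied_ = true;
}
handleEventWithGuard then dispatches on revents_: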
void Channel::handleEventWithGuard(Timestamp receiveTime)
{
eventHandling_ = true;
LOG_TRACE << reventsToString();
if ((revents_ & POLLHUP) && !(revents_ & POLLIN))
{
if (logHup_) // logHup_ defaults to true: log hang-up (POLLHUP) events
{
LOG_WARN << "fd = " << fd_ << " Channel::handle_event() POLLHUP";
}
if (closeCallback_) closeCallback_();
}
if (revents_ & POLLNVAL)
{
LOG_WARN << "fd = " << fd_ << " Channel::handle_event() POLLNVAL";
}
if (revents_ & (POLLERR | POLLNVAL))
{
if (errorCallback_) errorCallback_();
}
if (revents_ & (POLLIN | POLLPRI | POLLRDHUP))
{
if (readCallback_) readCallback_(receiveTime);
}
if (revents_ & POLLOUT)
{
if (writeCallback_) writeCallback_();
}
eventHandling_ = false;
}
Poller and EPollPoller
Poller is an abstract base class that defines the I/O-multiplexing interface, covering polling and channel management. EPollPoller is the concrete subclass that uses epoll as the underlying multiplexing mechanism. muduo also ships a poll(2)-based PollPoller, but EPollPoller is what gets used by default.
///
/// Base class for IO Multiplexing
///
/// This class doesn't own the Channel objects.
class Poller : noncopyable
{
public:
typedef std::vector<Channel*> ChannelList;
Poller(EventLoop* loop);
virtual ~Poller();
/// Polls the I/O events.
/// Must be called in the loop thread.
virtual Timestamp poll(int timeoutMs, ChannelList* activeChannels) = 0;
/// Changes the interested I/O events.
/// Must be called in the loop thread.
virtual void updateChannel(Channel* channel) = 0;
/// Remove the channel, when it destructs.
/// Must be called in the loop thread.
virtual void removeChannel(Channel* channel) = 0;
virtual bool hasChannel(Channel* channel) const;
static Poller* newDefaultPoller(EventLoop* loop);
void assertInLoopThread() const
{
ownerLoop_->assertInLoopThread();
}
protected:
typedef std::map<int, Channel*> ChannelMap; // maps a file descriptor to its Channel
ChannelMap channels_;
private:
EventLoop* ownerLoop_;
};
newDefaultPoller picks PollPoller or EPollPoller based on the MUDUO_USE_POLL environment variable, with EPollPoller as the default; the other classes also create their Poller instances through newDefaultPoller. It is worth noting that newDefaultPoller is not implemented in Poller's own source file: since it constructs subclass instances it has to include the subclass headers, and having the base class's implementation file include its subclasses' headers is clearly poor design (the base class implementation should not depend on its subclasses).
Poller* Poller::newDefaultPoller(EventLoop* loop)
{
if (::getenv("MUDUO_USE_POLL"))
{
return new PollPoller(loop);
}
else
{
return new EPollPoller(loop);
}
}
EPollPoller
EPollPoller inherits from Poller and implements the abstract I/O-multiplexing interface on top of epoll; in fact all of its member methods and member variables are wrappers around epoll_create, epoll_ctl, and epoll_wait.
///
/// IO Multiplexing with epoll(4).
///
class EPollPoller : public Poller
{
public:
EPollPoller(EventLoop* loop);
~EPollPoller() override;
Timestamp poll(int timeoutMs, ChannelList* activeChannels) override;
void updateChannel(Channel* channel) override;
void removeChannel(Channel* channel) override;
private:
static const int kInitEventListSize = 16; // initial size of the event array
static const char* operationToString(int op);
// fill in the list of Channels that have active events
void fillActiveChannels(int numEvents,
ChannelList* activeChannels) const;
void update(int operation, Channel* channel);
typedef std::vector<struct epoll_event> EventList;
int epollfd_; // the epoll file descriptor
EventList events_; // epoll event array; managed with std::vector so it can grow easily
};
Key methods
The constructor creates the epoll instance with epoll_create1, an improved version of epoll_create that accepts flags at creation time. The EPOLL_CLOEXEC flag sets the close-on-exec attribute (FD_CLOEXEC) on the new file descriptor, so it is automatically closed when an exec-family function runs.
EPollPoller::EPollPoller(EventLoop* loop)
: Poller(loop),
epollfd_(::epoll_create1(EPOLL_CLOEXEC)),
events_(kInitEventListSize)
{
if (epollfd_ < 0)
{
LOG_SYSFATAL << "EPollPoller::EPollPoller";
}
}
updateChannel and removeChannel update and remove Channel instances respectively; both go through the private update method, which ultimately calls epoll_ctl. muduo's implementations make heavy use of assertions to guarantee the methods run on the correct thread, i.e. the loop's own thread. This is a habit worth borrowing in everyday program design: use appropriate assertions to keep the program within its expected behavior.
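For reference, the index() states used below are file-scope constants in EPollPoller.cc; the values shown here follow the muduo source:
namespace
{
const int kNew = -1; // the Channel has never been added to this poller
const int kAdded = 1; // the Channel is currently registered with epoll
const int kDeleted = 2; // the Channel is still in channels_ but removed from epoll
}
With these states in mind, updateChannel works as follows: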
void EPollPoller::updateChannel(Channel* channel)
{
Poller::assertInLoopThread(); // assert we are called on the loop's own thread
const int index = channel->index();
LOG_TRACE << "fd = " << channel->fd()
<< " events = " << channel->events() << " index = " << index;
if (index == kNew || index == kDeleted)
{
// a new one, add with EPOLL_CTL_ADD
int fd = channel->fd();
// this Channel is brand new
if (index == kNew)
{
assert(channels_.find(fd) == channels_.end());
channels_[fd] = channel; //添加到映射表
}
else // index == kDeleted
{
assert(channels_.find(fd) != channels_.end());
assert(channels_[fd] == channel);
}
// update the Channel's state (kNew/kAdded/kDeleted: newly created / added / removed)
channel->set_index(kAdded);
update(EPOLL_CTL_ADD, channel);
}
else
{
// update existing one with EPOLL_CTL_MOD/DEL
int fd = channel->fd();
(void)fd;
assert(channels_.find(fd) != channels_.end());
assert(channels_[fd] == channel);
assert(index == kAdded);
if (channel->isNoneEvent())
{
// already added but no longer interested in any events: unregister from epoll
update(EPOLL_CTL_DEL, channel);
channel->set_index(kDeleted);
}
else
{
// otherwise modify the registered events with EPOLL_CTL_MOD
update(EPOLL_CTL_MOD, channel);
}
}
}
void EPollPoller::removeChannel(Channel* channel)
{
Poller::assertInLoopThread();
int fd = channel->fd();
LOG_TRACE << "fd = " << fd;
assert(channels_.find(fd) != channels_.end());
assert(channels_[fd] == channel);
assert(channel->isNoneEvent());
int index = channel->index();
assert(index == kAdded || index == kDeleted);
size_t n = channels_.erase(fd);
(void)n;
assert(n == 1);
if (index == kAdded)
{
update(EPOLL_CTL_DEL, channel);
}
channel->set_index(kNew);
}
void EPollPoller::update(int operation, Channel* channel)
{
struct epoll_event event;
memZero(&event, sizeof event);
event.events = channel->events();
event.data.ptr = channel;
int fd = channel->fd();
LOG_TRACE << "epoll_ctl op = " << operationToString(operation)
<< " fd = " << fd << " event = { " << channel->eventsToString() << " }";
if (::epoll_ctl(epollfd_, operation, fd, &event) < 0)
{
if (operation == EPOLL_CTL_DEL)
{
LOG_SYSERR << "epoll_ctl op =" << operationToString(operation) << " fd =" << fd;
}
else
{
LOG_SYSFATAL << "epoll_ctl op =" << operationToString(operation) << " fd =" << fd;
}
}
}
The poll method waits for events with epoll_wait and fills the ChannelList with the Channels whose events fired. muduo manages the event array with a std::vector and grows it automatically when it fills up.
Timestamp EPollPoller::poll(int timeoutMs, ChannelList* activeChannels)
{
LOG_TRACE << "fd total count " << channels_.size();
// &*events_.begin() is the address of the first element of events_, i.e. the start of the underlying array
int numEvents = ::epoll_wait(epollfd_,
&*events_.begin(),
static_cast<int>(events_.size()),
timeoutMs);
int savedErrno = errno;
Timestamp now(Timestamp::now());
if (numEvents > 0)
{
LOG_TRACE << numEvents << " events happened";
fillActiveChannels(numEvents, activeChannels);
if (implicit_cast<size_t>(numEvents) == events_.size())
{
events_.resize(events_.size()*2);
}
}
else if (numEvents == 0)
{
LOG_TRACE << "nothing happened";
}
else
{
// error happens, log uncommon ones
if (savedErrno != EINTR)
{
errno = savedErrno;
LOG_SYSERR << "EPollPoller::poll()";
}
}
return now;
}
void EPollPoller::fillActiveChannels(int numEvents,
ChannelList* activeChannels) const
{
assert(implicit_cast<size_t>(numEvents) <= events_.size());
for (int i = 0; i < numEvents; ++i)
{
Channel* channel = static_cast<Channel*>(events_[i].data.ptr);
#ifndef NDEBUG
int fd = channel->fd();
ChannelMap::const_iterator it = channels_.find(fd);
assert(it != channels_.end());
assert(it->second == channel);
#endif
channel->set_revents(events_[i].events);
activeChannels->push_back(channel);
}
}
EventLoop
EventLoop is the core of the Reactor pattern: it runs the event loop and dispatches events (by calling handleEvent on each active Channel).
muduo is a typical multi-Reactor design, built around one loop per thread: each thread owns exactly one EventLoop instance. The first step in enforcing one loop per thread is to make sure a single thread cannot create more than one EventLoop.
// __thread marks a thread-local variable; t_loopInThisThread records this thread's EventLoop instance and thus whether the thread has already created one
__thread EventLoop* t_loopInThisThread = 0;
EventLoop::EventLoop()
{
LOG_DEBUG << "EventLoop created " << this << " in thread " << threadId_;
// an EventLoop already exists in this thread
if (t_loopInThisThread)
{
// log a fatal error and abort the program
LOG_FATAL << "Another EventLoop " << t_loopInThisThread
<< " exists in this thread " << threadId_;
}
else
{
// otherwise record this EventLoop instance in t_loopInThisThread
t_loopInThisThread = this;
}
}
With each thread now guaranteed to own at most one EventLoop, the next question is how the EventLoops communicate with each other. They are not entirely independent: muduo has a main loop that listens for and accepts new connections, and sub loops that watch the connected client sockets. After accepting a new connection, the main loop hands the client off to one of the sub loops in a load-balanced way, and that sub loop needs to learn about the new client promptly and take over the connection.
The answer: through an eventfd.
eventfd is a lightweight mechanism commonly used for event notification between threads or processes. Writing a fixed-size value to the eventfd delivers an event notification very cheaply.
// create an eventfd
int createEventfd()
{
// eventfd system call: non-blocking, close-on-exec; returns the file descriptor
int evtfd = ::eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
if (evtfd < 0)
{
LOG_SYSERR << "Failed in eventfd";
abort();
}
return evtfd;
}
// constructor: wakeupFd_ is the eventfd through which the main loop tells a sub loop to pick up new connections; it is managed by wakeupChannel_
EventLoop::EventLoop()
: wakeupFd_(createEventfd()),
wakeupChannel_(new Channel(this, wakeupFd_))
{
// set the read callback: when the eventfd becomes readable, call handleRead
wakeupChannel_->setReadCallback(
std::bind(&EventLoop::handleRead, this));
// we are always reading the wakeupfd
wakeupChannel_->enableReading();
}
// wake up the loop
void EventLoop::wakeup()
{
uint64_t one = 1;
// always write a fixed-size uint64_t; see eventfd(2) for why
ssize_t n = sockets::write(wakeupFd_, &one, sizeof one);
if (n != sizeof one)
{
LOG_ERROR << "EventLoop::wakeup() writes " << n << " bytes instead of 8";
}
}
// handle the eventfd becoming readable
void EventLoop::handleRead()
{
uint64_t one = 1;
// always read a fixed-size uint64_t
ssize_t n = sockets::read(wakeupFd_, &one, sizeof one);
if (n != sizeof one)
{
LOG_ERROR << "EventLoop::handleRead() reads " << n << " bytes instead of 8";
}
}
With the eventfd in place, the main loop can easily wake a sub loop, i.e. make it return from its epoll_wait, so new connections are taken over promptly.
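To make the wakeup mechanism concrete, here is a minimal standalone sketch of eventfd semantics, plain Linux code independent of muduo: each write adds to an internal 64-bit counter and makes the fd readable, and a read returns the counter value and resets it to zero.
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
int main()
{
  int efd = ::eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
  uint64_t one = 1;
  ::write(efd, &one, sizeof one); // "wakeup": counter becomes 1, efd is now readable
  ::write(efd, &one, sizeof one); // counter becomes 2
  uint64_t value = 0;
  ::read(efd, &value, sizeof value); // reads 2 and resets the counter to 0
  std::printf("read %llu\n", static_cast<unsigned long long>(value));
  ::close(efd);
  return 0;
}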
The loop method implements the EventLoop's event loop: it calls the Poller's poll to obtain the list of active Channels, then walks that list and calls each Channel's handleEvent to process its events.
void EventLoop::loop()
{
assert(!looping_);
assertInLoopThread();
looping_ = true;
quit_ = false; // FIXME: what if someone calls quit() before loop() ?
LOG_TRACE << "EventLoop " << this << " start looping";
while (!quit_)
{
activeChannels_.clear();
pollReturnTime_ = poller_->poll(kPollTimeMs, &activeChannels_);
++iteration_;
if (Logger::logLevel() <= Logger::TRACE)
{
printActiveChannels();
}
// TODO sort channel by priority
eventHandling_ = true;
for (Channel* channel : activeChannels_)
{
currentActiveChannel_ = channel;
currentActiveChannel_->handleEvent(pollReturnTime_);
}
currentActiveChannel_ = NULL;
eventHandling_ = false;
doPendingFunctors(); // run the callbacks queued on this EventLoop
}
}
The EventLoop itself also needs to run callbacks from time to time. Since the loop's thread spends its time polling, any extra work the EventLoop should do (for example, the main loop binding the listening socket and starting to listen) has to be registered as a callback that gets executed during the loop's iterations.
typedef std::function<void()> Functor;
std::vector<Functor> pendingFunctors_ GUARDED_BY(mutex_); // list of pending callbacks
// registering callbacks
/// Runs callback immediately in the loop thread.
/// It wakes up the loop, and run the cb.
/// If in the same loop thread, cb is run within the function.
/// Safe to call from other threads.
void runInLoop(Functor cb);
/// Queues callback in the loop thread.
/// Runs after finish polling.
/// Safe to call from other threads.
void queueInLoop(Functor cb);
// run the callback in the loop thread
void EventLoop::runInLoop(Functor cb)
{
// if the current thread is the loop thread, run the callback right away
if (isInLoopThread())
{
cb();
}
else
{
// otherwise hand it to queueInLoop, which appends it to pendingFunctors_
queueInLoop(std::move(cb));
}
}
// append the callback to pendingFunctors_
void EventLoop::queueInLoop(Functor cb)
{
{
// lock to protect pendingFunctors_
MutexLockGuard lock(mutex_);
// append to pendingFunctors_
pendingFunctors_.push_back(std::move(cb));
}
// if we are not in the loop thread, or the loop is currently running the pending functors, wake the loop so the new callback is handled promptly
if (!isInLoopThread() || callingPendingFunctors_)
{
wakeup();
}
}
// at the end of each loop iteration, doPendingFunctors runs the callbacks queued in pendingFunctors_
void EventLoop::doPendingFunctors()
{
std::vector<Functor> functors;
callingPendingFunctors_ = true;
{
// lock to protect pendingFunctors_
MutexLockGuard lock(mutex_);
// swap the queued callbacks out; swap only exchanges internal pointers, which is cheap and keeps the critical section (and lock hold time) small
functors.swap(pendingFunctors_);
}
// run the callbacks
for (const Functor& functor : functors)
{
functor();
}
callingPendingFunctors_ = false;
}