HRNet

HRNet is a CVPR 2019 work from Microsoft Research Asia. The main idea is to build a network of parallel multi-resolution branches that are fused repeatedly (some applications use only the high-resolution features, hence the name High-Resolution Network).
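As a rough illustration (my own minimal sketch, not the official code), the core operation is the repeated fusion between neighboring resolutions: the low-resolution branch is channel-matched with a 1x1 conv and upsampled before being added to the high-resolution branch, while the high-resolution branch is downsampled with a stride-2 3x3 conv and added to the low-resolution branch. BatchNorm and the full multi-branch case are omitted here.

import torch.nn as nn
import torch.nn.functional as F

class TwoBranchFuse(nn.Module):
    """Toy two-branch version of HRNet's cross-resolution fusion
    (x_lo is assumed to have half the spatial size of x_hi)."""
    def __init__(self, c_hi, c_lo):
        super().__init__()
        # low-res -> high-res: 1x1 conv to match channels, then upsample
        self.lo2hi = nn.Conv2d(c_lo, c_hi, kernel_size=1, bias=False)
        # high-res -> low-res: stride-2 3x3 conv to match channels and size
        self.hi2lo = nn.Conv2d(c_hi, c_lo, kernel_size=3, stride=2,
                               padding=1, bias=False)

    def forward(self, x_hi, x_lo):
        y_hi = x_hi + F.interpolate(self.lo2hi(x_lo), size=x_hi.shape[2:],
                                    mode='nearest')
        y_lo = x_lo + self.hi2lo(x_hi)
        return y_hi, y_lo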

This is a whole series of works covering classification, detection, segmentation, and so on. Finally a network that doesn't come from NAS...

Project page: https://github.com/HRNet

Introduction on Zhihu: https://zhuanlan.zhihu.com/p/66848624

Two blog posts that explain it quite clearly:

 https://ai-chen.github.io/%E8%AF%AD%E4%B9%89%E5%88%86%E5%89%B2%E8%AE%BA%E6%96%87%E9%98%85%E8%AF%BB/2019/06/20/HrNet-V2.html

https://zhuanlan.zhihu.com/p/89199075

Although these posts discuss V2, it is almost identical to V1; the classification model only differs by a head.
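For reference, the difference between the V1 and V2 heads only concerns how the branch outputs are combined. A minimal sketch (my own, not the official code), assuming x is the list of branch outputs ordered from highest to lowest resolution:

import torch
import torch.nn.functional as F

def v1_head(x):
    # HRNetV1 (pose estimation): keep only the highest-resolution feature map
    return x[0]

def v2_head(x):
    # HRNetV2 (segmentation): upsample every branch to the highest resolution
    # and concatenate along the channel dimension
    h, w = x[0].shape[2:]
    upsampled = [x[0]] + [
        F.interpolate(xi, size=(h, w), mode='bilinear', align_corners=False)
        for xi in x[1:]
    ]
    return torch.cat(upsampled, dim=1)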

Paper on the classification model: https://arxiv.org/pdf/1904.04514.pdf

Below is a separate note on the classification head (see the structure diagram: https://github.com/HRNet/HRNet-Image-Classification/blob/master/figures/cls-head.png).

The head consists of three parts. incre_modules increases the channel count of each of the four branches (Bottleneck blocks take them from C, 2C, 4C, 8C channels to 128, 256, 512, 1024).

downsamp_modules downsamples the higher-resolution branches step by step (stride-2 3x3 convolutions that also raise the channel count to match the next branch) so that they can be fused by addition, ending in a single 1024-channel feature map.

final_layer applies a 1x1 convolution to turn that into a 2048-channel feature which, after global average pooling, feeds the final classifier (a 2048 x num_classes linear layer). How these parts are wired together at forward time is sketched after the code below.

 

    def _make_head(self, pre_stage_channels):
        head_block = Bottleneck
        head_channels = [32, 64, 128, 256]

        # Increasing the #channels on each resolution 
        # from C, 2C, 4C, 8C to 128, 256, 512, 1024
        incre_modules = []
        for i, channels  in enumerate(pre_stage_channels):
            incre_module = self._make_layer(head_block,
                                            channels,
                                            head_channels[i],
                                            1,
                                            stride=1)
            incre_modules.append(incre_module)
        incre_modules = nn.ModuleList(incre_modules)
            
        # downsampling modules: stride-2 3x3 convs that halve the spatial size
        # and raise the channel count to match the next (lower-resolution)
        # branch, so adjacent branches can be fused by addition
        downsamp_modules = []
        for i in range(len(pre_stage_channels)-1):
            in_channels = head_channels[i] * head_block.expansion
            out_channels = head_channels[i+1] * head_block.expansion

            downsamp_module = nn.Sequential(
                nn.Conv2d(in_channels=in_channels,
                          out_channels=out_channels,
                          kernel_size=3,
                          stride=2,
                          padding=1),
                nn.BatchNorm2d(out_channels, momentum=BN_MOMENTUM),
                nn.ReLU(inplace=True)
            )

            downsamp_modules.append(downsamp_module)
        downsamp_modules = nn.ModuleList(downsamp_modules)

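        # final 1x1 conv: expand the 1024-channel fused feature to 2048
        # channels before global pooling and the classifier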
        final_layer = nn.Sequential(
            nn.Conv2d(
                in_channels=head_channels[3] * head_block.expansion,
                out_channels=2048,
                kernel_size=1,
                stride=1,
                padding=0
            ),
            nn.BatchNorm2d(2048, momentum=BN_MOMENTUM),
            nn.ReLU(inplace=True)
        )

        return incre_modules, downsamp_modules, final_layer
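For completeness, here is a sketch of how these three parts are wired together in the forward pass, reconstructed from the official HRNet-Image-Classification code and slightly simplified; the classifier argument (a Linear(2048, num_classes) layer) is stored as an attribute of the model in the actual repo.

import torch.nn.functional as F

def head_forward(y_list, incre_modules, downsamp_modules, final_layer, classifier):
    # y_list: the four branch outputs, highest resolution first.
    # incre_modules[i] lifts branch i to 128 / 256 / 512 / 1024 channels;
    # downsamp_modules[i] halves the resolution of the running feature so it
    # can be added to the next branch. After the loop, y has 1024 channels.
    y = incre_modules[0](y_list[0])
    for i in range(len(downsamp_modules)):
        y = incre_modules[i + 1](y_list[i + 1]) + downsamp_modules[i](y)

    y = final_layer(y)                           # 1x1 conv: 1024 -> 2048
    y = F.adaptive_avg_pool2d(y, 1).flatten(1)   # global average pooling
    return classifier(y)                         # e.g. nn.Linear(2048, num_classes)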

 
