[Semantic Segmentation Series 5] DeepLab v3 / v3+ Paper Reading and Translation Notes

    xiaoxiao · 2024-11-17

    [Semantic Segmentation Series] DeepLab v1 / v2 Paper Reading and Translation Notes

    DeepLab v3

    2017 arXiv

    Rethinking Atrous Convolution for Semantic Image Segmentation

    ❤ fregu856/deeplabv3 : PyTorch implementation of DeepLabV3, trained on the Cityscapes dataset.

    Official open-source implementation | TensorFlow

    Introduction

    Main contributions of v3

    Proposes a more general framework that applies to any network.

    Improves ASPP: it is built from atrous convolutions with different sampling rates plus BN layers, and the modules are laid out either in cascade or in parallel.

    Discusses an important problem: a 3×3 atrous convolution with a very large sampling rate degenerates into a 1×1 convolution, because image-boundary effects prevent it from capturing long-range information; the paper therefore proposes fusing image-level features into the ASPP module.

    Describes the training details and shares training experience.

    Challenges

    Consecutive pooling and downsampling reduce feature resolution, which hurts localization.

    Interactions between global features and context help semantic segmentation. Four structures for handling the multi-scale object problem:

    - a. Image Pyramid: rescale the input image to several scales, run each through the DCNN, and fuse the predictions into the final output.
    - b. Encoder-Decoder: use the multi-scale features of the encoder stage in the decoder stage to recover spatial resolution (representative works: FCN, SegNet, PSPNet, U-Net).
    - c. Deeper w. Atrous Convolution: add extra modules (e.g., DenseCRF) on top of the original model to capture long-range pixel dependencies.
    - d. Spatial Pyramid Pooling: kernels with different sampling rates and fields of view capture objects at multiple scales.

    Related Work

    Interactions between global features and context help semantic segmentation. Below are four types of fully convolutional networks that exploit context information for semantic segmentation.

    Image pyramid: typically a weight-sharing model applied to multi-scale inputs. Small-scale inputs drive the semantic response, while large-scale inputs drive the detail response. The input is transformed into multiple scales via a Laplacian pyramid, fed into the DCNN, and the outputs are fused. Drawback: because of GPU memory limits this does not scale well to larger/deeper models, so it is usually applied only at inference time.

    Encoder-decoder: the encoder's high-level features easily capture long-range information; the decoder stage uses encoder-stage information to help recover object details and spatial dimensions. For example, SegNet reuses the pooling indices from downsampling to guide upsampling; U-Net adds skip connections from encoder features to the decoder; RefineNet and others demonstrate the effectiveness of the encoder-decoder structure.

    Context module: extra cascaded modules encode long-range context. One effective approach folds DenseCRF into the DCNN and trains the DCNN and CRF jointly.

    Spatial pyramid pooling: captures context at multiple levels. ParseNet obtains context information from features at different image levels; DeepLab v2 proposes ASPP, which captures multi-scale information with parallel atrous convolutions at different sampling rates; PSPNet performs spatial pooling at several grid scales and achieves excellent results on multiple datasets.

    Architecture

    Implemented as an encoder-decoder network, as in FCN, SegNet, U-Net, ...

    Encoder: "encodes" the information into a compressed vector that represents the input. Decoder: reconstructs this signal into the desired output.

    Unlike most encoder-decoder designs, DeepLab offers a distinctive approach to semantic segmentation: an architecture for controlling signal decimation and learning multi-scale contextual features.

    The DeepLab v3 architecture is shown in the figure.

    DeepLab uses a ResNet pretrained on ImageNet as its main feature-extraction network.

    The last ResNet block uses atrous convolution instead of regular convolution. Each convolution inside this residual block uses a different dilation rate to capture multi-scale context, and Atrous Spatial Pyramid Pooling (ASPP) on top uses convolutions with different dilation rates to classify regions of arbitrary scale.

    Atrous Convolution

    Conceptually it works like this: first, the convolution filter is expanded according to the dilation rate; then the gaps are filled with zeros, creating a sparse, filter-like kernel; finally, a regular convolution is performed with this dilated filter.
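As a concrete illustration, in PyTorch an atrous convolution is just a regular `nn.Conv2d` with a `dilation` argument; setting `padding` equal to the rate keeps a 3×3 convolution's output the same size as its input (a minimal sketch, not taken from the paper):

```python
import torch
import torch.nn as nn

# A 3x3 atrous (dilated) convolution: with padding == dilation,
# the spatial resolution of the input is preserved.
rate = 2
atrous = nn.Conv2d(64, 64, kernel_size=3, padding=rate, dilation=rate, bias=False)

x = torch.randn(1, 64, 32, 32)
y = atrous(x)
print(y.shape)  # torch.Size([1, 64, 32, 32])
```

The effective receptive field of the 3×3 kernel grows to 5×5 at rate 2, without adding parameters or reducing resolution.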

    Atrous convolutions with different dilation rates; the effectiveness of atrous convolution depends on choosing the rate well.

    DeepLab also discusses the effect of different output strides on the segmentation model.

    DeepLab argues that excessive signal decimation is harmful for dense prediction tasks. In short, models with a smaller output stride (weaker signal decimation) tend to produce finer segmentation results, but training with a smaller output stride takes more time.

    Because the atrous-convolution blocks perform no downsampling, ASPP also runs at the same feature-response size, which allows relatively large dilation rates to learn features from multi-scale context.

    Model 1 :Going Deeper with Atrous Convolution

    The cascaded model (modules with atrous convolution laid out in cascade) goes deeper:

    A cascaded ResNet is used, with block5, block6 and block7 (all copies of block4) appended in sequence.

    The output stride is 16; without atrous convolution the output stride would be 256.

    Multi-grid Method

    Different atrous rates are applied to block4–block7.

    Define Multi_Grid = (r1, r2, r3) as the unit rates of the three convolutional layers in each of block4–block7.

    The final atrous rate of a convolutional layer equals the unit rate multiplied by the block's corresponding base rate.

    Example: with output_stride = 16 and Multi_Grid = (1, 2, 4), the rates of the three convolutions in block4 are rates = 2 * (1, 2, 4) = (2, 4, 8).
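The rate rule can be expressed as a tiny helper (a hypothetical function, written here only to illustrate the computation):

```python
# Multi-grid rate rule: final atrous rate of each convolution in a block
# equals the block's base rate times the corresponding unit rate.
def multi_grid_rates(base_rate, multi_grid=(1, 2, 4)):
    return tuple(base_rate * r for r in multi_grid)

# block4 at output_stride=16 has base rate 2:
print(multi_grid_rates(2, (1, 2, 4)))  # (2, 4, 8)
```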

    Model 2 : Parallel modules with atrous convolution (ASPP)

    better than modules with atrous convolution laid out in cascade

    The parallel atrous-convolution model.

    The idea of Atrous Spatial Pyramid Pooling (ASPP) is to give the model multi-scale information. To do this, ASPP adds a series of atrous convolutions with different dilation rates, chosen to capture wide-range context.

    Batch normalization is added to the ASPP module.

    To add global context, ASPP also obtains image-level features via global average pooling (GAP): the features are globally average-pooled, passed through a convolution, and then fused back. See the Image Pooling branch in part (b) of the figure.

    - The image-level features from image pooling are fed into a 1×1 convolution with 256 filters (with batch normalization).
    - The features are then bilinearly upsampled to the required spatial dimensions.
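A minimal PyTorch sketch of this image-level branch (class and variable names are illustrative, not taken from any official implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImagePooling(nn.Module):
    """Image-level feature branch: GAP -> 1x1 conv (+BN) -> bilinear upsample."""
    def __init__(self, in_ch, out_ch=256):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # global average pooling
        self.conv = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        h, w = x.shape[2:]
        y = self.bn(self.conv(self.pool(x)))
        # bilinearly upsample back to the feature-map resolution
        return F.interpolate(y, size=(h, w), mode='bilinear', align_corners=False)

x = torch.randn(2, 512, 33, 33)
out = ImagePooling(512)(x)
print(out.shape)  # torch.Size([2, 256, 33, 33])
```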

    The improved ASPP consists of one 1×1 convolution and three 3×3 convolutions with dilation rates (6, 12, 18), at output stride 16. See Atrous Spatial Pyramid Pooling in part (a) of the figure.

    The features from all ASPP branches, together with the image-pooling features after convolution and upsampling, are concatenated and passed through another 1×1 convolution (also with 256 filters and batch normalization).

    A final 1×1 convolution produces the classification output.

    The conditional random field (CRF) is removed, making the model simpler and easier to understand.

    This version of ASPP contains four parallel operations:

    - one 1×1 convolution
    - three 3×3 convolutions (dilation rates 6, 12 and 18)

    The nominal stride of the feature map is 16.
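Putting the parallel convolution branches and the image-pooling branch together, a minimal ASPP sketch might look like this (channel sizes and names are assumptions for illustration, not the official implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPP(nn.Module):
    """Minimal ASPP sketch: 1x1 conv + three 3x3 atrous convs (rates 6/12/18)
    + global image pooling, concatenated and fused by a 1x1 conv."""
    def __init__(self, in_ch=512, out_ch=256, rates=(6, 12, 18)):
        super().__init__()
        def branch(k, r):
            pad = 0 if k == 1 else r
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k, padding=pad, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.branches = nn.ModuleList(
            [branch(1, 1)] + [branch(3, r) for r in rates])
        self.image_pool = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # 5 branches of out_ch channels each are concatenated, then fused
        self.project = nn.Sequential(
            nn.Conv2d(out_ch * 5, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [b(x) for b in self.branches]
        pooled = F.interpolate(self.image_pool(x), size=(h, w),
                               mode='bilinear', align_corners=False)
        return self.project(torch.cat(feats + [pooled], dim=1))

x = torch.randn(2, 512, 33, 33)
out = ASPP()(x)
print(out.shape)  # torch.Size([2, 256, 33, 33])
```

Note how this matches the channel counts in the repo's printed ASPP below: five 256-channel branches concatenated into 1280 channels, then fused back to 256 by a 1×1 convolution.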

    Implementation details

    With ResNet-50 as the feature extractor, DeepLab v3 adopts the following configuration:

    Output stride = 16. Of the two output strides considered (8 and 16), setting it to 16 allows sustainably fast training: at stride 16 the processed feature maps are four times smaller than at stride 8.

    The new atrous residual block (block 4) uses fixed multi-grid atrous rates (1, 2, 4).

    ASPP with dilation rates (6, 12, 18) is applied after the last atrous residual block.

    The final features of all branches are concatenated and fed into another 1×1 convolution (256 filters, with BN).

    The learning rate follows the "poly" policy mentioned previously.
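The "poly" policy multiplies the base learning rate by (1 - iter/max_iter)^power; a small sketch:

```python
# "poly" learning-rate policy: lr = base_lr * (1 - iter / max_iter) ** power
def poly_lr(base_lr, cur_iter, max_iter, power=0.9):
    return base_lr * (1 - cur_iter / max_iter) ** power

print(poly_lr(0.007, 0, 30000))                 # 0.007 (start of training)
print(round(poly_lr(0.007, 15000, 30000), 6))   # 0.003751 (halfway)
```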

    Experimental Evaluation

    Training protocol

    - Dataset: PASCAL VOC 2012
    - Framework: TensorFlow
    - Crop size: 513×513 crops are sampled
    - Learning-rate policy: "poly"; the initial learning rate is multiplied by (1 - iter/max_iter)^power, with power = 0.9
    - BN strategy: with output_stride = 16, batch size 16 is used and the BN parameters are trained with decay 0.9997. After 30K iterations on the augmented dataset at initial learning rate 0.007, the BN parameters are frozen; training then continues with output_stride = 8 for another 30K iterations at initial learning rate 0.001. Training at output_stride = 16 is much faster than at 8, because the intermediate feature maps are four times smaller spatially, but the coarser feature maps cost some accuracy.
    - Upsampling strategy: earlier work downsampled the ground truth 8× and compared it with the final output; keeping the ground truth intact turns out to matter, so the final output is now upsampled 8× and compared with the full ground truth.

    Going Deeper with Atrous Convolution

    Cascading multiple blocks with atrous convolution:

    Parameter effects:

    - OS (output_stride): the smaller the output stride, the higher the mIOU; at stride 256 the signal is severely decimated and performance drops sharply
    - Hole: using atrous convolution ↑
    - ResNet: the deeper the network (the more blocks), the better ↑
    - Multi-grid: using atrous convolution in all three convolutions of the main branch; the multi-grid strategy (1, 2, 1) combined with a deeper network ↑

    Atrous Spatial Pyramid Pooling

    Methods that improve the result:

    - Method: Multi-grid (1, 2, 4) + ASPP (6, 12, 18) + image pooling, with a small OS
    - Inputs: multi-scale inputs during test; adding left-right flipped inputs
    - Pretrained: model pretrained on MS-COCO

    Parameter effects:

    - Using upsampling and BN together ↑
    - Larger batch_size ↑
    - Eval output_stride: with train output_stride = 16, evaluating at output_stride = 8/16 ↑

    train

    - The more cascaded modules, the higher the accuracy, but the lower the speed.
    - The Multi_Grid method is better than the original.
    - With atrous convolution, the smaller the stride, the higher the accuracy, but the lower the speed.
    - The best ASPP model beats the best cascaded atrous-convolution model, so ASPP is chosen as the final model.

    Comparison with DeepLab v2:

    Both the cascaded model and the ASPP model of v3 outperform v2 on the PASCAL VOC 2012 validation set; the improvement mainly comes from the well-tuned batch-normalization parameters and the better encoding of multi-scale context.

    fregu856/deeplabv3

    PyTorch implementation of DeepLabV3, trained on the Cityscapes dataset.

    AdaptiveAvgPool2d

```python
import torch
import torch.nn as nn

# target output size of 5x7
m = nn.AdaptiveAvgPool2d((5, 7))
input = torch.randn(1, 64, 8, 9)
output = m(input)

# target output size of 7x7 (square)
m = nn.AdaptiveAvgPool2d(7)
input = torch.randn(1, 64, 10, 9)
output = m(input)

# target output size of 10x7 (None keeps the input size along that dim)
m = nn.AdaptiveMaxPool2d((None, 7))
input = torch.randn(1, 64, 10, 9)
output = m(input)
```

    ResNet

    How to take outputs from an intermediate layer (not just the last one), e.g. precomputing VGG16's convolutional output, or getting ResNet's output before the average-pooling layer?

```python
resnet = models.resnet18()
# load pretrained model:
resnet.load_state_dict(torch.load("/root/deeplabv3/pretrained_models/resnet/resnet18-5c106cde.pth"))
# remove fully connected layer, avg pool and layer5:
self.resnet = nn.Sequential(*list(resnet.children())[:-3])
```

    Example

```python
res50_model = models.resnet50(pretrained=True)
res50_conv = nn.Sequential(*list(res50_model.children())[:-2])
```

    This takes a pretrained resnet50 model from the torchvision package and builds a sequential model on top of it, dropping the last two modules (the fully connected layer and the average pool).

```python
for param in res50_conv.parameters():
    param.requires_grad = False
```

    There is no need to backpropagate through the whole model, since it is only used here to extract features.

    network

    pretrained resnet18, OS16

    network=DeepLabV3( (resnet): ResNet_BasicBlock_OS16( (resnet): Sequential( (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace) (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (4): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (5): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), 
stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (6): Sequential( (0): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (layer5): Sequential( (0): BasicBlock( (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), 
stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential() ) ) ) (aspp): ASPP( (conv_1x1_1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (bn_conv_1x1_1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv_3x3_1): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6)) (bn_conv_3x3_1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv_3x3_2): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(12, 12), dilation=(12, 12)) (bn_conv_3x3_2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv_3x3_3): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(18, 18), dilation=(18, 18)) (bn_conv_3x3_3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (avg_pool): AdaptiveAvgPool2d(output_size=1) (conv_1x1_2): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (bn_conv_1x1_2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv_1x1_3): Conv2d(1280, 256, kernel_size=(1, 1), stride=(1, 1)) (bn_conv_1x1_3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv_1x1_4): Conv2d(256, 20, kernel_size=(1, 1), stride=(1, 1)) ) )

    pretrained resnet18, OS8

    network=DeepLabV3( (resnet): ResNet_BasicBlock_OS8( (resnet): Sequential( (0): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) (1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (2): ReLU(inplace) (3): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False) (4): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (1): BasicBlock( (conv1): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (5): Sequential( (0): BasicBlock( (conv1): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 2), bias=False) (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (relu): ReLU(inplace) (conv2): Conv2d(128, 128, kernel_size=(3, 3), 
stride=(1, 1), padding=(1, 1), bias=False) (bn2): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) ) (layer4): Sequential( (0): BasicBlock( (conv1): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (bn1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(2, 2), dilation=(2, 2), bias=False) (bn2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential() ) ) (layer5): Sequential( (0): BasicBlock( (conv1): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential( (0): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) ) (1): BasicBlock( (conv1): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False) (bn1): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, 
track_running_stats=True) (conv2): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(4, 4), dilation=(4, 4), bias=False) (bn2): BatchNorm2d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (downsample): Sequential() ) ) ) (aspp): ASPP( (conv_1x1_1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (bn_conv_1x1_1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv_3x3_1): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(6, 6), dilation=(6, 6)) (bn_conv_3x3_1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv_3x3_2): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(12, 12), dilation=(12, 12)) (bn_conv_3x3_2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv_3x3_3): Conv2d(512, 256, kernel_size=(3, 3), stride=(1, 1), padding=(18, 18), dilation=(18, 18)) (bn_conv_3x3_3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (avg_pool): AdaptiveAvgPool2d(output_size=1) (conv_1x1_2): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1)) (bn_conv_1x1_2): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv_1x1_3): Conv2d(1280, 256, kernel_size=(1, 1), stride=(1, 1)) (bn_conv_1x1_3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (conv_1x1_4): Conv2d(256, 20, kernel_size=(1, 1), stride=(1, 1)) ) )

    References

    - DeepLabv3 paper analysis / paper translation
    - Semantic Segmentation – (DeepLabv3) Rethinking Atrous Convolution for Semantic Image Segmentation

    DeepLab v3+

    2018 ECCV

    Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation

    YudeWang/deeplabv3+ : pytorch deeplabv3+ supporting ResNet(79.155%) and Xception(79.945%).

    ❤ jfzhang95/pytorch-deeplab-xception : DeepLab v3+ model in PyTorch. Supports different backbones.

    Introduce

    Shortcomings of DeepLab v3

    Enlarging (upsampling) the output map gives poor results, because it carries too little information.

    Improvements

    encoder-decoder

    A decoder module is designed on top of v3: an intermediate feature map is used to help enlarge the output, and to fuse multi-scale information the encoder-decoder structure common in semantic segmentation is introduced.

    Xception

    DeepLabv3+

    - The atrous spatial pyramid pooling module captures rich contextual information by pooling features at different resolutions; with atrous convolution one can arbitrarily control the resolution of the extracted encoder features to trade off precision against runtime.

    - The encoder-decoder structure obtains sharp object boundaries.

    (a) is the cascade-style structure of v3, (b) is the common encoder-decoder structure, and (c) is the encoder-decoder structure based on DeepLab v3 proposed in this paper.

    Depthwise separable convolution / Xception model: depthwise separable convolution is applied to both the ASPP module and the decoder module, improving both speed and accuracy.

    Methods

    Encoder-Decoder with Atrous Convolution

    Atrous convolution:

    A 3×3 depthwise separable convolution decomposes a standard convolution into (a) a depthwise convolution (applying a single filter per input channel) and (b) a pointwise convolution (combining the depthwise outputs across channels).

    Atrous separable convolution, with rate = 2.
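A minimal sketch of an atrous separable convolution in PyTorch (the depthwise step uses `groups=in_ch`; the class name is illustrative):

```python
import torch
import torch.nn as nn

class AtrousSeparableConv(nn.Module):
    """3x3 atrous separable convolution: a depthwise conv (one filter per
    input channel, via groups=in_ch) followed by a 1x1 pointwise conv that
    mixes channels."""
    def __init__(self, in_ch, out_ch, rate=2):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=rate,
                                   dilation=rate, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 32, 32)
out = AtrousSeparableConv(64, 128, rate=2)(x)
print(out.shape)  # torch.Size([1, 128, 32, 32])
```

Compared with a standard 3×3 convolution (3·3·64·128 weights), the separable version needs only 3·3·64 + 64·128 weights.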

    proposed decoder

    The encoder's output feature first passes through a 1×1 conv, is then bilinearly upsampled 4× and concatenated with low-level features of the same spatial resolution, then passes through a 3×3 conv and is upsampled 4× again.
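A minimal sketch of this decoder path (channel sizes, class name, and the 21-class output are assumptions for illustration, not the official implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepLabV3PlusDecoder(nn.Module):
    """Sketch of the v3+ decoder: reduce low-level features with a 1x1 conv,
    upsample the encoder output 4x, concat, refine with a 3x3 conv,
    then upsample 4x again."""
    def __init__(self, low_ch=256, enc_ch=256, num_classes=21):
        super().__init__()
        self.reduce = nn.Conv2d(low_ch, 48, 1, bias=False)  # [1x1, 48]
        self.refine = nn.Sequential(
            nn.Conv2d(enc_ch + 48, 256, 3, padding=1, bias=False),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, 1))

    def forward(self, enc, low):
        # upsample encoder output 4x (to the low-level feature resolution)
        enc = F.interpolate(enc, size=low.shape[2:], mode='bilinear',
                            align_corners=False)
        x = torch.cat([enc, self.reduce(low)], dim=1)
        x = self.refine(x)
        # final 4x upsampling toward the input resolution
        return F.interpolate(x, scale_factor=4, mode='bilinear',
                             align_corners=False)

enc = torch.randn(2, 256, 33, 33)    # encoder/ASPP output at OS16
low = torch.randn(2, 256, 129, 129)  # low-level backbone features at OS4
out = DeepLabV3PlusDecoder()(enc, low)
print(out.shape)  # torch.Size([2, 21, 516, 516])
```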

    Modified Aligned Xception

    - The entry flow is kept unchanged, but more middle-flow blocks are added.
    - All max pooling operations are replaced by depthwise separable convolutions.
    - Extra batch normalization and ReLU are added after each 3×3 depthwise convolution.

    Sep Conv denotes depthwise separable convolution.

    Code link

```python
# Entry flow
self.conv1 = nn.Conv2d(3, 32, 3, stride=2, padding=1, bias=False)
self.bn1 = BatchNorm(32)
self.relu = nn.ReLU(inplace=True)
self.conv2 = nn.Conv2d(32, 64, 3, stride=1, padding=1, bias=False)
self.bn2 = BatchNorm(64)
self.block1 = Block(64, 128, reps=2, stride=2, BatchNorm=BatchNorm, start_with_relu=False)
self.block2 = Block(128, 256, reps=2, stride=2, BatchNorm=BatchNorm, start_with_relu=False, grow_first=True)
self.block3 = Block(256, 728, reps=2, stride=entry_block3_stride, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True, is_last=True)

# Middle flow (blocks 4-19 share the same configuration)
self.block4 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block5 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block6 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block7 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block8 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block9 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block10 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block11 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block12 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block13 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block14 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block15 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block16 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block17 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block18 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)
self.block19 = Block(728, 728, reps=3, stride=1, dilation=middle_block_dilation, BatchNorm=BatchNorm, start_with_relu=True, grow_first=True)

# Exit flow
self.block20 = Block(728, 1024, reps=2, stride=1, dilation=exit_block_dilations[0], BatchNorm=BatchNorm, start_with_relu=True, grow_first=False, is_last=True)
self.conv3 = SeparableConv2d(1024, 1536, 3, stride=1, dilation=exit_block_dilations[1], BatchNorm=BatchNorm)
self.bn3 = BatchNorm(1536)
self.conv4 = SeparableConv2d(1536, 1536, 3, stride=1, dilation=exit_block_dilations[1], BatchNorm=BatchNorm)
self.bn4 = BatchNorm(1536)
self.conv5 = SeparableConv2d(1536, 2048, 3, stride=1, dilation=exit_block_dilations[1], BatchNorm=BatchNorm)
self.bn5 = BatchNorm(2048)
```

    Experimental Evaluation

    - Dataset: PASCAL VOC 2012 (1464 train / 1449 val / 1456 test pixel-level annotated images)
    - Framework: TensorFlow
    - Pretrained: ImageNet-1k pretrained ResNet-101 or modified aligned Xception
    - Learning-rate policy: "poly"; the initial learning rate 0.007 is multiplied by (1 - iter/max_iter)^power, with power = 0.9
    - Crop size: 513×513
    - output_stride: 16
    - Training: end-to-end

    Decoder Design Choices

    - Decoder 1×1 convolution: used to reduce the channels of the low-level feature map; [1×1, 48] is adopted for channel reduction.

    The decoder module is designed around a 3×3 convolution structure.

    Parameter effects:

    - Eval output_stride: with train output_stride = 16, evaluating at output_stride = 8/16 ↑
    - Decoder: employing the proposed decoder structure ↑
    - MS and Flip: multi-scale inputs during evaluation, plus left-right flipped inputs ↑
    - Network backbone: Xception is better
    - SC: adopting depthwise separable convolution for both ASPP and decoder modules ↑
    - COCO: model pretrained on MS-COCO ↑
    - JFT: model pretrained on JFT ↑

    References

    DeepLabv3+: a new peak in semantic segmentation
