Learning Fully Dense Neural Networks for Image Semantic Segmentation
1. When fusing feature maps, concatenate the feature maps of all preceding layers; the encoder network is based on DenseNet-264 (see the sketch under Architecture below)
2. Boundary-aware loss: boundary pixels are treated as hard examples
Architecture
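A minimal sketch, assuming PyTorch, of the concatenation-style fusion from point 1: every fusion stage upsamples and concatenates the feature maps of all preceding encoder stages before predicting class scores. The channel counts, stage resolutions, and the small classifier head here are illustrative placeholders, not the paper's exact configuration on top of DenseNet-264.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseFusionHead(nn.Module):
    """Upsample the feature maps of ALL preceding encoder stages to a common
    resolution, concatenate them, and predict per-pixel class scores."""
    def __init__(self, in_channels_list, num_classes):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Conv2d(sum(in_channels_list), 256, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, features, out_size):
        # Fuse by concatenating every earlier feature map, not just the last one.
        upsampled = [F.interpolate(f, size=out_size, mode="bilinear",
                                   align_corners=False) for f in features]
        return self.classifier(torch.cat(upsampled, dim=1))

# Dummy multi-stage features standing in for a DenseNet-264 encoder
# (channel counts and spatial sizes are placeholders):
feats = [torch.randn(1, c, s, s) for c, s in [(256, 64), (512, 32), (1024, 16), (2048, 8)]]
head = DenseFusionHead([256, 512, 1024, 2048], num_classes=21)
logits = head(feats, out_size=(64, 64))   # -> (1, 21, 64, 64)
```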
Boundary-aware loss
Pixels are divided into several sets according to their distance to the boundary, and each set is assigned a different weight α
All fused feature maps take part in the loss computation, as sketched below
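A minimal sketch of such a distance-binned weighting, assuming PyTorch and SciPy. The bin edges and α values are illustrative placeholders, not the paper's published settings; in the paper this weighted loss would be applied to each fused prediction.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy import ndimage

def boundary_weight_map(label, bins=(2, 5, 10), alphas=(3.0, 2.0, 1.5, 1.0)):
    """Assign each pixel a weight alpha based on its distance to the nearest
    semantic boundary: pixels closer to a boundary (harder examples) get
    larger weights. `label` is an (H, W) integer array of class ids."""
    lab = np.asarray(label)
    # A boundary pixel is one whose label differs from a 4-neighbour.
    boundary = np.zeros(lab.shape, dtype=bool)
    boundary[:, 1:] |= lab[:, 1:] != lab[:, :-1]
    boundary[:, :-1] |= lab[:, 1:] != lab[:, :-1]
    boundary[1:, :] |= lab[1:, :] != lab[:-1, :]
    boundary[:-1, :] |= lab[1:, :] != lab[:-1, :]
    # Euclidean distance of every pixel to the nearest boundary pixel.
    dist = ndimage.distance_transform_edt(~boundary)
    weights = np.full(lab.shape, alphas[-1], dtype=np.float32)
    # Overwrite from the farthest bin to the closest so near-boundary wins.
    for b, a in zip(reversed(bins), reversed(alphas[:-1])):
        weights[dist <= b] = a
    return torch.from_numpy(weights)

def boundary_aware_loss(logits, label, ignore_index=255):
    """Per-pixel cross-entropy scaled by the boundary-distance weights.
    logits: (1, C, H, W) float tensor; label: (H, W) long tensor."""
    per_pixel = F.cross_entropy(logits, label.unsqueeze(0),
                                ignore_index=ignore_index, reduction="none")
    weights = boundary_weight_map(label.cpu().numpy()).to(logits.device)
    valid = (label != ignore_index).float()
    return (per_pixel.squeeze(0) * weights * valid).sum() / valid.sum().clamp(min=1)
```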
Implementation details
Training:
Random crops of 512×512
horizontal flip
“poly” learning rate policy
train for 30K iterations
The initial learning rate is set to 0.00025, momentum to 0.9, and weight decay to 0.0005.
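A sketch of the "poly" schedule with the quoted hyperparameters, assuming PyTorch; the one-layer `model` is only a placeholder, and `power = 0.9` is a common choice for this policy rather than a value stated in these notes.

```python
import torch

base_lr, max_iter, power = 0.00025, 30_000, 0.9   # power is an assumption
model = torch.nn.Conv2d(3, 21, kernel_size=1)      # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                            momentum=0.9, weight_decay=0.0005)

def poly_lr(iteration):
    # "poly" policy: lr = base_lr * (1 - iter / max_iter) ** power
    return base_lr * (1.0 - iteration / max_iter) ** power

# Inside the training loop, update every parameter group before each step;
# here we only print a few points of the schedule for illustration.
for iteration in (0, 10_000, 20_000, 29_999):
    for group in optimizer.param_groups:
        group["lr"] = poly_lr(iteration)
    print(f"iter {iteration:>6}: lr = {group['lr']:.8f}")
```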
Inference
multi-scale inference:
pad images with mean value
Horizontal flipping
scales ranging from 0.6 to 1.4
average the predictions on the same image across different scales for the final prediction
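A minimal sketch of multi-scale plus horizontal-flip inference with prediction averaging, assuming PyTorch. The concrete scale set, the stride-32 padding, and the stand-in `model` are assumptions (the notes only give the 0.6–1.4 range); since the input is assumed to be mean-normalized, padding with zeros is equivalent to padding the raw image with the mean value.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def multi_scale_predict(model, image, scales=(0.6, 0.8, 1.0, 1.2, 1.4)):
    """image: (1, 3, H, W), already mean-normalized. `model` is assumed to
    return full-resolution logits. Returns class probabilities averaged
    over all scales and horizontal flips."""
    _, _, h, w = image.shape
    prob_sum = 0.0
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode="bilinear",
                               align_corners=False)
        sh, sw = scaled.shape[2], scaled.shape[3]
        # Pad to multiples of the network stride (32 here is an assumption);
        # zero padding == mean-value padding because the input is normalized.
        scaled = F.pad(scaled, (0, (32 - sw % 32) % 32, 0, (32 - sh % 32) % 32))
        for flip in (False, True):
            x = torch.flip(scaled, dims=[3]) if flip else scaled
            logits = model(x)
            if flip:
                logits = torch.flip(logits, dims=[3])  # flip prediction back
            logits = logits[:, :, :sh, :sw]            # drop the padded region
            logits = F.interpolate(logits, size=(h, w), mode="bilinear",
                                   align_corners=False)  # back to input size
            prob_sum = prob_sum + logits.softmax(dim=1)
    return prob_sum / (2 * len(scales))

# Usage with a stand-in model (any module mapping (1,3,H,W) -> (1,C,H,W)):
model = torch.nn.Conv2d(3, 21, kernel_size=1).eval()
probs = multi_scale_predict(model, torch.randn(1, 3, 500, 375))
prediction = probs.argmax(dim=1)   # (1, 500, 375) class map
```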