Corpus ID: 239998744

Boundary Guided Context Aggregation for Semantic Segmentation

Haoxiang Ma, Hongyu Yang, Di Huang
Recent studies on semantic segmentation have begun to notice the significance of boundary information, where most approaches treat boundaries as a supplement to semantic details. However, simply combining boundaries with the mainstream features cannot ensure a holistic improvement of semantics modeling. In contrast to previous studies, we exploit boundaries as significant guidance for context aggregation to promote the overall semantic understanding of an image. To this end, we…
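As an illustrative sketch only (the paper's actual module and weighting scheme are not specified in this abstract, and the function and parameter names below are assumptions), boundary-guided context aggregation can be pictured as attention whose weights are modulated by a boundary map, so that context is gathered preferentially within boundary-delimited regions rather than across edges:

```python
import numpy as np

def boundary_guided_aggregation(features, boundary, tau=1.0):
    """Toy sketch: aggregate per-pixel context while down-weighting
    contributions from pixels lying on strong boundaries.

    features: (N, C) flattened pixel features (N = H*W)
    boundary: (N,) boundary probability per pixel, in [0, 1]
    tau:      strength of the boundary penalty (assumed hyperparameter)
    """
    # Dense pairwise affinity between all pixels.
    sim = features @ features.T                      # (N, N)
    # Penalize attending to pixels with strong boundary response,
    # a crude stand-in for "do not aggregate context across edges".
    logits = sim - tau * boundary[None, :]           # (N, N)
    # Row-wise softmax over the context dimension.
    logits -= logits.max(axis=1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ features                           # (N, C)

# Tiny usage example on random data.
rng = np.random.default_rng(0)
feats = rng.standard_normal((16, 8))
bmap = rng.random(16)
out = boundary_guided_aggregation(feats, bmap)
print(out.shape)  # (16, 8)
```

This is a dense-attention caricature; a real module would use learned query/key/value projections and a more structured use of the boundary signal.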

Joint Semantic Segmentation and Boundary Detection Using Iterative Pyramid Contexts
This paper presents a joint multi-task learning framework for semantic segmentation and boundary detection that couples the two tasks through shared latent semantics, and proposes a novel spatial gradient fusion to suppress non-semantic edges.
Semantic Segmentation with Boundary Neural Fields
A Boundary Neural Field (BNF) is introduced: a global energy model that integrates FCN predictions with boundary cues to enhance semantic segment coherence and improve object localization.
Boundary-Aware Feature Propagation for Scene Segmentation
A boundary-aware feature propagation (BFP) module harvests and propagates local features within regions isolated by the learned boundaries in the UAG-structured image, achieving new state-of-the-art segmentation performance on three challenging semantic segmentation datasets: PASCAL-Context, CamVid, and Cityscapes.
Context Prior for Scene Segmentation
This work develops a Context Prior, supervised by an Affinity Loss, within an effective Context Prior Network that selectively captures intra-class and inter-class contextual dependencies, leading to robust feature representations.
Improving Semantic Segmentation via Decoupled Body and Edge Supervision
This paper proposes a new paradigm for semantic segmentation that establishes a new state of the art while retaining high inference efficiency, and shows that the proposed framework, with various baselines and backbone networks, yields better object inner consistency and object boundaries.
Dual Graph Convolutional Network for Semantic Segmentation
The Dual Graph Convolutional Network (DGCNet) captures the global context of the input features by modelling two orthogonal graphs in a single framework, achieving state-of-the-art results on both the Cityscapes and PASCAL Context datasets.
Object-Contextual Representations for Semantic Segmentation
This paper addresses the semantic segmentation problem with a focus on the context aggregation strategy, and presents a simple yet effective approach, object-contextual representations, characterizing a pixel by exploiting the representation of the corresponding object class.
CCNet: Criss-Cross Attention for Semantic Segmentation
This work proposes a Criss-Cross Network (CCNet) for obtaining contextual information in a more effective and efficient way, achieving mIoU scores of 81.4 and 45.22 on the Cityscapes test set and the ADE20K validation set, respectively, which are new state-of-the-art results.
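The criss-cross pattern described above is concrete enough to sketch: each pixel attends only to the pixels in its own row and column (H + W − 1 positions) instead of the full H×W map. The sketch below is a simplified illustration, not CCNet's implementation; the real module uses learned query/key/value projections and a recurrent second pass to reach full-image context.

```python
import numpy as np

def criss_cross_attention(x):
    """Sketch of the criss-cross attention pattern: each pixel attends
    only to its own row and column, reducing the context set from H*W
    to H + W - 1 positions per pixel.

    x: (H, W, C) feature map; queries/keys/values all reuse x here
    for simplicity.
    """
    H, W, C = x.shape
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            q = x[i, j]                               # query pixel, (C,)
            row = x[i, :, :]                          # full row i, (W, C)
            col = np.delete(x[:, j, :], i, axis=0)    # column j minus (i, j)
            ctx = np.concatenate([row, col], axis=0)  # (H + W - 1, C)
            # Scaled-dot-product-free softmax attention over the cross.
            logits = ctx @ q
            logits -= logits.max()
            w = np.exp(logits)
            w /= w.sum()
            out[i, j] = w @ ctx
    return out

feat = np.random.default_rng(1).standard_normal((4, 5, 3))
res = criss_cross_attention(feat)
print(res.shape)  # (4, 5, 3)
```

The efficiency argument follows directly: dense attention costs O((HW)^2) pairwise interactions, while the criss-cross pattern costs O(HW · (H + W)).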
Context Encoding for Semantic Segmentation
The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN, and can also improve the feature representations of relatively shallow networks for image classification on the CIFAR-10 dataset.
Dual Attention Network for Scene Segmentation
New state-of-the-art segmentation performance is achieved on three challenging scene segmentation datasets, i.e., Cityscapes, PASCAL Context, and COCO Stuff, without using coarse data.