Boundary Knowledge Translation based Reference Semantic Segmentation

  • Lechao Cheng, Zunlei Feng, Xinchao Wang, Ya Jie Liu, Jie Lei, Mingli Song
  • Published in IJCAI, 1 August 2021
  • Computer Science
Given a reference object of an unknown type in an image, human observers can effortlessly find the objects of the same category in another image and precisely tell their visual boundaries. Such visual cognition capability of humans seems absent from the current research spectrum of computer vision. Existing segmentation networks, for example, rely on a humongous amount of labeled data, which is laborious and costly to collect and annotate; besides, the performance of segmentation networks tends…

Figures and Tables from this paper


Boundary-Aware Instance Segmentation
This paper introduces a novel object segment representation based on the distance transform of the object masks, and designs an object mask network (OMN) with a new residual-deconvolution architecture that infers such a representation and decodes it into the final binary object mask.
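As a rough illustration of this representation (a toy sketch with hypothetical function names, not the paper's OMN code): a binary mask can be encoded as a distance transform, where each foreground pixel stores its distance to the nearest background pixel, and decoded back by thresholding at zero.

```python
from collections import deque

def mask_to_distance_map(mask):
    """Encode a binary mask as a 4-connected distance transform:
    each foreground pixel gets its distance to the nearest background pixel."""
    h, w = len(mask), len(mask[0])
    inf = h + w  # larger than any possible grid distance
    dist = [[0 if mask[y][x] == 0 else inf for x in range(w)] for y in range(h)]
    queue = deque((y, x) for y in range(h) for x in range(w) if mask[y][x] == 0)
    while queue:  # multi-source BFS from all background pixels
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] > dist[y][x] + 1:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    return dist

def distance_map_to_mask(dist):
    """Decode: any pixel with positive distance lies inside the object."""
    return [[1 if d > 0 else 0 for d in row] for row in dist]

# Example: a 3x3 square inside a 5x5 grid
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
dmap = mask_to_distance_map(mask)  # center pixel is 2 steps from background
```

The real paper predicts a truncated version of this map with a network; here the encoding and decoding are exact, so the round trip recovers the original mask.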
Semantic Projection Network for Zero- and Few-Label Semantic Segmentation
The proposed semantic projection network (SPNet) achieves this goal by incorporating class-level semantic information into any network designed for semantic segmentation, in an end-to-end manner.
SG-One: Similarity Guidance Network for One-Shot Semantic Segmentation
This article proposes a simple yet effective similarity guidance network (SG-One) to tackle the one-shot segmentation problem, aiming at predicting the segmentation mask of a query image with reference to one densely labeled support image of the same category.
CANet: Class-Agnostic Segmentation Networks With Iterative Refinement and Attentive Few-Shot Learning
CANet is presented, a class-agnostic segmentation network that performs few-shot segmentation on new classes with only a few annotated images available, and introduces an attention mechanism to effectively fuse information from multiple support examples under the setting of k-shot learning.
Co-attention CNNs for Unsupervised Object Co-segmentation
This paper presents an unsupervised, end-to-end trainable CNN-based method for object co-segmentation that achieves superior results, even outperforming state-of-the-art supervised methods.
PANet: Few-Shot Image Semantic Segmentation With Prototype Alignment
This paper tackles the challenging few-shot segmentation problem from a metric learning perspective and presents PANet, a novel prototype alignment network that better utilizes the information of the support set to generalize to unseen object categories.
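The prototype idea behind this line of work can be sketched in plain Python (hypothetical names, toy feature maps instead of a real network): a class prototype is the masked average of support features, and each query position is labeled with its nearest prototype.

```python
def masked_average(features, mask):
    """Average the feature vectors at positions where mask == 1."""
    vecs = [features[y][x]
            for y in range(len(mask)) for x in range(len(mask[0]))
            if mask[y][x] == 1]
    dim = len(vecs[0])
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

def nearest_prototype_segmentation(query_features, prototypes):
    """Assign every query position the class of its closest prototype
    (squared Euclidean distance)."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [[min(prototypes, key=lambda c: sqdist(f, prototypes[c]))
             for f in row] for row in query_features]

# Toy support image: top row is foreground, bottom row is background
support_feats = [[[1.0, 0.0], [0.9, 0.1]],
                 [[0.0, 1.0], [0.1, 0.9]]]
support_mask = [[1, 1], [0, 0]]
prototypes = {
    "fg": masked_average(support_feats, support_mask),
    "bg": masked_average(support_feats, [[1 - m for m in r] for r in support_mask]),
}
```

PANet's actual contribution adds an alignment regularization (segmenting the support back from query prototypes); this sketch covers only the shared masked-average-pooling core.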
Self-supervised Scale Equivariant Network for Weakly Supervised Semantic Segmentation
A novel scale equivariant regularization is elaborately designed to ensure consistency of CAMs from the same input image with different resolutions, which can guide the whole network to learn more accurate class activation.
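A minimal sketch of such a consistency term (hypothetical helpers, not the paper's implementation): average-pool the class activation map (CAM) of the full-resolution input by 2x and penalize its squared difference from the CAM produced for the half-resolution input.

```python
def avg_pool_2x(cam):
    """2x2 average pooling of a 2D activation map (even dimensions assumed)."""
    h, w = len(cam), len(cam[0])
    return [[(cam[y][x] + cam[y][x + 1] + cam[y + 1][x] + cam[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

def scale_consistency_loss(cam_full, cam_half):
    """Mean squared difference between the downsampled full-resolution CAM
    and the CAM computed from the half-resolution input."""
    target = avg_pool_2x(cam_full)
    diffs = [(t - p) ** 2
             for trow, prow in zip(target, cam_half)
             for t, p in zip(trow, prow)]
    return sum(diffs) / len(diffs)
```

When the network is perfectly scale equivariant the loss is zero; during training the term pushes CAMs at different input scales toward agreement.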
Boundary-Aware Feature Propagation for Scene Segmentation
A boundary-aware feature propagation (BFP) module harvests and propagates local features within regions isolated by the learned boundaries in the UAG-structured image, achieving new state-of-the-art segmentation performance on three challenging semantic segmentation datasets, i.e., PASCAL-Context, CamVid, and Cityscapes.
Unsupervised Domain Adaptation for Semantic Segmentation via Class-Balanced Self-training
This paper proposes a novel UDA framework based on an iterative self-training (ST) procedure, where the problem is formulated as latent variable loss minimization, and can be solved by alternatively generating pseudo labels on target data and re-training the model with these labels.
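The class-balanced selection step of such self-training can be sketched as follows (hypothetical function, illustrative only): for each class, keep only the top fraction of that class's most confident target-domain predictions as pseudo labels, so frequent classes cannot crowd out rare ones.

```python
from collections import defaultdict

def class_balanced_pseudo_labels(predictions, keep_fraction=0.5):
    """predictions: list of (pixel_id, predicted_class, confidence) tuples.
    Returns {pixel_id: class}, keeping per class only the top keep_fraction
    most confident predictions as pseudo labels."""
    by_class = defaultdict(list)
    for pixel_id, cls, conf in predictions:
        by_class[cls].append((conf, pixel_id))
    pseudo = {}
    for cls, items in by_class.items():
        items.sort(reverse=True)  # most confident first
        keep = max(1, int(len(items) * keep_fraction))
        for conf, pixel_id in items[:keep]:
            pseudo[pixel_id] = cls
    return pseudo
```

Thresholding per class rather than globally is the "class-balanced" part: a rare class with generally lower confidences still contributes pseudo labels instead of being filtered out entirely.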
Unsupervised Object Segmentation by Redrawing
ReDO is presented, a new model able to extract objects from images in an unsupervised way, without any annotation, based on the idea that it should be possible to change the textures or colors of the objects without changing the overall distribution of the dataset.