Corpus ID: 235358940

Alpha Matte Generation from Single Input for Portrait Matting

@article{Yaman2021AlphaMG,
  title={Alpha Matte Generation from Single Input for Portrait Matting},
  author={Dogucan Yaman and Hazim Kemal Ekenel and Alexander Waibel},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.03210}
}
In portrait matting, the goal is to predict an alpha matte that identifies the contribution of each pixel to the foreground subject. Traditional approaches and most existing works utilize an additional input, e.g., a trimap or a background image, to predict the alpha matte. However, (1) providing additional input is not always practical, and (2) models are too sensitive to these additional inputs. To address these points, in this paper, we introduce an additional input-free approach to perform…
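The abstract's notion of "the effect of each pixel on the foreground subject" rests on the standard compositing equation I = αF + (1 − α)B, where α is the alpha matte. A minimal NumPy sketch of this equation (the function name and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def composite(alpha, foreground, background):
    # Standard alpha compositing: I = alpha * F + (1 - alpha) * B.
    # alpha: HxW matte with values in [0, 1].
    # foreground, background: HxWx3 images.
    a = alpha[..., None]  # add a channel axis so alpha broadcasts over RGB
    return a * foreground + (1.0 - a) * background

# Example: a matte of 0.25 blends 25% foreground with 75% background.
alpha = np.full((2, 2), 0.25)
fg = np.ones((2, 2, 3))
bg = np.zeros((2, 2, 3))
out = composite(alpha, fg, bg)  # every value is 0.25
```

Portrait matting methods, including the trimap-free one proposed here, aim to recover `alpha` given only the composited image `out`.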


References

Showing 1-10 of 65 references
Fast Deep Matting for Portrait Animation on Mobile Phone
TLDR
A real-time automatic deep matting approach for mobile devices, based on densely connected blocks and dilated convolution, designed to predict a coarse binary mask for the portrait image; the resulting automatic portrait animation system runs on mobile devices without any user interaction and achieves real-time matting at 15 fps.
Deep Automatic Portrait Matting
TLDR
An automatic image matting method for portrait images that does not need user interaction is proposed and achieves comparable results with state-of-the-art methods that require specified foreground and background regions or pixels.
Shared Sampling for Real-Time Alpha Matting
TLDR
The first real-time alpha matting technique for natural images and videos, based on the observation that, for small neighborhoods, pixels tend to share similar attributes; it achieves speedups of up to two orders of magnitude over previous techniques while producing high-quality alpha mattes.
Towards Light-Weight Portrait Matting via Parameter Sharing
TLDR
Qualitative and quantitative evaluations show that sharing the encoder is an effective way to achieve portrait matting with limited computational budgets, indicating a promising direction for applications of real-time portrait matting on mobile devices.
Highly Efficient Natural Image Matting
TLDR
An extremely lightweight model is constructed that achieves comparable performance with 1% of the parameters (344K) of large models on popular natural image matting benchmarks.
A Late Fusion CNN for Digital Matting
TLDR
Experimental results demonstrate that a deep convolutional neural network that predicts the foreground alpha matte from a single RGB image can produce high-quality alpha mattes for various types of objects and outperforms state-of-the-art CNN-based image matting methods on the human image matting task.
Is a Green Screen Really Necessary for Real-Time Human Matting?
TLDR
A light-weight matting objective decomposition network (MODNet), which can process human matting from a single input image in real time and greatly outperforms prior trimap-free methods.
KNN Matting
TLDR
The matting technique, aptly called KNN matting, capitalizes on the nonlocal principle by using K nearest neighbors (KNN) in matching nonlocal neighborhoods, and contributes a simple and fast algorithm giving competitive results with sparse user markups.
Sparse Coding for Alpha Matting
TLDR
A multi-frame graph model, as opposed to a single image as in image matting, is proposed that can be solved efficiently in closed form and outperforms the current state-of-the-art in image and video matting.
Attention-Guided Hierarchical Structure Aggregation for Image Matting
TLDR
An end-to-end Hierarchical Attention Matting Network (HAttMatting) that can predict better-structured alpha mattes from single RGB images without additional input; it introduces a hybrid loss function fusing Structural SIMilarity, Mean Squared Error, and adversarial loss to guide the network to further improve the overall foreground structure.
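The hybrid loss described in the HAttMatting summary can be sketched as a weighted sum of an SSIM term and an MSE term. The code below is a simplified illustration, not the paper's implementation: it uses a global (single-window) SSIM rather than the windowed version, and omits the adversarial term, which requires a discriminator network; the weights are placeholders.

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    # Simplified global SSIM over whole images in [0, 1]
    # (real SSIM averages over local windows).
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

def hybrid_matting_loss(pred, gt, w_ssim=0.5, w_mse=0.5):
    # Weighted fusion of (1 - SSIM) and MSE between predicted and
    # ground-truth mattes; the adversarial term is omitted here.
    mse = ((pred - gt) ** 2).mean()
    return w_ssim * (1.0 - ssim_global(pred, gt)) + w_mse * mse
```

A perfect prediction (`pred == gt`) drives both terms to zero; the SSIM term penalizes structural mismatch that per-pixel MSE alone can miss, which is the motivation the summary attributes to the hybrid design.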