Publications
PolarMask: Single Shot Instance Segmentation With Polar Representation
  • Enze Xie, Pei Sun, Ping Luo
  • Computer Science
    IEEE/CVF Conference on Computer Vision and…
  • 29 September 2019
In this paper, we introduce an anchor-box free and single shot instance segmentation method, which is conceptually simple, fully convolutional and can be used by easily embedding it into most…
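The polar representation named in the title is commonly described as encoding each instance mask as a set of rays cast from the instance center at uniform angular intervals, so a mask is regressed as a small fixed set of distances. A minimal illustrative sketch of decoding such rays into a contour polygon follows; the 36-ray count and the function name are assumptions for illustration, not the paper's exact interface:

```python
import numpy as np

def polar_rays_to_contour(center_xy, ray_lengths):
    """Decode a polar mask representation into contour vertices.

    center_xy:   (cx, cy) instance center in image coordinates.
    ray_lengths: array of n distances, one per ray, sampled at
                 uniform angles over [0, 2*pi).
    Returns an (n, 2) array of (x, y) contour points.
    """
    n = len(ray_lengths)
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    cx, cy = center_xy
    xs = cx + ray_lengths * np.cos(angles)
    ys = cy + ray_lengths * np.sin(angles)
    return np.stack([xs, ys], axis=1)

# Example: 36 rays (10-degree spacing) around a center at (50, 60),
# all of length 20, which decodes to a 36-gon approximating a circle.
contour = polar_rays_to_contour((50, 60), np.full(36, 20.0))
```

Regressing only a fixed number of scalar distances per instance is what lets such a representation sit on top of a fully convolutional, single-shot detection head.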
Norm-Based Curriculum Learning for Neural Machine Translation
TLDR
This paper aims to improve the efficiency of training an NMT model by introducing a novel norm-based curriculum learning method that uses the norm (i.e., the length or modulus) of a word embedding as a measure of sentence difficulty, model competence, and sentence weight.
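To make the core idea concrete, here is a hedged sketch in which a sentence's difficulty is taken as the mean L2 norm of its word embeddings and the curriculum presents easier sentences first; the aggregation choice, the toy embedding table, and the function names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sentence_difficulty(sentence, embeddings):
    """Score a sentence by the norms of its word embeddings.

    embeddings: dict mapping word -> vector; unseen words are skipped.
    Larger embedding norms are treated as a proxy for harder words,
    so higher scores mean harder sentences.
    """
    norms = [np.linalg.norm(embeddings[w]) for w in sentence.split() if w in embeddings]
    return float(np.mean(norms)) if norms else 0.0

# Curriculum ordering: train on low-difficulty sentences first.
rng = np.random.default_rng(0)
embeddings = {w: rng.normal(size=8) for w in "the cat sat on a mat quantum chromodynamics".split()}
corpus = ["the cat sat on a mat", "quantum chromodynamics"]
curriculum = sorted(corpus, key=lambda s: sentence_difficulty(s, embeddings))
```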
Scene Text Image Super-Resolution in the Wild
TLDR
A new Text Super-Resolution Network, termed TSRN, with three novel modules is developed; it improves recognition accuracy by over 13% for CRNN, and by nearly 9.0% for ASTER and MORAN, compared to training on synthetic SR data.
DocStruct: A Multimodal Method to Extract Hierarchy Structure in Document for General Form Understanding
TLDR
This work utilizes state-of-the-art models and designs targeted extraction modules to extract multimodal features from semantic content, layout information, and visual images, and adopts an asymmetric algorithm and negative sampling in the model.
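As a rough picture of what "asymmetric algorithm and negative sampling" can look like for link prediction between document units, the sketch below scores candidate parent-child links with two different projection matrices (asymmetric: the parent and child sides do not share weights) and trains against randomly sampled non-parents. All names, shapes, and the softmax loss are assumptions for illustration, not DocStruct's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 16, 8  # unit feature dim, projected dim
W_child = rng.normal(size=(k, d)) * 0.1   # child-side projection
W_parent = rng.normal(size=(k, d)) * 0.1  # parent-side projection (different weights: asymmetric)

def link_score(child_feat, parent_feat):
    # Project each side with its own matrix, then take a dot product.
    return float((W_child @ child_feat) @ (W_parent @ parent_feat))

def negative_sampling_loss(child, true_parent, all_units, n_neg=4):
    """Softmax loss over the true parent plus n_neg randomly sampled units.

    A real implementation would exclude the true parent from the negatives;
    this sketch keeps the sampling deliberately simple.
    """
    neg_idx = rng.choice(len(all_units), size=n_neg)
    scores = np.array([link_score(child, true_parent)]
                      + [link_score(child, all_units[i]) for i in neg_idx])
    scores -= scores.max()                        # numerical stability
    probs = np.exp(scores) / np.exp(scores).sum()
    return -np.log(probs[0])                      # true parent sits at index 0

units = rng.normal(size=(10, d))  # features of 10 document units
loss = negative_sampling_loss(units[3], units[0], units)
```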
R³ Adversarial Network for Cross Model Face Recognition
TLDR
It is shown that a face feature can be roughly deciphered back into the original face image by the reconstruction path, which may give valuable hints for improving the original face recognition models.
Understanding and Improving Lexical Choice in Non-Autoregressive Translation
TLDR
This study empirically shows that, as a side effect of training non-autoregressive translation (NAT) models on distilled data, lexical choice errors on low-frequency words are propagated from the teacher model to the NAT model, and proposes to expose the raw data to NAT models to restore the useful information about low-frequency words that is missed in the distilled data.
Meta-Curriculum Learning for Domain Adaptation in Neural Machine Translation
TLDR
Experimental results on 10 different low-resource domains show that meta-curriculum learning can improve translation performance on both familiar and unfamiliar domains.
Shared-Private Bilingual Word Embeddings for Neural Machine Translation
TLDR
Experiments demonstrate that the proposed shared-private bilingual word embeddings provide a significant performance boost over the strong baselines with dramatically fewer model parameters.
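The parameter savings come from letting aligned source and target words share part of their embedding vector while keeping the rest language-private. A toy sketch of such a lookup follows; the split sizes, the alignment map, and the function name are illustrative assumptions rather than the paper's exact scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
d_shared, d_private = 6, 4  # dims shared across languages vs. kept per-language
vocab_src = ["hund", "katze"]
vocab_tgt = ["dog", "cat"]
# Aligned word pairs share one row in the shared table.
alignment = {("hund", "dog"): 0, ("katze", "cat"): 1}
shared = rng.normal(size=(len(alignment), d_shared))
private_src = {w: rng.normal(size=d_private) for w in vocab_src}
private_tgt = {w: rng.normal(size=d_private) for w in vocab_tgt}

def embed(word, side):
    """Concatenate the shared slice (if the word is aligned) with its private slice."""
    for (s, t), row in alignment.items():
        if (side == "src" and word == s) or (side == "tgt" and word == t):
            priv = private_src[word] if side == "src" else private_tgt[word]
            return np.concatenate([shared[row], priv])
    raise KeyError(word)

v_src, v_tgt = embed("hund", "src"), embed("dog", "tgt")
# The two vectors agree on the first d_shared entries; those parameters
# are stored once instead of once per language.
assert np.allclose(v_src[:d_shared], v_tgt[:d_shared])
```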