Corpus ID: 201317624

VL-BERT: Pre-training of Generic Visual-Linguistic Representations

@article{Su2020VLBERTPO,
  title={VL-BERT: Pre-training of Generic Visual-Linguistic Representations},
  author={Weijie Su and Xizhou Zhu and Yue Cao and Bin Li and Lewei Lu and Furu Wei and Jifeng Dai},
  journal={ArXiv},
  year={2020},
  volume={abs/1908.08530}
}
We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input. Each element of the input is either a word from the input sentence or a region-of-interest (RoI) from the input image. It is designed to fit most of the visual-linguistic downstream tasks. …
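The abstract's central idea is that words and image regions are treated as elements of one unified input sequence to a standard Transformer. The sketch below illustrates one plausible way such a joint embedding could be assembled; it is a minimal sketch following the paper's high-level description, not the authors' implementation. The class name, the hidden and visual feature dimensions, the `[IMG]` placeholder token id, and the use of pre-pooled RoI features are all assumptions made for illustration.

```python
# Minimal, illustrative sketch of a VL-BERT-style joint input embedding.
# NOT the authors' code: names, dimensions, and interfaces are assumed.
import torch
import torch.nn as nn

class VLBertStyleEmbedding(nn.Module):
    def __init__(self, vocab_size=30522, hidden=768, visual_dim=2048,
                 max_len=512, num_segments=3):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden)
        self.visual_proj = nn.Linear(visual_dim, hidden)   # project RoI features
        self.segment_emb = nn.Embedding(num_segments, hidden)
        self.position_emb = nn.Embedding(max_len, hidden)

    def forward(self, token_ids, visual_feats, segment_ids, position_ids):
        # Every element (word or RoI) is the sum of four embeddings, so text
        # tokens and image regions live in one sequence the Transformer can
        # attend over without architectural changes.
        return (self.token_emb(token_ids)
                + self.visual_proj(visual_feats)
                + self.segment_emb(segment_ids)
                + self.position_emb(position_ids))

# Hypothetical toy usage: 4 word tokens followed by 2 RoIs.
emb = VLBertStyleEmbedding()
B, n_words, n_rois = 1, 4, 2
seq = n_words + n_rois
IMG_TOKEN_ID = 1  # assumed id of a special [IMG] placeholder token
token_ids = torch.cat([torch.randint(1000, 2000, (B, n_words)),
                       torch.full((B, n_rois), IMG_TOKEN_ID)], dim=1)
# Word positions share one whole-image feature; each RoI has its own feature.
img_feat = torch.randn(B, 1, 2048).expand(B, n_words, 2048)
roi_feats = torch.randn(B, n_rois, 2048)
visual_feats = torch.cat([img_feat, roi_feats], dim=1)
segment_ids = torch.cat([torch.zeros(B, n_words, dtype=torch.long),
                         torch.ones(B, n_rois, dtype=torch.long)], dim=1)
position_ids = torch.arange(seq).unsqueeze(0).expand(B, seq)
out = emb(token_ids, visual_feats, segment_ids, position_ids)  # (1, 6, 768)
```

Pairing every word with a whole-image feature and every RoI with a shared placeholder token keeps the two modalities aligned element-wise, which is what lets a BERT-style Transformer serve as a generic backbone across visual-linguistic tasks.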
    151 Citations
    • ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks (273 citations)
    • What BERT Sees: Cross-Modal Transfer for Visual Question Generation
    • Are we pretraining it right? Digging deeper into visio-linguistic pretraining (5 citations)
    • SeqDialN: Sequential Visual Dialog Networks in Joint Visual-Linguistic Representation Space
    • Large-scale Pretraining for Visual Dialog: A Simple State-of-the-Art Baseline (12 citations)
    • Visuo-Linguistic Question Answering (VLQA) Challenge
    • Learning Visual Representations with Caption Annotations (1 citation)
    • RVL-BERT: Visual Relationship Detection with Visual-Linguistic Knowledge from Pre-trained Representations

    References

    SHOWING 1-10 OF 56 REFERENCES
    • ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks (273 citations; Highly Influential)
    • Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training (88 citations)
    • VisualBERT: A Simple and Performant Baseline for Vision and Language (127 citations)
    • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (12,375 citations; Highly Influential)
    • Visual7W: Grounded Question Answering in Images (435 citations)
    • Improving Language Understanding by Generative Pre-Training (1,581 citations)
    • VideoBERT: A Joint Model for Video and Language Representation Learning (169 citations)
    • MAttNet: Modular Attention Network for Referring Expression Comprehension (188 citations; Highly Influential)