Corpus ID: 244462912

UFO: A UniFied TransfOrmer for Vision-Language Representation Learning

@article{Wang2021UFOAU,
  title={UFO: A UniFied TransfOrmer for Vision-Language Representation Learning},
  author={Jianfeng Wang and Xiaowei Hu and Zhe Gan and Zhengyuan Yang and Xiyang Dai and Zicheng Liu and Yumao Lu and Lijuan Wang},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.10023}
}
In this paper, we propose a single UniFied transfOrmer (UFO), which is capable of processing either unimodal inputs (e.g., image or language) or multimodal inputs (e.g., the concatenation of the image and the question), for vision-language (VL) representation learning. Existing approaches typically design an individual network for each modality and/or a specific fusion network for multimodal tasks. To simplify the network architecture, we use a single transformer network and enforce multi-task… 
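The abstract describes one shared transformer that handles image-only, text-only, and concatenated image-text inputs. As a rough illustration only, here is a minimal PyTorch sketch of that idea, assuming ViT-style linear patch embeddings and BERT-style word embeddings; the class name UnifiedTransformer and every hyperparameter are hypothetical, not the authors' released code.

# Minimal sketch of a single shared transformer that accepts image-only,
# text-only, or concatenated image+text token sequences. Module and
# argument names here are hypothetical, not the authors' code.
import torch
import torch.nn as nn


class UnifiedTransformer(nn.Module):
    def __init__(self, vocab_size=30522, patch_dim=3 * 16 * 16, d_model=768,
                 nhead=12, num_layers=12):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, d_model)       # BERT-style word embeddings
        self.patch_embed = nn.Linear(patch_dim, d_model)          # ViT-style linear patch projection
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)   # the one shared backbone

    def forward(self, patches=None, token_ids=None):
        parts = []
        if patches is not None:                                   # unimodal image input
            parts.append(self.patch_embed(patches))
        if token_ids is not None:                                 # unimodal text input
            parts.append(self.text_embed(token_ids))
        assert parts, "pass at least one modality"
        x = torch.cat(parts, dim=1)                               # multimodal: simple concatenation
        return self.encoder(x)


model = UnifiedTransformer()
img = torch.randn(2, 196, 3 * 16 * 16)           # 14x14 patches of a 224x224 RGB image
txt = torch.randint(0, 30522, (2, 20))           # a tokenized question
fused = model(patches=img, token_ids=txt)        # the same weights handle image, text, or both
print(fused.shape)                               # torch.Size([2, 216, 768])

The point of the sketch is that the same encoder weights are applied whichever modalities are present; the paper's multi-task pre-training objectives and task heads are omitted.

Citations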
All in One: Exploring Unified Video-Language Pre-training
TLDR: This work introduces an end-to-end video-language model, namely all-in-one Transformer, that embeds raw video and textual signals into joint representations using a unified backbone architecture and introduces a novel and effective token rolling operation to encode temporal representations from video clips in a non-parametric manner.
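The TLDR only names the token rolling operation, so the following is a loosely hedged reading of a parameter-free temporal shift: a fraction of each frame's tokens is rolled to the neighboring frame so frames exchange information without extra parameters. The rolled fraction, direction, and tensor layout are assumptions, not the paper's specification.

# Hedged illustration of a parameter-free "token rolling" idea: shift a
# fraction of each frame's tokens one step along the time axis so frames
# exchange information without any learnable parameters. The fraction and
# direction used here are assumptions.
import torch

def roll_tokens(x, ratio=0.25):
    """x: (batch, time, tokens, dim) video patch tokens."""
    n_roll = int(x.shape[2] * ratio)                    # how many tokens per frame to roll
    rolled = x[:, :, :n_roll].roll(shifts=1, dims=1)    # shift those tokens one frame forward in time
    return torch.cat([rolled, x[:, :, n_roll:]], dim=2)

video = torch.randn(2, 8, 196, 768)                     # 8 frames, 196 patch tokens each
print(roll_tokens(video).shape)                         # torch.Size([2, 8, 196, 768])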
Flamingo: a Visual Language Model for Few-Shot Learning
TLDR: It is demonstrated that a single Flamingo model can achieve a new state of the art for few-shot learning, simply by prompting the model with task-specific examples.
CommerceMM: Large-Scale Commerce MultiModal Representation Learning with Omni Retrieval
TLDR: The CommerceMM model is introduced, a multimodal model capable of providing a diverse and granular understanding of commerce topics associated with a given piece of content, and of generalizing to a wide range of tasks.
xGQA: Cross-Lingual Visual Question Answering
TLDR: This work provides xGQA, a new multilingual evaluation benchmark for the visual question answering task, and extends the established English GQA dataset to 7 typologically diverse languages, enabling us to detect and explore crucial challenges in cross-lingual visual question answering.

References

Showing 1-10 of 61 references
Unified Vision-Language Pre-Training for Image Captioning and VQA
TLDR: VLP is the first reported model that achieves state-of-the-art results on both vision-language generation and understanding tasks, as disparate as image captioning and visual question answering, across three challenging benchmark datasets: COCO Captions, Flickr30k Captions, and VQA 2.0.
Align before Fuse: Vision and Language Representation Learning with Momentum Distillation
TLDR: This paper introduces a contrastive loss to ALign the image and text representations BEfore Fusing through cross-modal attention, which enables more grounded vision and language representation learning, and proposes momentum distillation, a self-training method which learns from pseudo-targets produced by a momentum model.
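As a sketch of the momentum-distillation idea named in the TLDR (an EMA copy of the model whose softened predictions are mixed into the training targets as pseudo-labels), the snippet below uses a plain linear model as a stand-in; the EMA rate, mixing weight alpha, and loss form are illustrative assumptions rather than the paper's exact recipe.

# Hedged sketch of momentum distillation: keep an exponential-moving-average
# (EMA) copy of the model and blend its softened predictions into the
# training targets. Hyperparameters here are illustrative assumptions.
import copy
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(model, momentum_model, m=0.995):
    for p, p_m in zip(model.parameters(), momentum_model.parameters()):
        p_m.data.mul_(m).add_(p.data, alpha=1 - m)      # slow-moving teacher weights

def distilled_loss(student_logits, teacher_logits, hard_targets, alpha=0.4):
    soft = F.softmax(teacher_logits, dim=-1)            # pseudo-targets from the EMA model
    loss_soft = -(soft * F.log_softmax(student_logits, dim=-1)).sum(-1).mean()
    loss_hard = F.cross_entropy(student_logits, hard_targets)
    return (1 - alpha) * loss_hard + alpha * loss_soft  # blend hard labels and distillation

model = torch.nn.Linear(256, 100)
teacher = copy.deepcopy(model)
x, y = torch.randn(8, 256), torch.randint(0, 100, (8,))
loss = distilled_loss(model(x), teacher(x).detach(), y)
loss.backward()
ema_update(model, teacher)                              # refresh the teacher after each step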
Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training
TLDR: After pre-training on large-scale image-caption pairs, Unicoder-VL is transferred to caption-based image-text retrieval and visual commonsense reasoning with just one additional output layer, demonstrating the power of cross-modal pre-training.
LXMERT: Learning Cross-Modality Encoder Representations from Transformers
TLDR: The LXMERT (Learning Cross-Modality Encoder Representations from Transformers) framework, a large-scale Transformer model consisting of three encoders, achieves state-of-the-art results on two visual question answering datasets and shows the generalizability of the pre-trained cross-modality model.
VATT: Transformers for Multimodal Self-Supervised Learning from Raw Video, Audio and Text
TLDR: The convolution-free VATT outperforms state-of-the-art ConvNet-based architectures in the downstream tasks and sets a new record on waveform-based audio event recognition, showing the generalizability of the model despite the domain gap between videos and images.
ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, processing both visual and textual inputs in separate streams that interact through co-attentional transformer layers.
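To make the co-attentional idea concrete, here is a hedged one-layer sketch in which each stream queries the other stream's keys and values; the dimensions, number of regions, and single-layer setup are illustrative, not ViLBERT's actual configuration.

# Hedged sketch of co-attention between two streams: the visual stream
# attends over text keys/values and vice versa. Sizes are illustrative.
import torch
import torch.nn as nn

d_model, n_heads = 768, 12
vis_to_txt = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
txt_to_vis = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

visual = torch.randn(2, 36, d_model)     # e.g., 36 region features per image
text = torch.randn(2, 20, d_model)       # 20 word-piece embeddings

# visual stream queries the text stream, and vice versa
visual_out, _ = vis_to_txt(query=visual, key=text, value=text)
text_out, _ = txt_to_vis(query=text, key=visual, value=visual)
print(visual_out.shape, text_out.shape)  # (2, 36, 768) (2, 20, 768)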
Learning Transferable Visual Models From Natural Language Supervision
TLDR: It is demonstrated that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet.
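A hedged sketch of the pre-training objective this summary describes: a symmetric contrastive loss over a batch of (image, text) embeddings, where the model must pick the matching caption for each image and vice versa. The encoders are omitted, and the fixed temperature is a placeholder assumption.

# Sketch of a CLIP-style symmetric contrastive loss over a batch of paired
# image and text embeddings. The fixed temperature is an assumption; the
# original uses a learnable one and dedicated encoders.
import torch
import torch.nn.functional as F

def clip_style_loss(image_emb, text_emb, temperature=0.07):
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature     # pairwise cosine similarities
    targets = torch.arange(logits.size(0))              # matching pairs lie on the diagonal
    # symmetric cross-entropy: image-to-text and text-to-image
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

print(clip_style_loss(torch.randn(16, 512), torch.randn(16, 512)).item())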
VinVL: Making Visual Representations Matter in Vision-Language Models
TLDR: This paper develops an improved object detection model to provide object-centric representations of images, feeds the generated visual features into the Transformer-based VL fusion model OSCAR, and uses an improved approach, OSCAR+, to pre-train the VL model and fine-tune it on a wide range of downstream VL tasks.
Attention is All you Need
TLDR: A new simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, as shown by applying it successfully to English constituency parsing with both large and limited training data.
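For reference, the core operation is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V; the snippet below is a textbook rendering, not the paper's reference implementation.

# Scaled dot-product attention: weight the values by the softmax of the
# scaled query-key similarities.
import math
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v, mask=None):
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)   # similarity of each query to each key
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(scores, dim=-1)                 # attention distribution over keys
    return weights @ v                                  # weighted sum of values

q = k = v = torch.randn(2, 10, 64)                      # (batch, sequence, head_dim)
print(scaled_dot_product_attention(q, k, v).shape)      # torch.Size([2, 10, 64])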
Unified Language Model Pre-training for Natural Language Understanding and Generation
TLDR: A new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks is presented; it compares favorably with BERT on the GLUE benchmark and on the SQuAD 2.0 and CoQA question answering tasks.
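UniLM's unification of understanding and generation rests on switching self-attention masks over a single transformer: fully bidirectional, causal, or a sequence-to-sequence mask in which the source is visible to everything while the target attends only to the past. The sketch below builds those boolean masks; the exact layout is illustrative.

# Hedged sketch of the three self-attention masks behind a unified LM:
# bidirectional (understanding), causal (generation), and seq-to-seq
# (source fully visible, target causal).
import torch

def bidirectional_mask(n):
    return torch.ones(n, n, dtype=torch.bool)             # every token sees every token

def causal_mask(n):
    return torch.tril(torch.ones(n, n, dtype=torch.bool)) # token i sees tokens <= i

def seq2seq_mask(n_src, n_tgt):
    n = n_src + n_tgt
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:, :n_src] = True                                # everyone sees the full source
    mask[n_src:, n_src:] = causal_mask(n_tgt)             # target tokens see only previous targets
    return mask

print(seq2seq_mask(3, 2).int())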