Publications
Linformer: Self-Attention with Linear Complexity
TLDR
This paper demonstrates that the self-attention mechanism of the Transformer can be approximated by a low-rank matrix, and proposes a new self-attention mechanism that reduces the overall self-attention complexity from $O(n^2)$ to $O(n)$ in both time and space.
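A minimal sketch of the linear-attention idea: learned projections compress the keys and values from sequence length n down to a fixed dimension k before attention, so the score matrix is n-by-k rather than n-by-n. The function name, shapes, and single-head setup here are illustrative, not the paper's reference implementation.

```python
# Linformer-style linear attention (single head, illustrative shapes).
import torch
import torch.nn.functional as F

def linformer_attention(q, k, v, proj_k, proj_v):
    """q, k, v: (batch, n, d); proj_k, proj_v: (k_dim, n) learned projections."""
    d = q.size(-1)
    k_proj = proj_k @ k            # (batch, k_dim, d): compress length n -> k_dim
    v_proj = proj_v @ v            # (batch, k_dim, d)
    scores = q @ k_proj.transpose(-2, -1) / d ** 0.5   # (batch, n, k_dim), not (n, n)
    attn = F.softmax(scores, dim=-1)
    return attn @ v_proj           # (batch, n, d): O(n * k_dim) time and memory

batch, n, d, k_dim = 2, 1024, 64, 128
q, k, v = (torch.randn(batch, n, d) for _ in range(3))
proj_k, proj_v = torch.randn(k_dim, n), torch.randn(k_dim, n)
out = linformer_attention(q, k, v, proj_k, proj_v)   # shape (2, 1024, 64)
```

Since k_dim is a constant independent of n, both the score matrix and the projected values scale linearly in sequence length.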
Entailment as Few-Shot Learner
TLDR
The key idea of this approach is to reformulate a potential NLP task as an entailment task and then fine-tune the model with as few as 8 examples; this improves various existing SOTA few-shot learning methods by 12% and yields competitive few-shot performance with 500-times-larger models such as GPT-3.
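A minimal sketch of the entailment reformulation, scoring with an off-the-shelf MNLI model: the input text becomes the premise and each class label is turned into a hypothesis sentence. The checkpoint name and hypothesis template are illustrative, and this shows only the zero-shot scoring step; the paper's method additionally fine-tunes on a handful of labeled examples.

```python
# Reformulating classification as entailment with a pretrained NLI model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="roberta-large-mnli")

text = "The movie was a delightful surprise from start to finish."
# Each candidate label becomes the hypothesis of an entailment pair:
# premise = input text, hypothesis = "This example is <label>."
result = classifier(
    text,
    candidate_labels=["positive", "negative"],
    hypothesis_template="This example is {}.",
)
print(result["labels"][0])  # predicted label = most-entailed hypothesis
```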
Micro-Estimates of Wealth for all Low- and Middle-Income Countries
TLDR
The first micro-estimates of wealth and poverty covering the populated surface of all 135 low- and middle-income countries at 2.4 km resolution are developed, built by applying machine learning algorithms to vast and heterogeneous data from satellites, mobile phone networks, and topographic maps, as well as aggregated and de-identified connectivity data from Facebook.
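A minimal sketch of the modeling step, assuming per-tile features have already been extracted from the satellite and connectivity sources; the synthetic features, targets, and choice of regressor here are stand-ins, not the paper's exact pipeline.

```python
# Supervised regression from tile-level features to a relative wealth index.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_tiles = 1000
# Stand-ins for extracted features, e.g. nightlight intensity, road
# density, device density, elevation (all hypothetical here).
X = rng.normal(size=(n_tiles, 4))
# Stand-in for survey-derived relative wealth of each 2.4 km tile.
y = X @ np.array([0.6, 0.2, 0.3, -0.1]) + rng.normal(scale=0.5, size=n_tiles)

model = GradientBoostingRegressor()
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"cross-validated R^2: {scores.mean():.2f}")
```

The real pipeline validates against held-out household surveys; the synthetic data above only illustrates the feature-to-wealth regression structure.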
CLIP2Video: Mastering Video-Text Retrieval via Image CLIP
TLDR
This work leverages a pretrained image-language model and simplifies it into a two-stage framework that co-learns image-text alignment while enhancing temporal relations between video frames and video text, making it possible to train on comparatively small datasets.
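A minimal sketch of the two-stage idea: per-frame features from an image-text encoder, followed by a temporal module that pools them into a video embedding matched against the text embedding. The encoders below are random-weight placeholders standing in for pretrained CLIP towers, and the single transformer layer is a simplification of CLIP2Video's temporal blocks.

```python
# Two-stage video-text retrieval: frame encoding, then temporal pooling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VideoTextRetriever(nn.Module):
    def __init__(self, dim=512):
        super().__init__()
        self.frame_encoder = nn.Linear(3 * 224 * 224, dim)  # stand-in for CLIP image tower
        self.text_encoder = nn.Linear(77, dim)              # stand-in for CLIP text tower
        self.temporal = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)

    def forward(self, frames, text):
        f = self.frame_encoder(frames.flatten(2))  # (b, t, dim): per-frame embeddings
        f = self.temporal(f).mean(dim=1)           # model temporal relations, pool to (b, dim)
        s = self.text_encoder(text)                # (b, dim): text embeddings
        f, s = F.normalize(f, dim=-1), F.normalize(s, dim=-1)
        return f @ s.t()                           # (b, b) video-text similarity matrix

model = VideoTextRetriever()
frames = torch.randn(2, 12, 3, 224, 224)  # 2 clips of 12 frames each
text = torch.randn(2, 77)                 # stand-in for tokenized captions
sims = model(frames, text)                # diagonal = matched pairs
```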