SCROLLS: Standardized CompaRison Over Long Language Sequences
- Uri Shaham, Elad Segal, Omer Levy
- Computer Science · ArXiv
- 10 January 2022
SCROLLS is introduced: a suite of tasks that require reasoning over long texts, covering multiple domains, including literature, science, business, and entertainment, and made available in a unified text-to-text format to facilitate research on model architecture and pretraining methods.
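As a hedged sketch of what that unified text-to-text format looks like in practice, the snippet below loads one task, assuming the suite is hosted on the Hugging Face Hub under `tau/scrolls` with plain-text `input`/`output` fields; the config name is one of the suite's tasks but should be treated as illustrative.

```python
from datasets import load_dataset

# Config and field names reflect the Hub release; treat them as assumptions.
qasper = load_dataset("tau/scrolls", "qasper")  # QA over long research papers
example = qasper["train"][0]
print(example["input"][:300])  # long document plus the question, as one string
print(example["output"])       # target answer, also plain text
```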
Scene Graph to Image Generation with Contextualized Object Layout Refinement
- Maor Ivgi, Yaniv Benny, Avichai Ben-David, Jonathan Berant, Lior Wolf
- Computer Science · IEEE International Conference on Image Processing (ICIP)
- 2021
This work proposes a novel method that generates the entire layout description gradually to improve inter-object dependencies, and empirically shows on the COCO-Stuff dataset that this approach improves the quality of both the intermediate layout and the final image.
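A minimal sketch of the gradual-generation idea (not the authors' implementation): each object's bounding box is predicted conditioned on the objects already placed, so inter-object dependencies accumulate step by step. Module names and dimensions below are illustrative.

```python
import torch
import torch.nn as nn

class GradualLayoutRefiner(nn.Module):
    """Predict bounding boxes one object at a time, conditioned on prior boxes."""

    def __init__(self, obj_dim=128, hidden=256):
        super().__init__()
        self.context = nn.GRU(obj_dim + 4, hidden, batch_first=True)
        self.box_head = nn.Linear(hidden, 4)  # (x, y, w, h)

    def forward(self, obj_embeds):
        # obj_embeds: (batch, num_objects, obj_dim), e.g. from scene-graph nodes
        batch, n, _ = obj_embeds.shape
        boxes, h = [], None
        prev_box = torch.zeros(batch, 1, 4, device=obj_embeds.device)
        for i in range(n):
            step = torch.cat([obj_embeds[:, i : i + 1], prev_box], dim=-1)
            out, h = self.context(step, h)  # context of already-placed objects
            prev_box = self.box_head(out)   # next box refined given that context
            boxes.append(prev_box)
        return torch.cat(boxes, dim=1)      # (batch, num_objects, 4)
```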
Efficient Long-Text Understanding with Short-Text Models
- Maor Ivgi, Uri Shaham, Jonathan Berant
- Computer Science · ArXiv
- 1 August 2022
This work proposes SLED, a simple approach for processing long sequences that reuses and leverages battle-tested short-text pretrained LMs, and shows that SLED is competitive with specialized models that are up to 50x larger and require a dedicated, expensive pretraining step.
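A minimal sketch of the chunk-and-encode idea behind SLED, assuming a T5 encoder from `transformers`: split the long input into overlapping chunks, encode each chunk independently with the short-text encoder, and hand the concatenated encodings to a decoder. The chunk size and stride are illustrative, not the paper's settings.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("t5-small")
encoder = AutoModel.from_pretrained("t5-small").encoder  # short-text encoder

@torch.no_grad()
def encode_long(text: str, chunk: int = 256, stride: int = 192):
    """Encode overlapping chunks independently, then concatenate the encodings."""
    ids = tok(text, return_tensors="pt").input_ids[0]
    reps = []
    for start in range(0, len(ids), stride):
        window = ids[start : start + chunk].unsqueeze(0)  # one short-text chunk
        reps.append(encoder(input_ids=window).last_hidden_state)
    return torch.cat(reps, dim=1)  # long sequence of contextualized token states
```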
Achieving Model Robustness through Discrete Adversarial Training
- Maor Ivgi, Jonathan Berant
- Computer Science · Conference on Empirical Methods in Natural Language Processing (EMNLP)
- 11 April 2021
Surprisingly, it is found that random sampling leads to impressive gains in robustness, outperforming the commonly used offline augmentation while yielding a ~10x speedup at training time.
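A hedged sketch of the online random-sampling scheme the finding refers to: rather than materializing an augmented dataset offline, each training step draws a fresh random perturbation of each example. The toy synonym lexicon and `perturb` helper are stand-ins for any label-preserving discrete edit.

```python
import random

SYNONYMS = {"good": ["fine", "great"], "bad": ["poor", "awful"]}  # toy lexicon

def perturb(sentence: str) -> str:
    """Apply one random, label-preserving discrete edit (here: a synonym swap)."""
    words = sentence.split()
    i = random.randrange(len(words))
    words[i] = random.choice(SYNONYMS.get(words[i], [words[i]]))
    return " ".join(words)

def online_augmented_batches(dataset, batch_size=32):
    """Yield fresh perturbations per example per step -- no offline augmentation."""
    while True:
        batch = random.sample(dataset, batch_size)
        yield [(perturb(text), label) for text, label in batch]
```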
Beyond Importance Scores: Interpreting Tabular ML by Visualizing Feature Semantics
- Amirata Ghorbani, Dina Berenbaum, Maor Ivgi, Yuval Dafna, James Y. Zou
- Computer Science · Information
- 10 November 2021
This work introduces Feature Vectors, a new global interpretability method designed for tabular datasets that discovers the inherent semantic relationship among features via an intuitive feature visualization technique.
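A hedged sketch of one way the feature-semantics idea could be realized (not the paper's exact algorithm): embed each feature by how it co-occurs with other features across the trees of an ensemble, then project the embeddings to 2-D so that semantically related features land near each other in a plot.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.decomposition import PCA

def feature_vectors(X, y, feature_names):
    """Map each feature to a 2-D point; nearby points suggest related semantics."""
    forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    n = X.shape[1]
    cooc = np.zeros((n, n))
    for tree in forest.estimators_:
        splits = tree.tree_.feature                 # negative values mark leaves
        used = np.unique(splits[splits >= 0])
        cooc[np.ix_(used, used)] += 1               # features used in the same tree
    coords = PCA(n_components=2).fit_transform(cooc)
    return dict(zip(feature_names, coords))
```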
Scaling Laws Under the Microscope: Predicting Transformer Performance from Small Scale Experiments
- Maor Ivgi, Y. Carmon, Jonathan Berant
- Computer Science · ArXiv
- 13 February 2022
It is found that scaling laws emerge at finetuning time in some NLP tasks, and that they can also be exploited for debugging convergence when training large models.
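A hedged, worked example of the extrapolation step: fit a power law $L(n) = a \cdot n^{-b} + c$ to losses measured at a few small scales and predict performance at a larger one. The functional form follows common scaling-law practice; the numbers below are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, b, c):
    return a * n ** (-b) + c

sizes = np.array([1e6, 3e6, 1e7, 3e7])   # small-scale model sizes (parameters)
losses = np.array([3.9, 3.5, 3.1, 2.8])  # hypothetical measured losses

(a, b, c), _ = curve_fit(power_law, sizes, losses, p0=(10.0, 0.1, 1.0))
print(f"predicted loss at 1B params: {power_law(1e9, a, b, c):.2f}")
```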