Publications
End-to-End Bias Mitigation by Modelling Biases in Corpora
TLDR
This work proposes two learning strategies for training neural models that are more robust to dataset biases and transfer better to out-of-domain and other textual entailment datasets.
Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks
TLDR
This paper shows that one can learn adapter parameters for all layers and tasks by generating them using shared hypernetworks, which condition on task, adapter position, and layer id in a transformer model.
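A minimal sketch of the idea described above, not the authors' implementation: a single shared hypernetwork maps a learned embedding of the (task, adapter position, layer) triple to the weights of a bottleneck adapter. All names (`make_adapter`, `adapter_forward`) and dimensions here are hypothetical, and the embeddings are random stand-ins for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_bottleneck, d_embed = 16, 4, 8

# Toy stand-ins for learned embeddings of each conditioning factor.
task_emb = {"mnli": rng.normal(size=d_embed), "qqp": rng.normal(size=d_embed)}
pos_emb = {"attn": rng.normal(size=d_embed), "ffn": rng.normal(size=d_embed)}
layer_emb = {i: rng.normal(size=d_embed) for i in range(12)}

# One shared hypernetwork projects the combined embedding to all adapter
# parameters, so adapters are generated rather than stored per task/layer.
n_params = 2 * d_model * d_bottleneck  # down- and up-projection matrices
W_hyper = rng.normal(scale=0.02, size=(n_params, 3 * d_embed))

def make_adapter(task, pos, layer):
    """Generate adapter weights for a given (task, position, layer) triple."""
    z = np.concatenate([task_emb[task], pos_emb[pos], layer_emb[layer]])
    flat = W_hyper @ z
    down = flat[: d_model * d_bottleneck].reshape(d_model, d_bottleneck)
    up = flat[d_model * d_bottleneck:].reshape(d_bottleneck, d_model)
    return down, up

def adapter_forward(x, down, up):
    """Bottleneck adapter with a ReLU nonlinearity and residual connection."""
    return x + np.maximum(x @ down, 0.0) @ up

down, up = make_adapter("mnli", "ffn", layer=3)
h = adapter_forward(rng.normal(size=(2, d_model)), down, up)
```

The point of the design is parameter efficiency: only the small embeddings and the shared hypernetwork are trained, instead of separate adapter weights for every task and layer.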
Learning-Based Compressive MRI
TLDR
A learning-based framework is proposed for optimizing MRI subsampling patterns for a specific reconstruction rule and anatomy, covering both noiseless and noisy settings, together with a novel parameter-free greedy mask selection method.
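A toy sketch of what a greedy, parameter-free mask selection can look like, under simplifying assumptions that are not from the paper: the "reconstruction rule" is a zero-filled inverse FFT, the mask selects whole k-space rows, and the images are random stand-ins for training anatomy. The function name `greedy_mask` is hypothetical.

```python
import numpy as np

def greedy_mask(images, budget):
    """Greedily pick k-space rows that minimise zero-filled reconstruction error.

    At each step, add the row whose inclusion most reduces mean squared
    reconstruction error over the training images. No tuning parameters:
    the data and the reconstruction rule fully determine the mask.
    """
    n = images.shape[1]
    kspace = np.fft.fft2(images, axes=(1, 2))
    chosen = []
    for _ in range(budget):
        best_row, best_err = None, np.inf
        for r in range(n):
            if r in chosen:
                continue
            mask = np.zeros((n, 1))
            mask[chosen + [r]] = 1.0  # keep selected rows, zero the rest
            recon = np.fft.ifft2(kspace * mask, axes=(1, 2)).real
            err = np.mean((recon - images) ** 2)
            if err < best_err:
                best_row, best_err = r, err
        chosen.append(best_row)
    return sorted(chosen)

rng = np.random.default_rng(0)
train = rng.normal(size=(4, 8, 8))  # toy stand-in for training images
rows = greedy_mask(train, budget=3)
```

Because the row chosen at each step depends on the rows already selected, the resulting mask adapts to both the training anatomy and the chosen reconstruction rule.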
Simple but effective techniques to reduce biases
TLDR
This work introduces an additional lightweight bias-only model which learns dataset biases and uses its prediction to adjust the loss of the base model to reduce the biases.
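One common way to use a bias-only model's prediction to adjust the base model's loss is a product-of-experts style combination; the sketch below shows that variant only, with toy random logits, and is not presented as the paper's exact formulation.

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def debiased_loss(main_logits, bias_logits, labels):
    """Product-of-experts style loss.

    The bias-only model's log-probabilities shift the main model's logits,
    so examples the bias model already classifies confidently contribute
    less gradient to the main model, discouraging it from relying on the bias.
    """
    combined = log_softmax(main_logits + log_softmax(bias_logits))
    return -combined[np.arange(len(labels)), labels].mean()

rng = np.random.default_rng(0)
main = rng.normal(size=(5, 3))   # main model logits (batch of 5, 3 classes)
bias = rng.normal(size=(5, 3))   # bias-only model logits (e.g. a shallow model)
labels = np.array([0, 2, 1, 1, 0])
loss = debiased_loss(main, bias, labels)
```

At test time only the main model is used; the bias-only model exists purely to reshape the training signal.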
Variational Information Bottleneck for Effective Low-Resource Fine-Tuning
TLDR
This work proposes to use Variational Information Bottleneck (VIB) to suppress irrelevant features when fine-tuning on low-resource target tasks, and shows that the method successfully reduces overfitting.
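The core VIB objective can be sketched as a task loss plus a weighted KL term that compresses the learned representation; the code below is a generic illustration of that objective, not the paper's training setup, and `vib_loss` with its `beta` weight is a hypothetical helper.

```python
import numpy as np

def vib_loss(mu, log_var, task_nll, beta=1e-3):
    """Variational Information Bottleneck objective (sketch).

    The encoder outputs a Gaussian q(z|x) = N(mu, diag(exp(log_var))).
    The KL divergence to the standard-normal prior N(0, I) penalises
    information kept in z, suppressing input features irrelevant to the
    task and reducing overfitting in low-resource fine-tuning.
    """
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)
    return task_nll + beta * kl.mean()

rng = np.random.default_rng(0)
mu = rng.normal(size=(4, 8))       # encoder means for a batch of 4
log_var = rng.normal(size=(4, 8))  # encoder log-variances
loss = vib_loss(mu, log_var, task_nll=0.7)
```

The weight `beta` trades off task accuracy against compression: larger values discard more of the input representation.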
ParsiNLU: A Suite of Language Understanding Challenges for Persian
TLDR
This work introduces ParsiNLU, the first benchmark for the Persian language covering a range of language understanding tasks (reading comprehension, textual entailment, and more); it presents the first results of state-of-the-art monolingual and multilingual pre-trained language models on this benchmark and compares them with human performance.
Prompt-free and Efficient Few-shot Learning with Language Models
TLDR
Experiments demonstrate that Perfect, a simple and efficient method for few-shot fine-tuning of PLMs without relying on handcrafted prompts, also outperforms existing state-of-the-art few-shot learning methods.
Learning Entailment-Based Sentence Embeddings from Natural Language Inference
TLDR
This work proposes a simple interaction layer based on predefined entailment and contradiction scores applied directly to the sentence embeddings; it achieves results competitive with MLP-based models on natural language inference while directly representing the information needed for textual entailment.
Segment based 3D object shape priors
TLDR
A novel shape-prior formulation splits the object into multiple convex parts, resolving issues such as undesired holes and disconnected parts while also reconstructing concavities, such as the interior of a mug.
Scalable Sparse Covariance Estimation via Self-Concordance
We consider the class of convex minimization problems composed of a self-concordant function such as the logdet metric, a convex data-fidelity term h(·), and a regularizing — possibly
...