• Publications
Fairness for Robust Log Loss Classification
This work derives a new classifier from the first principles of distributional robustness, incorporating fairness criteria into a worst-case logarithmic loss minimization; the resulting predictor is a parametric exponential-family conditional distribution that resembles truncated logistic regression.
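A minimal sketch of the kind of objective this describes (notation assumed here, not taken from the paper): the predictor $\hat{P}$ minimizes, and an adversarial approximation $\check{P}$, constrained to a set $\Xi$ of distributions matching sample statistics and fairness criteria, maximizes the expected log loss:

```latex
\min_{\hat{P}}\;\max_{\check{P}\in\Xi}\;
\mathbb{E}_{x\sim P,\; y\sim \check{P}(\cdot\mid x)}
\left[-\log \hat{P}(y\mid x)\right]
```

Under moment constraints of the form $\mathbb{E}[\phi(x,y)]=\tilde{\phi}$, saddle points of such games take an exponential-family form, $\hat{P}_\theta(y\mid x)\propto\exp\!\left(\theta^\top\phi(x,y)\right)$, which is consistent with the summary's mention of a parametric exponential-family conditional distribution.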
ParsiNLU: A Suite of Language Understanding Challenges for Persian
This work introduces ParsiNLU, the first benchmark for the Persian language spanning a range of language understanding tasks (reading comprehension, textual entailment, and more), presents first results for state-of-the-art monolingual and multilingual pre-trained language models on the benchmark, and compares them with human performance.
Fair Logistic Regression: An Adversarial Perspective
A new approach to fair data-driven decision making is investigated by designing predictors with fairness requirements integrated into their core formulations, producing a novel prediction model that robustly and fairly minimizes the logarithmic loss.
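As a hedged illustration of fairness-aware log loss minimization in general, not the paper's adversarial formulation, the following sketch fits a logistic regression by gradient descent on log loss plus a demographic-parity penalty (the squared gap in mean predicted scores between two groups). All names, the penalty form, and hyperparameters are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Gradient descent on: log loss + lam * (mean-score gap)^2.

    This is a generic fairness-penalized model, shown only to make the
    'fairness integrated into the core formulation' idea concrete.
    """
    w = np.zeros(X.shape[1])
    a, b = group == 0, group == 1
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)        # log-loss gradient
        gap = p[a].mean() - p[b].mean()           # demographic-parity gap
        d = p * (1 - p)                           # sigmoid derivative
        grad_gap = (X[a] * d[a, None]).mean(0) - (X[b] * d[b, None]).mean(0)
        w -= lr * (grad_loss + lam * 2 * gap * grad_gap)
    return w

# Toy data: group 1 has a shifted feature, so an unconstrained fit
# produces a score gap that the penalty then shrinks.
rng = np.random.default_rng(1)
n = 400
group = rng.integers(0, 2, n)
X = np.c_[rng.normal(group * 0.8, 1.0, n), np.ones(n)]  # feature + bias
y = (rng.random(n) < sigmoid(2 * X[:, 0] - 0.8)).astype(float)

w_plain = fit_fair_logreg(X, y, group, lam=0.0)  # ordinary logistic fit
w_fair = fit_fair_logreg(X, y, group, lam=5.0)   # fairness-penalized fit

def gap(w):
    p = sigmoid(X @ w)
    return abs(p[group == 0].mean() - p[group == 1].mean())
```

With the penalty active, the between-group score gap is smaller than for the plain fit, at some cost in log loss; this trade-off is the usual motivation for building fairness into the objective rather than post-processing predictions.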
Robust Fairness under Covariate Shift
This work investigates fairness under covariate shift, a relaxation of the i.i.d. assumption in which the inputs (covariates) change while the conditional label distribution remains the same. It proposes an approach that obtains a predictor robust to the worst case in target performance while satisfying target fairness requirements and matching statistical properties of the source data.
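As background for the covariate-shift setting (a generic illustration, not the paper's method): a standard way to connect source and target distributions is to reweight source examples by the density ratio w(x) = p_target(x) / p_source(x), so that weighted source averages match target statistics. The Gaussians below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x_src = rng.normal(0.0, 1.0, 10_000)   # source covariates ~ N(0, 1)
mu_t = 1.0                              # target covariates ~ N(1, 1)

def density_ratio(x, mu_target=mu_t):
    """p_target(x) / p_source(x) for two unit-variance Gaussians."""
    return np.exp(mu_target * x - 0.5 * mu_target**2)

w = density_ratio(x_src)
# Self-normalized importance weighting: the weighted source mean
# approximates the target mean (here, approximately 1.0).
est = np.average(x_src, weights=w)
```

In practice the density ratio must itself be estimated, and robust formulations such as the one summarized above aim to hedge against the worst case consistent with those estimated statistics rather than trusting a single point estimate.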
Fairness for Robust Learning to Rank
This work derives a new ranking system based on the first principles of distributional robustness that provides better utility for highly fair rankings than existing baseline methods.