Corpus ID: 246294745

To what extent should we trust AI models when they extrapolate?

@article{Yousefzadeh2022ToWE,
  title={To what extent should we trust AI models when they extrapolate?},
  author={Roozbeh Yousefzadeh and Xuenan Cao},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.11260}
}
Many applications affecting human lives rely on models that have come to be known under the umbrella of machine learning and artificial intelligence. These AI models are usually complicated mathematical functions that make decisions and predictions by mapping from an input space to an output space. Stakeholders are interested in knowing the rationales behind models’ decisions; that understanding requires knowledge of the models’ functional behavior. We study this functional behavior in relation to… 
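The abstract frames an AI model as a function mapping an input space to an output space, which raises the question of when a query actually lies beyond the data the function was fitted on. One common way to make that concrete, used here purely as an illustration and not necessarily as the authors' criterion, is to flag a query point as extrapolation when it falls outside the convex hull of the training inputs. The sketch below expresses that membership test as a small linear feasibility program; the function name and the toy data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, train_X):
    """Return True if query point x lies inside the convex hull of train_X.

    Feasibility LP: find weights lambda >= 0 with sum(lambda) == 1 and
    train_X.T @ lambda == x. If such weights exist, x is a convex
    combination of training inputs (interpolation); otherwise a model
    scoring x must extrapolate.
    """
    n = train_X.shape[0]
    # Equality constraints: the convex combination equals x, weights sum to one.
    A_eq = np.vstack([train_X.T, np.ones((1, n))])
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

# Toy usage: a 2-D training set and two query points.
rng = np.random.default_rng(0)
train_X = rng.uniform(0, 1, size=(50, 2))
print(in_convex_hull(np.array([0.5, 0.5]), train_X))  # likely True: interpolation
print(in_convex_hull(np.array([2.0, 2.0]), train_X))  # False: extrapolation
```

The LP formulation avoids building the hull explicitly, so it remains usable in dimensions where an explicit hull construction (e.g. scipy.spatial.ConvexHull) becomes impractical.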

Citations

Towards an AI-based Early Warning System for Bridge Scour

Scour is the number one cause of bridge failure in many parts of the world. Considering the lack of reliability in existing empirical equations for scour depth estimation and the complexity and… 

References

SHOWING 1-10 OF 55 REFERENCES

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.

Learning Functional Relations Based on Experience With Input-Output Pairs by Humans and Artificial Neural Networks

Before making any serious decision, we normally try to anticipate how the effects of our action will vary depending on the action taken. For example, before an anaesthetist can decide the amount of… 

Are you sure about that? On the origins of confidence in concept learning

Humans possess a rich repertoire of abstract concepts about which they can often judge their confidence. These judgements help guide behaviour, but the mechanisms underlying them are still poorly… 

Why Are We Using Black Box Models in AI When We Don’t Need To? A Lesson From An Explainable AI Competition

TLDR
Discusses whether the real world of machine learning resembles the Explainable Machine Learning Challenge, in which black box models were used even though they were not needed.

Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead

  • C. Rudin, Nat. Mach. Intell., 2019
TLDR
This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black box models in criminal justice, healthcare, and computer vision.

From local explanations to global understanding with explainable AI for trees

TLDR
An explanation method for trees is presented that enables the computation of optimal local explanations for individual predictions, and the authors demonstrate their method on three medical datasets.

Learning Representations that Support Extrapolation

TLDR
A novel visual analogy benchmark is introduced that allows the graded evaluation of extrapolation as a function of distance from the convex domain defined by the training data, and a simple technique is introduced, temporal context normalization, that encourages representations that emphasize the relations between objects.

A Survey of Bias in Machine Learning Through the Prism of Statistical Parity

TLDR
This article presents a mathematical framework for the fair learning problem, specifically in the binary classification setting, and proposes to quantify the presence of bias by using the standard disparate impact index on the real and well-known adult income dataset.

A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI

TLDR
It is argued that a new data protection right, the ‘right to reasonable inferences’, is needed to help close the accountability gap currently posed by ‘high-risk inferences’, meaning inferences that are privacy-invasive or reputation-damaging and have low verifiability in the sense of being predictive or opinion-based.

How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks

TLDR
The success of GNNs in extrapolating algorithmic tasks to new data relies on encoding task-specific non-linearities in the architecture or features, and a hypothesis is suggested for which theoretical and empirical evidence is provided.
...