Corpus ID: 238259667

Trustworthy AI: From Principles to Practices

@article{Li2021TrustworthyAF,
  title={Trustworthy AI: From Principles to Practices},
  author={Bo Li and Peng Qi and Bo Liu and Shuai Di and Jingen Liu and Jiquan Pei and Jinfeng Yi and Bowen Zhou},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.01167}
}
The rapid development of Artificial Intelligence (AI) technology has enabled the deployment of various systems based on it. However, many current AI systems are found to be vulnerable to imperceptible attacks, biased against underrepresented groups, and lacking in user privacy protection. These shortcomings degrade the user experience and erode people’s trust in all AI systems. In this review, we provide AI practitioners with a comprehensive guide for building trustworthy AI systems. We first introduce the… 

Trustworthy Graph Neural Networks: Aspects, Methods and Trends
TLDR
A comprehensive roadmap for building trustworthy GNNs is proposed from the perspective of the computing technologies involved, covering robustness, explainability, privacy, fairness, accountability, and environmental well-being.
Explainable Artificial Intelligence for Bayesian Neural Networks: Towards trustworthy predictions of ocean dynamics
TLDR
A Bayesian Neural Network (BNN) is implemented, in which parameters are distributions rather than deterministic values, and novel implementations of explainable AI (XAI) techniques are applied to reveal the extent to which the BNN is suitable and/or trustworthy.
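As a rough, self-contained illustration of the core idea in the entry above (weights treated as distributions rather than point estimates), the sketch below implements a variational linear layer that re-samples its weights on every forward pass. The layer sizes, initialization, and toy usage at the bottom are assumptions for the example, not the cited paper's ocean-dynamics model.

```python
# Minimal sketch of a "Bayes by Backprop"-style linear layer: each weight is a
# Gaussian (learned mean and scale) and is re-sampled on every forward pass, so
# repeated predictions give an uncertainty estimate. Purely illustrative; not
# the architecture used in the cited ocean-dynamics paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BayesianLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        # Variational parameters: mean and (pre-softplus) scale of each weight.
        self.w_mu = nn.Parameter(torch.zeros(out_features, in_features))
        self.w_rho = nn.Parameter(torch.full((out_features, in_features), -3.0))
        self.b_mu = nn.Parameter(torch.zeros(out_features))
        self.b_rho = nn.Parameter(torch.full((out_features,), -3.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reparameterization trick: sample weights from N(mu, sigma^2).
        w_sigma = F.softplus(self.w_rho)
        b_sigma = F.softplus(self.b_rho)
        w = self.w_mu + w_sigma * torch.randn_like(w_sigma)
        b = self.b_mu + b_sigma * torch.randn_like(b_sigma)
        return F.linear(x, w, b)


if __name__ == "__main__":
    layer = BayesianLinear(8, 1)
    x = torch.randn(4, 8)
    # Multiple stochastic forward passes -> predictive mean and spread.
    samples = torch.stack([layer(x) for _ in range(50)])
    print(samples.mean(0).squeeze(), samples.std(0).squeeze())
```

The spread across repeated forward passes serves as a simple per-prediction uncertainty estimate, which is the kind of signal the cited work examines with XAI techniques.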
Which Style Makes Me Attractive? Interpretable Control Discovery and Counterfactual Explanation on StyleGAN
TLDR
A novel approach is proposed to disentangle latent-subspace semantics by exploiting existing face analysis models, e.g., face parsers and face landmark detectors, and a new perspective on explaining the behavior of a CNN classifier is offered by generating counterfactuals in the discovered interpretable latent subspaces.
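To make the counterfactual idea in the entry above concrete, here is a minimal, generic sketch: starting from a latent code, walk along a semantic direction until a classifier's decision flips, and return the edited code as the counterfactual. The generator, classifier, and direction below are toy stand-ins (random linear maps), not StyleGAN or the face-analysis models used in the cited paper.

```python
# Illustrative counterfactual search in a generative model's latent space:
# move along an (assumed) interpretable direction until a classifier flips.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 16

# Toy "generator" and "attribute classifier" used only to make the loop runnable.
G = rng.normal(size=(64, LATENT_DIM))   # latent -> fake "image" features
w_clf = rng.normal(size=64)             # linear decision boundary

def generate(z: np.ndarray) -> np.ndarray:
    return G @ z

def classifier_score(img: np.ndarray) -> float:
    return float(w_clf @ img)           # > 0 means "attribute present"

def counterfactual(z: np.ndarray, direction: np.ndarray,
                   step: float = 0.1, max_steps: int = 200) -> np.ndarray:
    """Move z along `direction` until the classifier's decision flips."""
    original_sign = np.sign(classifier_score(generate(z)))
    z_cf = z.copy()
    for _ in range(max_steps):
        z_cf += step * direction
        if np.sign(classifier_score(generate(z_cf))) != original_sign:
            return z_cf                 # edit that flips the predicted label
    return z_cf

z0 = rng.normal(size=LATENT_DIM)
semantic_dir = rng.normal(size=LATENT_DIM)  # stand-in for a discovered direction
semantic_dir /= np.linalg.norm(semantic_dir)
print(classifier_score(generate(z0)),
      classifier_score(generate(counterfactual(z0, semantic_dir))))
```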

References

SHOWING 1-10 OF 484 REFERENCES
Demographic Bias in Biometrics: A Survey on an Emerging Challenge
TLDR
The main contributions of this article are an overview of the topic of algorithmic bias in the context of biometrics, a comprehensive survey of the existing literature on biometric bias estimation and mitigation, and a discussion of the pertinent technical and social matters.
One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
TLDR
This work introduces AI Explainability 360, an open-source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics, and provides a taxonomy to help entities requiring explanations to navigate the space of explanation methods.
Software Transparency as a Key Requirement for Self-Driving Cars
TLDR
This work investigates how to pursue the elicitation and modeling of transparency as a Non-Functional Requirement (NFR) to produce self-driving cars that are more robust.
The case against teaching kids to be polite to Alexa
Trustworthy AI Inference Systems: An Industry Research View
TLDR
This work presents an industry research view on the design, deployment, and operation of trustworthy Artificial Intelligence (AI) inference systems, highlighting opportunities and challenges in combining trusted execution environments with more recent advances in cryptographic techniques to protect data in use.
Mitigating Face Recognition Bias via Group Adaptive Classifier
TLDR
Experiments show that the proposed group adaptive classifier, which applies adaptive convolution kernels and attention mechanisms to faces based on their demographic attributes, mitigates face recognition bias across demographic groups while maintaining competitive accuracy.
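As a loose illustration of the group-adaptive idea described above, the sketch below keeps one convolution kernel bank per demographic group and routes each sample through its group's kernels. The number of groups, the routing by a precomputed group label, and all tensor sizes are invented for the example; the cited GAC architecture (which also uses adaptive attention) is more involved.

```python
# Rough sketch of a group-adaptive convolution: one kernel bank per demographic
# group, with each face routed through the kernels of its (predicted) group.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupAdaptiveConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, num_groups: int = 4):
        super().__init__()
        # One 3x3 kernel bank per demographic group.
        self.kernels = nn.Parameter(torch.randn(num_groups, out_ch, in_ch, 3, 3) * 0.01)

    def forward(self, x: torch.Tensor, group_ids: torch.Tensor) -> torch.Tensor:
        # Apply each sample's group-specific kernels (loop kept simple for clarity).
        outs = [F.conv2d(x[i:i + 1], self.kernels[g], padding=1)
                for i, g in enumerate(group_ids.tolist())]
        return torch.cat(outs, dim=0)


if __name__ == "__main__":
    layer = GroupAdaptiveConv(in_ch=3, out_ch=8, num_groups=4)
    faces = torch.randn(2, 3, 32, 32)   # toy "face" images
    groups = torch.tensor([0, 3])       # e.g., output of a demographic classifier
    print(layer(faces, groups).shape)   # torch.Size([2, 8, 32, 32])
```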
Multi-view 3D Object Detection Network for Autonomous Driving
TLDR
This paper proposes Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both LiDAR point clouds and RGB images as input and predicts oriented 3D bounding boxes; it designs a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths.
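The "deep fusion" phrase in the entry above refers to mixing the per-view feature paths at several intermediate layers rather than only once at the end. The sketch below shows that pattern on toy region-wise features; the feature dimensions, number of stages, and MLP branches are assumptions for illustration and do not reproduce the published MV3D network.

```python
# Simplified "deep fusion" block in the spirit of MV3D: region-wise features
# from three views (bird's-eye LiDAR, front-view LiDAR, RGB) are averaged after
# every per-view transformation, so intermediate layers of the paths interact.
import torch
import torch.nn as nn


class DeepFusion(nn.Module):
    def __init__(self, dim: int = 256, num_layers: int = 3, num_views: int = 3):
        super().__init__()
        # One small MLP branch per view per fusion stage.
        self.stages = nn.ModuleList([
            nn.ModuleList([nn.Sequential(nn.Linear(dim, dim), nn.ReLU())
                           for _ in range(num_views)])
            for _ in range(num_layers)
        ])

    def forward(self, view_feats):
        feats = list(view_feats)
        for stage in self.stages:
            # Element-wise mean lets every path see the others' intermediate features.
            fused = torch.stack(feats, dim=0).mean(dim=0)
            feats = [branch(fused) for branch in stage]
        return torch.stack(feats, dim=0).mean(dim=0)


if __name__ == "__main__":
    # Toy region-wise features pooled from bird's-eye LiDAR, front LiDAR, and RGB.
    regions = [torch.randn(10, 256) for _ in range(3)]
    fused = DeepFusion()(regions)
    print(fused.shape)  # torch.Size([10, 256]) -> fed to box regression / classification heads
```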
The world's first artificial intelligence act: Europe's proposal to lead in human-centered AI
• 2021
Human Language Understanding & Reasoning
TLDR
These models show the first inklings of a more general form of artificial intelligence, which may lead to powerful foundation models in domains of sensory experience beyond just language.
Ethics Guidelines for Trustworthy AI
• M. Cannarsa
• The Cambridge Handbook of Lawyering in the Digital Age
• 2021
...