
A Framework for Democratizing AI

Shakkeel Ahmed, Ravi S. Mula, S. Dhavala
Machine Learning and Artificial Intelligence are considered an integral part of the Fourth Industrial Revolution. Their impact and far-reaching consequences, while acknowledged, are yet to be fully comprehended. These technologies are highly specialized, and only a few organizations and select highly trained professionals have the wherewithal, in terms of money, manpower, and might, to chart the future. However, this concentration of power can lead to marginalization, causing severe inequalities. Regulatory…
1 Citation
PySyft: A Library for Easy Federated Learning
This chapter introduces Duet, the authors' tool that makes federated learning (FL) easier for scientists and data owners, and provides a proof-of-concept demonstration of an FL workflow, including an example of training a convolutional neural network.


"Why Should I Trust You?": Explaining the Predictions of Any Classifier
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
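The local-surrogate idea that LIME formalizes can be sketched in a few lines: perturb the instance, query the black box, weight samples by proximity, and fit a weighted linear model. This is a toy illustration under assumed names (`black_box`, `explain_locally`), not the `lime` package's API or its full sampling/feature-selection machinery.

```python
import random
import math

# Hypothetical black-box classifier: we only see predictions, not internals.
def black_box(x):
    return 1.0 if x[0] * 2.0 - x[1] > 0 else 0.0

def explain_locally(f, x0, n_samples=500, width=1.0, seed=0):
    """Fit a proximity-weighted linear surrogate around x0 (the core LIME idea)."""
    rng = random.Random(seed)
    d = len(x0)
    X, y, w = [], [], []
    for _ in range(n_samples):
        z = [xi + rng.gauss(0, 1) for xi in x0]      # perturb the instance
        dist2 = sum((a - b) ** 2 for a, b in zip(z, x0))
        X.append([1.0] + z)                          # intercept + features
        y.append(f(z))                               # query the black box
        w.append(math.exp(-dist2 / (2 * width ** 2)))  # proximity kernel
    # Solve the weighted normal equations (X^T W X) beta = X^T W y
    # by Gaussian elimination with partial pivoting.
    k = d + 1
    A = [[sum(w[n] * X[n][i] * X[n][j] for n in range(n_samples))
          for j in range(k)] for i in range(k)]
    b = [sum(w[n] * X[n][i] * y[n] for n in range(n_samples)) for i in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            m = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta[1:]  # per-feature local weights (intercept dropped)

coefs = explain_locally(black_box, [0.5, 0.5])
```

The signs of the surrogate coefficients recover the black box's local behaviour: positive for the first feature, negative for the second.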
AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
AI Fairness 360 (AIF360) is a new open-source Python toolkit for algorithmic fairness, released under an Apache v2.0 license, that helps facilitate the transition of fairness research algorithms into industrial settings and provides a common framework for fairness researchers to share and evaluate algorithms.
Auto-Keras: An Efficient Neural Architecture Search System
A novel framework enabling Bayesian optimization to guide network morphism for efficient neural architecture search is proposed, and an open-source AutoML system based on the developed framework, namely Auto-Keras, is built.
Julia: A Fresh Approach to Numerical Computing
The Julia programming language and its design are introduced: a dance between specialization and abstraction, which recognizes what remains the same after computation and what is best left untouched, as it has been built by experts.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
This Perspective clarifies the chasm between explaining black boxes and using inherently interpretable models, outlines several key reasons why explainable black boxes should be avoided in high-stakes decisions, identifies challenges to interpretable machine learning, and provides several example applications where interpretable models could potentially replace black-box models in criminal justice, healthcare, and computer vision.
Ray: A Distributed Framework for Emerging AI Applications
This paper proposes an architecture that logically centralizes the system's control state using a sharded storage system and a novel bottom-up distributed scheduler; the resulting system speeds up challenging benchmarks and is a natural, performant fit for an emerging class of reinforcement learning applications and algorithms.
Explainable AI for Trees: From Local Explanations to Global Understanding
This work improves the interpretability of tree-based models through the first polynomial-time algorithm to compute optimal explanations based on game theory, and a new type of explanation that directly measures local feature-interaction effects.
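The game-theoretic quantities this work computes are Shapley values: each feature's credit is its marginal contribution averaged over all feature subsets. A brute-force sketch makes the definition concrete; the toy value function `v` and the exact enumeration below are illustrative assumptions, not the paper's polynomial-time tree algorithm, which computes the same values efficiently for tree ensembles.

```python
from itertools import combinations
from math import factorial

# Hypothetical value function: v(S) is the model's expected output when
# only the features in S are "present". Main effects of 3 and 1, plus an
# interaction of 2 when both features appear together.
def v(S):
    total = 0.0
    if 0 in S:
        total += 3.0
    if 1 in S:
        total += 1.0
    if 0 in S and 1 in S:
        total += 2.0
    return total

def shapley(v, n):
    """Exact Shapley values by enumerating all subsets (exponential time)."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (v(set(S) | {i}) - v(set(S)))
    return phi

phi = shapley(v, 2)  # the interaction credit of 2 is split evenly: [4.0, 2.0]
```

The values sum to the full model output v({0, 1}) = 6, the efficiency property that makes Shapley-based explanations additive.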
Deep neural networks are easily fooled: High confidence predictions for unrecognizable images
This work takes convolutional neural networks trained to perform well on either the ImageNet or MNIST datasets and, using evolutionary algorithms or gradient ascent, finds images that the DNNs label with high confidence as belonging to each dataset class; the resulting fooling images raise questions about the generality of DNN computer vision.
A generic framework for privacy preserving deep learning
This work details a new framework for privacy-preserving deep learning that allows one to implement complex privacy-preserving constructs such as Federated Learning, Secure Multiparty Computation, and Differential Privacy while still exposing a familiar deep learning API to the end user.
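One primitive underlying secure multiparty computation in frameworks like this is additive secret sharing: a value is split into random shares that reveal nothing individually, yet parties can compute on shares locally. A minimal sketch, with assumed helper names (`share`, `reconstruct`) rather than the framework's actual API:

```python
import random

Q = 2**61 - 1  # large prime modulus; all arithmetic is done mod Q

def share(secret, n_parties=3, rng=random.Random(0)):
    """Split an integer into n_parties additive shares summing to it mod Q."""
    shares = [rng.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)  # last share fixes the sum
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod Q."""
    return sum(shares) % Q

# Each party adds its shares of a and b locally; no party ever sees a or b,
# yet the reconstructed result is the true sum.
a, b = share(42), share(100)
c = [(x + y) % Q for x, y in zip(a, b)]
```

Addition is "free" in this scheme; multiplication of shared values requires extra protocol machinery (e.g. Beaver triples), which is part of what such frameworks abstract away.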
Deep Neural Decision Trees
This work presents Deep Neural Decision Trees (DNDT) -- tree models realised by neural networks, which can be easily implemented in NN toolkits and trained with gradient descent rather than greedy splitting.
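The key trick that makes such trees trainable by gradient descent is replacing a hard split with a differentiable "soft binning": a temperature-scaled softmax over bin logits that approaches a one-hot bin assignment as the temperature shrinks. A minimal sketch of that idea, with illustrative names and cut points (not the paper's exact formulation or any toolkit's API):

```python
import math

def soft_bin(x, cut_points, temperature=0.1):
    """Differentiable binning of a scalar feature.

    With cut points b_1 < ... < b_k, the logit for bin i is
    i*x - (b_1 + ... + b_i); a softmax over these logits, sharpened by a
    small temperature, approximates the hard bin indicator for x.
    """
    logits = [0.0]  # bin 0 has logit 0*x - 0
    cum = 0.0
    for i, b in enumerate(cut_points, start=1):
        cum += b
        logits.append((i * x - cum) / temperature)
    # Numerically stable softmax over the bin logits.
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

# x = 0.7 lies above both cut points, so mass concentrates on the last bin.
probs = soft_bin(0.7, cut_points=[0.3, 0.6])
```

Because the output is a smooth probability vector rather than a hard index, gradients flow through the split, letting the cut points themselves be learned.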