Corpus ID: 220962427

Safety design concepts for statistical machine learning components toward accordance with functional safety standards

@article{Morikawa2020SafetyDC,
  title={Safety design concepts for statistical machine learning components toward accordance with functional safety standards},
  author={Akihisa Morikawa and Yamato Matsubara},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.01263}
}
In recent years, crucial incidents and accidents have been reported due to unintended control caused by misjudgments of statistical machine learning (SML), which includes deep learning. The international functional safety standards for Electric/Electronic/Programmable (E/E/P) systems have been widely adopted to improve safety. However, most of them do not recommend using SML in safety-critical systems so far. In practice, new concepts and methods are urgently required to enable SML to be…
1 Citation

Responsible and Regulatory Conform Machine Learning for Medicine: A Survey of Technical Challenges and Solutions
TLDR
This paper surveys the technical challenges involved in creating medical machine learning systems responsibly and in conformity with existing regulations, as well as possible solutions to address these challenges.

References

SHOWING 1-9 OF 9 REFERENCES
Proposed Guidelines for the Responsible Use of Explainable Machine Learning.
TLDR
This short text presents internal definitions and a few examples before covering the proposed guidelines for explainable ML, and concludes with a seemingly natural argument for the use of interpretable models and explanatory, debugging, and disparate impact testing methods in life- or mission-critical ML systems.
An Analysis of ISO 26262: Using Machine Learning Safely in Automotive Software
TLDR
The impacts that the use of ML as an implementation approach has on the ISO 26262 safety lifecycle are analyzed, and a set of recommendations on how to adapt the standard to accommodate ML is provided.
A Lyapunov-based Approach to Safe Reinforcement Learning
TLDR
This work defines and presents a method for constructing Lyapunov functions, which provide an effective way to guarantee the global safety of a behavior policy during training via a set of local, linear constraints.
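
As a rough illustration of that constraint, here is a minimal sketch (not the authors' implementation; transition_model and lyapunov are hypothetical stand-ins for the learned dynamics model and the constructed Lyapunov function): an action is admitted only if the expected Lyapunov value of the next state stays within a slack budget of the current value.

def safe_actions(state, actions, transition_model, lyapunov, budget=0.1):
    # Keep an action only if the expected Lyapunov value of the next
    # state does not exceed the current value plus the slack budget,
    # i.e. E[L(s')] <= L(s) + budget -- a local, linear constraint.
    admissible = []
    for a in actions:
        next_states, probs = transition_model(state, a)
        expected_next = sum(p * lyapunov(s) for p, s in zip(probs, next_states))
        if expected_next <= lyapunov(state) + budget:
            admissible.append(a)
    return admissible

Restricting the policy to this admissible set at every step is what makes the safety guarantee hold during training, not only at convergence.
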
Safe Exploration in Continuous Action Spaces
TLDR
This work addresses the problem of deploying a reinforcement learning agent on a physical system such as a datacenter cooling unit or robot, where critical constraints must never be violated, and directly adds to the policy a safety layer that analytically solves an action correction formulation for each state.
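
The per-state correction has a closed form when a single linearized constraint c(s) + g(s)·a <= 0 is active; a minimal sketch under that assumption (plain numpy, not the paper's code):

import numpy as np

def safety_layer(action, g, c):
    # Project the policy's action onto the half-space c + g·a <= 0.
    # With one active linearized constraint the KKT multiplier is
    # analytic: lam = max(0, (c + g·a) / (g·g)), so no solver is needed.
    lam = max(0.0, (c + g @ action) / (g @ g + 1e-8))
    return action - lam * g

Because the correction is the closest safe action in L2 distance, the layer perturbs the policy's output only when the constraint would otherwise be violated.
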
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
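
A minimal sketch of the local-surrogate idea (not the lime library's API; predict_proba stands for any black-box classifier's probability output): perturb around the instance, weight samples by proximity, and read the explanation off a weighted linear fit. A ridge surrogate is used here for simplicity.

import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(predict_proba, x, n_samples=500, kernel_width=0.75):
    # Sample perturbations around x, weight them by an exponential
    # proximity kernel, and fit a weighted linear surrogate whose
    # coefficients serve as local feature attributions.
    rng = np.random.default_rng(0)
    Z = x + rng.normal(scale=0.1, size=(n_samples, x.shape[0]))
    y = predict_proba(Z)[:, 1]                       # target-class probability
    dist = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=weights)
    return surrogate.coef_
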
Learning Certifiably Optimal Rule Lists for Categorical Data
TLDR
The results indicate that it is possible to construct optimal sparse rule lists that are approximately as accurate as the COMPAS proprietary risk prediction tool on data from Broward County, Florida, but that are completely interpretable.
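
For context, a rule list at prediction time is just an ordered sequence of if-then clauses ending in a default; the rules below are hypothetical, not the model learned in the paper.

def rule_list_predict(record):
    # Rules are evaluated top to bottom; the first matching rule fires,
    # and the final return is the default rule.
    if record["priors"] > 3:
        return 1
    if record["age"] < 25 and record["priors"] > 1:
        return 1
    return 0
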
Prototype selection for interpretable classification
TLDR
This paper discusses a method for selecting prototypes in the classification setting (in which the samples fall into known discrete categories), and demonstrates the interpretative value of producing prototypes on the well-known USPS ZIP code digits data set and shows that as a classifier it performs reasonably well.
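
A minimal sketch of the flavor of prototype selection (a greedy simplification of the paper's set-cover formulation; eps and per_class are illustrative parameters, not the paper's):

import numpy as np

def select_prototypes(X, y, eps=1.0, per_class=3):
    # For each class, greedily keep the samples that cover the most
    # same-class neighbors within radius eps; these become prototypes.
    prototypes, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        D = np.linalg.norm(Xc[:, None, :] - Xc[None, :, :], axis=2)
        coverage = (D <= eps).sum(axis=1)
        for i in np.argsort(-coverage)[:per_class]:
            prototypes.append(Xc[i])
            labels.append(c)
    return np.array(prototypes), np.array(labels)

def nearest_prototype_predict(prototypes, labels, x):
    # Classify by the label of the nearest prototype, which is what
    # makes the resulting classifier directly inspectable.
    return labels[np.argmin(np.linalg.norm(prototypes - x, axis=1))]
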
Examples are not enough, learn to criticize! Criticism for Interpretability
TLDR
Motivated by the Bayesian model criticism framework, MMD-critic is developed, which efficiently learns prototypes and criticism, designed to aid human interpretability.
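
A minimal sketch of the prototype half of MMD-critic (greedy selection minimizing the squared MMD between the prototype set and the data under an RBF kernel; the criticism step, which picks the points the prototypes explain worst, is omitted here):

import numpy as np

def greedy_prototypes(X, m, gamma=1.0):
    # Precompute the RBF kernel matrix, then greedily add the point
    # that most reduces MMD^2(data, prototypes). The data-data term of
    # MMD^2 is constant, so only K[S,S].mean() - 2*K[S,:].mean() matters.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-gamma * sq)
    chosen = []
    for _ in range(m):
        def cost(i):
            S = chosen + [i]
            return K[np.ix_(S, S)].mean() - 2.0 * K[S, :].mean()
        best = min((i for i in range(len(X)) if i not in chosen), key=cost)
        chosen.append(best)
    return X[chosen]
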