The Care Label Concept: A Certification Suite for Trustworthy and Resource-Aware Machine Learning
@article{Morik2021TheCL,
  title   = {The Care Label Concept: A Certification Suite for Trustworthy and Resource-Aware Machine Learning},
  author  = {Katharina Morik and Helena Kotthaus and Lukas Heppe and Danny Heinrich and Raphael Fischer and Andrea Pauly and Nico Piatkowski},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2106.00512}
}
Machine learning applications have become ubiquitous. This has led to increased efforts to make machine learning trustworthy. Explainable and fair AI have already matured; they address knowledgeable users and application engineers. For those who do not want to invest time into understanding the method or the learned model, we offer care labels: easy to understand at a glance, allowing for method or model comparisons, and, at the same time, scientifically well-based. On one hand, this…
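To make the at-a-glance idea concrete, below is a minimal sketch of how a care label might be represented as a data structure. The `CareLabel` class, the criteria names, and the A-D grade scale are illustrative assumptions, not the paper's published schema.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the criteria and grade scale below are
# assumptions chosen to mirror the "at a glance" comparison idea;
# the paper's actual care label design may differ.

@dataclass
class CareLabel:
    method: str
    grades: dict = field(default_factory=dict)  # criterion -> grade "A".."D"

    def summary(self) -> str:
        return ", ".join(f"{c}: {g}" for c, g in sorted(self.grades.items()))

# Hypothetical example label for a learning method.
label = CareLabel(
    method="Markov Random Fields",
    grades={"expressivity": "A", "runtime": "B", "memory": "B"},
)
print(label.summary())  # expressivity: A, memory: B, runtime: B
```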
3 Citations
Explainable AI via Learning to Optimize
- Computer Science, ArXiv
- 2022
This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged, and uses the “learn to optimize” (L2O) methodology, wherein each inference solves a data-driven optimization problem.
The Past, Presence and Future of a Field of Science
- 2022
Towards Warranted Trust: A Model on the Relation Between Actual and Perceived System Trustworthiness
- Computer Science, Business, MuC
- 2021
This work describes how the model can be used to systematically investigate determinants that increase the match between the system’s actual trustworthiness and the user’s perceived trustworthiness, in order to achieve warranted trust.
References
Showing 1-10 of 52 references
Yes we care! - Certification for machine learning methods through the care label framework
- Computer Science, Frontiers in Artificial Intelligence
- 2022
A unified framework that certifies learning methods via care labels is proposed; it considers both the machine learning theory and a given implementation, and tests the implementation's compliance with theoretical properties and bounds.
Model Cards for Model Reporting
- Computer Science, FAT*
- 2019
This work proposes model cards, a framework that can be used to document any trained machine learning model in the application fields of computer vision and natural language processing, and provides cards for two supervised models: one trained to detect smiling faces in images, and one trained to detect toxic comments in text.
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims
- Computer Science, ArXiv
- 2020
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems.
Machine learning - a probabilistic perspective
- Computer Science, Adaptive Computation and Machine Learning series
- 2012
This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach, and is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.
A Survey of Methods for Explaining Black Box Models
- Computer Science, ACM Comput. Surv.
- 2019
A classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black box system is provided, to help researchers find the proposals most useful for their own work.
Technical Report on the CleverHans v2.1.0 Adversarial Examples Library
- Computer Science
- 2016
The core functionalities of the CleverHans library are presented, namely attacks based on adversarial examples and defenses that improve the robustness of machine learning models to these attacks.
Is neuron coverage a meaningful measure for testing deep neural networks?
- Computer Science, ESEC/SIGSOFT FSE
- 2020
The results invoke skepticism that increasing neuron coverage is a meaningful objective for generating tests for deep neural networks, and call for a new test generation technique that considers defect detection, naturalness, and output impartiality in tandem.
Adversarial examples in the physical world
- Computer Science, ICLR
- 2017
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios.
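As a rough illustration of the attack family this reference evaluates, here is a minimal sketch of the fast gradient sign method (FGSM) applied to a plain logistic-regression model. The model, names, and parameters are assumptions for illustration only; the paper attacks deep image classifiers, not this toy setup.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Shift x by eps along the sign of the loss gradient w.r.t. the input."""
    p = sigmoid(np.dot(w, x) + b)   # predicted probability of class 1
    grad_x = (p - y) * w            # d(cross-entropy)/dx for logistic regression
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

# Hypothetical usage on a random "image" flattened to a vector.
rng = np.random.default_rng(0)
x = rng.random(784)                 # toy input with values in [0, 1]
w, b, y = rng.normal(size=784), 0.0, 1.0
x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
```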
Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
- Computer Science, FAT*
- 2020
The proposed auditing framework is intended to contribute to closing the accountability gap in the development and deployment of large-scale artificial intelligence systems by embedding a robust process to ensure audit integrity.
Once for All: Train One Network and Specialize it for Efficient Deployment
- Computer Science, ICLR
- 2020
This work proposes to train a once-for-all (OFA) network that supports diverse architectural settings, decoupling training and search to reduce cost, and proposes a novel progressive shrinking algorithm, a generalized pruning method that reduces the model size across many more dimensions than conventional pruning.