Technology readiness levels for machine learning systems

@article{Lavin2020TechnologyRL,
  title={Technology readiness levels for machine learning systems},
  author={Alexander Lavin and Ciar{\'a}n M. Gilligan-Lee and Alessya Visnjic and Siddha Ganju and Dava Newman and Atilim Gunes Baydin and Sujoy Ganguly and Danny B. Lange and Ajay Sharma and Stephan Zheng and Eric P. Xing and Adam Gibson and James Parr and Chris Mattmann and Yarin Gal},
  journal={Nature Communications},
  year={2022},
  volume={13}
}
The development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and means-to-an-end. Lack of diligence can lead to technical debt, scope creep and misaligned objectives, model misuse and failures, and expensive consequences. Engineering systems, on the other hand, follow well-defined processes and testing standards to streamline development for high-quality, reliable results. The extreme is spacecraft systems, with mission… 
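
To make the gating idea concrete, below is a minimal Python sketch of a readiness-level ladder with gated promotion. The level names and the MLSystem/promote structure are illustrative assumptions loosely following NASA's 1-9 TRL scale, not the paper's actual specification (the paper adapts and extends the scale specifically for ML systems):

```python
from dataclasses import dataclass, field
from enum import IntEnum


class TRL(IntEnum):
    """Illustrative readiness levels, loosely following NASA's 1-9 scale."""
    BASIC_RESEARCH = 1
    CONCEPT_FORMULATED = 2
    PROOF_OF_CONCEPT = 3
    LAB_VALIDATION = 4
    RELEVANT_ENV_VALIDATION = 5
    PROTOTYPE_DEMO = 6
    OPERATIONAL_DEMO = 7
    QUALIFIED_SYSTEM = 8
    MISSION_PROVEN = 9


@dataclass
class MLSystem:
    name: str
    level: TRL = TRL.BASIC_RESEARCH
    passed_reviews: set = field(default_factory=set)

    def promote(self) -> TRL:
        """Advance one level, but only if the review gating the current
        level has been signed off (the 'gated review' part of the process)."""
        if self.level not in self.passed_reviews:
            raise RuntimeError(
                f"{self.name}: review for {self.level.name} not passed; "
                "cannot promote"
            )
        self.level = TRL(self.level + 1)
        return self.level


system = MLSystem("wildfire-detector")
system.passed_reviews.add(TRL.BASIC_RESEARCH)
print(system.promote())  # TRL.CONCEPT_FORMULATED
```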

Challenges in Deploying Machine Learning: A Survey of Case Studies

By mapping the challenges found in case studies to the steps of the machine learning deployment workflow, the survey shows that practitioners face issues at each stage of the deployment process.

Learnings from Frontier Development Lab and SpaceML - AI Accelerators for NASA and ESA

A case study of the Frontier Development Lab (FDL), an AI accelerator run as a public-private partnership with NASA and ESA, examines how FDL produces successful interdisciplinary and interorganizational research projects, with progress measured through NASA's Technology Readiness Levels.

mlpack 4: a fast, header-only C++ machine learning library

The mlpack machine learning library has been significantly refactored and redesigned to facilitate an easier prototyping-to-deployment pipeline, including bindings to other languages that allow prototyping to be seamlessly performed in environments other than C++.
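
As a rough illustration of that prototyping-to-deployment pipeline, a model can be fit through mlpack's Python bindings and the same underlying C++ implementation reused later in production. The function-style call and the training/training_responses/output_model parameter and key names below follow mlpack's auto-generated binding conventions as I understand them; treat them as assumptions to verify against the current documentation:

```python
import numpy as np
import mlpack

# Toy regression data: 100 samples, 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

# Train; mlpack bindings return a dict of named outputs.
result = mlpack.linear_regression(training=X, training_responses=y)
model = result['output_model']

# Predict on new points by passing the serialized model back in.
X_new = rng.normal(size=(5, 3))
preds = mlpack.linear_regression(input_model=model, test=X_new)
print(preds['output_predictions'])
```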

Space ML: Distributed Open-source Research with Citizen Scientists for the Advancement of Space Technology for NASA

A short case study of Space ML, an extension of the Frontier Development Lab, an AI accelerator for NASA, which distributes open-source research and invites volunteer citizen scientists to take part in the development and deployment of high-social-value products at the intersection of space and AI.

Trustworthy AI: From Principles to Practices

This review provides AI practitioners with a comprehensive guide for building trustworthy AI systems and introduces the theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, and accountability.

Space Trusted Autonomy Readiness Levels

Technology Readiness Levels are a mainstay for organizations that fund, develop, test, acquire, or use technologies, providing a standardized assessment of a technology's maturity.

Diagnostic quality model (DQM): an integrated framework for the assessment of diagnostic quality when using AI/ML

A conceptual diagnostic quality framework is presented that can help specify and communicate the key implications of AI/ML solutions in laboratory diagnostics; the model is also compared to existing quality management systems.

Design Considerations Towards AI-Driven Co-Processor Accelerated Database Management

This short paper proposes a series of seven ideal design characteristics for AI-driven heterogeneous processing systems, and makes the case for revisiting the traditional Mariposa system to consider its market concepts as a useful starting point for new system designs to support the identified characteristics.

References

Technology Readiness Levels for AI & ML

The Technology Readiness Levels for ML (TRL4ML) framework defines a principled process to ensure robust systems while remaining streamlined for ML research and product development, including key distinctions from traditional software engineering.

Software Engineering for Machine Learning: A Case Study

Saleema Amershi, A. Begel, T. Zimmermann et al. In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering in Practice (ICSE-SEIP), 2019.
A study observing software teams at Microsoft as they develop AI-based applications finds that these teams have integrated the ML workflow into preexisting, well-evolved, Agile-like software engineering processes, providing insights into several essential engineering challenges that organizations may face in creating large-scale AI solutions for the marketplace.

Towards Accountability for Machine Learning Datasets: Practices from Software Engineering and Infrastructure

A rigorous framework for dataset development transparency that supports decision-making and accountability is introduced, which uses the cyclical, infrastructural and engineering nature of dataset development to draw on best practices from the software development lifecycle.
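
One concrete way to operationalize that kind of transparency is to version datasets with content hashes and an append-only audit trail. The sketch below is my own minimal illustration under an assumed record schema and ledger format, not the framework from the paper:

```python
import hashlib
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class DatasetRecord:
    """Provenance entry for one dataset version (illustrative schema)."""
    name: str
    version: str
    sha256: str
    created_at: float
    source: str
    approved_by: str


def fingerprint(path: str) -> str:
    """Content hash so any silent change to the data is detectable."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def log_version(path: str, name: str, version: str,
                source: str, approved_by: str,
                ledger: str = "dataset_ledger.jsonl") -> DatasetRecord:
    """Append a provenance record to a JSONL ledger and return it."""
    record = DatasetRecord(name, version, fingerprint(path),
                           time.time(), source, approved_by)
    with open(ledger, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```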

Towards Compliant Data Management Systems for Healthcare ML

The objective is to design tools that detect and track sensitive data across machines and users throughout the life cycle of a project, prioritizing efficiency, consistency, and ease of use.
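
A minimal sketch of the core idea, assuming a simple tag-and-audit scheme (the field tags, purpose check, and log format are hypothetical, not the paper's design): sensitive columns are declared once, and every read of them is checked and logged.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("phi_audit")

# Columns declared sensitive once, at ingestion time (hypothetical tags).
SENSITIVE = {"patient_id", "date_of_birth", "diagnosis_code"}


def read_columns(table: dict, columns: list, user: str, purpose: str) -> dict:
    """Return requested columns, auditing any access to sensitive ones."""
    touched = SENSITIVE & set(columns)
    if touched and not purpose:
        raise PermissionError(f"{user} must state a purpose to read {touched}")
    for col in touched:
        audit.info("user=%s column=%s purpose=%s at=%s",
                   user, col, purpose,
                   datetime.now(timezone.utc).isoformat())
    return {c: table[c] for c in columns}


table = {"patient_id": [1, 2], "age": [34, 57], "diagnosis_code": ["A10", "B20"]}
rows = read_columns(table, ["age", "diagnosis_code"], user="alice",
                    purpose="model-training")
```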

Understanding and Visualizing Data Iteration in Machine Learning

This work designs a collection of interactive visualizations and integrates them into a prototype, Chameleon, that lets users compare data features, training/testing splits, and performance across data versions and identifies opportunities for future data iterations.
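
The kind of across-version comparison such tooling supports can be approximated numerically. Here is a small sketch (my own, not the prototype's code) that flags features whose distributions drift between two data versions, using a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp


def drifted_features(v1: dict, v2: dict, alpha: float = 0.01) -> list:
    """Compare shared features across two dataset versions; return the
    names whose distributions differ significantly (two-sample KS test)."""
    flagged = []
    for name in v1.keys() & v2.keys():
        stat, p = ks_2samp(v1[name], v2[name])
        if p < alpha:
            flagged.append((name, stat, p))
    return flagged


rng = np.random.default_rng(1)
version_a = {"age": rng.normal(40, 10, 1000),
             "income": rng.lognormal(10, 1, 1000)}
version_b = {"age": rng.normal(45, 10, 1000),   # shifted: should be flagged
             "income": rng.lognormal(10, 1, 1000)}
print(drifted_features(version_a, version_b))
```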

Hidden Technical Debt in Machine Learning Systems

It is found that massive ongoing maintenance costs are common in real-world ML systems, and several ML-specific risk factors to account for in system design are explored.
...