STARdom: an architecture for trusted and secure human-centered manufacturing systems

Jože M. Rožanec, Patrik Zajec, Klemen Kenda, Inna Novalija, Blaž Fortuna, Dunja Mladenic, Entso Veliou, Dimitrios Papamartzivanos, Thanassis Giannetsos, Sofia-Anna Menesidou, Rubén Alonso, Nino Cauli, Diego Reforgiato Recupero, Dimosthenis Kyriazis, Georgios Sofianidis, Spyros Theodoropoulos, John Soldatos

There is no single architecture specification that addresses the needs of trusted and secure Artificial Intelligence systems with humans in the loop, such as the human-centered manufacturing systems at the core of the evolution towards Industry 5.0. To realize this, we propose an architecture that integrates forecasting and Explainable Artificial Intelligence, supports collecting users' feedback, and uses Active Learning and Simulated Reality to enhance forecasts and provide decision-making…

Human-Centric Artificial Intelligence Architecture for Industry 5.0 Applications

This work proposes an architecture that integrates Active Learning, Forecasting, Explainable Artificial Intelligence, Simulated Reality, decision-making, and users' feedback, focusing on synergies between humans and machines, and aligns with the Big Data Value Association Reference Architecture Model.

Towards Robustifying Image Classifiers against the Perils of Adversarial Attacks on Artificial Intelligence Systems

This work introduces an AI architecture augmented with adversarial examples and defense algorithms to safeguard AI systems and make them more secure and reliable.

Topological Approach for Mapping Technologies in Reference Architectural Model Industrie 4.0 (RAMI 4.0)

This work aims to create a concrete, yet universal, application-oriented model that fosters the widespread adoption of RAMI 4.0 in practice, supports further research and amendments, and hence facilitates the implementation of Industrie 4.0.

Concrete Problems in AI Safety

A list of five practical research problems related to accident risk is presented, categorized according to whether the problem originates from having the wrong objective function, an objective function that is too expensive to evaluate frequently, or undesirable behavior during the learning process.

A Multi-layered Approach for Tailored Black-Box Explanations

A solution based on a multi-layered approach is presented for providing explanation methods for algorithmic decision systems, allowing users to express their requests for explanations at different levels of abstraction.

Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics

This paper presents a visual analytics framework for explaining and exploring model vulnerabilities to adversarial attacks and employs a multi-faceted visualization scheme designed to support the analysis of data poisoning attacks from the perspective of models, data instances, features, and local structures.

The Digital Shopfloor: Industrial Automation in the Industry 4.0 Era

SHERLOCK: Simple Human Experiments Regarding Locally Observed Collective Knowledge

The design of human-machine conversation experiments that support the evaluation of the context-aware approach in coalition decision making at or near the network edge is described.

Explainable Demand Forecasting: A Data Mining Goldmine

Demand forecasting is a crucial component of demand management. Value is provided to the organization through accurate forecasts and through insights into the reasons driving the forecasts.

Natural multimodal communication for human–robot collaboration

This article explains in detail how the proposed semantic approach for multimodal interaction between humans and industrial robots has been implemented in two real industrial cases in which a robot and a worker collaborate in assembly and deburring operations.

Learning from Explanations and Demonstrations: A Pilot Study

It is argued that explainability methods, in particular methods that model the recipient of an explanation, might help increase sample efficiency, and the relationship between explainability and knowledge transfer in reinforcement learning is examined.

Adversarial Examples: Opportunities and Challenges

The concept, cause, characteristics, and evaluation metrics of adversarial examples (AEs) are introduced, and a survey of state-of-the-art AE generation methods, with a discussion of their advantages and disadvantages, is given.