Coordinated Multi-Robot Exploration Under Communication Constraints Using Decentralized Markov Decision Processes
TLDR
This paper extends the DVF methodology to address full local observability, limited information sharing, and communication breaks, and applies it to a real-world multi-robot exploration task in which each robot locally computes a strategy that minimizes interactions between robots and maximizes the team's space coverage, even under communication constraints.
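To make the communication-constraint idea concrete, here is a minimal sketch (not the paper's implementation) of how a robot might keep planning with a distributed value function when the link drops: it caches the last value function received from each teammate and reuses those stale copies. Names such as CommAwarePlanner, on_message, and update_fn are illustrative assumptions.

```python
import numpy as np

class CommAwarePlanner:
    """Illustrative sketch: a robot keeps planning against teammates'
    last-received value functions when the communication link is down."""

    def __init__(self, robot_id, n_states):
        self.robot_id = robot_id
        self.V = np.zeros(n_states)   # this robot's own value function
        self.last_seen = {}           # teammate id -> last value function received

    def on_message(self, sender_id, V_j):
        # Cache the teammate's value function whenever a message gets through.
        self.last_seen[sender_id] = np.asarray(V_j)

    def plan_step(self, update_fn, link_up):
        # Plan against the most recent teammate values available;
        # if the link is broken, stale copies are used instead of nothing.
        teammate_values = list(self.last_seen.values())
        self.V = update_fn(self.V, teammate_values)
        # Only broadcast the updated values when communication is available.
        return {"sender": self.robot_id, "V": self.V} if link_up else None
```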
Toward a transnational history of the social sciences.
TLDR
It is argued that a transnational history of the social sciences may be fruitfully understood in terms of three general mechanisms that have structured the transnational flows of people and ideas in decisive ways: the functioning of international scholarly institutions, the transnational mobility of scholars, and the politics of transnational exchange of nonacademic institutions.
Partially Observable Markov Decision Process for Managing Robot Collaboration with Human
TLDR
A new framework is presented for controlling a robot that collaborates with a human to accomplish a common mission, along with preliminary results from solving the POMDP model with standard optimal algorithms, as a baseline for comparison with state-of-the-art and future approximate algorithms.
Distributed value functions for multi-robot exploration
TLDR
This paper addresses the problem of exploring an unknown area with a team of autonomous robots using decentralized decision-making techniques, modeled as a set of individual Decentralized Markov Decision Processes (Dec-MDPs) whose interactions are captured in a distributed value function.
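As a rough illustration of the distributed-value-function idea (a simplified sketch, not the paper's exact update rule; the scalar penalty weight f and the plain additive teammate term are assumptions), each robot performs a Bellman backup in which states that teammates already value highly are made less attractive:

```python
import numpy as np

def dvf_backup(P, R, V_i, teammate_values, f=0.3, gamma=0.95):
    """One simplified distributed-value-function backup for robot i.

    P: transition probabilities, shape (S, A, S)
    R: rewards for (s, a, s') triples, shape (S, A, S)
    V_i: robot i's current value function, shape (S,)
    teammate_values: list of teammates' value functions, each shape (S,)
    f: weight of the penalty on states valued by teammates (assumed scalar)
    """
    # Subtracting teammates' values makes robot i avoid areas the others
    # already intend to cover, which spreads the team over the map.
    penalty = f * sum(teammate_values) if teammate_values else np.zeros_like(V_i)
    target = V_i - penalty                              # penalized successor values
    expected_reward = np.einsum("sat,sat->sa", P, R)    # E[r | s, a]
    Q = expected_reward + gamma * (P @ target)          # shape (S, A)
    return Q.max(axis=1)                                # greedy backup
```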
Human-robot collaboration for a shared mission
TLDR
A model is presented that gives the robot the ability to build a belief over human intentions in order to predict the human's goals, and integrates this prediction into a Partially Observable Markov Decision Process (POMDP) model so that the robot can make the most appropriate and flexible decisions.
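For context, a belief of this kind is typically maintained with the standard POMDP Bayes filter, b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s). The sketch below shows that generic update with assumed array shapes, not the paper's specific human-intention model.

```python
import numpy as np

def belief_update(b, a, o, T, O):
    """Generic POMDP belief update (Bayes filter).

    b: current belief over states, shape (S,)
    a: index of the action just taken
    o: index of the observation just received
    T: transition model, T[a, s, s'] = Pr(s' | s, a), shape (A, S, S)
    O: observation model, O[a, s', o] = Pr(o | s', a), shape (A, S, n_obs)
    """
    predicted = b @ T[a]                    # prediction: sum_s b(s) T(s'|s,a)
    unnormalized = predicted * O[a][:, o]   # correction: weight by observation likelihood
    return unnormalized / unnormalized.sum()
```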
A Practical Framework for Robust Decision-Theoretic Planning and Execution for Service Robots
TLDR
This paper presents a practical framework, based on a decision-theoretic formalism, for generating and executing robust plans for service robots; it has been implemented and successfully tested on service robots interacting with non-expert users in public environments.
A Decision-Theoretic Approach to Cooperative Control and Adjustable Autonomy
TLDR
A decision-theoretic approach is presented for computing an optimal plan that tells the AS what actions to perform as well as when to request SU attention or transfer control to the SU.
Using Markov Decision Processes to define an adaptive strategy to control the spread of an animal disease
TLDR
This paper illustrates how to use a Markov Decision Process (MDP) to compute an adaptive strategy that depends on the pathogen's spread within a group of farms, with only one decision-maker for the group.
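A standard way to compute such a strategy is value iteration on the MDP. The sketch below is the textbook algorithm with assumed array shapes; the disease-specific states, actions, and rewards are not reproduced here.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Textbook value iteration.

    P: transition probabilities, shape (S, A, S)
    R: expected immediate rewards, shape (S, A)
    Returns the optimal value function and a greedy policy.
    """
    S, A, _ = P.shape
    V = np.zeros(S)
    while True:
        Q = R + gamma * (P @ V)          # Q-values for every (state, action)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new, Q.argmax(axis=1)       # optimal values and greedy policy
```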
Automated Medical Diagnosis with Fuzzy Stochastic Models: Monitoring Chronic Diseases
TLDR
An automated system monitors a patient population, detecting anomalies both in instantaneous data and in their temporal evolution so that it can alert physicians, allowing physicians to spend comparatively more time with patients who need their services.