Adaptive Multiple Resources Consumption Control for an Autonomous Rover

@inproceedings{Gloannec2008AdaptiveMR,
  title={Adaptive Multiple Resources Consumption Control for an Autonomous Rover},
  author={Simon Le Gloannec and A. Mouaddib and F. Charpillet},
  booktitle={EUROS},
  year={2008}
}
Resource consumption control is crucial in the autonomous rover context. Most of the time, resource consumption is probabilistic. At execution time, the rover has to adapt its resource consumption in order to keep more resources for important tasks or to avoid failure. Progressive processing is a model that describes tasks that can be performed in several ways; it therefore allows the agent to adapt and control its resource consumption during the mission. The resource control is…
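As a rough illustration only (not the paper's implementation), the Python sketch below casts progressive processing resource control as a small MDP: the state is the current task index plus a discretized remaining resource level, the actions are alternative execution levels with stochastic resource costs, and value iteration yields a policy that trades expected quality against the risk of exhausting resources. Level names, costs, rewards, and probabilities are hypothetical.

# Hedged sketch, not the paper's code: choosing among alternative execution
# levels of progressive processing tasks under stochastic resource costs.
# Each level: (quality reward, {resource cost: probability}).  Numbers are made up.
LEVELS = {
    "skip":   (0.0, {0: 1.0}),
    "coarse": (1.0, {2: 0.7, 3: 0.3}),
    "full":   (3.0, {4: 0.5, 6: 0.5}),
}
MAX_RESOURCE = 10        # discretized remaining resource units
FAIL_PENALTY = -5.0      # penalty for running out of resources mid-task

def value_iteration(n_tasks=3, eps=1e-6):
    """V[(t, r)] = best expected reward from task t with r resource units left."""
    V = {(t, r): 0.0 for t in range(n_tasks + 1) for r in range(MAX_RESOURCE + 1)}
    while True:
        delta = 0.0
        for t in range(n_tasks - 1, -1, -1):          # backward over tasks
            for r in range(MAX_RESOURCE + 1):
                best = max(
                    sum(p * (FAIL_PENALTY if c > r else q + V[(t + 1, r - c)])
                        for c, p in costs.items())
                    for q, costs in LEVELS.values())
                delta = max(delta, abs(best - V[(t, r)]))
                V[(t, r)] = best
        if delta < eps:
            return V

def greedy_level(V, t, r):
    """Execution level the policy picks for task t with r resource units left."""
    def expected(level):
        q, costs = LEVELS[level]
        return sum(p * (FAIL_PENALTY if c > r else q + V[(t + 1, r - c)])
                   for c, p in costs.items())
    return max(LEVELS, key=expected)

if __name__ == "__main__":
    V = value_iteration()
    for r in (2, 4, 8):
        print(f"remaining resources {r}: first task -> {greedy_level(V, 0, r)}")

Running it prints the level chosen for the first task at a few resource levels, showing how the selected way of performing a task shifts with the resources that remain.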
Citations

Navigation Method Selector for an Autonomous Explorer Rover with a Markov Decision Process
An automatic navigation method selector using the Markov Decision Process (MDP) framework is proposed; the navigation methods are modelled in the MDP transition functions.
Vector-Value Markov Decision Process for multi-objective stochastic path planning
A. Mouaddib, Int. J. Hybrid Intell. Syst., 2012
This paper considers the problem of path planning in stochastic environments where the length of the path is not the only criterion, formalizes it as a multi-objective decision-theoretic path planning problem, and transforms the latter into a Vector-Valued Markov Decision Process (2VMDP).
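To make the vector-valued notion concrete, here is a hedged Python toy (not the cited paper's algorithm): each state keeps a set of Pareto-nondominated cost vectors, for example (path length, risk), instead of a single scalar value. Transitions are deterministic to keep the example short, and the graph, costs, and state names are hypothetical; the paper treats the stochastic case.

def dominates(u, v):
    """u Pareto-dominates v if it is <= everywhere and < somewhere (costs)."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def pareto_front(vectors):
    """Keep only the nondominated cost vectors."""
    return [v for v in vectors
            if not any(dominates(u, v) for u in vectors if u != v)]

# Hypothetical graph: state -> list of (successor, (length, risk)) edges.
EDGES = {
    "A": [("B", (1.0, 0.8)), ("C", (2.0, 0.1))],
    "B": [("G", (1.0, 0.5))],
    "C": [("G", (1.0, 0.1))],
    "G": [],                                    # goal
}

def backward_induction():
    """Nondominated cost vectors from each state to the goal (acyclic graph)."""
    V = {"G": [(0.0, 0.0)]}
    for s in ("B", "C", "A"):                   # reverse topological order
        candidates = [tuple(c + w for c, w in zip(cost, tail))
                      for succ, cost in EDGES[s]
                      for tail in V[succ]]
        V[s] = pareto_front(candidates)
    return V

if __name__ == "__main__":
    for s, front in backward_induction().items():
        print(s, "->", front)

From state A the two routes end up with cost vectors (2.0, 1.3) and (3.0, 0.2); neither dominates the other, so both survive, which is exactly why a scalar value function is not enough.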

References

Showing 1-10 of 11 references
Meta-level control under uncertainty for handling multiple consumable resources of robots
An approach to controlling the operation of an autonomous rover under multiple resource constraints, combining decomposition of a large MDP into smaller ones, compression of the state space by exploiting characteristics of the multiple-resource constraints, construction of local policies for the decomposed MDPs using state-space discretization and resource compression, and recomposition of the local policies into a near-optimal global policy.
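As a loose sketch of the resource-compression step only (hypothetical; the cited approach also decomposes the large MDP and recomposes local policies), the Python snippet below maps several exact resource levels to a small tuple of coarse bins, which is the abstract state a local policy would condition on. Resource names, ranges, and bin counts are made up.

from itertools import product

# Hypothetical resources: name -> (min, max, number of coarse bins).
RESOURCES = {
    "energy_Wh": (0.0, 120.0, 4),
    "time_s":    (0.0, 3600.0, 6),
    "memory_MB": (0.0, 512.0, 2),
}

def compress(levels):
    """Map exact resource levels to one coarse bin index per resource."""
    state = []
    for name, (lo, hi, bins) in RESOURCES.items():
        x = min(max(levels[name], lo), hi)
        idx = min(int((x - lo) / (hi - lo) * bins), bins - 1)
        state.append(idx)
    return tuple(state)

def compressed_space():
    """Enumerate the whole compressed resource space a local MDP would range over."""
    return list(product(*(range(b) for _, _, b in RESOURCES.values())))

if __name__ == "__main__":
    print("compressed state:",
          compress({"energy_Wh": 75.0, "time_s": 900.0, "memory_MB": 300.0}))
    print("compressed space size:", len(compressed_space()))   # 4 * 6 * 2 = 48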
Decision-Theoretic Control of Planetary Rovers
Two decision-theoretic approaches to maximizing the productivity of planetary rovers are described: one based on adaptive planning and the other on hierarchical reinforcement learning.
Planning with Continuous Resources in Stochastic Domains
This work considers the problem of optimal planning in stochastic domains with resource constraints, where resources are continuous and the choice of action at each step may depend on the current resource level, and proposes an algorithm that searches a hybrid state space modeled with both discrete and continuous state variables.
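A toy Python illustration of why the action choice has to depend on the continuous resource level (a Monte-Carlo simulation for intuition, not the cited planning algorithm): with a continuous battery variable and stochastic consumption, the preferred action flips as the level changes. Actions, costs, and rewards are invented.

import random

def expected_reward(action, battery, trials=20000):
    """Monte-Carlo estimate of the reward of taking `action` at this battery level."""
    total = 0.0
    for _ in range(trials):
        if action == "long_traverse":
            cost = random.gauss(6.0, 2.0)        # continuous, stochastic energy drain
            total += 10.0 if battery - cost > 0 else -20.0   # big payoff, big risk
        else:                                     # "short_hop"
            cost = random.gauss(2.0, 0.5)
            total += 3.0 if battery - cost > 0 else -20.0
    return total / trials

if __name__ == "__main__":
    for level in (3.0, 6.0, 12.0):
        best = max(("long_traverse", "short_hop"),
                   key=lambda a: expected_reward(a, level))
        print(f"battery {level:>5.1f}: prefer {best}")

With the numbers above the short hop is preferred at low battery levels and the long traverse only once the battery comfortably exceeds its expected drain, so any optimal policy must branch on the continuous variable.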
Adaptive Control of Acyclic Progressive Processing Task Structures
The progressive processing model is examined as a way to control the operation of an autonomous rover operating under tight resource constraints, and it is shown to provide a practical approach to building an adaptive controller for this application.
Decision-Theoretic Military Operations Planning
This paper shows that problems with such features can be successfully approached by real-time heuristic search algorithms operating on a formulation of the problem as a Markov decision process.
Optimal Scheduling of Dynamic Progressive Processing
A new approach to scheduling the processing units is introduced, constructing and solving a particular Markov decision problem to find an optimal policy (or schedule); it offers a significant improvement over existing heuristic scheduling techniques.
Advances in Plan-Based Control of Robotic Agents
Contributions include "Plan-Based Multi-robot Cooperation for Autonomous Soccer Robots: Preliminary Report" and "Learning How to Combine Sensory-Motor Modalities for a Robust Behavior".
Knowledge-Based Anytime Computation
The model of progressive reasoning presented here is based on a hierarchy of reasoning units that allows for gradual improvement of decision quality in a predictable manner, an important step towards the application of knowledge-based systems in time-critical domains.
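A minimal Python sketch of the general anytime/progressive-reasoning pattern this line of work builds on (not the paper's knowledge-based architecture): reasoning units of increasing refinement run in order, and the best answer obtained before the deadline is returned. Unit names, costs, and qualities are hypothetical.

import time

# (unit name, simulated computation time in seconds, quality of its answer)
REASONING_UNITS = [
    ("reflex_estimate",  0.01, 0.3),
    ("coarse_planning",  0.05, 0.6),
    ("refined_planning", 0.20, 0.9),
]

def progressive_reasoning(deadline_s):
    """Return (unit, quality) of the most refined unit that fits in the deadline."""
    start = time.monotonic()
    best = ("none", 0.0)
    for name, cost, quality in REASONING_UNITS:
        if time.monotonic() - start + cost > deadline_s:
            break                        # the next refinement would miss the deadline
        time.sleep(cost)                 # stand-in for the unit's actual reasoning
        best = (name, quality)           # each unit improves on the previous answer
    return best

if __name__ == "__main__":
    for dl in (0.02, 0.10, 0.50):
        print(f"deadline {dl:.2f}s ->", progressive_reasoning(dl))

The decision quality returned grows with the available time in a predictable, stepwise way, which is the property the progressive reasoning hierarchy is after.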
Dynamic Programming for Structured Continuous Markov Decision Problems
This work describes an approach for exploiting structure in Markov Decision Processes with continuous state variables and extends it to piecewise-constant representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently.
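For intuition about piecewise-constant value representations over a continuous variable (a hypothetical minimal data structure, not the cited work's operators), the sketch below stores a value function as sorted (breakpoint, value) pairs and implements the pointwise maximum used when combining action-value functions.

import bisect

class PWC:
    """Piecewise-constant function: sorted (breakpoint, value) pairs; each value
    holds from its breakpoint up to the next one.  Queries are assumed to lie
    at or above the first breakpoint."""
    def __init__(self, pieces):
        self.xs = [x for x, _ in pieces]
        self.vs = [v for _, v in pieces]

    def __call__(self, x):
        return self.vs[bisect.bisect_right(self.xs, x) - 1]

    def pointwise_max(self, other):
        """Max of two PWC functions is PWC over the union of their breakpoints."""
        xs = sorted(set(self.xs) | set(other.xs))
        return PWC([(x, max(self(x), other(x))) for x in xs])

if __name__ == "__main__":
    # Hypothetical Q-functions over remaining energy for two actions.
    q_drive  = PWC([(0.0, -10.0), (5.0, 4.0), (20.0, 9.0)])
    q_sample = PWC([(0.0, 0.0), (2.0, 6.0)])
    v = q_drive.pointwise_max(q_sample)          # value function V(energy)
    for e in (1.0, 3.0, 10.0, 25.0):
        print(f"V({e}) = {v(e)}")

The appeal of this representation is that backups stay closed: maxima (and, with more work, expectations) of piecewise-constant functions are again piecewise-constant, so no fixed discretization of the continuous resource is needed.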
LAO*: A heuristic search algorithm that finds solutions with loops
It is shown that LAO* can be used to solve Markov decision problems and that it shares the advantage heuristic search has over dynamic programming for other classes of problems.
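A compressed Python sketch of an LAO*-style loop under simplifying assumptions (small explicit model, admissible heuristic, and value iteration run over all expanded states rather than only the relevant ancestors, which LAO* proper restricts to): expand a fringe state of the current best partial policy, revise values by dynamic programming, and stop when the best solution graph contains no unexpanded states. The model, costs, and heuristic are hypothetical; note the self-loops, which are what LAO* handles beyond classical AO*.

GOALS = {"G"}
# state -> {action: [(probability, successor, cost), ...]}
MODEL = {
    "S": {"go":   [(0.8, "A", 1.0), (0.2, "S", 1.0)],    # may loop back to S
          "wait": [(1.0, "S", 0.5)]},
    "A": {"go":   [(0.9, "G", 1.0), (0.1, "S", 1.0)]},
}
H = {"S": 1.0, "A": 0.5, "G": 0.0}        # admissible cost-to-go heuristic

def greedy(V, s):
    """Action with the lowest expected cost-to-go under the current values."""
    return min(MODEL[s], key=lambda a: sum(p * (c + V[t]) for p, t, c in MODEL[s][a]))

def value_iteration(V, states, eps=1e-9):
    while True:
        delta = 0.0
        for s in states:
            new = min(sum(p * (c + V[t]) for p, t, c in MODEL[s][a]) for a in MODEL[s])
            delta = max(delta, abs(new - V[s]))
            V[s] = new
        if delta < eps:
            return V

def lao_star(start="S"):
    V = dict(H)                  # values initialized from the heuristic
    expanded = set()
    while True:
        # Build the best partial solution graph by following the greedy policy.
        graph, fringe, stack = set(), [], [start]
        while stack:
            s = stack.pop()
            if s in graph or s in GOALS:
                continue
            graph.add(s)
            if s not in expanded:
                fringe.append(s)             # unexpanded tip state
                continue
            stack.extend(t for _, t, _ in MODEL[s][greedy(V, s)])
        if not fringe:                       # best solution graph fully expanded
            return V, {s: greedy(V, s) for s in graph}
        expanded.add(fringe[0])              # expand one fringe state
        value_iteration(V, expanded)         # revise values (simplified scope)

if __name__ == "__main__":
    V, policy = lao_star()
    print("values:", V)
    print("policy:", policy)

On this toy model the search never needs to expand states unreachable under the best policy, which is the advantage over plain dynamic programming that the paper emphasizes.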