Pierrick Plamondon

The position a frigate adopts when facing threats can increase its chances of survival, so it is important to investigate how a frigate should position itself during an attack. To this end, we propose a first method, based on Bayesian movement and performed by a learning agent, which determines the optimal positioning…
The coordination of anti-air warfare (AAW) hardkill (HK) and softkill (SK) weapon systems is an important aspect of command and control for the HALIFAX Class Frigate. This led to the development of a rapid prototyping environment, described here, which supports the investigation of methods to coordinate the plans produced by AAW HK and SK agents. The HK and…
We are interested in contributing to stochastic problems whose main distinction is that some tasks may create other tasks. In particular, we present a first approach, which represents the problem as an acyclic graph and solves each node in a certain order so as to produce an optimal solution. Then, we detail a second algorithm, which solves each task…
This paper contributes to solving effectively stochastic resource allocation problems, known to be NP-Complete. To address this complex resource management problem, a Q-decomposition approach is proposed for the case where the resources are already shared among the agents, but the actions of one agent may influence the reward obtained by at least one other agent.…
We are interested in contributing to solving effectively a specific type of real-time stochastic resource allocation problem, known to be NP-Hard, whose main distinction is the high number of possible interacting actions to execute in a group of tasks. To address this complex resource management problem, we propose an adaptation of the…
This paper contributes to solving effectively stochastic resource allocation problems in multiagent environments. To address them, a distributed Q-values approach is proposed for the case where the resources are distributed among the agents a priori, but the actions of one agent may influence the reward obtained by at least one other agent. This distributed Q-values approach…
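The coordination idea behind such distributed Q-values can be sketched as follows: each agent keeps its own local Q-function, and a central arbitrator selects the joint action that maximizes the sum of the local values. This is a minimal illustrative sketch, not the paper's actual algorithm; all names and the toy numbers are assumptions.

```python
from itertools import product

def arbitrate(local_q, state, actions_per_agent):
    """Pick the joint action maximizing the summed local Q-values.

    local_q: one dict per agent, mapping (state, local_action) -> value
    actions_per_agent: one list of local actions per agent
    """
    best_joint, best_value = None, float("-inf")
    # Enumerate joint actions; each agent contributes its local estimate.
    for joint in product(*actions_per_agent):
        value = sum(q.get((state, a), 0.0) for q, a in zip(local_q, joint))
        if value > best_value:
            best_joint, best_value = joint, value
    return best_joint, best_value

# Two agents with two local actions each (illustrative values).
q0 = {("s", "a1"): 1.0, ("s", "a2"): 0.5}
q1 = {("s", "b1"): 0.2, ("s", "b2"): 0.9}
joint, value = arbitrate([q0, q1], "s", [["a1", "a2"], ["b1", "b2"]])
# joint is ("a1", "b2"): each agent's best local action wins here, but the
# arbitration is over the sum, so interactions between agents are respected.
```

The point of the arbitrator is that local greedy choices are not taken independently: the maximization runs over joint actions, so cross-agent reward influences can change the selection.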
This paper contributes to solving effectively stochastic resource allocation problems, known to be NP-Complete. To address this complex resource management problem, previous work on pruning the action space in real-time heuristic search is extended. The pruning is accomplished by using upper and lower bounds on the value function. This way, if an action in a…
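The bound-based pruning described in this snippet can be illustrated with a small sketch: an action whose upper bound on its value is below some other action's lower bound can never be optimal and is discarded. This is a hypothetical illustration under assumed bound tables, not the paper's implementation.

```python
def prune_actions(q_lower, q_upper, state, actions):
    """Keep only actions whose upper bound reaches the best lower bound.

    q_lower / q_upper: dicts mapping (state, action) -> bound on Q(s, a).
    An action is pruned when its upper bound is strictly below the best
    lower bound over all actions, since it is then provably suboptimal.
    """
    best_lower = max(q_lower[(state, a)] for a in actions)
    return [a for a in actions if q_upper[(state, a)] >= best_lower]

# Illustrative bounds: action "b" has upper bound 1.5 < lower bound 2.0
# of action "a", so "b" is pruned; "a" and "c" survive.
ql = {("s", "a"): 2.0, ("s", "b"): 0.5, ("s", "c"): 1.0}
qu = {("s", "a"): 3.0, ("s", "b"): 1.5, ("s", "c"): 2.5}
kept = prune_actions(ql, qu, "s", ["a", "b", "c"])
```

As the bounds tighten over successive search iterations, more actions fall below the best lower bound and the effective branching factor shrinks, which is what makes this kind of pruning attractive in real-time search.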
Resource allocation is a widely studied class of problems in Operations Research and Artificial Intelligence. In particular, we consider constrained stochastic resource allocation problems, where the assignment of a constrained resource does not automatically imply the realization of the task. Problems of this kind are generally addressed with Markov Decision Processes…
We are interested in contributing to solving effectively a particular type of real-time stochastic resource allocation problem. A first distinction is that certain tasks may create other tasks. In addition, positive and negative interactions among the resources used to achieve the tasks are considered, in order to obtain and maintain efficient coordination.…
This paper contributes to solving effectively stochastic resource allocation problems, known to be NP-Complete. To address this complex resource management problem, two approaches are merged: the Q-decomposition model, which coordinates reward-separated agents through an arbitrator, and the Labeled Real-Time Dynamic Programming (LRTDP) approach…