We present relaxed notions of simulation and bisimulation on Probabilistic Automata (PA) that allow some error ε. When ε = 0 we recover the usual notions of bisimulation and simulation on PAs. We give logical characterisations of these notions by choosing suitable logics which differ from the elementary ones, L and L¬, in the modal operator. Using flow…
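As a minimal sketch of the idea of an ε-relaxed comparison (the function name and the particular deviation measure are illustrative assumptions, not the paper's exact definition), one step can be phrased as checking that matching successor distributions differ by at most ε:

```python
def within_eps(p, q, eps):
    """Check whether two successor distributions (dicts state -> prob)
    agree up to a total deviation of eps. Illustrative notion only;
    the paper's relaxed (bi)simulation is defined on the full automata."""
    states = set(p) | set(q)
    # total-variation-style deviation between the two distributions
    dev = 0.5 * sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in states)
    return dev <= eps

# With eps = 0 this degenerates to exact equality of distributions,
# mirroring how the relaxed notions collapse to ordinary (bi)simulation.
print(within_eps({"a": 0.5, "b": 0.5}, {"a": 0.5, "b": 0.5}, 0.0))   # True
print(within_eps({"a": 0.6, "b": 0.4}, {"a": 0.5, "b": 0.5}, 0.05))  # False
```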

We tackle the problem of non-robustness of simulation and bisimulation when dealing with probabilistic processes. It is important to ignore tiny deviations in probabilities, because these often come from experiments or estimations. A few approaches have been proposed to treat this issue, for example metrics that quantify the non-bisimilarity (or closeness) of…

We consider probabilistic automata on infinite words with acceptance defined by parity conditions. We consider three qualitative decision problems: (i) the positive decision problem asks whether there is a word that is accepted with positive probability; (ii) the almost-sure decision problem asks whether there is a word that is accepted with probability 1; and…

We consider partially observable Markov decision processes (POMDPs) with ω-regular conditions specified as parity objectives. The class of ω-regular languages extends regular languages to infinite strings and provides a robust specification language to express all properties used in verification, and parity objectives are canonical forms to express…

In a context of ω-regular specifications for infinite execution sequences, the classical Büchi condition, or repeated liveness condition, asks that an accepting state be visited infinitely often. In this paper, we show that in a probabilistic context it is relevant to strengthen this infinitely-often condition. An execution path is now accepting if the…

We associate a statistical vector to a trace and a geometric embedding to a Markov Decision Process, based on a distance on words, and study basic Membership and Equivalence problems. The Membership problem for a trace w and a Markov Decision Process S decides whether there exists a strategy on S which generates, with high probability, traces close to w. We…
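One simple concrete choice of "statistical vector" is the empirical letter-frequency vector of a trace, with a distance between traces induced by a distance between their vectors. This is a hedged sketch under that assumption; the paper's embedding and word distance may well be different:

```python
from collections import Counter

def stat_vector(trace, alphabet):
    """Empirical letter-frequency vector of a finite trace -- one simple
    choice of statistical vector (an assumption for illustration)."""
    counts = Counter(trace)
    n = len(trace)
    return [counts[a] / n for a in alphabet]

def l1_distance(u, v):
    """L1 distance between two statistical vectors."""
    return sum(abs(x - y) for x, y in zip(u, v))

alphabet = ["a", "b"]
w1 = "aabab"   # frequencies: a = 3/5, b = 2/5
w2 = "ababb"   # frequencies: a = 2/5, b = 3/5
print(l1_distance(stat_vector(w1, alphabet), stat_vector(w2, alphabet)))  # ≈ 0.4
```

Two traces are then "close" when this distance is small, which is the kind of criterion a Membership check could test against traces generated by a strategy on S.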

We consider networks of Markov Decision Processes (MDPs) where each MDP is one of the N nodes of a graph G. The transition probabilities of an MDP depend on the states of its direct neighbors in the graph, and runs operate by selecting a random node and following a random transition in the chosen node's MDP. As the state space of all the configurations of…
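The run dynamics described above can be sketched as follows. All names and the toy transition rule here are illustrative assumptions; the point is only the shape of one step: pick a random node, then sample that node's next state from a distribution that depends on its neighbors' states:

```python
import random

def step(config, graph, transition):
    """One run step in a network of MDPs: select a random node, then move it
    according to a distribution depending on its direct neighbors' states.
    config: dict node -> state; graph: dict node -> list of neighbors;
    transition(node, state, neighbor_states) -> dict new_state -> prob."""
    node = random.choice(list(graph))
    neighbor_states = tuple(config[m] for m in graph[node])
    dist = transition(node, config[node], neighbor_states)
    states, probs = zip(*dist.items())
    new_config = dict(config)
    new_config[node] = random.choices(states, weights=probs)[0]
    return new_config

# Toy 2-node line graph: a node copies its neighbor's state with prob. 0.8.
graph = {0: [1], 1: [0]}
def transition(node, state, nbrs):
    target = nbrs[0]
    return {target: 0.8, state: 0.2} if target != state else {state: 1.0}

config = {0: "s", 1: "t"}
config = step(config, graph, transition)
print(config)
```

Note that the explicit state space of the whole network is the set of configurations (one state per node), which grows exponentially in N; the sketch works on a single configuration at a time precisely to avoid building that product space.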