Concurrent constraint programming [Sar89, SR90] is a simple and powerful model of concurrent computation based on the notions of store-as-constraint and process as information transducer. The store-as-valuation conception of von Neumann computing is replaced by the notion that the store is a constraint (a finite representation of a possibly infinite set of …
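The ask/tell reading of the store-as-constraint idea can be made concrete with a toy sketch. The code below is only an illustration of the general pattern (agents add information with tell and query entailment with ask); the finite-domain representation and all names are our own choices, not the calculus of the paper.

```python
# Toy illustration of the ask/tell view of concurrent constraint programming:
# the store is a constraint (here, a per-variable restriction of its domain),
# agents can only add information (tell) or check entailment (ask).

class Store:
    def __init__(self):
        self.domains = {}  # variable -> set of still-possible values

    def tell(self, var, allowed):
        """Add information: intersect the variable's domain with `allowed`."""
        old = self.domains.get(var)
        self.domains[var] = set(allowed) if old is None else old & set(allowed)

    def ask(self, var, allowed):
        """Entailment: does the store already force `var` into `allowed`?"""
        dom = self.domains.get(var)
        return dom is not None and dom <= set(allowed)


if __name__ == "__main__":
    s = Store()
    s.tell("x", range(0, 10))          # x in {0..9}
    s.tell("x", {2, 3, 4, 11})         # combined with the above: x in {2, 3, 4}
    print(s.ask("x", {1, 2, 3, 4, 5})) # True: the store entails x in 1..5
    print(s.ask("x", {2, 3}))          # False: not yet entailed
```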
In this paper we introduce a new class of labelled transition systems, Labelled Markov Processes, and define bisimulation for them. Labelled Markov processes are probabilistic labelled transition systems where the state space is not necessarily discrete. We assume that the state space is a certain type of common metric space called an analytic space. We show …
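For intuition, the finite special case of the bisimulation notion can be checked by partition refinement: related states assign equal probability to every label/equivalence-class pair. The sketch below only illustrates that defining condition on a finite labelled Markov chain; the paper's contribution is the continuous (analytic) state-space setting, which this code does not address. Representation and names are ours.

```python
# Naive partition refinement for probabilistic bisimulation on a *finite*
# labelled Markov chain (illustrative special case only).
from collections import defaultdict

def bisimulation_classes(states, labels, prob):
    """prob[(s, a)] is a dict {successor: probability}; a missing pair means no a-transition."""
    partition = [set(states)]
    while True:
        def block_index(s):
            return next(i for i, block in enumerate(partition) if s in block)

        def signature(s):
            # current block of s, plus its probability of entering each block on each label
            return (block_index(s),) + tuple(
                (a, i, round(sum(prob.get((s, a), {}).get(t, 0.0) for t in block), 12))
                for a in labels
                for i, block in enumerate(partition)
            )

        grouped = defaultdict(set)
        for s in states:
            grouped[signature(s)].add(s)
        refined = list(grouped.values())
        if len(refined) == len(partition):   # no further splitting: fixed point reached
            return refined
        partition = refined

# Example: s1 and s2 both reach "ok" with probability 1/2, so they end up bisimilar.
states = ["s1", "s2", "ok", "fail"]
prob = {("s1", "a"): {"ok": 0.5, "fail": 0.5},
        ("s2", "a"): {"ok": 0.5, "fail": 0.5}}
print(bisimulation_classes(states, ["a"], prob))
```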
We study approximate reasoning about continuous-state labeled Markov processes. We show how to approximate a labeled Markov process by a family of finite-state labeled Markov chains. We show that the collection of labeled Markov processes carries a Polish space structure with a countable basis given by finite state Markov chains with rational probabilities.
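The basic flavour of such a finite approximation can be conveyed by binning a continuous state space and evaluating the transition kernel at bin representatives. The kernel and binning scheme below are illustrative choices, not the paper's construction (which, for instance, yields chains with rational probabilities).

```python
# Minimal sketch: approximate a continuous-state process on [0, 1] by a
# finite-state Markov chain over n bins, using one representative per bin.
import numpy as np

def kernel_mass(x, lo, hi):
    """P(next state in [lo, hi) | current state x), for next ~ Uniform(0, x)."""
    if x <= 0.0:
        return 1.0 if lo <= 0.0 < hi else 0.0
    overlap = max(0.0, min(hi, x) - max(lo, 0.0))
    return overlap / x

def finite_approximation(n_bins):
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    mids = (edges[:-1] + edges[1:]) / 2          # one representative state per bin
    P = np.array([[kernel_mass(x, edges[j], edges[j + 1])
                   for j in range(n_bins)] for x in mids])
    return P                                     # rows sum to 1

print(np.round(finite_approximation(4), 3))
```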
We observe that equivalence is not a robust concept in the presence of numerical information, such as probabilities, in the model. We develop a metric analogue of weak bisimulation in the spirit of our earlier work on metric analogues for strong bisimulation. We give a fixed point characterization of the metric. This makes available coinductive reasoning …
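To indicate what a fixed-point characterization of such a metric looks like, here is one common pattern for a strong-bisimulation metric on a finite, fully defined probabilistic transition system; the notation (F, K, tau_a) is ours, and the paper's weak-bisimulation metric on more general systems refines this idea rather than coinciding with it.

```latex
% One common fixed-point pattern for a (strong) bisimulation metric.
\[
  d \;=\; F(d), \qquad
  F(d)(s,t) \;=\; \max_{a \in \mathrm{Act}}\,
      \mathcal{K}(d)\bigl(\tau_a(s),\, \tau_a(t)\bigr),
\]
\[
  \mathcal{K}(d)(\mu,\nu) \;=\;
      \min_{\gamma \in \Gamma(\mu,\nu)} \sum_{u,v} d(u,v)\,\gamma(u,v),
\]
% where \tau_a(s) is the successor distribution of s on label a,
% \Gamma(\mu,\nu) is the set of couplings of \mu and \nu, and
% \mathcal{K}(d) is the Kantorovich lifting of d to distributions.
% F is monotone on the complete lattice of [0,1]-bounded pseudometrics,
% so it has a least fixed point; that fixed point is what supports the
% coinductive reasoning mentioned in the abstract.
```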
The notion of process equivalence of probabilistic processes is sensitive to the exact probabilities of transitions: a slight change in the transition probabilities can cause two previously equivalent processes to be deemed no longer equivalent. This instability is due to the quantitative nature of probabilistic processes. In a situation where the process …
Partial Labeled Markov Chains are simultaneously generalizations of process algebra and of traditional Markov chains. They provide a foundation for interacting discrete probabilistic systems, the interaction being synchronization on labels as in process algebra. Existing notions of process equivalence are too sensitive to the exact probabilities of various …
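The synchronization-on-labels idea can be sketched as a product construction: on a shared label both components must move, and the joint successor distribution is the product of the component distributions; if either component lacks the label, the product has no transition (the "partial" aspect). This is only a rough illustration under our own representation, not necessarily the composition operator used in the paper.

```python
# Toy sketch of label synchronization between two (partial) labelled Markov chains.

def sync_step(chain1, chain2, state1, state2, label):
    """chainN[(state, label)] -> {successor: probability}; a missing key means no transition."""
    d1 = chain1.get((state1, label))
    d2 = chain2.get((state2, label))
    if d1 is None or d2 is None:
        return None                      # synchronization refused
    return {(t1, t2): p1 * p2
            for t1, p1 in d1.items()
            for t2, p2 in d2.items()}

A = {("a0", "send"): {"a1": 1.0}}
B = {("b0", "send"): {"b1": 0.9, "b_err": 0.1}}
print(sync_step(A, B, "a0", "b0", "send"))   # {('a1', 'b1'): 0.9, ('a1', 'b_err'): 0.1}
```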
Markov decision processes (MDPs) offer a popular mathematical tool for planning and learning in the presence of uncertainty (Boutilier, Dean, & Hanks 1999). MDPs are a standard formalism for describing multi-stage decision making in probabilistic environments. The objective of the decision making is to maximize a cumulative measure of long-term performance, …
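A standard way to compute such an objective on a finite MDP is value iteration for the expected discounted cumulative reward. The tiny two-state MDP, the discount factor, and all names below are illustrative, not taken from the cited work.

```python
# Minimal value-iteration sketch for maximizing expected discounted cumulative reward.

def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """P[(s, a)] -> {s': prob}; R[(s, a)] -> immediate reward."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {
            s: max(R[(s, a)] + gamma * sum(p * V[t] for t, p in P[(s, a)].items())
                   for a in actions)
            for s in states
        }
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

states, actions = ["good", "bad"], ["stay", "reset"]
P = {("good", "stay"): {"good": 0.8, "bad": 0.2},
     ("good", "reset"): {"good": 1.0},
     ("bad", "stay"): {"bad": 1.0},
     ("bad", "reset"): {"good": 0.6, "bad": 0.4}}
R = {("good", "stay"): 1.0, ("good", "reset"): 0.0,
     ("bad", "stay"): 0.0, ("bad", "reset"): -0.5}
print(value_iteration(states, actions, P, R))
```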