Cost-Bounded Active Classification Using Partially Observable Markov Decision Processes

@inproceedings{Wu2018CostBoundedAC,
  title={Cost-Bounded Active Classification Using Partially Observable Markov Decision Processes},
  author={B. Wu and Mohamadreza Ahmadi and Suda Bharadwaj and Ufuk Topcu},
  booktitle={2019 American Control Conference (ACC)},
  year={2018},
  pages={1216--1223}
}
  • B. Wu, M. Ahmadi, S. Bharadwaj, U. Topcu
  • Published 28 September 2018
  • Computer Science
  • 2019 American Control Conference (ACC)
Active classification, i.e., the sequential decision making process aimed at data acquisition for classification purposes, arises naturally in many applications, including medical diagnosis, intrusion detection, and object tracking. In this work, we study the problem of actively classifying dynamical systems with a finite set of Markov decision process (MDP) models. We are interested in finding strategies that actively interact with the dynamical system, and observe its reactions so that the… 
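The abstract's setting — maintaining a belief over a finite set of candidate MDP models and updating it from observed transitions — can be sketched as a Bayes update. This is a minimal illustration of the idea, not the paper's method; the two 2-state, 1-action models and the observed history below are hypothetical:

```python
import numpy as np

def update_belief(belief, models, s, a, s_next):
    """One Bayes update: P(m | history) is proportional to P(s' | s, a, m) * P(m)."""
    likelihoods = np.array([m[a][s, s_next] for m in models])
    posterior = belief * likelihoods
    total = posterior.sum()
    if total == 0.0:  # observation impossible under every model; keep prior
        return belief
    return posterior / total

# Two hypothetical candidate models: T[a][s, s'] transition matrices
m0 = {0: np.array([[0.9, 0.1], [0.2, 0.8]])}
m1 = {0: np.array([[0.5, 0.5], [0.5, 0.5]])}
models = [m0, m1]

belief = np.array([0.5, 0.5])                # uniform prior over the models
history = [(0, 0, 0), (0, 0, 0), (1, 0, 1)]  # observed (s, a, s') triples
for s, a, s_next in history:
    belief = update_belief(belief, models, s, a, s_next)
```

After the three observations, which are more probable under the first model, the belief concentrates on it; an active classification strategy would additionally choose actions to accelerate this concentration.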


Constrained Active Classification Using Partially Observable Markov Decision Processes

This work presents a decision-theoretic framework based on partially observable Markov decision processes (POMDPs) that relies on assigning a classification belief (a probability distribution) to the attributes of interest and presents two different algorithms to compute such strategies.

On the Detection of Markov Decision Processes

This work investigates whether it is possible to asymptotically detect the ground-truth MDP model perfectly from a single observed history (state-action sequence), and develops an algorithm that efficiently determines the existence of such policies and synthesizes one when they exist.

An integrated methodology to control the risk of cardiovascular disease in patients with hypertension and type 1 diabetes

This work aims to develop an integrated methodology, combining Markov decision processes (MDPs) and a genetic algorithm (GA), to control the risk of cardiovascular disease in patients with hypertension and type 1 diabetes.

Operations research and health systems: A literature review

The present study aimed to evaluate the application of operations research models in health systems, including Markov decision processes (MDPs) and partially observable Markov decision processes (POMDPs), and to compare these methods with each other.

References


An Adaptive Sampling Algorithm for Solving Markov Decision Processes

An adaptive sampling algorithm that chooses which action to sample as the sampling process proceeds, and generates an asymptotically unbiased estimator whose bias is bounded by a quantity that converges to zero at rate (ln N)/N.
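The adaptive-sampling idea above can be illustrated with a UCB-style rule that concentrates sampling effort on promising actions while still exploring. This is a hedged sketch of the general pattern, not the cited algorithm; the Bernoulli reward model and all names are illustrative:

```python
import math
import random

def adaptive_sample(sample_reward, n_actions, budget, seed=0):
    """Adaptively allocate a sampling budget across actions via a UCB index."""
    rng = random.Random(seed)
    counts = [0] * n_actions
    means = [0.0] * n_actions
    for t in range(1, budget + 1):
        if t <= n_actions:
            a = t - 1  # sample every action once first
        else:
            # UCB index: empirical mean plus an exploration bonus
            a = max(range(n_actions),
                    key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = sample_reward(a, rng)
        counts[a] += 1
        means[a] += (r - means[a]) / counts[a]  # running mean update
    return means, counts

# Two illustrative actions with Bernoulli rewards of mean 0.7 and 0.3
means, counts = adaptive_sample(
    lambda a, rng: 1.0 if rng.random() < (0.7 if a == 0 else 0.3) else 0.0,
    n_actions=2, budget=500)
```

With a fixed budget, the better action receives the bulk of the samples, which is the mechanism behind the bias bound's dependence on the number of samples N.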

Markov Decision Processes: Discrete Stochastic Dynamic Programming

  • M. Puterman
  • Computer Science
    Wiley Series in Probability and Statistics
  • 1994
Markov Decision Processes covers recent research advances in such areas as countable state space models with average reward criterion, constrained models, and models with risk sensitive optimality criteria, and explores several topics that have received little or no attention in other books.

Simulation-Based Algorithms for Markov Decision Processes

The self-contained approach of this book will appeal not only to researchers in MDPs, stochastic modeling and control, and simulation, but will also be a valuable source of tuition and reference for students of control and operations research.

OR Forum - A POMDP Approach to Personalize Mammography Screening Decisions

This work proposes a personalized mammography screening policy based on the prior screening history and personal risk characteristics of women, formulated as a finite-horizon partially observable Markov decision process (POMDP) model that outperforms existing guidelines with respect to total expected quality-adjusted life years.

SARSOP: Efficient Point-Based POMDP Planning by Approximating Optimally Reachable Belief Spaces

This work has developed a new point-based POMDP algorithm that exploits the notion of optimally reachable belief spaces to improve computational efficiency and substantially outperformed one of the fastest existing point-based algorithms.

POMDP Model Learning for Human Robot Collaboration

This work adopts a partially observable Markov decision process (POMDP) model, which provides a general modeling framework for sequential decision making where states are hidden and actions have stochastic outcomes.

Cooperative Active Perception using POMDPs

A decision-theoretic approach to cooperative active perception is presented, by formalizing the problem as a Partially Observable Markov Decision Process (POMDP).

Bounded Policy Synthesis for POMDPs with Safe-Reachability Objectives

The method compactly represents a goal-constrained belief space, which only contains beliefs reachable from the initial belief under desired executions that can achieve the given safe-reachability objective, and employs an incremental Satisfiability Modulo Theories solver to efficiently search for a valid policy over it.

Active Classification: Theory and Application to Underwater Inspection

The problem in which an autonomous vehicle must classify an object based on multiple views is formulated as an extension to Bayesian active learning, and the benefit of acting adaptively as new information becomes available is formally analyzed.