Learn More
The achievement of a wide variety of tasks by a complex system in an unknown environment presents formidable challenges to the control system and its designer. This paper presents a hybrid DEDS approach to the control of such systems, which allows for reactivity in the continuous domain and for the automatic generation of the control strategy in the …
This paper addresses adaptive control architectures for systems that respond autonomously to changing tasks. Such systems often have many sensory and motor alternatives, and behavior drawn from these produces solutions of varying quality. The objective is then to ground behavior in control laws which, combined with resources, enumerate closed-loop behavioral …
This paper describes research towards a system for locating wireless nodes in a home environment using only a single access point. The only sensor reading used for the location estimation is the received signal strength indication (RSSI) as given by an RF interface, e.g., Wi-Fi. Wireless signal strength maps for the positioning filter are obtained by …
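As a rough illustration of how a positioning filter can match a live RSSI reading against a pre-recorded signal-strength map, the sketch below does simple nearest-neighbor fingerprinting; the map coordinates and dBm values are invented for the example and are not taken from the paper.

```python
# Hypothetical sketch: locate a node by comparing a measured RSSI value
# against a pre-recorded signal-strength map (nearest-neighbor fingerprinting).
# All positions and dBm values below are made up for illustration.

# (x, y) grid position -> mean RSSI in dBm recorded at that position
rssi_map = {
    (0.0, 0.0): -40.0,
    (2.0, 0.0): -48.0,
    (0.0, 2.0): -47.0,
    (2.0, 2.0): -55.0,
}

def locate(measured_rssi: float) -> tuple[float, float]:
    """Return the map position whose recorded RSSI best matches the reading."""
    return min(rssi_map, key=lambda pos: abs(rssi_map[pos] - measured_rssi))

if __name__ == "__main__":
    print(locate(-46.0))  # -> (0.0, 2.0) with the example map above
```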
Citation: Hsiao, Kaijen, et al. "Reactive grasping using optical proximity sensors." …
This paper describes a probabilistic approach to global localization within an indoor environment with minimal infrastructure requirements. Global localization is a variant of localization in which the device is unaware of its initial position and must determine it from scratch. Localization is performed based on the Received Signal Strength …
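A minimal sketch of the general idea of probabilistic global localization (not the paper's specific algorithm): because the initial position is unknown, the belief starts uniform over candidate positions, and each RSSI reading reweights it with a Gaussian likelihood. The positions and signal values below are assumptions made up for illustration.

```python
# Grid-based Bayesian update from an unknown starting position.
# Cells, expected RSSI values, and the noise model are illustrative only.
import math

cells = [(0, 0), (1, 0), (0, 1), (1, 1)]                 # candidate positions
expected_rssi = {(0, 0): -40, (1, 0): -50, (0, 1): -48, (1, 1): -58}
belief = {c: 1.0 / len(cells) for c in cells}            # uniform prior

def update(belief, measured, sigma=4.0):
    """One measurement update: weight each cell by a Gaussian RSSI likelihood."""
    post = {c: b * math.exp(-0.5 * ((measured - expected_rssi[c]) / sigma) ** 2)
            for c, b in belief.items()}
    norm = sum(post.values())
    return {c: p / norm for c, p in post.items()}

belief = update(belief, measured=-49)
print(max(belief, key=belief.get))                        # most likely cell
```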
Autonomous robot systems operating in an uncertain environment have to be reactive and adaptive in order to cope with changing environmental conditions and task requirements. To achieve this, the hybrid control architecture presented in this paper uses reinforcement learning on top of a Discrete Event Dynamic System (DEDS) framework to learn to supervise a …
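In the spirit of learning a supervisor that chooses among discrete low-level actions, here is a hedged tabular Q-learning sketch; the states, actions, rewards, and toy transition function are invented and are not the paper's DEDS model.

```python
# Illustrative only: tabular Q-learning used to pick among discrete actions,
# loosely mimicking a learned supervisor over a discrete-event abstraction.
import random
from collections import defaultdict

states = ["far", "near", "contact"]
actions = ["approach", "align", "grasp"]
Q = defaultdict(float)                       # (state, action) -> value
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def step(state, action):
    """Toy environment: reward reaching 'contact' via 'grasp' from 'near'."""
    if state == "far" and action == "approach":
        return "near", 0.0
    if state == "near" and action == "grasp":
        return "contact", 1.0
    return state, -0.1

for _ in range(500):
    s = "far"
    for _ in range(10):
        a = (random.choice(actions) if random.random() < epsilon
             else max(actions, key=lambda x: Q[(s, x)]))
        s2, r = step(s, a)
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, x)] for x in actions) - Q[(s, a)])
        s = s2

print(max(actions, key=lambda a: Q[("near", a)]))   # typically 'grasp'
```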
To my parents and my husband and son. ACKNOWLEDGEMENTS: I would like to express my deep gratitude to my supervising professor, Dr. Sajal K. Das, for his constant encouragement, guidance, and support throughout my study. He has spent countless hours with me, discussing problems and critiquing papers. This work would not have been completed without …
This paper provides new techniques for abstracting the state space of a Markov Decision Process (MDP). These techniques extend one of the recent minimization models, known as ε-reduction, to construct a partition space that has a smaller number of states than the original MDP. As a result, learning policies on the partition space should be faster than on the …
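As a coarse illustration of state aggregation (not the paper's ε-reduction, which also compares transition behavior), the sketch below greedily merges states whose immediate rewards differ by at most eps, producing a partition with fewer blocks than states; the state names and reward values are invented.

```python
# Simplified state aggregation: group states with approximately equal rewards.
# This only captures the flavor of building a smaller partition space.

rewards = {            # state -> immediate reward for the action considered here
    "s0": 0.00, "s1": 0.02, "s2": 0.51, "s3": 0.49, "s4": 1.00,
}

def aggregate(rewards, eps=0.05):
    """Greedy partition: add a state to an existing block if its reward is
    within eps of the block's representative, otherwise open a new block."""
    blocks = []                        # list of (representative_reward, [states])
    for s, r in sorted(rewards.items(), key=lambda kv: kv[1]):
        for block in blocks:
            if abs(block[0] - r) <= eps:
                block[1].append(s)
                break
        else:
            blocks.append((r, [s]))
    return [members for _, members in blocks]

print(aggregate(rewards))   # -> [['s0', 's1'], ['s3', 's2'], ['s4']]
```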