Joshua Juster

We present a trainable, visually-grounded, spoken language understanding system. The system acquires a grammar and vocabulary from a "show-and-tell" procedure in which visual scenes are paired with verbal descriptions. The system is embodied in a table-top mounted active vision platform. During training, a set of objects is placed in front of the vision…
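A minimal sketch of the "show-and-tell" pairing described above, assuming scenes are represented as sets of symbolic visual features and using simple word/feature co-occurrence counts as a stand-in for the system's actual grammar and vocabulary acquisition; the function names and training pairs below are illustrative, not from the paper:

```python
from collections import defaultdict

def observe(pairs):
    """Accumulate word/feature co-occurrence counts across scene-description pairs."""
    counts = defaultdict(lambda: defaultdict(int))
    for scene_features, description in pairs:
        for word in description.lower().split():
            for feature in scene_features:
                counts[word][feature] += 1
    return counts

# Hypothetical training data: each visual scene is paired with a verbal description.
training_pairs = [
    ({"red", "cone", "left"}, "the red cone"),
    ({"blue", "cube", "right"}, "a blue cube"),
    ({"red", "cube", "left"}, "the red cube on the left"),
    ({"red", "ball", "right"}, "a red ball"),
]

counts = observe(training_pairs)
# The most frequently co-occurring visual feature is a crude guess at a word's grounding.
print(max(counts["red"], key=counts["red"].get))  # -> 'red'
```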
We describe a home lighting robot that uses directional spotlights to create complex lighting scenes. The robot senses its visual environment using a panoramic camera and attempts to maintain its target goal state by adjusting the positions and intensities of its lights. Users can communicate desired changes in the lighting environment through speech and…
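A minimal sketch of the sense-and-adjust loop, assuming per-region brightness targets and a proportional correction to each spotlight's intensity; the Spotlight class, the sense_region_brightness stub, and the gain value are illustrative assumptions, not the paper's actual control method:

```python
from dataclasses import dataclass

@dataclass
class Spotlight:
    """A directional light with an adjustable intensity in [0, 1]."""
    name: str
    intensity: float = 0.0

def sense_region_brightness(region: str) -> float:
    """Stand-in for a panoramic-camera measurement of one scene region."""
    # A real system would derive this from the camera image; here it is stubbed.
    return 0.3

def control_step(goal, lights, gain=0.5):
    """One iteration of the maintain-goal-state loop: compare sensed brightness
    to the target for each region and nudge the corresponding light's intensity."""
    for region, target in goal.items():
        error = target - sense_region_brightness(region)
        light = lights[region]
        light.intensity = min(1.0, max(0.0, light.intensity + gain * error))

if __name__ == "__main__":
    goal_state = {"desk": 0.8, "sofa": 0.4}        # desired brightness per region
    lights = {r: Spotlight(r) for r in goal_state}
    for _ in range(10):                            # iterate toward the goal state
        control_step(goal_state, lights)
    print({r: round(l.intensity, 2) for r, l in lights.items()})
```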