Navigation of uncertain terrain by fusion of information from real and synthetic imagery

@inproceedings{Lyons2012NavigationOU,
  title={Navigation of uncertain terrain by fusion of information from real and synthetic imagery},
  author={Damian M. Lyons and P. Nirmal and D. Paul Benjamin},
  booktitle={Defense + Commercial Sensing},
  year={2012}
}
We consider the scenario where an autonomous platform that is searching or traversing a building may observe unstable masonry or may need to travel over unstable rubble. A purely behaviour-based system may handle these challenges but produce behaviour that works against long-term goals such as reaching a victim as quickly as possible. We extend our work on ADAPT, a cognitive robotics architecture that incorporates 3D simulation and image fusion, to allow the robot to predict the behaviour of…
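The fusion step amounts to comparing what the camera actually sees against what a synthetic rendering of the robot's world model predicts it should see. The paper's own fusion operation is not reproduced here; the following is a minimal illustrative sketch, assuming OpenCV is available and that real_frame.png and synthetic_render.png are a camera image and a rendering of the predicted scene from roughly the same viewpoint.

```python
# Illustrative only: not the paper's fusion operation. Assumes OpenCV (cv2) and
# that "real_frame.png" / "synthetic_render.png" are a camera image and a
# rendering of the predicted scene from (approximately) the same viewpoint.
import cv2

real = cv2.imread("real_frame.png", cv2.IMREAD_GRAYSCALE)
synth = cv2.imread("synthetic_render.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp_r, des_r = orb.detectAndCompute(real, None)
kp_s, des_s = orb.detectAndCompute(synth, None)

# Hamming-distance matching for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des_r, des_s)

# A crude agreement score: the fraction of real-image features that find a close
# match in the synthetic render. A low score suggests the predicted world model
# no longer explains what the camera actually sees.
good = [m for m in matches if m.distance < 40]
score = len(good) / max(len(kp_r), 1)
print(f"prediction/observation agreement: {score:.2f}")
```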
Progress in building a cognitive vision system
TLDR
The research hypothesis of this work is that the movements of the robot's cameras are only those that are necessary to build a sufficiently accurate world model for the robot's current goals, so this approach trades model accuracy for speed.
A Hybrid System of Hierarchical Planning of Behaviour Selection Networks for Mobile Robot Control
TLDR
A hybrid system using hierarchical planning of modular behaviour selection networks is proposed to generate autonomous behaviour in an office delivery robot; it reduced the elapsed time during tasks by 17.5% by adjusting the behaviour module sequences more effectively.
A Monte Carlo Approach to Closing the Reality Gap
TLDR
A novel approach to the 'reality gap' problem is presented, modifying a robot simulation so that its performance becomes more similar to observed real-world phenomena; the results support not only that the kernel approach can force the simulation to behave more like reality, but also that an improved control policy tested in the modified simulation performs better in the real world.
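The kernel method itself is not reproduced here. As a rough sketch of the general idea under assumed toy dynamics, a simulation parameter can be tuned by Monte Carlo sampling until the simulated output matches real observations:

```python
# Illustrative sketch of Monte Carlo simulation tuning, not the paper's kernel
# method. A toy 1-D "simulator" with an unknown friction parameter is tuned so
# its output better matches observed real-world data.
import numpy as np

rng = np.random.default_rng(0)

def simulate(friction, n=200):
    # Hypothetical toy dynamics: distance travelled after a fixed push.
    return rng.normal(loc=10.0 * (1.0 - friction), scale=0.5, size=n)

real_observations = rng.normal(loc=7.0, scale=0.6, size=200)  # stand-in data

# Sample candidate parameters and keep those whose simulated output is close
# to reality (approximate-Bayesian-computation-style rejection).
candidates = rng.uniform(0.0, 1.0, size=5000)
accepted = [f for f in candidates
            if abs(simulate(f).mean() - real_observations.mean()) < 0.1]

print("estimated friction:", np.mean(accepted))
```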
Spatial Understanding as a Common Basis for Human-Robot Collaboration
TLDR
A robotic cognitive architecture is proposed to be embedded in autonomous robots that can safely interact and collaborate with people on a wide range of physical tasks; it provides a common representation shared by the robot and humans, improving trust between them and promoting effective collaboration.
Building a Virtual 3D World Model for a Mobile Robot
Thesis by Hong Yue, submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science, December 2015.
Effects of using a 3D model on the performance of vision algorithms
TLDR
The effect of a 3D model built in real time on the accuracy and speed of various computer vision algorithms, including tracking, optical flow and stereo disparity, is evaluated.
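As an illustration of the kind of measurement such an evaluation involves (not the paper's actual experimental harness), timing one of the benchmarked algorithms, block-matching stereo disparity, might look like the following; left.png and right.png are assumed rectified stereo images.

```python
# Not the paper's evaluation harness; just a minimal timing of one of the
# algorithms it benchmarks (block-matching stereo disparity).
import time
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

t0 = time.perf_counter()
disparity = stereo.compute(left, right)
elapsed = time.perf_counter() - t0

print(f"disparity range: {disparity.min()}..{disparity.max()}, "
      f"computed in {elapsed * 1000:.1f} ms")
```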

References

Showing 1-10 of 19 references
A relaxed fusion of information from real and synthetic images to predict complex behavior
TLDR
An extended MMD operation is proposed that relaxes the constraint, allowing the real and synthetic scenes to differ in some features but not in (selected) other features, so that a real image can be compared against a synthetic image generated from an arbitrarily colored graphical model of the scene.
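The extended MMD operation itself is not reproduced here. As a rough illustration of comparing selected features while ignoring others, the sketch below matches two images on edge structure only, so that a synthetic render with arbitrary colours can still agree with a real frame (OpenCV assumed; file names are placeholders).

```python
# Illustrative only: the extended MMD operation is not reproduced here. This
# sketch compares two images on edge structure while ignoring colour, so a
# synthetic render with arbitrary colours can still be matched to a real frame.
import cv2
import numpy as np

real = cv2.imread("real_frame.png", cv2.IMREAD_GRAYSCALE)
synth = cv2.imread("synthetic_render.png", cv2.IMREAD_GRAYSCALE)

edges_real = cv2.Canny(real, 50, 150) > 0
edges_synth = cv2.Canny(synth, 50, 150) > 0

# Intersection-over-union of the edge masks: colour differences are ignored,
# but disagreement in scene geometry lowers the score.
iou = np.logical_and(edges_real, edges_synth).sum() / \
      max(np.logical_or(edges_real, edges_synth).sum(), 1)
print(f"edge-structure agreement: {iou:.2f}")
```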
Integrating perception and problem solving to predict complex object behaviours
TLDR
An architecture for the perceptive and world-modelling components of ADAPT is presented, and experimental results using this architecture to predict complex object behaviour are reported, showing that this perception-based problem-solving approach has the potential to be used to predict complex object motions.
Surprise-driven acquisition of visual object representations for cognitive mobile robots
  • W. Maier, E. Steinbach
  • Computer Science
  • 2011 IEEE International Conference on Robotics and Automation
  • 2011
TLDR
Experimental results show that the method for the detection of surprising events reliably directs the robot's attention to novel objects, and that the recognition behavior based on the acquired object representations outperforms a state-of-the-art approach.
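The paper's specific surprise measure is not reproduced here; one common formulation is Bayesian surprise, the KL divergence between a prior and a posterior belief, sketched below for a simple Gaussian appearance model.

```python
# Illustrative sketch of "surprise" as Bayesian surprise (KL divergence between
# prior and posterior belief), not necessarily the measure used in the paper.
import numpy as np

def kl_gaussian(mu_post, var_post, mu_prior, var_prior):
    # KL( N(mu_post, var_post) || N(mu_prior, var_prior) )
    return 0.5 * (np.log(var_prior / var_post)
                  + (var_post + (mu_post - mu_prior) ** 2) / var_prior
                  - 1.0)

# Prior belief about a pixel's intensity, updated with a new observation.
mu_prior, var_prior = 0.4, 0.01
observation, obs_var = 0.9, 0.01          # a novel, unexpectedly bright object

# Standard Gaussian conjugate update.
var_post = 1.0 / (1.0 / var_prior + 1.0 / obs_var)
mu_post = var_post * (mu_prior / var_prior + observation / obs_var)

surprise = kl_gaussian(mu_post, var_post, mu_prior, var_prior)
print(f"surprise (nats): {surprise:.2f}")  # large values would direct attention
```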
Behavior-Based Robotics
  • R. Arkin
  • Computer Science, Engineering
  • 1998
TLDR
Following a discussion of the relevant biological and psychological models of behavior, the author covers the use of knowledge and learning in autonomous robots, behavior-based and hybrid robot architectures, modular perception, robot colonies, and future trends in robot intelligence.
Integrating cognition, perception and action through mental simulation in robots
TLDR
An architecture that integrates disparate reasoning, planning, sensation and mobility algorithms by composing them from strategies for managing mental simulations is described, demonstrating that knowledge representation and inference techniques enable more complex and flexible robot behavior.
Locating and tracking objects by efficient comparison of real and predicted synthetic video imagery
TLDR
An approach to tracking targets with complex behavior is presented, leveraging a 3D simulation engine to generate predicted imagery that is efficiently compared against the real video imagery.
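The paper's efficient comparison scheme is not reproduced here; a minimal stand-in for the idea of locating a predicted synthetic appearance inside the real frame is normalised cross-correlation template matching (OpenCV assumed; file names are placeholders).

```python
# Illustrative only, not the paper's method: locate a small synthetic rendering
# of the tracked object (its "predicted appearance") inside the real frame.
import cv2

frame = cv2.imread("real_frame.png", cv2.IMREAD_GRAYSCALE)
predicted_patch = cv2.imread("predicted_object_render.png", cv2.IMREAD_GRAYSCALE)

# Normalised cross-correlation between the predicted patch and the real frame.
response = cv2.matchTemplate(frame, predicted_patch, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(response)

print(f"best match at {best_loc} with score {best_score:.2f}")
```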
CiceRobot: a cognitive robot for interactive museum tours
TLDR
A cognitive robot architecture is proposed based on the integration between subsymbolic and linguistic computations through the introduction of an intermediate level of representation based on conceptual spaces to allow an autonomous robot to operate in unstructured environments and to interact with non‐expert users.
Embodying a cognitive model in a mobile robot
TLDR
The issues faced in developing an embodied cognitive architecture and the implementation choices made are described, and the formal semantics of RS is shown to provide the basis for the semantics of ADAPT's use of natural language.
Inner rehearsal modeling for cognitive robotics
TLDR
The inner-rehearsal algorithmic approach developed is presented and investigated in the context of a relatively complex cognitive task: an under-rubble search and rescue.
Motion and Structure from Motion in a piecewise Planar Environment
TLDR
It is shown that when the environment is piecewise planar, each planar patch induces a collineation (homography) between two images of the scene, which provides a powerful constraint on the kind of matches that can exist when the camera motion is unknown, and that the motion and structure can be recovered from an estimate of the matrix of this collineation.
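As a sketch of the underlying geometry rather than the paper's algorithm: the collineation induced by a planar patch between two views is a homography, and candidate motions and plane normals can be recovered by decomposing it. The point matches and intrinsic matrix below are assumed stand-ins.

```python
# Sketch of the underlying idea (not the paper's algorithm): estimate the
# plane-induced homography from matched points and decompose it into candidate
# rotations, translations and plane normals. pts1/pts2 and K are stand-ins.
import cv2
import numpy as np

pts1 = np.array([[100, 120], [400, 110], [410, 380], [95, 390],
                 [250, 250], [180, 300]], dtype=np.float64)
pts2 = pts1 + np.array([12.0, -8.0])          # stand-in matches in the second view
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

H, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)

# Decomposition yields up to four candidate (R, t, n) solutions; additional
# constraints (e.g. points lying in front of the camera) select the physical one.
num_solutions, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
print(f"{num_solutions} candidate motions recovered from the plane-induced homography")
```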