• Corpus ID: 40804271

Vision-Based Navigation and Deep-Learning Explanation for Autonomy

@inproceedings{Konam2017VisionBasedNA,
  title={Vision-Based Navigation and Deep-Learning Explanation for Autonomy},
  author={Sandeep Konam},
  year={2017}
}
In this thesis, we investigate vision-based techniques to support mobile robot autonomy in human environments, as well as understanding which image features matter with respect to a classification task. Given this broad goal of transparent vision-based autonomy, the work proceeds along three main fronts. Our first algorithm enables a UAV to visually localize and navigate with respect to CoBot, a ground mobile robot, in order to perform visual search tasks. Our approach leverages the robust… 
Saliency Tubes: Visual Explanations for Spatio-Temporal Convolutions
TLDR
This work proposes Saliency Tubes, a method that highlights the points and regions, at both the frame level and over time, that the network focuses on most.
Grad-CAM++: Generalized Gradient-Based Visual Explanations for Deep Convolutional Networks
TLDR
This paper proposes Grad-CAM++, which uses a weighted combination of the positive partial derivatives of the last convolutional layer's feature maps with respect to a specific class score as weights to generate a visual explanation for the class label under consideration, providing better visual explanations of CNN model predictions.
Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks.
TLDR
This paper proposes a generalized method called Grad-CAM++ that provides better visual explanations of CNN model predictions, in terms of better object localization as well as explaining occurrences of multiple object instances in a single image, compared to the state of the art.
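As a rough illustration of the weighting scheme these two entries describe, the sketch below computes a Grad-CAM++-style saliency map in NumPy from a layer's activations and the gradients of a class score with respect to them. The array shapes, the way activations and gradients are extracted, and the epsilon guard are assumptions for illustration, not code from either paper.

```python
# Minimal NumPy sketch of Grad-CAM++-style weighting. It assumes the last
# convolutional layer's activations A and the gradients of the class score
# w.r.t. A have already been extracted from the network (e.g. via framework
# hooks); shapes and names are illustrative.
import numpy as np

def grad_cam_plus_plus(activations, gradients):
    """activations, gradients: arrays of shape (K, H, W) for one image and one class."""
    grads_2 = gradients ** 2
    grads_3 = gradients ** 3
    # Pixel-wise weighting coefficients alpha_k^c(i, j), guarded against division by zero.
    denom = 2.0 * grads_2 + np.sum(activations * grads_3, axis=(1, 2), keepdims=True)
    denom = np.where(denom != 0.0, denom, 1e-8)
    alphas = grads_2 / denom
    # w_k^c: weighted combination of the *positive* partial derivatives.
    weights = np.sum(alphas * np.maximum(gradients, 0.0), axis=(1, 2))
    # Class-discriminative saliency map: ReLU of the weighted sum of feature maps.
    saliency = np.maximum(np.tensordot(weights, activations, axes=1), 0.0)
    # Normalise to [0, 1] for visualisation.
    saliency -= saliency.min()
    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency
```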
Mask-GradCAM: Object Identification and Localization of Visual Presentation for Deep Convolutional Network
  • X. A. Inbaraj, J. Jeng
  • Computer Science
    2021 6th International Conference on Inventive Computation Technologies (ICICT)
  • 2021
This paper presents a conceptually simple and flexible framework that combines Mask R-CNN with Grad-CAM (Mask-GradCAM) to demonstrate object localization and object recognition.

References

A deep-network solution towards model-less obstacle avoidance
  • L. Tai, Shaohua Li, Ming Liu
  • Computer Science
    2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  • 2016
TLDR
Inspired by the advantages of deep learning, this work takes indoor obstacle avoidance as an example to show the effectiveness of a hierarchical structure that fuses a convolutional neural network (CNN) with a decision process, by which model-less obstacle avoidance behavior is achieved.
Learning monocular reactive UAV control in cluttered natural environments
TLDR
A system that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments, using only a single cheap camera to perceive the environment and recent state-of-the-art imitation learning techniques to train a controller that avoids trees by adapting the MAV's heading.
Fast robust monocular depth estimation for Obstacle Detection with fully convolutional networks
TLDR
This work proposes a novel appearance-based obstacle detection system that is able to detect obstacles at very long range and at very high speed (~300 Hz), without making assumptions about the type of motion.
A Machine Learning Approach to Visual Perception of Forest Trails for Mobile Robots
TLDR
This work proposes a different approach to perceiving forest trails, based on a deep neural network used as a supervised image classifier, that outperforms alternatives and yields accuracy comparable to that of humans tested on the same image classification task.
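As a rough sketch of how such a classifier's output could drive a robot, assuming three classes (trail to the left, straight ahead, to the right) as in this line of work, the snippet below maps softmax probabilities to a yaw-rate command. The class set, gain, and sign convention are illustrative assumptions, not the paper's controller.

```python
# Toy mapping from a three-way trail classifier's softmax output to a yaw-rate
# command. Class order, gain, and sign convention are illustrative assumptions.

def yaw_rate_from_trail_probs(p_left, p_straight, p_right, gain=0.8):
    """Steer towards the side with more probability mass; positive output = turn left.
    p_straight is accepted for completeness but does not contribute to the turn."""
    return gain * (p_left - p_right)

# Example: the classifier is fairly sure the trail bends to the right,
# so the command is negative, i.e. yaw right to keep the trail centred.
cmd = yaw_rate_from_trail_probs(p_left=0.1, p_straight=0.3, p_right=0.6)
```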
Robust Autonomous Flight in Constrained and Visually Degraded Environments
TLDR
This paper proposes a fast and robust state estimation algorithm that fuses estimates from a direct depth odometry method and a Monte Carlo localization algorithm with other sensor information in an EKF framework, together with an online motion planning algorithm that combines trajectory optimization with a receding-horizon control framework for fast obstacle avoidance.
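A minimal sketch of the fusion idea in this reference, reduced to a one-dimensional state for readability: depth-odometry increments drive the prediction step and Monte Carlo localization fixes enter as measurements. The real system is a full multi-state EKF; the variable names and noise values here are illustrative only.

```python
# Toy 1-D Kalman filter fusing odometry increments (prediction) with
# localization fixes (update); a simplification of the EKF described above.

def predict(x, P, odom_delta, odom_var):
    """Propagate the state with an odometry increment and inflate its variance."""
    return x + odom_delta, P + odom_var

def update(x, P, z_mcl, mcl_var):
    """Correct the state with a localization measurement z_mcl."""
    K = P / (P + mcl_var)              # Kalman gain
    return x + K * (z_mcl - x), (1.0 - K) * P

# Example: start at x = 0 with high uncertainty, move 1.0 m by odometry,
# then receive a localization fix at 0.9 m.
x, P = 0.0, 1.0
x, P = predict(x, P, odom_delta=1.0, odom_var=0.05)
x, P = update(x, P, z_mcl=0.9, mcl_var=0.2)
```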
Vision-based state estimation for autonomous rotorcraft MAVs in complex environments
TLDR
This paper proposes a vision-based state estimation approach that does not drift when the vehicle remains stationary, presents indoor experimental results with performance benchmarking, and illustrates the autonomous operation of the system in challenging indoor and outdoor environments.
Camera-based navigation of a low-cost quadrocopter
TLDR
This work presents a novel, closed-form solution for estimating the absolute scale of the generated visual map from inertial and altitude measurements, and shows its robustness to temporary loss of visual tracking and significant delays in the communication process.
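To illustrate the kind of scale recovery this reference addresses, the sketch below fits a single scale factor to paired visual and metric displacements by ordinary least squares. This is a simplification, not the closed-form maximum-likelihood estimator actually derived in the paper, and the sample numbers are invented.

```python
# Toy scale recovery: visual displacements x_i (in the arbitrary units of the
# monocular SLAM map) are paired with metric displacements y_i from altitude /
# inertial sensing over the same intervals; a single least-squares scale is fit.
import numpy as np

def estimate_scale(visual_deltas, metric_deltas):
    """Return the scale lambda minimising sum_i (lambda * x_i - y_i)^2."""
    x = np.asarray(visual_deltas, dtype=float)
    y = np.asarray(metric_deltas, dtype=float)
    return float(np.sum(x * y) / np.sum(x * x))

# Example: the visual map reports climbs of 0.8, 0.41 and 1.2 "map units" while
# the altimeter measured 1.6, 0.8 and 2.45 m over the same intervals.
scale = estimate_scale([0.8, 0.41, 1.2], [1.6, 0.8, 2.45])
# Multiplying SLAM positions by `scale` converts them to metres.
```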
Autonomous Flight in Unknown Indoor Environments
TLDR
The difficulties in achieving fully autonomous helicopter flight are described, highlighting the differences between ground and helicopter robots that make it difficult to use algorithms that have been developed for ground robots.
Adaptive navigation for autonomous robots
DeepDriving: Learning Affordance for Direct Perception in Autonomous Driving
TLDR
This paper proposes mapping an input image to a small number of key perception indicators that directly relate to the affordance of a road/traffic state for driving, and argues that this direct perception representation provides the right level of abstraction.
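As a rough sketch of the direct-perception idea, the snippet below assumes a network has already regressed two affordance indicators (heading angle relative to the road and lateral offset from the lane centre) and turns them into a steering command with a simple proportional law. The indicator set, signs, and gains are illustrative, not those used in the paper.

```python
# Toy "direct perception" controller: a CNN regresses a handful of affordance
# indicators, and a simple hand-written law converts them into steering.
# Indicator names, signs, and gains are illustrative assumptions.

def steering_from_affordances(angle_to_road, dist_to_lane_centre,
                              k_angle=2.0, k_dist=0.5):
    """Proportional steering law on two affordance indicators."""
    return -k_angle * angle_to_road - k_dist * dist_to_lane_centre

# Example: vehicle heading 0.1 rad off the road direction, 0.3 m left of centre
# (negative lateral offset in this sign convention).
cmd = steering_from_affordances(angle_to_road=0.1, dist_to_lane_centre=-0.3)
```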
...