Publications

3D Packing for Self-Supervised Monocular Depth Estimation
In this work, we propose a novel self-supervised monocular depth estimation method combining geometry with a new deep network, PackNet, learned only from unlabeled monocular videos.

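The packing operation at the heart of PackNet can be illustrated with a short PyTorch sketch: spatial resolution is folded into channels (space-to-depth), the folded structure is mixed with a 3D convolution, and a 2D convolution compresses the channels back down. This is a simplified sketch with assumed layer sizes, not the released PackNet code; the class name and hyperparameters are illustrative.

```python
# Minimal sketch of a PackNet-style "packing" block (assumed layer sizes; not the official code).
import torch
import torch.nn as nn

class PackingBlock(nn.Module):
    """Downsample by folding space into channels, then mix with 3D and 2D convolutions."""
    def __init__(self, in_channels, out_channels, r=2, d=8):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(r)                     # Space2Depth: (C, H, W) -> (C*r*r, H/r, W/r)
        self.conv3d = nn.Conv3d(1, d, kernel_size=3, padding=1)   # mix the folded structure
        self.conv2d = nn.Conv2d(in_channels * r * r * d, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.unshuffle(x)                                     # B x C*r*r x H/r x W/r
        b, c, h, w = x.shape
        x = self.conv3d(x.unsqueeze(1))                           # add a depth axis: B x d x C*r*r x H/r x W/r
        x = x.reshape(b, c * self.conv3d.out_channels, h, w)      # flatten the 3D mixing back to 2D
        return self.conv2d(x)                                     # compress channels back down

feats = PackingBlock(32, 64)(torch.randn(1, 32, 64, 64))
print(feats.shape)  # torch.Size([1, 64, 32, 32])
```
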
Semantically-Guided Representation Learning for Self-Supervised Monocular Depth
This paper introduces a novel architecture for self-supervised monocular depth estimation that leverages semantic information from a fixed pretrained network to guide the generation of multi-level depth features via pixel-adaptive convolutions.

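A pixel-adaptive convolution of the kind referenced above can be sketched in a few lines of PyTorch: a shared spatial filter is modulated, per pixel, by a Gaussian affinity computed on a guidance feature map (here a random stand-in for features from the fixed pretrained semantic network). The kernel form, sizes, and names below are assumptions, not the paper's implementation.

```python
# Simplified pixel-adaptive convolution sketch (assumed Gaussian affinity on guidance features).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAdaptiveConv(nn.Module):
    """3x3 convolution whose neighbourhood weights are modulated per pixel by a guidance map."""
    def __init__(self, channels, guide_channels, k=3):
        super().__init__()
        self.k = k
        self.guide_channels = guide_channels
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)

    def forward(self, x, guide):
        b, c, h, w = x.shape
        pad = self.k // 2
        # Unfold input and guidance into k*k neighbourhoods per pixel.
        x_nb = F.unfold(x, self.k, padding=pad).view(b, c, self.k * self.k, h, w)
        g_nb = F.unfold(guide, self.k, padding=pad).view(b, self.guide_channels, self.k * self.k, h, w)
        g_center = guide.unsqueeze(2)                               # B x Cg x 1 x H x W
        # Gaussian affinity between each pixel's guidance feature and its neighbours.
        affinity = torch.exp(-0.5 * ((g_nb - g_center) ** 2).sum(dim=1, keepdim=True))
        x_nb = x_nb * affinity                                      # adapt the neighbourhood per pixel
        # Apply the shared spatial filter to the adapted neighbourhood.
        w_flat = self.weight.view(self.weight.shape[0], -1)         # Cout x (Cin*k*k)
        return torch.einsum('oc,bchw->bohw', w_flat, x_nb.reshape(b, c * self.k * self.k, h, w))

x, guide = torch.randn(1, 16, 32, 32), torch.randn(1, 8, 32, 32)
print(PixelAdaptiveConv(16, 8)(x, guide).shape)  # torch.Size([1, 16, 32, 32])
```
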
Meta-rooms: Building and maintaining long term spatial models in a dynamic world
We present a novel method for re-creating the static structure of cluttered office environments - which we define as the "meta-room" - from multiple observations collected by an autonomous robot equipped with an RGB-D depth camera over extended periods of time.

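As a rough illustration of the idea, not the paper's pipeline, the static structure shared by repeated observations can be approximated by keeping only the voxels occupied in most of the registered sweeps and treating the rest as movable clutter. The thresholds and names below are assumptions.

```python
# Illustrative sketch: keep 3D points that reappear in most registered observations of a room.
import numpy as np

def static_structure(observations, voxel=0.05, min_fraction=0.8):
    """observations: list of (N_i, 3) point arrays already registered into a common frame."""
    counts = {}
    for cloud in observations:
        # Each observation votes at most once per occupied voxel.
        for key in {tuple(v) for v in np.floor(cloud / voxel).astype(int)}:
            counts[key] = counts.get(key, 0) + 1
    keep = [k for k, c in counts.items() if c >= min_fraction * len(observations)]
    # Centres of voxels that persist across observations approximate the static "meta-room".
    return np.array(keep, dtype=float) * voxel + voxel / 2.0

room = np.random.rand(2000, 3) * np.array([5.0, 4.0, 2.5])        # toy static structure
observations = [np.vstack([room, np.random.rand(200, 3) * 4.0])   # plus clutter that changes
                for _ in range(5)]
print(static_structure(observations).shape)                        # roughly (num_static_voxels, 3)
```
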
The STRANDS Project: Long-Term Autonomy in Everyday Environments
The Spatiotemporal Representations and Activities for Cognitive Control in Long-Term Scenarios (STRANDS) project enables long-term autonomous operation in everyday environments; we describe how our robots use their long run times to improve their own performance.

SuperDepth: Self-Supervised, Super-Resolved Monocular Depth Estimation
We propose a subpixel convolutional layer extension to self-supervised monocular disparity estimation that enables state-of-the-art performance on the public KITTI benchmark.

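The sub-pixel layer maps naturally onto a pixel-shuffle head: predict r^2 channels at low resolution and rearrange them into an r-times larger disparity map, which avoids the checkerboard artifacts of transposed convolutions. The sketch below uses assumed channel counts and an assumed output activation, not the paper's exact head.

```python
# Sketch of sub-pixel (pixel-shuffle) disparity upsampling; hyper-parameters are illustrative.
import torch
import torch.nn as nn

class SubPixelDisparityHead(nn.Module):
    """Predict disparity at r-times the feature resolution via sub-pixel convolution."""
    def __init__(self, in_channels, r=2):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, r * r, kernel_size=3, padding=1)  # r*r channels -> 1 after shuffle
        self.shuffle = nn.PixelShuffle(r)   # rearranges channels into an r x r spatial block
        self.act = nn.Sigmoid()             # bounded disparity, scaled downstream

    def forward(self, feats):
        return self.act(self.shuffle(self.proj(feats)))

disp = SubPixelDisparityHead(64, r=2)(torch.randn(1, 64, 48, 160))
print(disp.shape)  # torch.Size([1, 1, 96, 320])
```
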
Automatic Room Segmentation From Unstructured 3-D Data of Indoor Environments
We present an automatic approach for reconstructing a 2D floor plan from unstructured point clouds of building interiors, using only 3D point information.

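A toy sketch of the kind of intermediate map such an approach could start from, not the paper's algorithm: project points at wall height onto a 2D occupancy grid to obtain a floor-plan-like image that a room-partitioning step could then segment. All names and thresholds are assumptions.

```python
# Toy sketch: rasterise the (x, y) footprint of wall-height points as a crude floor-plan map.
import numpy as np

def occupancy_floorplan(points, cell=0.05, z_min=0.3, z_max=2.0, min_hits=5):
    """points: (N, 3) array of an indoor scan; returns a boolean 2D occupancy grid."""
    walls = points[(points[:, 2] > z_min) & (points[:, 2] < z_max)]
    ij = np.floor((walls[:, :2] - walls[:, :2].min(axis=0)) / cell).astype(int)
    grid = np.zeros(ij.max(axis=0) + 1, dtype=np.int32)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)      # accumulate point counts per 2D cell
    return grid >= min_hits                        # occupied where there is enough evidence

points = np.random.rand(50000, 3) * np.array([10.0, 8.0, 2.5])   # stand-in for a building scan
print(occupancy_floorplan(points).shape)
```
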
Augmented autonomy: Improving human-robot team performance in Urban search and rescue
We present an integrated system for semi-autonomous cooperative exploration, augmented by an intuitive user interface for efficient human supervision and control.

Unsupervised learning of spatial-temporal models of objects in a long-term autonomy scenario
We present a novel method for clustering the dynamic parts of indoor RGB-D scenes across repeated observations by analyzing their spatial-temporal distributions.

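A heavily simplified illustration, covering only the spatial side of the grouping: if each dynamic segment is reduced to its 3D centroid, recurring objects form dense clusters across observations that an off-the-shelf density-based method (DBSCAN here) can recover. The paper's model also reasons about temporal behaviour; the data below are toy stand-ins.

```python
# Group dynamic segments seen at different times by clustering their 3D centroids.
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: centroid (x, y, z) of one dynamic segment extracted from one observation.
centroids = np.vstack([
    np.random.normal([1.0, 2.0, 0.5], 0.05, size=(8, 3)),   # e.g. a chair seen in 8 sweeps
    np.random.normal([4.0, 1.0, 0.8], 0.05, size=(6, 3)),   # e.g. a box seen in 6 sweeps
])
labels = DBSCAN(eps=0.3, min_samples=3).fit_predict(centroids)
print(labels)   # two clusters of recurring dynamic parts; noise would be labelled -1
```
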
Efficient retrieval of arbitrary objects from long-term robot observations
We present a novel method for efficient querying and retrieval of arbitrarily shaped objects from large amounts of unstructured 3D point cloud data.

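A hedged sketch of the retrieval idea, not the paper's pipeline: describe each stored segment with a compact shape descriptor and index the descriptors in a KD-tree, so a query object is matched by nearest-neighbour lookup instead of a scan over raw point clouds. The descriptor and data below are toy assumptions.

```python
# Index stored point-cloud segments by a toy shape descriptor; answer queries with a KD-tree.
import numpy as np
from scipy.spatial import cKDTree

def shape_descriptor(cloud, bins=8, max_radius=2.0):
    """Toy descriptor: normalised histogram of point distances to the segment centroid."""
    d = np.linalg.norm(cloud - cloud.mean(axis=0), axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, max_radius))
    return hist / len(cloud)

# Stand-in database of segments collected over many robot observations.
segments = [np.random.rand(500, 3) * np.random.uniform(0.2, 1.5) for _ in range(1000)]
index = cKDTree(np.array([shape_descriptor(s) for s in segments]))

# Query with a noisy re-observation of one stored segment.
query = segments[42] + np.random.normal(0.0, 0.005, size=segments[42].shape)
dist, idx = index.query(shape_descriptor(query), k=5)
print(idx)   # indices of the stored segments most similar to the query
```
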
Semantic Labeling of Indoor Environments from 3D RGB Maps
We present an approach to automatically assign semantic labels to rooms reconstructed from 3D RGB maps of apartments.