What Goes Where: Predicting Object Distributions from Above

  • Connor Greenwell, Scott Workman, N. Jacobs
  • Published 2018
  • Computer Science
  • IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium
  • In this work, we propose a cross-view learning approach, in which images captured from a ground-level view are used as weakly supervised annotations for interpreting overhead imagery. The outcome is a convolutional neural network for overhead imagery that is capable of predicting the type and count of objects that are likely to be seen from a ground-level perspective. We demonstrate our approach on a large dataset of geotagged ground-level and overhead imagery and find that our network captures…
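The abstract describes deriving weak supervision for the overhead-image network from ground-level images. A minimal sketch of that labeling step, assuming (as the abstract suggests) that an off-the-shelf object detector is first run on each geotagged ground-level image — all class names, image identifiers, and detection values below are illustrative, not from the paper:

```python
from collections import Counter

# Hypothetical detector output: for each geotagged ground-level image,
# the list of object classes a pretrained detector reported.
detections = {
    "img_001": ["car", "car", "person"],
    "img_002": ["car", "boat"],
}

# Fixed class vocabulary (illustrative).
CLASSES = ["person", "car", "boat"]

def count_vector(labels, classes=CLASSES):
    """Turn a list of detected class labels into a fixed-order count
    vector, usable as a weak regression target for the overhead CNN."""
    counts = Counter(labels)
    return [counts[c] for c in classes]

# Per-image count targets, to be paired with the co-located overhead tile.
targets = {img: count_vector(labels) for img, labels in detections.items()}
```

Under this reading, the overhead network never sees ground-level pixels at training time — only the count vectors, which stand in for manual annotation of the overhead imagery.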


    Citations of this paper.

    Learning to Map Nearly Anything
    Remote Estimation of Free-Flow Speeds
    Building Instance Classification using Social Media Images
    Learning to Map the Visual and Auditory World
    Learning a Dynamic Map of Visual Appearance
    Image-Based Roadway Assessment Using Convolutional Neural Networks
    Modeling and Mapping Location-Dependent Human Appearance

