Solar Power Plant Detection on Multi-Spectral Satellite Imagery using Weakly-Supervised CNN with Feedback Features and m-PCNN Fusion

@inproceedings{Imamoglu2017SolarPP,
  title={Solar Power Plant Detection on Multi-Spectral Satellite Imagery using Weakly-Supervised CNN with Feedback Features and m-PCNN Fusion},
  author={Nevrez Imamoglu and Motoki Kimura and Hiroki Miyamoto and Aito Fujita and Ryosuke Nakamura},
  booktitle={BMVC},
  year={2017}
}
Most traditional convolutional neural networks (CNNs) implement a bottom-up (feed-forward) approach to image classification. However, many scientific studies demonstrate that visual perception in primates relies on both bottom-up and top-down connections. Therefore, in this work, we propose a CNN with a feedback structure for solar power plant detection on middle-resolution satellite images. To express the strength of the top-down connections, we introduce a feedback CNN network (FB…
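
The feedback idea from the abstract can be illustrated with a small, hypothetical sketch. This is not the paper's architecture: the class name FeedbackCNN, the layer sizes, and the 7-band input are assumptions chosen only to make the example self-contained. The network is run bottom-up to obtain class scores, then a second, top-down pass gates the hidden feature maps with the gradient of the target-class score, standing in for goal-driven feedback connections.

```python
# Hypothetical sketch of a CNN with a simple top-down (feedback) pass.
import torch
import torch.nn as nn

class FeedbackCNN(nn.Module):
    def __init__(self, in_channels=7, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x, target_class=None):
        feats = self.features(x)                          # bottom-up (feed-forward) pass
        logits = self.classifier(feats.mean(dim=(2, 3)))  # global average pooling -> class scores
        if target_class is None:
            return logits
        # Top-down pass: gate feature maps by the sign of the target-class gradient,
        # a crude stand-in for the goal-driven feedback described in the abstract.
        feats = feats.detach().requires_grad_(True)
        score = self.classifier(feats.mean(dim=(2, 3)))[:, target_class].sum()
        grad, = torch.autograd.grad(score, feats)
        gated = feats * (grad > 0).float()                # keep only class-relevant activations
        return self.classifier(gated.mean(dim=(2, 3))), gated

x = torch.randn(1, 7, 64, 64)                 # e.g. one 7-band multi-spectral patch (assumed size)
logits, fb_feats = FeedbackCNN()(x, target_class=1)
```
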
Citations

Verifying Rapid Increasing of Mega-Solar PV Power Plants in Japan by Applying a CNN-Based Classification Method to Satellite Images
TLDR: This study quantitatively identified the increase in the area of PV power plants while also locating each PV power plant in many prefectures of the capital region of Japan.
Performance Analysis of Deep Convolutional Autoencoders with Different Patch Sizes for Change Detection from Burnt Areas
TLDR: This study analyzed the use of deep convolutional autoencoders for classifying burnt areas, considering different sample patch sizes, and showed that the U-Net and ResUnet architectures offered the best classifications.
This Little Light of Mine: Electricity Access Mapping Using Night-time Light Data
TLDR: This work aims to improve upon an existing open-source electricity mapping tool that uses night-time light data as the main proxy of electrification, and proposes a learning model to improve the detection of electrified sites.
On the relation between landscape beauty and land cover: A case study in the U.K. at Sentinel-2 resolution with interpretable AI
TLDR: A deep learning model called ScenicNet is presented for the large-scale inventorisation of landscape scenicness from satellite imagery; a landscape beauty estimator is learned from crowdsourced scores for more than two hundred thousand landscape images in the United Kingdom.

References

Showing 1-10 of 30 references
Detection by classification of buildings in multispectral satellite imagery
TLDR: An approach that classifies multispectral satellite image patches as belonging to a building class or not by training a convolutional neural network from scratch, with an in-depth evaluation of the seven spectral bands provided by the satellite images.
Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery
TLDR: This paper proposes two scenarios for generating image features by extracting CNN features from different layers, and reveals that features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than low- and mid-level features.
Land Use Classification in Remote Sensing Images by Convolutional Neural Networks
TLDR: This work explores the use of convolutional neural networks for the semantic classification of remote sensing scenes, and resorts to pre-trained networks that are only fine-tuned on the target data, to avoid overfitting problems and reduce design time.
Convolutional Neural Network Based Automatic Object Detection on Aerial Images
TLDR: This letter presents an automatic content-based analysis of aerial imagery in order to detect and mark arbitrary objects or regions in high-resolution images, and proposes a method for automatic object detection based on a convolutional neural network.
ImageNet classification with deep convolutional neural networks
TLDR: A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes, and employed a recently developed regularization method called "dropout" that proved to be very effective.
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
TLDR: This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of convolutional neural network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image (a minimal sketch of this idea follows this reference list).
Saliency Fusion in Eigenvector Space with Multi-Channel Pulse Coupled Neural Network
TLDR: Experimental results, evaluated by precision, recall, F-measure, and area under the curve, support the reliability of the proposed method in enhancing the saliency computation.
Look and Think Twice: Capturing Top-Down Visual Attention with Feedback Convolutional Neural Networks
TLDR: The background of feedback in the human visual cortex is introduced, motivating the development of a computational feedback mechanism in deep neural networks, and a feedback loop is introduced to infer the activation status of hidden-layer neurons according to the "goal" of the network.
Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?
TLDR: ConvNets trained to recognize everyday objects, when transferred to the classification of aerial and remote sensing images, obtained the best results for aerial images; for remote sensing images they performed well but were outperformed by low-level color descriptors such as BIC.
Fully Convolutional Networks for Semantic Segmentation
TLDR: It is shown that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation.
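
As noted in the Grad-CAM entry above, the gradient-based localization idea can be sketched briefly. This is a hedged illustration, not the cited authors' code: the ResNet-18 backbone, the hook names, and the input size are assumptions chosen only to make the example self-contained.

```python
# Minimal Grad-CAM-style sketch: weight the last convolutional feature maps by the
# spatially averaged gradient of the target-class score, then ReLU and upsample to
# obtain a coarse localization heatmap.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()          # assumed backbone, randomly initialized
feature_maps, gradients = {}, {}

def save_features(module, inputs, output):
    feature_maps["value"] = output             # activations of the last conv block

def save_gradients(module, grad_input, grad_output):
    gradients["value"] = grad_output[0]        # gradient w.r.t. those activations

model.layer4.register_forward_hook(save_features)
model.layer4.register_full_backward_hook(save_gradients)

x = torch.randn(1, 3, 224, 224)                # dummy RGB image
logits = model(x)
target_class = int(logits.argmax())
logits[0, target_class].backward()             # back-propagate the target-class score

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)              # channel weights
cam = F.relu((weights * feature_maps["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)                 # normalized heatmap
```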