Remote Sensing Image Classification using Transfer Learning and Attention Based Deep Neural Network

Lam Dang Pham, Khoa Tran, Dat Thanh Ngo, Jasmin Lampert, Alexander Schindler
The task of remote sensing image scene classification (RSISC), which aims to classify remote sensing images into semantic categories based on their content, plays an important role in a wide range of applications such as urban planning, natural hazard detection, environment monitoring, vegetation mapping, and geospatial object detection. In recent years, the research community focusing on the RSISC task has made significant efforts to publish diverse datasets as well as to propose…




When Self-Supervised Learning Meets Scene Classification: Remote Sensing Scene Classification Based on a Multitask Learning Framework

The proposed multitask learning framework empowers a deep neural network to learn more discriminative features without increasing the number of parameters, and simultaneously encodes orientation information while effectively improving the accuracy of remote sensing scene classification.

Remote Sensing Image Scene Classification: Benchmark and State of the Art

A large-scale data set, termed “NWPU-RESISC45,” is proposed, which is a publicly available benchmark for REmote Sensing Image Scene Classification (RESISC), created by Northwestern Polytechnical University (NWPU).

A Lightweight and Robust Lie Group-Convolutional Neural Networks Joint Representation for Remote Sensing Scene Classification

This study introduces Lie group machine learning into the CNN model, combining both approaches to extract more discriminative and effective features, and proposes a novel network model, the Lie group regional influence network (LGRIN).

When Deep Learning Meets Metric Learning: Remote Sensing Image Scene Classification via Learning Discriminative CNNs

This paper proposes a simple but effective method to learn discriminative CNNs (D-CNNs) to boost the performance of remote sensing image scene classification and comprehensively evaluates the proposed method on three publicly available benchmark data sets using three off-the-shelf CNN models.

A Lightweight and Discriminative Model for Remote Sensing Scene Classification With Multidilation Pooling Module

This paper proposes a lightweight and effective CNN capable of maintaining high accuracy; it uses MobileNet V2 as the base network and introduces dilated convolution and channel attention to extract discriminative features.
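The channel attention mentioned above typically follows the squeeze-and-excitation pattern: each channel is globally pooled to a scalar, a small bottleneck network produces a per-channel score, and the channels are rescaled. A minimal pure-Python sketch of this idea (the weight matrices `w1` and `w2` are illustrative placeholders, not the paper's actual parameters):

```python
import math

def channel_attention(feature_maps, w1, w2):
    """Squeeze-and-excitation-style channel attention sketch.
    feature_maps: list of C channels, each a 2D list (H x W).
    w1: R x C weights (squeeze C channels down to R hidden units).
    w2: C x R weights (excite R hidden units back to C scores)."""
    C = len(feature_maps)
    # Squeeze: global average pooling per channel.
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
              for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid.
    hidden = [max(0.0, sum(w1[r][c] * pooled[c] for c in range(C)))
              for r in range(len(w1))]
    scale = [1.0 / (1.0 + math.exp(-sum(w2[c][r] * hidden[r]
                                        for r in range(len(hidden)))))
             for c in range(C)]
    # Reweight every spatial location of each channel by its score.
    return [[[v * scale[c] for v in row] for row in feature_maps[c]]
            for c in range(C)]
```

In practice this runs as tensor operations inside the network; the sketch only shows the data flow from pooling to per-channel rescaling.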

Simple Yet Effective Fine-Tuning of Deep CNNs Using an Auxiliary Classification Loss for Remote Sensing Scene Classification

This work provides best practices for fine-tuning pre-trained CNNs using the root-mean-square propagation (RMSprop) method and proposes a simple yet effective solution for tackling the vanishing gradient problem by injecting gradients at an earlier layer of the network using an auxiliary classification loss function.
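The auxiliary-loss idea above can be reduced to a simple rule: the total training loss is the main classifier's loss plus a weighted loss from an extra head attached to an earlier layer, so gradients are injected where they would otherwise vanish. A hedged sketch (the weight 0.3 is an assumed hyperparameter, not a value from the paper):

```python
import math

def cross_entropy(probs, target):
    """Negative log-likelihood of the target class."""
    return -math.log(probs[target])

def combined_loss(main_probs, aux_probs, target, aux_weight=0.3):
    """Total loss = main head CE + aux_weight * auxiliary head CE.
    Backpropagating the auxiliary term injects gradients at the
    earlier layer the auxiliary head is attached to."""
    return (cross_entropy(main_probs, target)
            + aux_weight * cross_entropy(aux_probs, target))
```

At inference time the auxiliary head is discarded; only the main classifier's output is used.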

AID: A Benchmark Data Set for Performance Evaluation of Aerial Scene Classification

The Aerial Image Dataset (AID), a large-scale dataset for aerial scene classification, is described to advance the state of the art in scene classification of remote sensing images, and its reported results can serve as baselines on this benchmark.

APDC-Net: Attention Pooling-Based Convolutional Network for Aerial Scene Classification

This letter proposes an attention pooling-based dense connected convolutional network (APDC-Net) for aerial scene classification that uses a simplified dense connection structure as the backbone to preserve features from different levels and introduces a multi-level supervision strategy.
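Attention pooling, as opposed to plain average pooling, scores each spatial location and takes a softmax-weighted sum of the location feature vectors. A minimal sketch of the mechanism (the scoring function here is an assumed simple sum over feature dimensions; APDC-Net learns its scores):

```python
import math

def attention_pool(features):
    """Attention pooling sketch: softmax over per-location scores,
    then a weighted sum of the location feature vectors.
    features: list of N locations, each a list of D feature values."""
    scores = [sum(f) for f in features]        # placeholder scoring
    m = max(scores)                            # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(features[0])
    return [sum(weights[i] * features[i][d] for i in range(len(features)))
            for d in range(dim)]
```

With uniform scores this reduces to average pooling; learned scores let informative locations dominate the pooled descriptor.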

Multi-Granularity Canonical Appearance Pooling for Remote Sensing Scene Classification

A novel Multi-Granularity Canonical Appearance Pooling (MG-CAP) to automatically capture the latent ontological structure of remote sensing datasets is proposed and a granular framework that allows progressively cropping the input image to learn multi-grained features is designed.

Data Augmentation Using Random Image Cropping and Patching for Deep CNNs

A new data augmentation technique called random image cropping and patching (RICAP) is proposed, which randomly crops four images and patches them together to create a new training image, achieving a new state-of-the-art test error of 2.19% on CIFAR-10.
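The RICAP procedure can be sketched concretely: sample a boundary point, crop four images to the four resulting sub-regions, patch them into one image, and mix the labels in proportion to each patch's area. A pure-Python sketch for square grayscale images (the Beta parameter 0.3 and the 2D-list image representation are assumptions for illustration):

```python
import random

def ricap(images, labels, size, beta=0.3):
    """RICAP sketch: patch crops from four random images into one
    training image of shape size x size, with area-weighted labels.
    images: list of 2D lists (grayscale), each size x size."""
    # Sample the patch boundary point from Beta(beta, beta).
    w = min(max(int(round(size * random.betavariate(beta, beta))), 0), size)
    h = min(max(int(round(size * random.betavariate(beta, beta))), 0), size)
    widths = [w, size - w, w, size - w]
    heights = [h, h, size - h, size - h]
    crops, label_weights = [], []
    for k in range(4):
        i = random.randrange(len(images))
        wk, hk = widths[k], heights[k]
        x = random.randint(0, size - wk)   # random crop position
        y = random.randint(0, size - hk)
        crops.append([row[x:x + wk] for row in images[i][y:y + hk]])
        # Label weight is the patch's share of the total area.
        label_weights.append((labels[i], wk * hk / (size * size)))
    # Assemble: top-left | top-right above bottom-left | bottom-right.
    top = [crops[0][r] + crops[1][r] for r in range(h)]
    bottom = [crops[2][r] + crops[3][r] for r in range(size - h)]
    return top + bottom, label_weights
```

The mixed label weights always sum to 1, so the training loss becomes an area-weighted sum of the four source labels' losses.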