Corpus ID: 236986836

Self-supervised Contrastive Learning for Irrigation Detection in Satellite Imagery

@article{Agastya2021SelfsupervisedCL,
  title={Self-supervised Contrastive Learning for Irrigation Detection in Satellite Imagery},
  author={Chitra Agastya and Sirak Ghebremusse and Ian Anderson and Colorado Reed and Hossein Vahabi and Alberto Todeschini},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.05484}
}
Climate change has reduced river runoff and aquifer recharge, making crop water demand increasingly unsustainable against shrinking freshwater availability. Achieving food security while deploying water sustainably will remain a major challenge, necessitating careful monitoring and tracking of agricultural water usage. Historically, monitoring water usage has been a slow and expensive manual process with many imperfections and abuses. Machine learning and…


References

Showing 1–10 of 18 references
Bigearthnet: A Large-Scale Benchmark Archive for Remote Sensing Image Understanding
Experimental results on remote sensing image scene classification show that a shallow convolutional neural network trained on BigEarthNet achieves much higher accuracy than a state-of-the-art CNN model pre-trained on ImageNet.
NASA Making Earth System Data Records for Use in Research Environments (MEaSUREs) Global Food Security-support Analysis Data (GFSAD) Cropland Extent 2015 Australia, New Zealand, China, Mongolia 30 m V001
This NASA MEaSUREs GFSAD data product provides 30 m cropland extent data for 2015 over Australia, New Zealand, China, and Mongolia.
SelfAugment: Automatic Augmentation Policies for Self-Supervised Learning
This work shows that evaluating learned representations with a self-supervised image-rotation task correlates strongly with a standard set of supervised evaluations, and provides an algorithm (SelfAugment) that automatically and efficiently selects augmentation policies without using supervised evaluations.
Self-supervised Pretraining of Visual Features in the Wild
The final SElf-supERvised (SEER) model, a RegNetY with 1.3B parameters trained on 1B random images using 512 GPUs, achieves 84.2% top-1 accuracy, surpassing the best self-supervised pretrained model by 1% and confirming that self-supervised learning works in a real-world setting.
Self-Supervised Pretraining Improves Self-Supervised Pretraining
Hierarchical PreTraining (HPT) initializes the pretraining process with an existing pretrained model, decreasing convergence time and improving accuracy, and provides a simple framework for obtaining better pretrained representations with fewer computational resources.
Data-Efficient Image Recognition with Contrastive Predictive Coding
This work revisits and improves Contrastive Predictive Coding, an unsupervised objective for learning representations that make the variability in natural signals more predictable, producing features that support state-of-the-art linear classification accuracy on ImageNet.
Representation Learning with Contrastive Predictive Coding
This work proposes a universal unsupervised learning approach, Contrastive Predictive Coding, for extracting useful representations from high-dimensional data, and demonstrates strong performance across four distinct domains: speech, images, text, and reinforcement learning in 3D environments (a minimal sketch of its InfoNCE objective appears after this reference list).
Learning Representations by Maximizing Mutual Information Across Views
This work develops a model that learns image representations which significantly outperform prior methods on the tasks considered, and extends the model to mixture-based representations, where segmentation behaviour emerges as a natural side effect.
Momentum Contrast for Unsupervised Visual Representation Learning
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder (a sketch of this queue-and-momentum mechanism appears after the reference list).
Automatic Augmentation Policies for Self-Supervised Learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021.
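
Both Contrastive Predictive Coding references above train with the InfoNCE objective: each encoded query must identify its matching ("positive") key among a set of negatives, which reduces to a cross-entropy classification over similarity scores. The sketch below is a minimal, self-contained PyTorch rendering of that idea; the function name `info_nce_loss`, the 0.07 temperature, and the tensor shapes are illustrative assumptions, not code from any of the cited papers.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(queries, positives, negatives, temperature=0.07):
    """InfoNCE: classify the positive key among negatives for each query.

    queries:   (N, D) query embeddings
    positives: (N, D) matching key embeddings
    negatives: (K, D) shared pool of negative key embeddings
    """
    q = F.normalize(queries, dim=1)
    k_pos = F.normalize(positives, dim=1)
    k_neg = F.normalize(negatives, dim=1)

    # Positive logits: one similarity per query, shape (N, 1).
    l_pos = torch.einsum("nd,nd->n", q, k_pos).unsqueeze(1)
    # Negative logits: similarity of each query to every negative, shape (N, K).
    l_neg = torch.einsum("nd,kd->nk", q, k_neg)

    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    # The positive key sits at index 0 for every query.
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

Unit-normalizing the embeddings turns the dot products into cosine similarities, the common convention in these papers.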
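
The MoCo reference adds two mechanisms on top of such a loss: a key encoder updated as an exponential moving average of the query encoder, and a FIFO queue of recent keys serving as a large pool of negatives. Below is a minimal sketch under assumed choices (`queue_size=4096`, `momentum=0.999`, and a pair of caller-supplied encoder modules); it reuses the hypothetical `info_nce_loss` above and is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoCoQueue(nn.Module):
    """Momentum-contrast bookkeeping: EMA key encoder + FIFO negative queue."""

    def __init__(self, encoder_q, encoder_k, dim=128, queue_size=4096, momentum=0.999):
        super().__init__()
        self.encoder_q, self.encoder_k = encoder_q, encoder_k
        self.momentum = momentum
        # Initialize the key encoder as a frozen copy of the query encoder.
        for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
            p_k.data.copy_(p_q.data)
            p_k.requires_grad = False
        self.register_buffer("queue", F.normalize(torch.randn(queue_size, dim), dim=1))
        self.register_buffer("ptr", torch.zeros(1, dtype=torch.long))

    @torch.no_grad()
    def _momentum_update(self):
        # EMA update: the key encoder slowly tracks the query encoder.
        for p_q, p_k in zip(self.encoder_q.parameters(), self.encoder_k.parameters()):
            p_k.data.mul_(self.momentum).add_(p_q.data, alpha=1.0 - self.momentum)

    @torch.no_grad()
    def _enqueue(self, keys):
        # Replace the oldest entries in the FIFO queue with the new keys
        # (assumes queue_size is a multiple of the batch size).
        n = keys.size(0)
        ptr = int(self.ptr)
        self.queue[ptr:ptr + n] = keys
        self.ptr[0] = (ptr + n) % self.queue.size(0)

    def forward(self, im_q, im_k):
        q = F.normalize(self.encoder_q(im_q), dim=1)
        with torch.no_grad():
            self._momentum_update()
            k = F.normalize(self.encoder_k(im_k), dim=1)
        loss = info_nce_loss(q, k, self.queue.clone().detach())
        self._enqueue(k)
        return loss
```

Because gradients never flow through the key encoder, the queue can hold far more negatives than a single batch could provide, which is the point of the design.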