Corpus ID: 239998745

Training Lightweight CNNs for Human-Nanodrone Proximity Interaction from Small Datasets using Background Randomization

@article{Ferri2021TrainingLC,
  title={Training Lightweight CNNs for Human-Nanodrone Proximity Interaction from Small Datasets using Background Randomization},
  author={Marco Ferri and Dario Mantegazza and Elia Cereda and Nicky Zimmerman and Luca Maria Gambardella and Daniele Palossi and J{\'e}r{\^o}me Guzzi and Alessandro Giusti},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14491}
}
We consider the task of visually estimating the pose of a human from images acquired by a nearby nano-drone; in this context, we propose a data augmentation approach based on synthetic background substitution to learn a lightweight CNN model from a small real-world training set. Experimental results on data from two different labs show that the approach improves generalization to unseen environments.
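The paper does not include code, so the following is only a minimal Python/NumPy sketch of the background-substitution idea described in the abstract: each real training frame is composited onto a randomly drawn replacement background. It assumes a precomputed binary foreground (person) mask per frame; the function names, the mask assumption, and the uniform background sampling are illustrative choices, not taken from the paper.

import numpy as np

def substitute_background(image, mask, background):
    # image:      H x W x 3 uint8 camera frame
    # mask:       H x W array, 1 where the person is, 0 elsewhere
    # background: H x W x 3 uint8 replacement background
    mask3 = mask[..., None].astype(image.dtype)
    # Keep the masked foreground, fill the rest with the new background.
    return image * mask3 + background * (1 - mask3)

def randomize_backgrounds(images, masks, backgrounds, seed=None):
    # Pair every training frame with a background drawn uniformly at
    # random from a pool of synthetic backgrounds (hypothetical setup).
    rng = np.random.default_rng(seed)
    out = [
        substitute_background(img, m, backgrounds[rng.integers(len(backgrounds))])
        for img, m in zip(images, masks)
    ]
    return np.stack(out)

Drawing a fresh background for every frame (or every epoch) is what makes this a randomization scheme rather than a fixed preprocessing step: the network sees the same person under many unrelated backgrounds and is pushed to rely on the subject, not the environment.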
