Corpus ID: 189762389

Visual Wake Words Dataset

@article{Chowdhery2019VisualWW,
  title={Visual Wake Words Dataset},
  author={Aakanksha Chowdhery and Pete Warden and Jonathon Shlens and Andrew G. Howard and Rocky Rhodes},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.05721}
}
The emergence of Internet of Things (IoT) applications requires intelligence on the edge. […] We anticipate the proposed dataset will advance research on tiny vision models that can push the Pareto-optimal boundary in terms of accuracy versus memory usage for microcontroller applications.
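
The dataset itself relabels COCO images with a binary person/not-person label, assigning "person" when at least one person bounding box covers more than 0.5% of the image area. A minimal sketch of that relabeling, assuming pycocotools and a local COCO annotation file (the path and split handling are simplified placeholders, not the paper's exact pipeline):

    from pycocotools.coco import COCO

    coco = COCO("annotations/instances_train2014.json")  # placeholder path
    person_id = coco.getCatIds(catNms=["person"])[0]

    labels = {}
    for img_id in coco.getImgIds():
        img = coco.loadImgs(img_id)[0]
        img_area = img["width"] * img["height"]
        ann_ids = coco.getAnnIds(imgIds=img_id, catIds=[person_id], iscrowd=None)
        anns = coco.loadAnns(ann_ids)
        # Label 1 ("person") if any person box exceeds 0.5% of the image area.
        labels[img_id] = int(any(a["area"] > 0.005 * img_area for a in anns))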

Citations

μNAS: Constrained Neural Architecture Search for Microcontrollers

TLDR
This work builds a neural architecture search (NAS) system, called μNAS, to automate the design of such small-yet-powerful MCU-level networks, and shows that on a variety of image classification datasets μNAS improves top-1 classification accuracy and reduces memory footprint, representing a significant advance in resource-efficient models.

MCUNet: Tiny Deep Learning on IoT Devices

TLDR
MCUNet, a framework that jointly designs the efficient neural architecture (TinyNAS) and the lightweight inference engine (TinyEngine), enabling ImageNet-scale inference on microcontrollers, is proposed, suggesting that the era of always-on tiny machine learning on IoT devices has arrived.

TinyML: Enabling of Inference Deep Learning Models on Ultra-Low-Power IoT Edge Devices for AI Applications

TLDR
An overview of the TinyML revolution and a review of TinyML studies is provided, wherein the main contribution is an analysis of the types of ML models used in TinyML studies, the datasets involved, and the types and characteristics of the devices.

Differentiable Network Pruning for Microcontrollers

TLDR
This work presents a differentiable structured network pruning method for convolutional neural networks, which integrates a model’s MCU-specific resource usage and parameter importance feedback to obtain highly compressed yet accurate classification models.

MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers

TLDR
This paper employs differentiable NAS (DNAS) to search for models with low memory usage and low op count, where op count is treated as a viable proxy for latency, and obtains state-of-the-art results for all three TinyMLperf industry-standard benchmark tasks.
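
For a standard convolution, the op count used as that proxy follows directly from the output size and kernel dimensions; a minimal sketch (this is the standard MAC-count formula, not code from the paper):

    # Sketch: MAC count of a standard 2D convolution, the kind of op-count
    # proxy for latency that DNAS-style searches optimize against.
    def conv2d_macs(h_out: int, w_out: int, c_in: int, c_out: int, k: int) -> int:
        return h_out * w_out * c_out * c_in * k * k

    # e.g. a 3x3 conv producing a 32x32x64 map from 32 input channels:
    print(conv2d_macs(32, 32, 32, 64, 3))  # 18,874,368 MACs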

Machine Learning for Microcontroller-Class Hardware - A Review

TLDR
This paper characterizes a closed-loop, widely applicable workflow of machine learning model development for microcontroller-class devices and shows that several classes of applications adopt a specific instance of it.

MSNet: Structural Wired Neural Architecture Search for Internet of Things

TLDR
The preliminary experimental results on IoT applications demonstrate that the MSNet crafted by MSNAS outperforms MobileNetV2 and MnasNet by 3.0% in accuracy, with 20% less peak memory consumption and similar Mult-Adds.

Leveraging Automated Mixed-Low-Precision Quantization for Tiny Edge Microcontrollers

TLDR
An automated mixed-precision quantization flow based on the HAQ framework but tailored to the memory and computational characteristics of MCU devices is presented, showing the viability of uniform quantization, required for MCU deployments, for deep weight compression.

On-Device Training Under 256KB Memory

TLDR
This framework is the first practical solution for on-device transfer learning of visual recognition on tiny IoT devices, using less than 1/100 of the memory of existing frameworks while matching the accuracy of cloud training plus edge deployment on the tinyML application VWW.
...

References

Showing 1-10 of 26 references

SpArSe: Sparse Architecture Search for CNNs on Resource-Constrained Microcontrollers

TLDR
It is demonstrated that it is possible to automatically design CNNs which generalize well, while also being small enough to fit onto memory-limited MCUs, and the CNNs found are more accurate and up to 4.35× smaller than previous approaches, while meeting the strict MCU working memory constraint.

Hello Edge: Keyword Spotting on Microcontrollers

TLDR
It is shown that it is possible to optimize these neural network architectures to fit within the memory and compute constraints of microcontrollers without sacrificing accuracy, and the depthwise separable convolutional neural network (DS-CNN) is explored and compared against other neural network architectures.
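
For context, the DS-CNN building block factors a standard convolution into a per-channel depthwise step followed by a 1x1 pointwise step. A minimal Keras sketch, with illustrative (not the paper's exact) filter count and kernel size:

    # Sketch of one depthwise separable convolution block, DS-CNN style.
    from tensorflow.keras import layers

    def ds_conv_block(x, filters):
        x = layers.DepthwiseConv2D(kernel_size=3, padding="same")(x)  # per-channel spatial conv
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
        x = layers.Conv2D(filters, kernel_size=1)(x)  # 1x1 pointwise channel mixing
        x = layers.BatchNormalization()(x)
        return layers.ReLU()(x)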

CMSIS-NN: Efficient Neural Network Kernels for Arm Cortex-M CPUs

TLDR
CMSIS-NN, efficient kernels developed to maximize the performance and minimize the memory footprint of neural network (NN) applications on Arm Cortex-M processors targeted for intelligent IoT edge devices are presented.

NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications

TLDR
An algorithm that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget while maximizing the accuracy, and achieves better accuracy versus latency trade-offs on both mobile CPU and mobile GPU, compared with the state-of-the-art automated network simplification algorithms.

Not All Ops Are Created Equal!

TLDR
It is shown that throughput and energy vary by up to 5X across different neural network operation types on an off-the-shelf Arm Cortex-M7 microcontroller, and that memory required for activation data also needs to be considered, apart from the model parameters, in network architecture exploration studies.
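
The activation-memory point generalizes: for a simple sequential model, peak working memory is roughly the largest sum of a layer's co-resident input and output buffers, independent of parameter count. A back-of-the-envelope sketch with made-up layer shapes:

    # Rough peak activation memory for a sequential model: during each layer,
    # its input and output buffers coexist in RAM.
    # Shapes and the int8 element size are illustrative assumptions.
    shapes = [(96, 96, 3), (48, 48, 16), (24, 24, 32), (12, 12, 64)]  # HxWxC per layer
    bytes_per_elem = 1  # int8 activations

    def tensor_bytes(shape):
        n = 1
        for d in shape:
            n *= d
        return n * bytes_per_elem

    peak = max(tensor_bytes(a) + tensor_bytes(b) for a, b in zip(shapes, shapes[1:]))
    print(f"peak activation memory ~ {peak / 1024:.1f} KiB")  # largest adjacent pair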

Rethinking Machine Learning Development and Deployment for Edge Devices

TLDR
This paper proposes a new ML development and deployment approach that is specially designed and optimized for inference-only deployment on edge devices and demonstrates that this approach can address all the deployment challenges and result in more efficient and high-quality solutions.

MnasNet: Platform-Aware Neural Architecture Search for Mobile

TLDR
An automated mobile neural architecture search (MNAS) approach is proposed, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and latency.
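
That objective takes the form ACC(m) × (LAT(m)/T)^w, trading accuracy against a latency target T; a one-function sketch, with values chosen in the spirit of the paper's soft-constraint setting rather than guaranteed to match its exact configuration:

    # Sketch of a MnasNet-style latency-aware search reward:
    # maximize acc * (latency / target) ** w, where w < 0 penalizes slow models.
    def mnas_reward(acc: float, latency_ms: float,
                    target_ms: float = 75.0, w: float = -0.07) -> float:
        return acc * (latency_ms / target_ms) ** w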

ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

TLDR
An extremely computation-efficient CNN architecture named ShuffleNet is introduced, which is designed specially for mobile devices with very limited computing power (e.g., 10-150 MFLOPs), to greatly reduce computation cost while maintaining accuracy.
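
The channel shuffle at the heart of ShuffleNet is just a reshape-transpose-reshape that lets grouped convolutions exchange information across groups; a minimal NumPy sketch, assuming NHWC layout:

    import numpy as np

    def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
        n, h, w, c = x.shape
        assert c % groups == 0
        # reshape -> swap the group and per-group channel axes -> flatten back
        x = x.reshape(n, h, w, groups, c // groups)
        x = x.transpose(0, 1, 2, 4, 3)
        return x.reshape(n, h, w, c)

    x = np.arange(2 * 4 * 4 * 8, dtype=np.float32).reshape(2, 4, 4, 8)
    y = channel_shuffle(x, groups=2)  # channels interleaved across the 2 groups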

Trained Ternary Quantization

TLDR
This work proposes Trained Ternary Quantization (TTQ), a method that can reduce the precision of weights in neural networks to ternary values and even improve the accuracy of some models (32-, 44-, and 56-layer ResNets) on CIFAR-10 and AlexNet on ImageNet.
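
The quantization step maps each weight to {-Wn, 0, +Wp} around a per-layer threshold, with the two scales learned during training; the sketch below shows only the thresholding, and the t = 0.05 threshold factor follows our reading of the paper rather than its full training recipe:

    import numpy as np

    def ternarize(w: np.ndarray, wp: float = 1.0, wn: float = 1.0,
                  t: float = 0.05) -> np.ndarray:
        delta = t * np.abs(w).max()   # per-layer threshold
        q = np.zeros_like(w)
        q[w > delta] = wp             # positive scale, learned in TTQ
        q[w < -delta] = -wn           # negative scale, learned in TTQ
        return q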

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference

TLDR
A quantization scheme is proposed that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating-point inference on commonly available integer-only hardware.
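
The scheme represents real values as r ≈ S·(q − Z), with integer q, a real scale S, and an integer zero point Z; a minimal sketch of quantize/dequantize for 8-bit tensors (a simplified per-tensor, min/max-calibrated variant, not the paper's full quantization-aware training recipe):

    import numpy as np

    def quantize(x: np.ndarray, num_bits: int = 8):
        qmin, qmax = 0, 2 ** num_bits - 1
        scale = max((x.max() - x.min()) / (qmax - qmin), 1e-8)  # avoid zero scale
        zero_point = int(round(qmin - x.min() / scale))
        q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
        return q, scale, zero_point

    def dequantize(q: np.ndarray, scale: float, zero_point: int):
        return scale * (q.astype(np.float32) - zero_point)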