Local Binary Pattern Networks

Jeng-Hau Lin, Yunfan Yang, Rajesh K. Gupta, Zhuowen Tu. Local Binary Pattern Networks. 2020 IEEE Winter Conference on Applications of Computer Vision (WACV).
Emerging edge devices such as sensor nodes are increasingly tasked with non-trivial workloads, including sensor-data processing and even application-level inference on that data. These devices are, however, extraordinarily resource-constrained in CPU power (often Cortex M0-M3-class CPUs), available memory (a few KB to a few MB), and energy. Under these constraints, we explore a novel approach to character recognition using local binary pattern networks, or LBPNet, that can…
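The abstract's key idea is that LBPNet replaces arithmetic with pixel comparisons. A minimal sketch of the classic local binary pattern descriptor it builds on: each pixel is compared with its 8 neighbours and the comparison bits are packed into one byte. This is the handcrafted LBP operator, not the paper's learned variant; all names here are illustrative.

```python
def lbp_code(img, r, c):
    """Compute the 8-bit LBP code for pixel (r, c) of a 2-D list `img`."""
    center = img[r][c]
    # Clockwise neighbour offsets starting at the top-left corner.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dr, dc) in enumerate(offsets):
        # Pure comparison: no multiplication or addition on pixel values.
        if img[r + dr][c + dc] >= center:
            code |= 1 << bit
    return code

image = [
    [5, 9, 1],
    [4, 6, 7],
    [2, 8, 3],
]
print(lbp_code(image, 1, 1))  # → 42
```

Because only comparisons and bit shifts are involved, the operator maps naturally onto the multiplier-free hardware the abstract targets.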
Accelerating Local Binary Pattern Networks with Software-Programmable FPGAs
This paper implements and optimizes an alternative genre of network, the local binary pattern network (LBPNet), which replaces arithmetic operations with combinatorial operations, substantially boosting the efficiency of hardware implementation.
Vulnerability of Hardware Neural Networks to Dynamic Operation Point Variations
Robust neural network architectures, including the binarized neural network (BNN) and the local binary pattern network (LBPNet), are explored to address this variability issue, which has become a major bottleneck for practical applications.
Intelligence Beyond the Edge: Inference on Intermittent Embedded Systems
This paper designs and implements SONIC, an intermittence-aware software system with specialized support for DNN inference, and introduces loop continuation, a new technique that dramatically reduces the cost of guaranteeing correct intermittent execution for loop-heavy code like DNN inference.
Patch Attention Layer of Embedding Handcrafted Features in CNN for Facial Expression Recognition
A novel method based on patches of interest, the Patch Attention Layer (PAL) of embedding handcrafted features, is proposed to learn the local shallow facial features of each patch on face images.
Automatic Real-Time Road Crack Identification System
This work focuses on detecting street-surface cracks with Computer Vision algorithms: a Convolutional Neural Network, U-Net, and Local Binary Patterns.
Embedded software for robotics: challenges and future directions: special session
This paper surveys recent challenges and solutions in the design, implementation, and verification of embedded software for robotics. Emphasis is placed on mobile robots, like self-driving cars. In…


XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
The Binary-Weight-Network version of AlexNet is compared with recent network-binarization methods, BinaryConnect and BinaryNet, and outperforms them by large margins on ImageNet: more than 16% in top-1 accuracy.
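XNOR-Net's efficiency comes from the fact that a dot product over {-1, +1} vectors reduces to an XNOR followed by a popcount. A hedged sketch, with each vector packed into an integer bitmask (bit i encodes element i; a set bit means +1); the function name and encoding are illustrative:

```python
def binary_dot(a_bits, w_bits, n):
    """Dot product of two {-1,+1}^n vectors packed as n-bit integers."""
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ w_bits) & mask   # bit set where the signs agree
    matches = bin(xnor).count("1")     # popcount of the agreement mask
    return 2 * matches - n             # agreements minus disagreements

# a = [+1, -1, +1, +1] packs to 0b1101; w = [+1, +1, -1, +1] packs to 0b1011.
print(binary_dot(0b1101, 0b1011, 4))  # → 0, matching (1 - 1 - 1 + 1)
```

On hardware, the XNOR and popcount each cover a whole machine word per instruction, which is where the reported speedups over floating-point multiply-accumulate come from.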
FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
FINN, a framework for building fast and flexible FPGA accelerators, is presented. It uses a heterogeneous streaming architecture that implements fully connected, convolutional, and pooling layers, with per-layer compute resources tailored to user-provided throughput requirements.
Binarized Neural Networks
A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
Sparse Convolutional Neural Networks
This work shows how to reduce the redundancy in these parameters using a sparse decomposition, and proposes an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Networks (SCNN) models.
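The CPU-side kernel such sparse models depend on can be illustrated with a minimal compressed-sparse-row (CSR) matrix-vector product; only stored non-zeros are touched, which is the whole point of pruning. The arrays below are illustrative, not from the paper:

```python
def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a matrix A stored in CSR form (data, indices, indptr)."""
    y = []
    for row in range(len(indptr) - 1):
        acc = 0.0
        # Iterate over this row's stored non-zeros only.
        for k in range(indptr[row], indptr[row + 1]):
            acc += data[k] * x[indices[k]]
        y.append(acc)
    return y

# CSR encoding of the dense matrix [[2, 0, 0], [0, 0, 3], [0, 4, 0]].
data, indices, indptr = [2.0, 3.0, 4.0], [0, 2, 1], [0, 1, 2, 3]
print(csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0]))  # → [2.0, 3.0, 4.0]
```

With 3 stored values instead of 9, the inner loop does a third of the multiply-adds of the dense product; at the >90% sparsity typical after pruning, the savings dominate.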
Resource-efficient Machine Learning in 2 KB RAM for the Internet of Things
Bonsai can make predictions in milliseconds even on slow microcontrollers, can fit in KB of memory, has lower battery consumption than all other algorithms, and achieves prediction accuracies that can be as much as 30% higher than state-of-the-art methods for resource-efficient machine learning.
Dynamic Network Surgery for Efficient DNNs
A novel network-compression method called dynamic network surgery is proposed; it remarkably reduces network complexity through on-the-fly connection pruning and is shown to outperform a recent pruning method by considerable margins.
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
BinaryConnect is introduced: a method for training a DNN with binary weights during the forward and backward propagations, while retaining the precision of the stored weights in which gradients are accumulated. Near state-of-the-art results are obtained with BinaryConnect on permutation-invariant MNIST, CIFAR-10, and SVHN.
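The core BinaryConnect mechanism described above, binarize for propagation but update the stored real-valued weights, can be sketched in a few lines. This is a toy linear model with a squared-error loss, not the paper's training setup; all names are illustrative.

```python
def sign(v):
    return 1.0 if v >= 0 else -1.0

def train_step(real_w, x, target, lr=0.1):
    """One SGD step on the toy model y = <binarize(w), x>."""
    bin_w = [sign(w) for w in real_w]              # binarize for propagation
    y = sum(wb * xi for wb, xi in zip(bin_w, x))   # forward with binary weights
    err = y - target                               # d(0.5*err^2)/dy
    # Key idea: the gradient update lands on the *real-valued* weights,
    # so small updates accumulate instead of being rounded away.
    return [w - lr * err * xi for w, xi in zip(real_w, x)]

w = [0.05, -0.2]
for _ in range(20):
    w = train_step(w, [1.0, 1.0], target=-2.0)
print([sign(v) for v in w])  # → [-1.0, -1.0], i.e. the binary output hits -2
```

If the update were applied to the binary weights directly, any step smaller than 2 would vanish; accumulating in full precision is what lets training converge.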
Accelerating Binarized Convolutional Neural Networks with Software-Programmable FPGAs
The design of a BNN accelerator is presented that is synthesized from C++ to FPGA-targeted Verilog and outperforms existing FPGA-based CNN accelerators in GOPS as well as in energy and resource efficiency.
Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration
This paper proposes BCNNs with Separable Filters (BCNNw/SF), which applies Singular Value Decomposition (SVD) to BCNN kernels to further reduce computational and storage complexity.
Local Binary Convolutional Neural Networks
Empirically, CNNs with LBC layers, called local binary convolutional neural networks (LBCNN), achieve performance parity with regular CNNs on a range of visual datasets while enjoying significant computational savings.