Publications
FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search
TLDR: This work proposes a differentiable neural architecture search (DNAS) framework that uses gradient-based methods to optimize ConvNet architectures, avoiding the need to enumerate and train individual architectures separately as in previous methods.
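The core trick is to make the architecture choice itself differentiable. Below is a minimal PyTorch sketch of that idea (not the paper's actual code): a hypothetical `MixedOp` holds a few candidate blocks and mixes their outputs with Gumbel-softmax weights, so the architecture logits are trained by ordinary gradient descent alongside the network weights.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Differentiable choice among candidate ops (illustrative sketch of the DNAS idea)."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        # Candidate blocks: depthwise-separable convs with different kernel sizes (assumed choices).
        self.ops = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels, bias=False),
                nn.Conv2d(channels, channels, 1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])
        # Architecture parameters: one logit per candidate, updated by gradient descent.
        self.alpha = nn.Parameter(torch.zeros(len(kernel_sizes)))

    def forward(self, x, temperature=5.0):
        # Gumbel-softmax gives a soft, differentiable selection over the candidates.
        weights = F.gumbel_softmax(self.alpha, tau=temperature)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Usage: architecture logits and conv weights are optimized jointly with SGD/Adam.
layer = MixedOp(channels=16)
out = layer(torch.randn(2, 16, 32, 32))
```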
SqueezeSeg: Convolutional Neural Nets with Recurrent CRF for Real-Time Road-Object Segmentation from 3D LiDAR Point Cloud
TLDR: SqueezeSeg is an end-to-end pipeline based on convolutional neural networks (CNNs) that takes a transformed LiDAR point cloud as input and directly outputs a point-wise label map, which is then refined by a conditional random field (CRF) implemented as a recurrent layer.
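The "transformed" input refers to projecting the 3D point cloud onto a dense 2D grid that a CNN can consume. Below is a rough NumPy sketch of that kind of spherical projection; the grid size and vertical field of view are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def spherical_project(points, h=64, w=512, fov_up=2.0, fov_down=-24.8):
    """Project an (N, 4) array of LiDAR points (x, y, z, intensity) onto an
    H x W grid indexed by zenith and azimuth angles."""
    x, y, z, intensity = points[:, 0], points[:, 1], points[:, 2], points[:, 3]
    r = np.sqrt(x**2 + y**2 + z**2) + 1e-8

    azimuth = np.arctan2(y, x)        # horizontal angle
    zenith = np.arcsin(z / r)         # vertical angle

    fov_up_rad, fov_down_rad = np.radians(fov_up), np.radians(fov_down)
    u = (1.0 - (zenith - fov_down_rad) / (fov_up_rad - fov_down_rad)) * h  # row index
    v = 0.5 * (1.0 - azimuth / np.pi) * w                                  # column index

    u = np.clip(np.floor(u), 0, h - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, w - 1).astype(np.int32)

    grid = np.zeros((h, w, 5), dtype=np.float32)  # channels: x, y, z, intensity, range
    grid[u, v] = np.stack([x, y, z, intensity, r], axis=1)
    return grid
```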
SqueezeSegV2: Improved Model Structure and Unsupervised Domain Adaptation for Road-Object Segmentation from a LiDAR Point Cloud
TLDR: This work introduces a new model, SqueezeSegV2, that is more robust against dropout noise in LiDAR point clouds and therefore achieves a significant accuracy improvement, together with a domain-adaptation training pipeline consisting of three major components: learned intensity rendering, geodesic correlation alignment, and progressive domain calibration.
SqueezeDet: Unified, Small, Low Power Fully Convolutional Neural Networks for Real-Time Object Detection for Autonomous Driving
TLDR: SqueezeDet is a fully convolutional neural network for object detection that aims to simultaneously satisfy the accuracy, real-time speed, model-size, and power constraints of autonomous driving, and it achieves state-of-the-art accuracy on the KITTI benchmark.
Shift: A Zero FLOP, Zero Parameter Alternative to Spatial Convolutions
TLDR: This work proposes a parameter-free, FLOP-free "shift" operation as an alternative to spatial convolutions, fusing shifts with point-wise convolutions to construct end-to-end trainable shift-based modules, with a hyperparameter characterizing the tradeoff between accuracy and efficiency.
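To make the idea concrete, here is a toy PyTorch sketch of a shift-plus-pointwise-convolution module; the `ShiftConv` name and the use of `torch.roll` are my own simplifications rather than the paper's implementation. Each channel group is displaced by a fixed spatial offset, so all learned parameters sit in the 1x1 convolution.

```python
import torch
import torch.nn as nn

class ShiftConv(nn.Module):
    """Shift + point-wise convolution (illustrative sketch of the zero-FLOP shift idea)."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # A 3x3 neighbourhood gives 9 fixed spatial offsets, one per channel group.
        self.offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):
        chunks = torch.chunk(x, len(self.offsets), dim=1)
        shifted = [
            torch.roll(c, shifts=(dy, dx), dims=(2, 3))  # zero parameters, negligible FLOPs
            for c, (dy, dx) in zip(chunks, self.offsets)
        ]
        return self.pointwise(torch.cat(shifted, dim=1))

# Usage: spatial mixing comes from the shifts, channel mixing from the 1x1 conv.
m = ShiftConv(18, 32)
y = m(torch.randn(1, 18, 28, 28))
```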
ChamNet: Towards Efficient Network Design Through Platform-Aware Model Adaptation
TLDR: This work proposes a novel algorithm that searches for optimal architectures aided by efficient accuracy and resource (latency and/or energy) predictors; the results show that adapting computation resources to building blocks is critical to model performance.
SqueezeNext: Hardware-Aware Neural Network Design
TLDR: This work introduces SqueezeNext, a new family of neural network architectures whose design was guided both by consideration of previous architectures such as SqueezeNet and by simulation results on a neural network accelerator.
FBNetV2: Differentiable Neural Architecture Search for Spatial and Channel Dimensions
TLDR: This work proposes a memory- and computationally efficient DNAS variant, DMaskingNAS, that expands the search space by up to 10^14x over conventional DNAS, supporting searches over spatial and channel dimensions that are otherwise prohibitively expensive: input resolution and number of filters.
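One way to read the channel-dimension search is as masking a single shared convolution at several candidate widths and mixing the masks differentiably. The PyTorch sketch below illustrates that idea under assumed candidate widths; `ChannelMaskedConv` is a hypothetical name, not the paper's module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelMaskedConv(nn.Module):
    """Search over the number of filters by masking one shared convolution (loose sketch)."""
    def __init__(self, in_channels, max_out=32, candidates=(8, 16, 24, 32)):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, max_out, 3, padding=1)
        self.alpha = nn.Parameter(torch.zeros(len(candidates)))
        # One binary mask per candidate width, applied to the shared conv output.
        masks = torch.zeros(len(candidates), max_out)
        for i, width in enumerate(candidates):
            masks[i, :width] = 1.0
        self.register_buffer("masks", masks)

    def forward(self, x):
        out = self.conv(x)                                # computed once, reused by all widths
        weights = F.gumbel_softmax(self.alpha, tau=5.0)   # differentiable width selection
        mask = (weights[:, None] * self.masks).sum(dim=0) # weighted mix of channel masks
        return out * mask[None, :, None, None]

y = ChannelMaskedConv(3)(torch.randn(2, 3, 16, 16))
```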
Mixed Precision Quantization of ConvNets via Differentiable Neural Architecture Search
TLDR: A novel differentiable neural architecture search (DNAS) framework is proposed to efficiently explore the exponential search space of mixed-precision quantization with gradient-based optimization, surpassing the state-of-the-art compression of ResNet on CIFAR-10 and ImageNet.
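A minimal sketch of how such a search can work, assuming per-layer weight bit-width as the only choice: the same convolution weights are fake-quantized at each candidate bit-width and the resulting outputs are mixed with Gumbel-softmax weights, with a straight-through estimator keeping gradients flowing through the rounding step. The names and bit choices here are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fake_quantize(w, bits):
    """Uniform symmetric fake quantization with a straight-through gradient estimator."""
    levels = 2 ** (bits - 1) - 1
    scale = w.abs().max() / levels + 1e-8
    q = torch.round(w / scale).clamp(-levels, levels) * scale
    return w + (q - w).detach()  # forward: quantized values; backward: identity

class MixedPrecisionConv(nn.Module):
    """Differentiable choice of weight bit-width for one conv layer (illustrative sketch)."""
    def __init__(self, in_ch, out_ch, bit_choices=(2, 4, 8)):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1, bias=False)
        self.bit_choices = bit_choices
        self.alpha = nn.Parameter(torch.zeros(len(bit_choices)))

    def forward(self, x):
        weights = F.gumbel_softmax(self.alpha, tau=5.0)
        # Mix the outputs produced by the weights quantized at each candidate bit-width.
        outs = [
            F.conv2d(x, fake_quantize(self.conv.weight, b), padding=1)
            for b in self.bit_choices
        ]
        return sum(w * o for w, o in zip(weights, outs))

y = MixedPrecisionConv(3, 16)(torch.randn(2, 3, 32, 32))
```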
Visual Transformers: Token-based Image Representation and Processing for Computer Vision
TLDR: This work represents images as a set of visual tokens and applies visual transformers to densely model relationships between visual semantic concepts, finding that this paradigm of token-based image representation and processing drastically outperforms its convolutional counterparts on image classification and semantic segmentation.
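A rough PyTorch sketch of the tokenization step, under the assumption that tokens are formed by spatial attention over a feature map; the `Tokenizer` name and the sizes are illustrative, not the paper's exact module. Each of the L attention maps pools the pixels into one token, and the resulting (B, L, C) tensor can then be fed to a standard transformer.

```python
import torch
import torch.nn as nn

class Tokenizer(nn.Module):
    """Turn a feature map into a small set of visual tokens via spatial attention (sketch)."""
    def __init__(self, channels, num_tokens=16):
        super().__init__()
        # One attention map per token, computed from the feature map itself.
        self.to_attn = nn.Conv2d(channels, num_tokens, kernel_size=1)

    def forward(self, x):                     # x: (B, C, H, W)
        attn = self.to_attn(x).flatten(2)     # (B, L, H*W)
        attn = attn.softmax(dim=-1)           # each token attends over all pixels
        feats = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        return attn @ feats                   # (B, L, C): L visual tokens

tokens = Tokenizer(channels=256)(torch.randn(1, 256, 14, 14))
# `tokens` can then be processed by a standard transformer (e.g. nn.TransformerEncoder).
```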