Corpus ID: 166228098

SpecNet: Spectral Domain Convolutional Neural Network

Bochen Guan, Jinnian Zhang, William A. Sethares, Richard Kijowski, Fang Liu
The memory consumption of most Convolutional Neural Network (CNN) architectures grows rapidly with network depth, which is a major constraint on efficient training and inference on modern GPUs, whose memory remains limited. [...] Key method: SpecNet exploits a configurable threshold to force small values in the feature maps to zero, allowing the feature maps to be stored sparsely.
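The thresholding idea can be sketched in a few lines of NumPy. The function name and the reading of the threshold as a fraction of the peak spectral magnitude are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def sparsify_spectral_feature_map(fmap, threshold=0.2):
    """Move a spatial feature map into the frequency domain and zero out
    coefficients whose magnitude is below a fraction of the peak, so the
    result can be kept in a sparse storage format. (Illustrative sketch.)"""
    spec = np.fft.fft2(fmap)                          # spectral representation
    mask = np.abs(spec) >= threshold * np.abs(spec).max()
    return np.where(mask, spec, 0.0)                  # small values forced to zero

fmap = np.random.default_rng(0).normal(size=(8, 8))
sparse_spec = sparsify_spectral_feature_map(fmap)
density = np.count_nonzero(sparse_spec) / sparse_spec.size  # fraction of kept coefficients
```

Raising the threshold trades reconstruction fidelity for a sparser, cheaper-to-store map.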
A Low-complexity Complex-valued Activation Function for Fast and Accurate Spectral Domain Convolutional Neural Network
Proposes a complex-valued activation function for spectral-domain CNNs that transmits only input values with a positive real or imaginary component; it is computationally inexpensive in both forward and backward propagation and provides sufficient nonlinearity to ensure high classification accuracy.
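A minimal sketch of such an activation, assuming the simplest reading of the rule (zero out entries whose real and imaginary parts are both non-positive); the function name is illustrative, not from the paper:

```python
import numpy as np

def complex_relu_like(z):
    """Transmit complex values whose real OR imaginary part is positive;
    zero out entries where both components are non-positive."""
    keep = (z.real > 0) | (z.imag > 0)
    return np.where(keep, z, 0)

z = np.array([1 + 2j, -1 + 2j, -1 - 2j, 1 - 2j])
out = complex_relu_like(z)  # only the -1-2j entry is suppressed
```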
SMOF: Squeezing More Out of Filters Yields Hardware-Friendly CNN Pruning
Develops a CNN pruning framework called SMOF, which Squeezes More Out of Filters by reducing both the kernel size and the number of filter channels; it is friendly to standard hardware without customized low-level implementations and can be deployed effortlessly with significant running-time reduction.
Robust Learning with Frequency Domain Regularization
Introduces a new regularization method that constrains the frequency spectra of the model's filters, defending against adversarial perturbations and improving generalization in transfer-learning scenarios without fine-tuning.
Image-Based River Water Level Estimation for Redundancy Information Using Deep Neural Network
Proposes to automate the monitoring and management of water levels, using image processing of a staff gauge for measurement and three deep neural network models to estimate the water level.
Stress Classification Using Photoplethysmogram-Based Spatial and Frequency Domain Images
Uses frequency-domain images generated from spatial-domain images of the inter-beat intervals (IBI) extracted from the PPG signal to classify an individual's stress state, building both person-specific models and calibrated generic models.
Physics-informed machine learning: case studies for weather and climate modelling
Surveys systematic approaches to incorporating physics and domain knowledge into ML models, distills these approaches into broad categories, and shows how they have been used successfully for emulating, downscaling, and forecasting weather and climate processes.
Deep learning risk assessment models for predicting progression of radiographic medial joint space loss over a 48-MONTH follow-up period.
DL models using baseline knee X-rays had higher diagnostic performance for predicting the progression of radiographic joint space loss than the traditional model using demographic and radiographic risk factors.


Beyond Filters: Compact Feature Map for Portable Deep Model
Focuses on the redundancy in feature maps arising from the large number of filters in a CNN layer, and proposes to extract an intrinsic representation of the feature maps while preserving the discriminability of the features.
Spectral Representations for Convolutional Neural Networks
This work proposes spectral pooling, which performs dimensionality reduction by truncating the representation in the frequency domain, and demonstrates the effectiveness of complex-coefficient spectral parameterization of convolutional filters.
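Spectral pooling can be sketched with NumPy FFTs. The helper below is an illustrative reconstruction (center-crop of the shifted spectrum, with a rescaling that preserves the mean), not the authors' exact implementation:

```python
import numpy as np

def spectral_pool(x, out_h, out_w):
    """Downsample by keeping only the central (low-frequency) block of
    the shifted 2-D spectrum, then transforming back to the spatial domain."""
    h, w = x.shape
    spec = np.fft.fftshift(np.fft.fft2(x))            # DC moved to the center
    top, left = (h - out_h) // 2, (w - out_w) // 2
    cropped = spec[top:top + out_h, left:left + out_w]
    # rescale so the mean intensity survives the change of grid size
    scale = (out_h * out_w) / (h * w)
    return np.real(np.fft.ifft2(np.fft.ifftshift(cropped)) * scale)

x = np.random.default_rng(1).normal(size=(16, 16))
y = spectral_pool(x, 8, 8)  # 16x16 -> 8x8
```

Unlike max pooling, the output size is freely configurable and the truncation discards the highest frequencies first.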
Spectral-based convolutional neural network without multiple spatial-frequency domain switchings
Presents an efficient spectral-based CNN model that uses only the lower-frequency components by fusing the convolutional and sub-sampling layers, and introduces a frequency-domain equivalent of the conventional batch normalization layer that improves the accuracy of the network.
Gist: Efficient Data Encoding for Deep Neural Network Training
Investigates widely used DNNs, finds that the major contributors to memory footprint are intermediate layer outputs (feature maps), and introduces a framework for DNN-layer-specific optimizations that significantly reduce this source of main-memory pressure on GPUs.
Fast Training of Convolutional Networks through FFTs
This work presents a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations.
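The speed-up rests on the convolution theorem: pointwise multiplication of spectra equals circular convolution in the spatial domain. A small sketch verifying that equivalence (function names are illustrative):

```python
import numpy as np

def fft_conv2d(x, k):
    """Circular 2-D convolution computed as pointwise multiplication of
    spectra (the convolution theorem); kernel zero-padded to image size."""
    k_pad = np.zeros_like(x)
    k_pad[:k.shape[0], :k.shape[1]] = k
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k_pad)))

def direct_circular_conv2d(x, k):
    """Reference direct implementation of the same circular convolution."""
    out = np.zeros_like(x)
    for a in range(k.shape[0]):
        for b in range(k.shape[1]):
            out += k[a, b] * np.roll(x, (a, b), axis=(0, 1))
    return out

rng = np.random.default_rng(2)
x = rng.normal(size=(8, 8))
k = rng.normal(size=(3, 3))
# the two results agree up to floating-point error
```

The FFT route replaces an O(N^2 K^2) sliding-window computation with transforms and an elementwise product, which pays off for larger kernels and batch sizes.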
FCNN: Fourier Convolutional Neural Networks
Proposes the Fourier Convolutional Neural Network (FCNN), in which training is conducted entirely within the Fourier domain, showing a significant speed-up in training time without loss of effectiveness.
Sparse Convolutional Neural Networks
Shows how to reduce the redundancy in CNN parameters using a sparse decomposition, and proposes an efficient sparse matrix multiplication algorithm on CPU for Sparse Convolutional Neural Network (SCNN) models.
The Power of Sparsity in Convolutional Neural Networks
2D convolution is generalized to use a channel-wise sparse connection structure, and it is shown that this leads to significantly better results than the baseline approach for large networks, including VGG and Inception V3.
Exploiting Kernel Sparsity and Entropy for Interpretable CNN Compression
  Yuchao Li, Shaohui Lin, +5 authors, R. Ji. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019.
KSE compresses each layer simultaneously and efficiently, running significantly faster than previous data-driven feature-map pruning methods, and significantly outperforms state-of-the-art methods.
Learning Structured Sparsity in Deep Neural Networks
The results show that for CIFAR-10, regularization on layer depth can reduce a 20-layer Deep Residual Network to 18 layers while improving the accuracy from 91.25% to 92.60%, which is still slightly higher than that of the original 32-layer ResNet.
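Structured sparsity of this kind is typically learned by adding a group-Lasso penalty over whole structures (e.g. filters) to the training loss; the minimal sketch below is an assumed formulation with illustrative names, not the paper's exact code:

```python
import numpy as np

def group_lasso_penalty(weights):
    """Sum of L2 norms over whole filters (output channels): this drives
    entire filters to zero rather than scattered individual weights,
    which hardware can actually exploit."""
    # weights shape: (out_channels, in_channels, kh, kw)
    return np.sqrt((weights ** 2).sum(axis=(1, 2, 3))).sum()

w = np.ones((4, 3, 3, 3))
w[0] = 0.0                      # an already-pruned filter adds nothing
p = group_lasso_penalty(w)      # 3 remaining filters, each with norm sqrt(27)
```

Because the penalty is non-differentiable only at exactly-zero groups, optimization tends to snap whole filters to zero, which is what permits removing layers or channels outright.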