Deep convolutional neural networks based on semi-discrete frames

  • Thomas Wiatowski, Helmut Bölcskei
  • Published 21 April 2015
  • Computer Science, Mathematics
  • 2015 IEEE International Symposium on Information Theory (ISIT)
Deep convolutional neural networks have led to breakthrough results in practical feature extraction applications. The mathematical analysis of these networks was pioneered by Mallat [1]. Specifically, Mallat considered so-called scattering networks based on identical semi-discrete wavelet frames in each network layer, and proved translation-invariance as well as deformation stability of the resulting feature extractor. The purpose of this paper is to develop Mallat's theory further by allowing… 
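The cascade the abstract describes — a wavelet-type convolution followed by a modulus nonlinearity in each layer, with features read off by averaging — can be sketched in a few lines. This is a minimal illustration only: the Gaussian band-pass bank, the dyadic center frequencies, and all function names below are assumptions for the sketch, not the semi-discrete frames analyzed in the paper.

```python
import numpy as np

def bandpass_bank(n, num_scales):
    """Bank of dyadic band-pass filters in the Fourier domain.
    Gaussian profiles stand in for proper wavelet frame elements
    (an illustrative assumption, not the paper's construction)."""
    freqs = np.fft.fftfreq(n)
    bank = []
    for j in range(num_scales):
        center = 0.25 / (2 ** j)  # dyadic center frequencies
        width = center / 2.0
        bank.append(np.exp(-((np.abs(freqs) - center) ** 2) / (2 * width ** 2)))
    return bank

def scattering_layer(x, bank):
    """One layer: convolve with each filter (via FFT), take the modulus."""
    X = np.fft.fft(x)
    return [np.abs(np.fft.ifft(X * h)) for h in bank]

def scattering_features(x, bank):
    """Two-layer scattering cascade: zeroth-, first-, and second-order
    coefficients obtained by iterating wavelet-modulus and averaging."""
    feats = [x.mean()]                      # zeroth-order coefficient
    for u1 in scattering_layer(x, bank):
        feats.append(u1.mean())             # first-order coefficients
        for u2 in scattering_layer(u1, bank):
            feats.append(u2.mean())         # second-order coefficients
    return np.array(feats)

x = np.cos(2 * np.pi * 40 * np.linspace(0, 1, 256))
features = scattering_features(x, bandpass_bank(256, 3))
```

Because the final averaging here is global and the convolutions are circular, the feature vector is invariant to circular shifts of the input — a toy analogue of the translation invariance proved for the actual feature extractor.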

Figures from this paper

A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction

This paper complements Mallat’s results by developing a theory that encompasses general convolutional transforms, or in more technical parlance, general semi-discrete frames, and establishes deformation sensitivity bounds that apply to signal classes such as, e.g., band-limited functions, cartoon functions, and Lipschitz functions.

Geometric Scattering on Measure Spaces

A general, unified model for geometric scattering on measure spaces is introduced and a new criterion that identifies to which groups a useful representation should be invariant is proposed and shown to guarantee that the scattering transform has desirable stability and invariance properties.

Gabor frames and deep scattering networks in audio processing

Numerical evidence is given, by evaluation on a synthetic and a "real" data set, that the invariances encoded by the Gabor scattering transform lead to higher performance than using the Gabor transform alone, especially when few training samples are available.

DCFNet: Deep Neural Network with Decomposed Convolutional Filters

This paper proposes to decompose convolutional filters in a CNN as a truncated expansion over pre-fixed bases, namely the Decomposed Convolutional Filters network (DCFNet), where the expansion coefficients are still learned from data.

Lipschitz Properties for Deep Convolutional Networks

The stability properties of convolutional neural networks are discussed, a formula for computing the Lipschitz bound is given, and the bound is compared with other methods to show it is closer to the optimal value.

Proposal for Qualifying Exam

The application of the scattering transform, and of a new variant that uses the shearlet frame in place of the Morlet wavelets commonly used in the standard scattering transform, to the classification of objects using sonar is explored.

Stability of the scattering transform for deformations with minimal regularity

Within the mathematical analysis of deep convolutional neural networks, the wavelet scattering transform introduced by Stéphane Mallat is a unique example of how the ideas of multiscale analysis…

Pedestrian detection algorithms using shearlets

The designed shearlets can characterize edge points in $\mathbb{R}^2$, and their type, by the decay rates and limits of the shearlet transform for decreasing scales, and are adapted to the requirements of a practical pedestrian detection algorithm.

Geometric Wavelet Scattering Networks on Compact Riemannian Manifolds

A geometric scattering transform on manifolds is defined, based on a cascade of wavelet filters and pointwise nonlinearities; it generalizes the deformation stability and local translation invariance of Euclidean scattering and demonstrates the importance of linking the filter structures used to the underlying geometry of the data.

Underwater object classification using scattering transform of sonar signals

This paper applies the scattering transform (ST)—a nonlinear map based on a convolutional neural network (CNN)—to the classification of underwater objects using sonar signals, achieving effective binary classification both on a real dataset of unexploded ordnance and on synthetically generated examples.



ImageNet classification with deep convolutional neural networks

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.

Invariant Scattering Convolution Networks

  • Joan Bruna, S. Mallat
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2013
The mathematical analysis of wavelet scattering networks explains important properties of deep convolution networks for classification.

Convolutional networks and applications in vision

New unsupervised learning algorithms, and new non-linear stages that allow ConvNets to be trained with very few labeled samples are described, including one for visual object recognition and vision navigation for off-road mobile robots.

A Wavelet Tour of Signal Processing - The Sparse Way, 3rd Edition

The central concept of sparsity is explained and applied to signal compression, noise reduction, and inverse problems, while coverage is given to sparse representations in redundant dictionaries, super-resolution and compressive sensing applications.

Continuous curvelet transform: II. Discretization and frames

Ridgelet-type Frame Decompositions for Sobolev Spaces related to Linear Transport

In this paper we study stability properties of ridgelet and curvelet frames for mixed-smoothness Sobolev spaces with norm $\|f\|_{s} = \|f\|_{L_{2}(\mathbb{R}^{d})} + \|s\cdot\nabla f\|_{L_{2}(\mathbb{R}^{d})}$.

Deep Scattering Spectrum

A scattering transform defines a locally translation-invariant representation which is stable to time-warping deformation. It extends MFCC representations by computing modulation spectrum coefficients through cascades of wavelet convolutions and modulus operators.

Representation Learning: A Review and New Perspectives

Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.

Group Invariant Scattering

This paper constructs translation-invariant operators on $L^2(\mathbb{R}^d)$ which are Lipschitz-continuous to the action of diffeomorphisms; scattering operators are then extended to $L^2(G)$, where $G$ is a compact Lie group, and are invariant under the action of $G$.

Cartoon Approximation with α-Curvelets

This paper considers the more general case of piecewise $C^\alpha$-functions, separated by a $C^\alpha$ singularity curve for $\alpha \in (1,2)$, and introduces $\alpha$-curvelets, systems that interpolate between wavelet systems on the one hand and curvelet systems on the other.