Corpus ID: 219573367

NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity

@article{Lee2020NanoFlowSN,
  title={NanoFlow: Scalable Normalizing Flows with Sublinear Parameter Complexity},
  author={Sang-gil Lee and Sungwon Kim and Sungroh Yoon},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.06280}
}
Normalizing flows (NFs) have become a prominent method for deep generative models that allow analytic probability density estimation and efficient synthesis. However, flow-based networks are considered inefficient in parameter complexity because the reduced expressiveness of bijective mappings renders the models prohibitively expensive in terms of parameters. We present an alternative parameterization scheme, called NanoFlow, which uses a single neural density estimator to…
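The abstract is truncated above, but the core idea it names, a single neural density estimator reused across flow steps, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the authors' implementation: it shares one coupling network across all affine-coupling steps and gives each step only a small embedding and an output projection (names such as `SharedCouplingNet`, `step_embed`, and `flow_forward` are invented for this sketch).

```python
# Minimal sketch of the NanoFlow idea: K flow steps share ONE density
# estimator; per-step parameters are limited to an embedding and a projection.
import torch
import torch.nn as nn

class SharedCouplingNet(nn.Module):
    def __init__(self, dim, hidden=256, num_steps=8, embed_dim=32):
        super().__init__()
        self.num_steps = num_steps
        # One shared estimator used by every flow step (sublinear parameter growth).
        self.shared = nn.Sequential(
            nn.Linear(dim // 2 + embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Per-step parameters are tiny: a step embedding and an output projection.
        self.step_embed = nn.Embedding(num_steps, embed_dim)
        self.proj = nn.ModuleList(
            [nn.Linear(hidden, dim) for _ in range(num_steps)]  # scale & shift
        )

    def forward(self, x_a, step):
        idx = torch.full((x_a.size(0),), step, dtype=torch.long, device=x_a.device)
        h = self.shared(torch.cat([x_a, self.step_embed(idx)], dim=-1))
        log_s, t = self.proj[step](h).chunk(2, dim=-1)
        return log_s, t

def flow_forward(net, x):
    """Apply all flow steps; returns z and the accumulated log-determinant."""
    logdet = x.new_zeros(x.size(0))
    for k in range(net.num_steps):
        x_a, x_b = x.chunk(2, dim=-1)
        log_s, t = net(x_a, k)
        x_b = x_b * torch.exp(log_s) + t        # affine coupling
        logdet = logdet + log_s.sum(dim=-1)
        x = torch.cat([x_b, x_a], dim=-1)       # swap halves between steps
    return x, logdet
```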
Flow-based Generative Models for Learning Manifold to Manifold Mappings
TLDR
The main result is the design of a two-stream version of GLOW (flow-based invertible generative models) that can synthesize information of a field of one type of manifold-valued measurements given another.
Distilling the Knowledge from Conditional Normalizing Flows
TLDR
This work investigates whether one can distill flow-based models into more efficient alternatives and provides a positive answer by proposing a simple distillation approach and demonstrating its effectiveness on state-of-the-art conditional flow-based models for image super-resolution and speech synthesis.
Distilling the Knowledge from Normalizing Flows
TLDR
A positive answer to the question of whether one can distill knowledge from flow-based models into more efficient alternatives is provided by proposing a simple distillation approach and demonstrating its effectiveness on state-of-the-art conditional flow-based models for image super-resolution and speech synthesis.
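Both versions above describe distilling a conditional flow into a cheaper feed-forward student. As a rough, hedged sketch of the general recipe (the exact objectives in the cited papers may differ), the student can be trained to reproduce the teacher's outputs on shared noise and conditioning; `teacher` and `student` below are assumed callables.

```python
# Hedged sketch of sample-based distillation from a conditional flow.
import torch

def distill_step(teacher, student, optimizer, cond, noise_dim):
    noise = torch.randn(cond.size(0), noise_dim, device=cond.device)
    with torch.no_grad():
        target = teacher(noise, cond)          # expensive invertible teacher
    pred = student(noise, cond)                # cheap non-invertible student
    loss = torch.nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```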
FlowVocoder: A small Footprint Neural Vocoder based Normalizing flow for Speech Synthesis
TLDR
This paper proposes a new type of autoregressive neural vocoder called FlowVocoder, which has a small memory footprint, is able to generate high-fidelity audio in real time, and is therefore more suitable for real-time text-to-speech applications.
Improving Continuous Normalizing Flows Using a Multi-Resolution Framework
Recent work has shown that Continuous Normalizing Flows (CNFs) can serve as generative models of images with exact likelihood calculation and invertible generation/density estimation. In this work we…
Multi-Resolution Continuous Normalizing Flows
TLDR
A Multi-Resolution variant of Continuous Normalizing Flows (MRCNF) is proposed, characterizing the conditional distribution over the additional information required to generate a fine image consistent with the coarse image, and introducing a transformation between resolutions that leaves the log-likelihood unchanged.
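The "transformation between resolutions that leaves the log-likelihood unchanged" can be illustrated, under the assumption of an orthonormal map, with a Haar-style split: the coarse image and the residual details come from an orthogonal 2x2 mixing whose log-determinant is zero. This is only a sketch of the idea, not the paper's exact parameterization.

```python
# Orthonormal 2x2 Haar split: coarse image plus three detail maps, |det| = 1.
import torch

def haar_split(x):
    """x: (B, C, H, W) with even H, W -> coarse (B, C, H/2, W/2) and 3 detail maps."""
    a = x[:, :, 0::2, 0::2]
    b = x[:, :, 0::2, 1::2]
    c = x[:, :, 1::2, 0::2]
    d = x[:, :, 1::2, 1::2]
    coarse  = (a + b + c + d) / 2
    detail1 = (a - b + c - d) / 2
    detail2 = (a + b - c - d) / 2
    detail3 = (a - b - c + d) / 2
    # Orthonormal linear map => zero log-determinant contribution.
    return coarse, (detail1, detail2, detail3)
```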
PortaSpeech: Portable and High-Quality Generative Text-to-Speech
  • Yi Ren, Jinglin Liu, Zhou Zhao
  • Engineering, Computer Science
  • ArXiv
  • 2021
TLDR
PortaSpeech is proposed, a portable and high-quality generative text-to-speech model that outperforms other TTS models in both voice quality and prosody modeling in terms of subjective and objective evaluation metrics, and shows only a slight performance degradation when reducing the model parameters to 6.7M.
Can Kernel Transfer Operators Help Flow based Generative Models?
  • 2020
Flow-based generative models refer to deep generative models with tractable likelihoods, and offer several attractive properties including efficient density estimation and sampling. Despite many…

References

Showing 1-10 of 32 references
WaveFlow: A Compact Flow-based Model for Raw Audio
TLDR
WaveFlow provides a unified view of likelihood-based models for 1-D data, including WaveNet and WaveGlow as special cases, while synthesizing several orders of magnitude faster as it only requires a few sequential steps to generate very long waveforms with hundreds of thousands of time-steps.
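The claim that WaveFlow needs only a few sequential steps follows from folding the 1-D waveform into a short 2-D array and running the autoregression only along the short axis; the sketch below illustrates that folding (the height h = 16 is an illustrative choice, not necessarily the paper's setting).

```python
# Fold a waveform of length T into an (h, T/h) array; autoregression over the
# height dimension then costs only h sequential steps at synthesis time.
import torch

def squeeze_waveform(x, h=16):
    """x: (B, T) with T divisible by h -> (B, h, T // h), column-major fold."""
    b, t = x.shape
    return x.reshape(b, t // h, h).transpose(1, 2)
```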
Flow++: Improving Flow-Based Generative Models with Variational Dequantization and Architecture Design
TLDR
Flow++ is proposed, a new flow-based model that is now the state-of-the-art non-autoregressive model for unconditional density estimation on standard image benchmarks, and has begun to close the significant performance gap that has so far existed between autoregressive models and flow-based models.
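Variational dequantization, named in the title, replaces uniform dequantization noise with noise drawn from a learned conditional q(u|x), which tightens the variational bound on the discrete likelihood. A minimal sketch follows; `deq_flow` and `model_log_prob` are assumed callables, not APIs from the paper's code.

```python
# ELBO on the discrete likelihood: log P(x) >= E_{u~q}[log p(x + u) - log q(u|x)].
import torch

def dequantized_nll(model_log_prob, deq_flow, x_discrete):
    # x_discrete: integer pixel values in [0, 255], shape (B, D)
    u, log_q = deq_flow(x_discrete)            # u in [0, 1)^D and log q(u|x)
    x_cont = (x_discrete.float() + u) / 256.0
    # (the constant log-det of the /256 rescaling is omitted for brevity)
    elbo = model_log_prob(x_cont) - log_q
    return -elbo.mean()
```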
Glow: Generative Flow with Invertible 1x1 Convolutions
TLDR
Glow, a simple type of generative flow using an invertible 1x1 convolution, is proposed, demonstrating that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images.
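The invertible 1x1 convolution at the heart of Glow is a learned C x C channel-mixing matrix whose log-determinant enters the likelihood once per spatial position. A minimal sketch (omitting Glow's actnorm and LU parameterization) is below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Invertible1x1Conv(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Start from a random orthogonal matrix so the layer is invertible.
        w = torch.linalg.qr(torch.randn(channels, channels))[0]
        self.weight = nn.Parameter(w)

    def forward(self, x):                      # x: (B, C, H, W)
        _, _, h, w = x.shape
        logdet = h * w * torch.slogdet(self.weight)[1]
        y = F.conv2d(x, self.weight.unsqueeze(-1).unsqueeze(-1))
        return y, logdet

    def inverse(self, y):
        w_inv = torch.inverse(self.weight)
        return F.conv2d(y, w_inv.unsqueeze(-1).unsqueeze(-1))
```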
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
TLDR
This work presents two parameter-reduction techniques to lower memory consumption and increase the training speed of BERT, and uses a self-supervised loss that focuses on modeling inter-sentence coherence.
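One of ALBERT's two parameter-reduction techniques, cross-layer parameter sharing, is the closest analogue to NanoFlow's shared estimator: a single Transformer layer is applied repeatedly, so depth grows without growing the parameter count. A hedged sketch (dimensions are illustrative):

```python
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, d_model=256, nhead=4, num_layers=12):
        super().__init__()
        self.layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.num_layers = num_layers           # depth grows, parameters do not

    def forward(self, x):
        for _ in range(self.num_layers):
            x = self.layer(x)                  # same weights at every depth
        return x
```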
Improved Variational Inference with Inverse Autoregressive Flow
TLDR
A new type of normalizing flow, inverse autoregressive flow (IAF), is proposed that, in contrast to earlier published flows, scales well to high-dimensional latent spaces and significantly improves upon diagonal Gaussian approximate posteriors.
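A single IAF step can be sketched as follows: because the scale and shift for dimension i depend only on dimensions before i of the input noise, the transform and its log-determinant are computed in one parallel pass, which is what lets IAF scale to high-dimensional latents. `made` stands in for any masked autoregressive network; this is an illustrative sketch, not the paper's exact gated form.

```python
import torch

def iaf_step(made, eps):
    # made(eps) must respect autoregressive masking:
    # (mu_i, log_sigma_i) are functions of eps_{<i} only.
    mu, log_sigma = made(eps).chunk(2, dim=-1)
    z = eps * torch.exp(log_sigma) + mu
    logdet = log_sigma.sum(dim=-1)             # triangular Jacobian
    return z, logdet
```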
Masked Autoregressive Flow for Density Estimation
TLDR
This work describes an approach for increasing the flexibility of an autoregressive model, based on modelling the random numbers that the model uses internally when generating data, which is called Masked Autoregressive Flow.
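MAF evaluates densities by inverting the generator: the data are mapped back to the "random numbers" the model would have used, and the log-density is the base density of those numbers plus a triangular Jacobian term. A minimal sketch, with `made` again an assumed masked autoregressive network:

```python
import torch

def maf_log_prob(made, base_log_prob, x):
    mu, log_alpha = made(x).chunk(2, dim=-1)   # mu_i, alpha_i depend on x_{<i}
    u = (x - mu) * torch.exp(-log_alpha)       # recover the latent noise
    return base_log_prob(u) - log_alpha.sum(dim=-1)
```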
NICE: Non-linear Independent Components Estimation
We propose a deep learning framework for modeling complex high-dimensional densities called Non-linear Independent Component Estimation (NICE). It is based on the idea that a good representation is…
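The coupling layers introduced by NICE are additive: one half of the input is shifted by an arbitrary function of the other half, so inversion is exact and the transform is volume-preserving (zero log-determinant). A minimal sketch:

```python
import torch

def additive_coupling(net, x):
    x_a, x_b = x.chunk(2, dim=-1)
    y_b = x_b + net(x_a)                       # any network; invertibility is free
    return torch.cat([x_a, y_b], dim=-1)       # log-det contribution is 0

def additive_coupling_inverse(net, y):
    y_a, y_b = y.chunk(2, dim=-1)
    return torch.cat([y_a, y_b - net(y_a)], dim=-1)
```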
How to Train Your Neural ODE: the World of Jacobian and Kinetic Regularization
TLDR
This paper introduces a theoretically-grounded combination of both optimal transport and stability regularizations which encourage neural ODEs to prefer simpler dynamics out of all the dynamics that solve a problem well, leading to faster convergence and to fewer discretizations of the solver.
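The two regularizers named above, a kinetic-energy term on the ODE dynamics and a Frobenius-norm penalty on their Jacobian, can be sketched with a crude fixed-step rollout; a real CNF would use an adaptive solver, and the Hutchinson estimator below is the standard trick for avoiding an explicit Jacobian. This is an illustrative sketch only, not the paper's implementation.

```python
import torch

def regularized_rollout(f, z0, n_steps=8, t1=1.0):
    """f(z, t) gives dz/dt; returns final state plus the two regularizers."""
    dt = t1 / n_steps
    z = z0.requires_grad_(True)                # z0 should be a leaf tensor
    kinetic, jac_norm = 0.0, 0.0
    for i in range(n_steps):
        t = torch.full((z.size(0), 1), i * dt, device=z.device)
        dz = f(z, t)
        eps = torch.randn_like(z)              # Hutchinson probe vector
        vjp = torch.autograd.grad(dz, z, eps, create_graph=True)[0]
        kinetic  = kinetic  + (dz ** 2).sum(dim=-1).mean() * dt   # ||f||^2
        jac_norm = jac_norm + (vjp ** 2).sum(dim=-1).mean() * dt  # ~||df/dz||_F^2
        z = z + dt * dz                        # Euler step
    return z, kinetic, jac_norm
```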
TraDE: Transformers for Density Estimation
TLDR
TraDE is presented, an attention-based architecture for auto-regressive density estimation that employs a Maximum Likelihood loss and a Maximum Mean Discrepancy two-sample loss to ensure that samples from the estimate resemble the training data.
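The Maximum Mean Discrepancy two-sample loss mentioned in the summary compares model samples with training data through a kernel; a hedged sketch with a Gaussian kernel follows (the bandwidth and weighting are illustrative assumptions, not the paper's settings).

```python
import torch

def gaussian_kernel(a, b, bandwidth=1.0):
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(x_data, x_model, bandwidth=1.0):
    """Squared MMD between data samples and model samples."""
    k_dd = gaussian_kernel(x_data,  x_data,  bandwidth).mean()
    k_mm = gaussian_kernel(x_model, x_model, bandwidth).mean()
    k_dm = gaussian_kernel(x_data,  x_model, bandwidth).mean()
    return k_dd + k_mm - 2 * k_dm
```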
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TLDR
A new language representation model, BERT, designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.