Vadim Lebedev

We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks, based on tensor decomposition and discriminative fine-tuning. Given a layer, we use non-linear least squares to compute a low-rank CP-decomposition of the 4D convolution kernel tensor into a sum of a small number of rank-one tensors. At the …
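The CP step described in this abstract can be sketched with a plain NumPy alternating-least-squares routine. This is a generic CP-ALS illustration, not the authors' non-linear least-squares solver; the function names and the toy tensor shape below are invented for the example:

```python
import numpy as np

def khatri_rao(mats):
    """Column-wise Khatri-Rao product of matrices sharing the same column count."""
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, out.shape[1])
    return out

def cp_als(T, rank, n_iter=300, seed=0):
    """Rank-R CP decomposition of an N-way tensor via alternating least squares.

    Returns one factor matrix per mode; T is approximated by the sum of
    `rank` rank-one tensors built from matching factor columns.
    """
    rng = np.random.default_rng(seed)
    factors = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for n in range(T.ndim):
            # Mode-n unfolding (C order keeps the remaining axes in original order).
            Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
            others = [factors[m] for m in range(T.ndim) if m != n]
            KR = khatri_rao(others)
            # Least-squares update of the mode-n factor.
            factors[n] = Tn @ KR @ np.linalg.pinv(KR.T @ KR)
    return factors

# Toy 4D "convolution kernel" built to have exact CP rank 2.
rng = np.random.default_rng(1)
true_factors = [rng.standard_normal((s, 2)) for s in (5, 4, 3, 3)]
W = np.einsum('ir,jr,kr,lr->ijkl', *true_factors)

A, B, C, D = cp_als(W, rank=2)
W_hat = np.einsum('ir,jr,kr,lr->ijkl', A, B, C, D)
rel_err = np.linalg.norm(W - W_hat) / np.linalg.norm(W)
```

In the paper's setting the four modes of the kernel tensor correspond to output channels, input channels, and the two spatial dimensions; replacing the full kernel with the recovered rank-one terms is what yields the sequence of cheaper convolutions, followed by discriminative fine-tuning to recover accuracy.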
Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods require a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our …