
Tensor-Train Decomposition

- I. Oseledets
- Mathematics, Computer Science
- SIAM J. Sci. Comput.
- 1 September 2011

A simple nonrecursive form of the tensor decomposition in $d$ dimensions is presented. It does not inherently suffer from the curse of dimensionality and has asymptotically the same number of…
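The construction behind the tensor-train (TT) format can be illustrated with a minimal sketch: cores are extracted by sequential truncated SVDs of reshaped unfoldings. The function names `tt_svd` and `tt_to_full` are ours for illustration, not from the paper.

```python
# Minimal TT-SVD sketch (illustrative; names are ours, not the paper's).
# Each core G_k has shape (r_{k-1}, n_k, r_k); boundary ranks r_0 = r_d = 1.
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a d-dimensional array into a list of TT cores."""
    dims = tensor.shape
    d = len(dims)
    cores, r_prev, C = [], 1, tensor
    for k in range(d - 1):
        C = C.reshape(r_prev * dims[k], -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))  # truncate small singular values
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        C = s[:r, None] * Vt[:r]                 # carry the remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full array."""
    res = cores[0]
    for G in cores[1:]:
        res = np.tensordot(res, G, axes=(res.ndim - 1, 0))
    return res.reshape([G.shape[1] for G in cores])
```

The storage cost is the sum of core sizes, $\sum_k r_{k-1} n_k r_k$, which is linear in $d$ when the ranks $r_k$ stay bounded.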

Speeding-up Convolutional Neural Networks Using Fine-tuned CP-Decomposition

- V. Lebedev, Yaroslav Ganin, Maksim Rakhuba, I. Oseledets, V. Lempitsky
- Computer Science
- ICLR
- 19 December 2014

We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning. Given a layer, we use…
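The first step, fitting a low-rank CP (CANDECOMP/PARAFAC) decomposition to a kernel tensor, can be sketched for the 3-way case with a plain alternating-least-squares loop. This is a generic illustration of CP-ALS, not the paper's code; `unfold` and `khatri_rao` are helper names we introduce.

```python
# Generic CP-ALS sketch (3-way case); not the paper's implementation.
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (m, R) and Y (n, R) -> (m*n, R)."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def cp_als(T, R, n_iter=200, seed=0):
    """Fit T ~ sum_r a_r (x) b_r (x) c_r by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in T.shape)
    for _ in range(n_iter):
        # Solve for each factor in turn with the other two held fixed.
        A = np.linalg.lstsq(khatri_rao(B, C), unfold(T, 0).T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), unfold(T, 1).T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), unfold(T, 2).T, rcond=None)[0].T
    return A, B, C
```

Each alternating step solves its least-squares subproblem exactly, so the fit error is non-increasing across sweeps.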

TT-cross approximation for multidimensional arrays

- I. Oseledets, E. Tyrtyshnikov
- Mathematics
- 2010

As is well known, a rank-$r$ matrix can be recovered from a cross of $r$ linearly independent columns and rows, and an arbitrary matrix can be interpolated on the cross entries. Other entries…
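The matrix fact the abstract starts from can be checked directly: if $W$ is a nonsingular $r \times r$ submatrix of an exactly rank-$r$ matrix $A$, then $A = C W^{-1} R$, where $C$ and $R$ are the corresponding columns and rows. A toy check with fixed index sets follows; real cross algorithms select the indices adaptively.

```python
# Toy skeleton (cross) reconstruction of an exactly rank-r matrix.
import numpy as np

rng = np.random.default_rng(0)
r = 3
A = rng.standard_normal((8, r)) @ rng.standard_normal((r, 10))  # exactly rank r

rows, cols = [0, 1, 2], [0, 1, 2]  # fixed toy choice; cross methods pick these adaptively
C = A[:, cols]                     # r columns
R = A[rows, :]                     # r rows
W = A[np.ix_(rows, cols)]          # their r x r intersection
A_hat = C @ np.linalg.solve(W, R)  # skeleton reconstruction A = C W^{-1} R
```

For tensors in the TT format, the paper extends this idea so that only a small number of entries of the full array ever need to be evaluated.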

Breaking the Curse of Dimensionality, Or How to Use SVD in Many Dimensions

- I. Oseledets, E. Tyrtyshnikov
- Mathematics, Computer Science
- SIAM J. Sci. Comput.
- 1 August 2009

For $d$-dimensional tensors with possibly large $d>3$, a hierarchical data structure, called the Tree-Tucker format, is presented as an alternative to the canonical decomposition. It has…

Unifying time evolution and optimization with matrix product states

- J. Haegeman, C. Lubich, I. Oseledets, Bart Vandereycken, F. Verstraete
- Physics
- 21 August 2014

We show that the time-dependent variational principle provides a unifying framework for time-evolution methods and optimization methods in the context of matrix product states. In particular, we…

How to find a good submatrix

- S. A. Goreinov, I. Oseledets, D. Savostyanov, E. Tyrtyshnikov, N. Zamarashkin
- Mathematics
- 2010

Pseudoskeleton approximation and some other problems require the knowledge of a sufficiently well-conditioned submatrix in a large-scale matrix. The quality of a submatrix can be measured by the modulus of…
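The quality criterion (modulus of the determinant, i.e. "maximal volume") admits a simple greedy search: repeatedly swap a selected row for one that increases $|\det|$ by a factor above a threshold. A minimal sketch, assuming a tall matrix and a naive initial guess (the literature also uses LU-based initialization, omitted here):

```python
# Greedy maximal-volume row selection (minimal sketch; name follows common usage).
import numpy as np

def maxvol(A, tol=1.05, max_iter=100):
    """Pick r rows of a tall n x r matrix whose r x r submatrix has
    (locally) maximal volume |det|."""
    n, r = A.shape
    rows = list(range(r))                   # naive initial guess
    for _ in range(max_iter):
        B = A @ np.linalg.inv(A[rows])      # B[rows] is the identity
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) <= tol:             # no swap grows |det| enough
            break
        rows[j] = i                         # swap multiplies |det| by |B[i, j]|
    return rows
```

Since every accepted swap multiplies the determinant's modulus by a factor greater than `tol` > 1, the loop terminates at a submatrix no worse than the starting one.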

Solution of Linear Systems and Matrix Inversion in the TT-Format

- I. Oseledets, S. Dolgov
- Computer Science, Mathematics
- SIAM J. Sci. Comput.
- 11 October 2012

Tensors arise naturally in high-dimensional problems in chemistry, financial mathematics, and many other areas. The numerical treatment of such problems is difficult due to the curse of…

Tensor Networks for Dimensionality Reduction and Large-scale Optimization: Part 1 Low-Rank Tensor Decompositions

- A. Cichocki, N. Lee, I. Oseledets, A. Phan, Q. Zhao, D. Mandic
- Computer Science
- Found. Trends Mach. Learn.
- 19 December 2016

Modern applications in engineering and data science are increasingly based on multidimensional data of exceedingly high volume, variety, and structural richness. However, standard machine learning…

A projector-splitting integrator for dynamical low-rank approximation

- C. Lubich, I. Oseledets
- Mathematics
- 6 January 2013

The dynamical low-rank approximation of time-dependent matrices is a low-rank factorization updating technique. It leads to differential equations for the factors of the matrices, which need to be solved…

Computation of extreme eigenvalues in higher dimensions using block tensor train format

- S. Dolgov, B. Khoromskij, I. Oseledets, D. Savostyanov
- Mathematics, Physics
- Comput. Phys. Commun.
- 10 June 2013

We consider approximate computation of several minimal eigenpairs of large Hermitian matrices which come from high-dimensional problems. We use the tensor train (TT) format for vectors and matrices…