Publications
Hyperbolic Image Embeddings
TLDR
It is demonstrated that in many practical scenarios, hyperbolic embeddings provide a better alternative to linear hyperplanes, Euclidean distances, or spherical geodesic distances.
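As a quick illustration of the distance such embeddings rely on, below is a minimal NumPy sketch of the geodesic distance in the Poincaré ball model; this is the standard formula, not the paper's full embedding pipeline, and the example points are made up.

```python
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance between two points inside the Poincare ball."""
    sq_dist = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / (denom + eps))

# Points near the boundary of the ball are exponentially "far" from the origin,
# which is what makes hyperbolic space a natural fit for hierarchical data.
x = np.array([0.1, 0.2])
y = np.array([0.7, -0.5])
print(poincare_distance(x, y))
```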
Art of Singular Vectors and Universal Adversarial Perturbations
TLDR
This work proposes a new algorithm based on computing the so-called (p, q)-singular vectors of the Jacobian matrices of a network's hidden layers, which constructs universal perturbations with a fooling rate above 60% on a dataset of 50,000 images.
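The (p, q)-singular vectors themselves can be approximated with a generalized power method; the NumPy sketch below is illustrative only (the exponents, iteration count, and the random matrix standing in for a Jacobian are placeholder choices, not the paper's settings).

```python
import numpy as np

def psi(z, r):
    """Componentwise psi_r(z) = sign(z) * |z|^(r - 1), the map used in the generalized power method."""
    return np.sign(z) * np.abs(z) ** (r - 1)

def pq_singular_vector(A, p=np.inf, q=10.0, iters=50):
    """Sketch of the leading (p, q)-singular vector of A, i.e. argmax ||A v||_q s.t. ||v||_p = 1."""
    p_dual = 1.0 if np.isinf(p) else p / (p - 1.0)   # dual exponent p' with 1/p + 1/p' = 1
    v = np.random.randn(A.shape[1])
    v /= np.linalg.norm(v, ord=p)
    for _ in range(iters):
        s = psi(A.T @ psi(A @ v, q), p_dual)
        v = s / np.linalg.norm(s, ord=p)
    return v

A = np.random.randn(40, 60)       # stand-in for a hidden-layer Jacobian
v = pq_singular_vector(A)
print(np.linalg.norm(A @ v, ord=10.0))
```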
Tensorized Embedding Layers for Efficient Model Compression
TLDR
This work introduces a novel way of parametrizing embedding layers based on the Tensor Train (TT) decomposition, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance.
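To make the compression mechanism concrete, here is a small NumPy sketch of a TT-matrix embedding lookup; the vocabulary factorization, embedding size, and rank are illustrative choices rather than the paper's configuration.

```python
import numpy as np

# Vocabulary 5000 = 10 * 20 * 25, embedding dim 64 = 4 * 4 * 4, TT rank 8.
v_modes, d_modes, rank = [10, 20, 25], [4, 4, 4], 8
ranks = [1, rank, rank, 1]
# One 4D core per mode: (r_prev, vocab_mode, embed_mode, r_next).
cores = [np.random.randn(ranks[k], v_modes[k], d_modes[k], ranks[k + 1]) * 0.1
         for k in range(3)]
# The cores hold 6,240 parameters versus 320,000 for a dense 5000 x 64 embedding matrix.

def embed(token_id):
    """Reconstruct one row of the virtual 5000 x 64 embedding matrix from the TT cores."""
    idx = []
    for v in reversed(v_modes):                  # mixed-radix decomposition of the token id
        idx.append(token_id % v)
        token_id //= v
    idx = idx[::-1]
    out = cores[0][:, idx[0], :, :]              # (1, d_1, r_1)
    for k in range(1, 3):
        core = cores[k][:, idx[k], :, :]         # (r_{k-1}, d_k, r_k)
        out = np.einsum('adr,res->ades', out, core).reshape(1, -1, core.shape[-1])
    return out.reshape(-1)                       # 64-dimensional embedding

print(embed(1234).shape)   # (64,)
```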
Revisiting Deep Learning Models for Tabular Data
TLDR
This work gives an overview of the main families of DL architectures for tabular data and raises the bar of baselines in tabular DL by identifying two simple and powerful deep architectures, one of which turns out to be a strong baseline that is often missing in prior works.
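For flavor, here is a minimal PyTorch sketch of the kind of simple residual block such a baseline is built from; the module and its hyperparameters are illustrative, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class TabularResNetBlock(nn.Module):
    """A plain residual block over a vector of tabular features."""
    def __init__(self, d, d_hidden, dropout=0.1):
        super().__init__()
        self.norm = nn.BatchNorm1d(d)
        self.lin1 = nn.Linear(d, d_hidden)
        self.lin2 = nn.Linear(d_hidden, d)
        self.drop = nn.Dropout(dropout)

    def forward(self, x):
        h = self.lin2(self.drop(torch.relu(self.lin1(self.norm(x)))))
        return x + h   # residual connection

x = torch.randn(32, 16)            # batch of 32 rows with 16 numeric features
block = TabularResNetBlock(16, 64)
print(block(x).shape)              # torch.Size([32, 16])
```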
Geometry Score: A Method For Comparing Generative Adversarial Networks
TLDR
A novel measure of performance of a GAN is constructed by comparing geometrical properties of the underlying data manifold and the generated one, which provides both qualitative and quantitative means for evaluation.
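A heavily simplified stand-in for the idea (the actual Geometry Score uses witness complexes and relative living times of homology classes): compare how prominent the 1-dimensional topological features are in real versus generated samples. The snippet assumes the ripser.py package is available; the toy "real" data is a noisy circle and the "fake" data is a Gaussian blob.

```python
import numpy as np
from ripser import ripser   # assumes the ripser.py package is installed

def total_h1_persistence(points):
    """Sum of lifetimes of 1-dimensional holes (a crude proxy, not the Geometry Score itself)."""
    dgm = ripser(points, maxdim=1)['dgms'][1]       # H1 persistence diagram
    return float(np.sum(dgm[:, 1] - dgm[:, 0]))     # sum of (death - birth)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
real = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.standard_normal((200, 2))  # noisy circle
fake = rng.standard_normal((200, 2))                                               # Gaussian blob

# The circle has one prominent 1-dimensional hole, the blob does not.
print(total_h1_persistence(real), total_h1_persistence(fake))
```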
Tensor Train decomposition on TensorFlow (T3F)
TLDR
A library that implements the Tensor Train (TT) decomposition on top of TensorFlow, making machine learning papers that rely on the TT decomposition easier to implement; it ships with 92% test coverage, examples, and API reference documentation.
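The decomposition the library works with can be sketched independently of any framework; the plain NumPy TT-SVD below illustrates the format T3F operates on (it is not T3F's API).

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Minimal TT-SVD: split a dense tensor into a list of 3D TT cores."""
    shape, cores, r_prev, mat = tensor.shape, [], 1, tensor
    for n in shape[:-1]:
        mat = mat.reshape(r_prev * n, -1)
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(u[:, :r].reshape(r_prev, n, r))
        mat, r_prev = s[:r, None] * vt[:r], r        # pass the remainder on to the next core
    cores.append(mat.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    return out[0, ..., 0]                            # drop the dummy boundary ranks

x = np.random.randn(4, 5, 6, 7)
cores = tt_svd(x, max_rank=20)                       # rank 20 is large enough for exact recovery here
print(np.allclose(tt_reconstruct(cores), x))         # True
```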
Expressive power of recurrent neural networks
TLDR
The expressive power theorem (an exponential lower bound on the width of an equivalent shallow network) is proved for a class of recurrent neural networks -- those corresponding to the Tensor Train (TT) decomposition -- meaning that even processing an image patch by patch with an RNN can be exponentially more efficient than a (shallow) convolutional network with one hidden layer.
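The model class the theorem covers can be sketched as a recurrent cell whose update is bilinear in the hidden state and the current patch's features, so that the score of a whole sequence is exactly a Tensor Train contraction; dimensions and data below are placeholders.

```python
import numpy as np

feature_dim, rank, seq_len = 6, 4, 5
# One TT core per time step: (r_prev, feature_dim, r_next); boundary ranks are 1.
cores = ([np.random.randn(1, feature_dim, rank)] +
         [np.random.randn(rank, feature_dim, rank) for _ in range(seq_len - 2)] +
         [np.random.randn(rank, feature_dim, 1)])

def tt_rnn_score(patch_features):
    """Multiplicative RNN: each step contracts the hidden state, a core, and the patch features."""
    h = np.ones(1)                                # initial hidden state
    for core, x in zip(cores, patch_features):
        h = np.einsum('r,rfs,f->s', h, core, x)   # bilinear in (h, x)
    return h.item()                               # scalar score for the whole sequence

patches = [np.random.randn(feature_dim) for _ in range(seq_len)]
print(tt_rnn_score(patches))
```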
Generalized Tensor Models for Recurrent Neural Networks
TLDR
This work attempts to reduce the gap between theory and practice by extending the theoretical analysis to RNNs which employ various nonlinearities, such as Rectified Linear Unit (ReLU), and shows that they also benefit from properties of universality and depth efficiency.
Tensorized Embedding Layers
TLDR
A novel way of parameterizing embedding layers based on the Tensor Train decomposition is introduced, which allows compressing the model significantly at the cost of a negligible drop or even a slight gain in performance.
Understanding DDPM Latent Codes Through Optimal Transport
TLDR
It is shown that, perhaps surprisingly, the DDPM encoder map coincides with the optimal transport map for common distributions; this claim is supported both theoretically and by extensive numerical experiments.
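As background on the object the claim concerns: in one dimension the optimal transport map between distributions is the monotone rearrangement, and for Gaussian data mapped to a standard normal latent it reduces to affine whitening. The SciPy snippet below only checks this textbook fact; it is not the paper's DDPM experiment.

```python
import numpy as np
from scipy import stats

m, s = 2.0, 0.5
data = stats.norm(loc=m, scale=s)        # "data" distribution N(m, s^2)
latent = stats.norm(loc=0.0, scale=1.0)  # standard normal "latent" distribution

x = data.rvs(size=10_000, random_state=0)
t_x = latent.ppf(data.cdf(x))            # monotone (optimal) transport map applied to samples

print(np.allclose(t_x, (x - m) / s))     # True: the 1D OT map here is just (x - m) / s
print(t_x.mean().round(3), t_x.std().round(3))   # close to 0 and 1
```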
...
...