Alexander Novikov

Deep neural networks currently demonstrate state-of-the-art performance in several domains. At the same time, models of this class are very demanding in terms of computational resources. In particular, a large amount of memory is required by commonly used fully-connected layers, making it hard to use the models on low-end devices and stopping the further …
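The compression idea summarized in this abstract reshapes a fully-connected layer's weight matrix into a higher-order tensor and stores it in Tensor Train (TT) format. A minimal numpy sketch of the underlying TT-SVD decomposition is below; it is an illustration of the general technique, not the paper's implementation, and all sizes and names are chosen for the toy example.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way array into TT cores via sequential truncated SVD.

    Each core has shape (r_prev, n_k, r_next); storage drops from
    prod(n_k) entries to roughly sum(n_k * r^2) entries.
    """
    shape = tensor.shape
    cores, rank = [], 1
    mat = tensor.reshape(rank * shape[0], -1)
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        r_new = min(max_rank, len(s))
        cores.append(u[:, :r_new].reshape(rank, shape[k], r_new))
        mat = (s[:r_new, None] * vt[:r_new]).reshape(r_new * shape[k + 1], -1)
        rank = r_new
    cores.append(mat.reshape(rank, shape[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full array (for checking only)."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=(res.ndim - 1, 0))
    return res.reshape([c.shape[1] for c in cores])

# Toy "weight tensor" built to have exact TT rank 3 (hypothetical sizes).
rng = np.random.default_rng(0)
shapes, r = (4, 4, 8, 8), 3
true_cores = [rng.standard_normal((1 if k == 0 else r, n,
                                   r if k < len(shapes) - 1 else 1))
              for k, n in enumerate(shapes)]
w = tt_to_full(true_cores)

cores = tt_svd(w, max_rank=r)
# Near-exact reconstruction at the true rank, with far fewer parameters.
print(np.allclose(tt_to_full(cores), w), sum(c.size for c in cores), w.size)
```

In the fully-connected-layer setting, `w` would be the layer's weight matrix reshaped into a higher-order tensor, and only the small cores would be stored and trained.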
CMP-sialic acid:GM3 sialyltransferase (GD3 synthase; EC 2.4.99.8) was characterized in a membrane-enriched preparation (P2 pellet) from mouse embryos at embryonic day 12 (E-12). Gangliosides GD3 and GM3 were the major radiolabeled products of the reaction. Optimum GD3 synthase activity was obtained at pH 6.0 using 0.1% of the detergent Triton CF-54. The K_m values …
Quantitation by mass spectrometry is increasingly used to monitor protein levels in biological samples. Most of the current methods are based on the relative comparison of protein quantities but are not suited for the determination of the absolute amount of a given protein. Here we describe a method for the absolute quantitation of proteins that is based on …
The in vitro activity of sialyltransferase IV (SAT-IV), which catalyzes the transfer of sialic acid to the terminal galactose of different gangliotetraosylceramides (GA1, GM1a and GD1b), was examined in membrane-enriched preparations from mouse embryos at embryonic day 12 (E-12). Gangliosides GD1a and GT1b were the only reaction products using GM1a and GD1b …
The ganglioside profile was evaluated in 19 samples of tumor tissue obtained from 13 surgical patients with various morphological patterns of neuroblastoma. In six of these cases, two samples per case, those differing most in cell maturity, were selected for examination. The relative content of GD2 gangliosides was 27.0-37.6% in sympathoblastoma and …
In this paper we present a new framework for dealing with probabilistic graphical models. Our approach relies on the recently proposed Tensor Train format (TT-format), a compact tensor representation that nevertheless allows efficient application of linear algebra operations. We present a way to convert the energy of a Markov random field to the TT-format and show how …
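One reason the TT-format is attractive for graphical models is that global contractions over exponentially many configurations reduce to a chain of small matrix products. The sketch below sums all 2^d entries of a d-way TT tensor in O(d·n·r^2) time; it is a minimal illustration of this kind of TT operation under toy sizes, not the paper's inference algorithm.

```python
import numpy as np

def tt_sum(cores):
    """Sum all entries of a tensor given by its TT cores.

    Each core has shape (r_prev, n, r_next). Summing a core over its
    mode index leaves an (r_prev, r_next) matrix; chaining these
    matrices left to right costs O(d * n * r^2) instead of O(n^d).
    """
    v = np.ones(1)
    for core in cores:
        v = v @ core.sum(axis=1)
    return float(v[0])

# Random TT tensor with d = 8 binary modes and TT rank 2 (toy sizes).
rng = np.random.default_rng(1)
d, n, r = 8, 2, 2
cores = [rng.standard_normal((1 if k == 0 else r, n,
                              r if k < d - 1 else 1))
         for k in range(d)]

# Brute-force check against the fully materialized 2^8-entry tensor.
full = cores[0]
for core in cores[1:]:
    full = np.tensordot(full, core, axes=(full.ndim - 1, 0))
print(np.isclose(tt_sum(cores), full.sum()))
```

The same chained-contraction pattern underlies partition-function-style computations once an MRF energy is represented in the TT-format.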
Convolutional neural networks excel in image recognition tasks, but this comes at the cost of high computational and memory complexity. To tackle this problem, [1] developed a tensor factorization framework to compress fully-connected layers. In this paper, we focus on compressing convolutional layers. We show that while the direct application of the tensor …