Publications
Learning Sum-Product Networks with Direct and Indirect Variable Interactions
We present ID-SPN, a new algorithm for learning SPN structure that unifies the two approaches, modeling both direct and indirect variable interactions.
Learning Markov Networks With Arithmetic Circuits
We introduce ACMN, a new method for learning efficient Markov networks with arbitrary conjunctive features, as long as they admit efficient inference.
The Libra toolkit for probabilistic models
The Libra Toolkit is a collection of algorithms for learning and inference with discrete probabilistic models, including Bayesian networks, Markov networks, dependency networks, and sum-product networks.
Discriminative Structure Learning of Arithmetic Circuits
We present the first discriminative structure learning algorithm for ACs, DACLearn (Discriminative AC Learner), which optimizes conditional log-likelihood.
Reducing the data transmission in Wireless Sensor Networks using the Principal Component Analysis
We introduce a distributed method for computing Principal Component Analysis (PCA) in wireless sensor networks.
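As a rough illustration of the general idea behind PCA-based data reduction (this is a generic, centralized sketch, not the paper's distributed algorithm), a node with correlated sensor readings can transmit projections onto the top principal components instead of the raw values:

```python
import numpy as np

# Toy sketch (not the paper's distributed method): compress 8 highly
# correlated "sensor" channels down to 1 transmitted value per sample
# by projecting onto the top principal component.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 1))
readings = np.hstack(
    [base + 0.01 * rng.normal(size=(100, 1)) for _ in range(8)]
)

centered = readings - readings.mean(axis=0)
# Eigen-decomposition of the sample covariance gives the principal axes.
cov = centered.T @ centered / (len(centered) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
components = eigvecs[:, ::-1][:, :1]            # keep top-1 component

compressed = centered @ components              # 8 values -> 1 per sample
reconstructed = compressed @ components.T + readings.mean(axis=0)

error = np.abs(reconstructed - readings).max()
```

Because the channels here are nearly identical up to small noise, one component captures almost all of the variance, so the reconstruction error stays small while transmission drops by a factor of 8.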
Energy-Based Reranking: Improving Neural Machine Translation Using Energy-Based Models
The discrepancy between maximum likelihood estimation (MLE) and task measures such as BLEU score has been studied before for autoregressive neural machine translation (NMT).
Learning Tractable Graphical Models Using Mixture of Arithmetic Circuits
In recent years, there has been a growing interest in learning tractable graphical models in which exact inference is efficient.
Search-Guided, Lightly-supervised Training of Structured Prediction Energy Networks
We train structured prediction energy networks using efficient truncated randomized search over a reward function; these networks provide efficient test-time inference via gradient-based search on a smooth, learned representation of the score landscape, and have previously yielded state-of-the-art results in structured prediction.
Training Structured Prediction Energy Networks with Indirect Supervision
This paper introduces rank-based training of structured prediction energy networks, which uses gradient descent to minimize the ranking violations of sampled structures with respect to a scalar scoring function defined with domain knowledge.
Learning Compact Neural Networks Using Ordinary Differential Equations as Activation Functions
We introduce differential equation units (DEUs), which improve modern neural networks by enabling each neuron to learn a particular nonlinear activation function from a family of solutions to an ordinary differential equation.
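To make the idea concrete with a hypothetical example (the ODE family and parameterization below are illustrative assumptions, not the paper's actual DEU formulation), a neuron's activation can be drawn from the closed-form solution family of a simple linear ODE, with the ODE's coefficients as learnable per-neuron parameters:

```python
import numpy as np

# Hypothetical sketch, NOT the paper's DEU parameterization: take the
# activation from solutions of the linear ODE  y' + a*y = b, whose
# closed-form solutions (for a != 0) are  y(x) = b/a + c * exp(-a*x).
# Here a, b, c would be learned per neuron alongside the usual weights.
def deu_activation(x, a=1.0, b=1.0, c=-1.0):
    return b / a + c * np.exp(-a * x)

x = np.linspace(-2.0, 2.0, 5)
y = deu_activation(x)   # with these defaults: y(x) = 1 - exp(-x)
```

With the default coefficients this yields a saturating, sigmoid-like curve passing through the origin; different coefficient settings recover qualitatively different nonlinearities from the same family.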