Publications
LIBLINEAR: A Library for Large Linear Classification
TLDR: LIBLINEAR is an open source library for large-scale linear classification.
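A minimal usage sketch: scikit-learn's LinearSVC is documented to be implemented in terms of LIBLINEAR, so training it exercises the library described in the paper. The dataset here is synthetic and purely illustrative.

```python
# Train a large-scale linear classifier via LIBLINEAR, through
# scikit-learn's LinearSVC wrapper (synthetic data for illustration).
from sklearn.datasets import make_classification
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=10_000, n_features=100, random_state=0)
clf = LinearSVC(C=1.0, loss="squared_hinge", dual=True)  # L2-regularized L2-loss SVM
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```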
A dual coordinate descent method for large-scale linear SVM
TLDR: This paper presents a novel dual coordinate descent method for linear SVM with L1- and L2-loss functions.
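A minimal numpy sketch of the L1-loss (hinge) case of this update rule: one dual variable per example, a single-variable projected Newton step, and the primal vector w maintained incrementally so each update is cheap. The toy data and function name are illustrative, not from the paper.

```python
# Dual coordinate descent for the L1-loss linear SVM: update each dual
# variable alpha_i in closed form, keeping w = sum_i alpha_i y_i x_i.
import numpy as np

def dcd_l1_svm(X, y, C=1.0, epochs=10, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = np.einsum("ij,ij->i", X, X)        # diagonal of Q: ||x_i||^2
    for _ in range(epochs):
        for i in rng.permutation(n):         # random order helps convergence
            G = y[i] * (X[i] @ w) - 1.0      # gradient of the dual at alpha_i
            a_new = np.clip(alpha[i] - G / Qii[i], 0.0, C)
            w += (a_new - alpha[i]) * y[i] * X[i]
            alpha[i] = a_new
    return w

# toy check on (nearly) linearly separable data
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5)); y = np.sign(X[:, 0] + 0.1)
w = dcd_l1_svm(X, y)
print("accuracy:", np.mean(np.sign(X @ w) == y))
```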
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent
TLDR: We study a D-PSGD algorithm and provide a theoretical analysis that indicates a regime in which decentralized algorithms can outperform centralized algorithms for distributed stochastic gradient descent.
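A minimal sketch of the D-PSGD iteration on a ring topology: each worker averages its parameters with its neighbors through a doubly stochastic mixing matrix, then takes a local stochastic gradient step. The quadratic objective below is only a stand-in loss for illustration.

```python
# Decentralized parallel SGD sketch: gossip averaging + local SGD step.
import numpy as np

n_workers, dim, steps, lr = 8, 10, 200, 0.05
rng = np.random.default_rng(0)
targets = rng.normal(size=(n_workers, dim))     # each worker's local optimum

# ring mixing matrix: weight 1/3 on self and on each ring neighbor
W = np.zeros((n_workers, n_workers))
for i in range(n_workers):
    W[i, i] = W[i, (i - 1) % n_workers] = W[i, (i + 1) % n_workers] = 1 / 3

x = np.zeros((n_workers, dim))                  # one parameter copy per worker
for _ in range(steps):
    grads = x - targets + 0.1 * rng.normal(size=x.shape)  # noisy local gradients
    x = W @ x - lr * grads                      # average with neighbors, then step

print("consensus spread:", np.linalg.norm(x - x.mean(axis=0)))
```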
ZOO: Zeroth Order Optimization Based Black-box Attacks to Deep Neural Networks without Training Substitute Models
TLDR: We propose an effective black-box attack that requires access only to the input (images) and the output (confidence scores) of a targeted DNN.
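A sketch of the zeroth-order coordinate-wise gradient estimate at the heart of this kind of attack: with only query access to a score f(x), one coordinate of the gradient is estimated by symmetric finite differences, then descended. The function `f` below is a hypothetical stand-in for the targeted model's confidence score.

```python
# Two-query zeroth-order gradient estimate on a single random coordinate.
import numpy as np

def f(x):                      # hypothetical black-box score to minimize
    return np.sum(x ** 2)

def zoo_step(x, f, lr=0.1, h=1e-4, rng=np.random.default_rng(0)):
    i = rng.integers(x.size)               # pick a random coordinate
    e = np.zeros_like(x); e[i] = h
    g = (f(x + e) - f(x - e)) / (2 * h)    # finite-difference gradient estimate
    x = x.copy(); x[i] -= lr * g           # coordinate descent step
    return x

x = np.ones(10)
for _ in range(500):
    x = zoo_step(x, f)
print("objective after attack iterations:", f(x))
```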
VisualBERT: A Simple and Performant Baseline for Vision and Language
TLDR: We propose VisualBERT, a simple and flexible framework for modeling a broad range of vision and language tasks.
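A conceptual sketch of the single-stream idea, not the released implementation: detected image-region features are projected into the text embedding space and the concatenated sequence passes through one transformer encoder. All dimensions, names, and the random inputs below are illustrative, and positional/segment embeddings are omitted for brevity.

```python
# Single-stream vision-and-language encoder sketch (conceptual only).
import torch
import torch.nn as nn

d_model, vocab = 256, 30522
token_emb = nn.Embedding(vocab, d_model)      # text token embeddings
visual_proj = nn.Linear(2048, d_model)        # project detector region features
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True), num_layers=4
)

tokens = torch.randint(vocab, (1, 12))        # dummy caption: 12 token ids
regions = torch.randn(1, 36, 2048)            # dummy detected-region features
seq = torch.cat([token_emb(tokens), visual_proj(regions)], dim=1)
out = encoder(seq)                            # joint text+vision representations
print(out.shape)                              # torch.Size([1, 48, 256])
```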
Sparse inverse covariance matrix estimation using quadratic approximation
TLDR: We propose a novel algorithm for solving the sparse inverse covariance estimation problem, a regularized log-determinant program.
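The same regularized log-determinant program can be set up with scikit-learn's GraphicalLasso, shown here as a sketch; note this is a coordinate-descent solver, not the paper's quadratic-approximation (Newton-style) method. The data below is synthetic.

```python
# Fit an l1-penalized sparse inverse covariance (precision) matrix.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(5), np.eye(5), size=500)
model = GraphicalLasso(alpha=0.1).fit(X)    # alpha is the l1 penalty weight
print(np.round(model.precision_, 2))        # estimated sparse inverse covariance
```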
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
TLDR: In this paper, we formulate the process of attacking DNNs via adversarial examples as an elastic-net regularized optimization problem.
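A sketch of the ISTA-style step such an elastic-net formulation admits: a gradient step on the smooth part (attack loss plus the L2 term), then soft-thresholding the perturbation toward the original input x0 to handle the L1 term. `attack_grad` is a hypothetical stand-in for the gradient of the attack loss.

```python
# One proximal-gradient (ISTA) step for an elastic-net regularized attack.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ead_step(x, x0, attack_grad, c=1.0, beta=1e-2, lr=0.01):
    grad = c * attack_grad(x) + 2.0 * (x - x0)     # gradient of the smooth part
    z = x - lr * grad                              # plain gradient step
    return x0 + soft_threshold(z - x0, lr * beta)  # prox of beta*||x - x0||_1

# toy demo: the "attack loss" pulls x toward an illustrative target point
x0 = np.zeros(4); target = np.array([1.0, 0.0, 0.0, 0.5])
attack_grad = lambda x: 2.0 * (x - target)
x = x0.copy()
for _ in range(300):
    x = ead_step(x, x0, attack_grad)
print("sparse perturbation:", np.round(x - x0, 3))
```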
Towards Fast Computation of Certified Robustness for ReLU Networks
TLDR: In this paper, we exploit the special structure of ReLU networks and provide two computationally efficient algorithms, Fast-Lin and Fast-Lip, that certify non-trivial lower bounds on the minimum adversarial distortion by bounding the ReLU units with appropriate linear functions.
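A sketch of the linear ReLU relaxation behind this style of certification: an unstable unit with pre-activation bounds l < 0 < u is sandwiched between two parallel lines of slope u/(u-l), while stable units remain exactly linear (or zero). Variable names are illustrative.

```python
# Per-unit linear bounds slope*z + intercept sandwiching ReLU(z) on [l, u].
import numpy as np

def relu_linear_bounds(l, u):
    slope_lo = np.zeros_like(l); slope_up = np.zeros_like(l)
    icpt_up = np.zeros_like(l)                      # lower intercept is always 0 here
    active = l >= 0                                 # always-on units: exactly identity
    slope_lo[active] = slope_up[active] = 1.0
    unstable = (l < 0) & (u > 0)                    # units that may switch on/off
    s = u[unstable] / (u[unstable] - l[unstable])   # shared slope for both bounds
    slope_lo[unstable] = slope_up[unstable] = s
    icpt_up[unstable] = -s * l[unstable]            # upper line passes through (l, 0) and (u, u)
    return slope_lo, slope_up, icpt_up              # dead units (u <= 0) stay all-zero

l = np.array([-1.0, 0.5, -2.0]); u = np.array([2.0, 1.5, -0.5])
print(relu_linear_bounds(l, u))
```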
Scalable Coordinate Descent Approaches to Parallel Matrix Factorization for Recommender Systems
TLDR: We show that coordinate descent methods have a more efficient update rule than ALS, and are faster with more stable convergence than SGD.
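A sketch of the coordinate descent update in question: factors are refined one latent dimension at a time with a closed-form single-variable least-squares solution against a maintained residual. A fully observed dense ratings matrix is assumed here to keep the sketch short; the real setting updates only over observed entries.

```python
# CCD++-style coordinate descent for matrix factorization A ~ W H^T.
import numpy as np

def ccd_mf(A, rank=8, lam=0.1, sweeps=20, seed=0):
    m, n = A.shape
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(m, rank))
    H = rng.normal(scale=0.1, size=(n, rank))
    R = A - W @ H.T                                  # maintained residual
    for _ in range(sweeps):
        for t in range(rank):
            R += np.outer(W[:, t], H[:, t])                      # strip out dimension t
            W[:, t] = R @ H[:, t] / (lam + H[:, t] @ H[:, t])    # closed-form update
            H[:, t] = R.T @ W[:, t] / (lam + W[:, t] @ W[:, t])  # closed-form update
            R -= np.outer(W[:, t], H[:, t])                      # fold dimension t back in
    return W, H

A = np.random.default_rng(1).normal(size=(30, 20))
W, H = ccd_mf(A)
print("relative error:", np.linalg.norm(A - W @ H.T) / np.linalg.norm(A))
```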
Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks
TLDR: In this paper, we propose Cluster-GCN, a novel GCN algorithm that is suitable for SGD-based training by exploiting the graph clustering structure.
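A sketch of the core idea: pre-partition the nodes, and at each SGD step run GCN propagation only on the subgraph induced by one cluster, so per-step memory and computation stay bounded. A random partition stands in for the graph-clustering step here (the paper clusters with a graph partitioner such as METIS), and the loss/update is left as a comment.

```python
# Cluster-restricted mini-batch propagation for one GCN layer (sketch).
import numpy as np

rng = np.random.default_rng(0)
n, d, n_clusters = 100, 16, 5
A = (rng.random((n, n)) < 0.05).astype(float); A = np.maximum(A, A.T)
X = rng.normal(size=(n, d))
W = rng.normal(scale=0.1, size=(d, d))            # one GCN layer's weights
clusters = np.array_split(rng.permutation(n), n_clusters)  # stand-in for METIS

H = None
for step in range(10):
    idx = clusters[step % n_clusters]             # mini-batch = one cluster
    A_sub = A[np.ix_(idx, idx)] + np.eye(len(idx))  # within-cluster edges + self-loops
    deg = A_sub.sum(1)
    A_hat = A_sub / np.sqrt(np.outer(deg, deg))   # D^-1/2 (A + I) D^-1/2 normalization
    H = np.maximum(A_hat @ X[idx] @ W, 0.0)       # one GCN layer on the subgraph
    # ...compute a loss on H for labeled nodes in `idx` and update W here
print("last cluster embedding shape:", H.shape)
```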