High-resolution de novo structure prediction from primary sequence
OmegaFold is introduced, the first computational method to successfully predict high-resolution protein structure from a single primary sequence alone. It combines a protein language model, which enables predictions from single sequences, with a geometry-inspired transformer model trained on protein structures.
Deep learning guided optimization of human antibody against SARS-CoV-2 variants with broad neutralization
- S. Shan, Shitong Luo, Jian Peng
- Biology, Proceedings of the National Academy of Sciences…
- 1 March 2022
A deep learning approach is introduced to redesign the complementarity-determining regions (CDRs) to target multiple virus variants, yielding an antibody that broadly neutralizes SARS-CoV-2 variants.
Stein Variational Inference for Discrete Distributions
- Jun Han, Fan Ding, Xianglong Liu, L. Torresani, Jian Peng, Qiang Liu
- Computer Science, International Conference on Artificial…
- 1 March 2020
The proposed framework transforms discrete distributions into equivalent piecewise continuous distributions, on which gradient-free SVGD is applied to perform efficient approximate inference; it outperforms existing goodness-of-fit (GOF) test methods for intractable discrete distributions.
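For context, the standard continuous SVGD update that this line of work builds on can be sketched as follows. This is a minimal illustration on a 1-D standard-normal target, not the paper's discrete-distribution method; the kernel bandwidth, step size, and particle count are arbitrary choices for the example:

```python
import numpy as np

def svgd_step(x, dlogp, h=0.5, eps=0.3):
    """One SVGD update on a 1-D array of particles x."""
    diff = x[:, None] - x[None, :]      # diff[j, i] = x_j - x_i
    k = np.exp(-diff**2 / (2 * h))      # RBF kernel k(x_j, x_i)
    grad_k = -diff / h * k              # d k(x_j, x_i) / d x_j
    # driving term pulls particles toward high target density;
    # kernel-gradient term repels particles from each other
    phi = (k * dlogp(x)[:, None] + grad_k).mean(axis=0)
    return x + eps * phi

# target: standard normal, so d/dx log p(x) = -x
rng = np.random.default_rng(0)
x = rng.uniform(-6, 6, size=50)
for _ in range(2000):
    x = svgd_step(x, lambda t: -t)
# particles now approximate N(0, 1)
```

The repulsive `grad_k` term is what distinguishes SVGD from plain gradient ascent on log-density: without it, all particles would collapse onto the mode.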
Proximal Exploration for Model-guided Protein Sequence Design
- Zhizhou Ren, Jiahan Li, Fan Ding, Yuanshuo Zhou, Jianzhu Ma, Jian Peng
- Computer Science, bioRxiv
- 19 June 2022
This paper proposes the Proximal Exploration (PEX) algorithm, which prioritizes the evolutionary search for high-fitness mutants with low mutation counts, and develops a specialized model architecture, the Mutation Factorization Network (MuFacNet), to predict low-order mutational effects, further improving the sample efficiency of model-guided evolution.
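The core selection idea (prefer candidates with few mutations from the wild type, breaking ties by predicted fitness) can be sketched as a toy. All names, the candidate pool, and the surrogate fitness function below are hypothetical illustrations, not the paper's actual model:

```python
def hamming(a, b):
    """Number of positions at which two equal-length sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def toy_fitness(seq):
    # hypothetical surrogate model: similarity to a fictitious optimum
    return sum(a == b for a, b in zip(seq, "ACDE"))

def proximal_select(wild_type, candidates):
    # prefer the fewest mutations from the wild type;
    # among those, prefer the highest surrogate fitness
    return min(candidates,
               key=lambda s: (hamming(s, wild_type), -toy_fitness(s)))

wt = "AAAA"
pool = ["ACAA", "ACDA", "ACDE"]
best = proximal_select(wt, pool)   # "ACAA": 1 mutation, fitness 2
```

The lexicographic key encodes the proximal bias: a higher-fitness candidate ("ACDE", fitness 4) loses to a closer one because it requires three mutations instead of one.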
X-MEN: Guaranteed XOR-Maximum Entropy Constrained Inverse Reinforcement Learning
- Fan Ding, Yexiang Xue
- Computer Science, Conference on Uncertainty in Artificial…
- 22 March 2022
This paper proposes XOR-Maximum Entropy Constrained Inverse Reinforcement Learning (X-MEN), which is guaranteed to converge to the globally optimal reward function at a linear rate in the number of learning iterations, and guarantees that the learned IRL agent never generates trajectories that violate constraints.
Progressive Generative Hashing for Image Retrieval
- Yuqing Ma, Yue He, Fan Ding, Sheng Hu, Jun Yu Li, Xianglong Liu
- Computer Science, International Joint Conference on Artificial…
- 1 July 2018
A novel progressive generative hashing (PGH) framework is proposed to learn a discriminative hashing network in an unsupervised way; it simultaneously feeds the original image and its codes into generative adversarial networks (GANs).
Towards Efficient Discrete Integration via Adaptive Quantile Queries
- Fan Ding, Hanjing Wang, Ashish Sabharwal, Yexiang Xue
- Computer Science, European Conference on Artificial Intelligence
- 13 October 2019
AdaWISH is proposed, which obtains the same guarantee as WISH while accessing only a small subset of WISH's queries, and incurs a regret of only O(log n) relative to an idealistic oracle that issues queries at data-dependent optimal points.
Contrastive Divergence Learning with Chained Belief Propagation
- Fan Ding, Yexiang Xue
- Computer Science, European Workshop on Probabilistic Graphical…
- 2020
This work proposes contrastive divergence learning with chained belief propagation (BPChain-CD), which learns better models than BP-CD and CD across a range of maximum-likelihood learning experiments.
XOR-CD: Linearly Convergent Constrained Structure Generation
- Fan Ding, Jianzhu Ma, Jinbo Xu, Yexiang Xue
- Computer Science, International Conference on Machine Learning
- 2021
XOR-CD harnesses XOR-Sampling to generate samples from the model distribution in CD learning; it is guaranteed to generate valid structures and attains a linear convergence rate toward the global maximum of the likelihood function, within a vanishing constant, when learning exponential-family models.
XOR-SGD: provable convex stochastic optimization for decision-making under uncertainty
- Fan Ding, Yexiang Xue
- Computer Science, Conference on Uncertainty in Artificial…
- 2021
This work presents XOR-SGD, a stochastic gradient descent (SGD) approach guaranteed to converge to solutions at most a constant away from the true optimum in a linear number of iterations, and shows that it finds better solutions with drastically fewer samples than state-of-the-art solvers.
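The generic stochastic-optimization setting these guarantees refer to can be sketched with plain SGD on a convex expected objective. This is a minimal baseline illustration only (the objective and step-size schedule are example choices, and the XOR-sampling component of the paper is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_grad(x):
    # unbiased gradient estimate of F(x) = E[(x - z)^2], z ~ N(2, 1);
    # the true minimizer is x* = E[z] = 2
    z = rng.normal(2.0, 1.0)
    return 2.0 * (x - z)

x = 0.0
for t in range(1, 5001):
    x -= (0.5 / t) * stoch_grad(x)   # diminishing step size 0.5/t
```

With the 0.5/t schedule this particular update simplifies to a running average of the noisy samples, so the iterate concentrates around the minimizer at the usual O(1/sqrt(t)) statistical rate.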