Corpus ID: 173187964

End to end learning and optimization on graphs

@article{Wilder2019EndTE,
  title={End to end learning and optimization on graphs},
  author={Bryan Wilder and Eric Ewing and Bistra N. Dilkina and Milind Tambe},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.13732}
}
Real-world applications often combine learning and optimization problems on graphs. For instance, our objective may be to cluster the graph in order to detect meaningful communities (or solve other common graph optimization problems such as facility location, maxcut, and so on). However, graphs or related attributes are often only partially observed, introducing learning problems such as link prediction which must be solved prior to optimization. Standard approaches treat learning and… 
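The two-stage pipeline the abstract contrasts with end-to-end training — first predict missing graph structure, then optimize on the completed graph — can be sketched as follows. This is an illustrative stand-in, not the paper's method: a common-neighbour heuristic plays the role of a trained link predictor, and spectral bisection plays the role of the downstream clustering solver.

```python
import numpy as np

def predict_links(adj, threshold=2):
    """Stage 1 (learning), toy version: add an edge wherever two nodes
    share at least `threshold` common neighbours, standing in for a
    trained link-prediction model."""
    scores = adj @ adj                     # common-neighbour counts
    np.fill_diagonal(scores, 0)
    return ((adj + (scores >= threshold)) > 0).astype(int)

def spectral_bisect(adj):
    """Stage 2 (optimization), toy version: split the graph in two by
    the sign of the Fiedler vector of the graph Laplacian."""
    laplacian = np.diag(adj.sum(axis=1)) - adj
    _, vecs = np.linalg.eigh(laplacian)
    return (vecs[:, 1] > 0).astype(int)    # second-smallest eigenvector

# Two 4-cliques joined by the edge (3, 4); the intra-clique edge
# (0, 3) is unobserved and must be recovered before clustering.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3),
         (4, 5), (4, 6), (4, 7), (5, 6), (5, 7), (6, 7),
         (3, 4)]
adj = np.zeros((8, 8), dtype=int)
for u, v in edges:
    adj[u, v] = adj[v, u] = 1

completed = predict_links(adj)        # learning: recover edge (0, 3)
labels = spectral_bisect(completed)   # optimization: split the cliques
```

The paper's point is precisely that chaining two such stages trained separately can be suboptimal: errors of the predictor that matter little for link-prediction accuracy can matter a great deal for the downstream objective.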

Figures and Tables from this paper

OpenGraphGym-MG: Using Reinforcement Learning to Solve Large Graph Optimization Problems on MultiGPU Systems
TLDR
An extensible, high-performance framework that uses deep reinforcement learning and graph embedding to solve large graph optimization problems on multiple GPUs, together with a comprehensive analysis of parallel efficiency and memory cost showing that the parallel RL training and inference algorithms are efficient and highly scalable across GPUs.
The Perils of Learning Before Optimizing
TLDR
It is shown that the performance gap between a two-stage and an end-to-end approach is closely related to the "price of correlation" concept in stochastic optimization, and the implications of some existing POC results for the predict-then-optimize problem are discussed.
LEO: Learning Energy-based Models in Factor Graph Optimization
TLDR
A novel approach, LEO, for learning observation models end-to-end with graph optimizers that may be non-differentiable, and it is shown that LEO is able to learn complex observation models with lower errors and fewer samples.
A Trade-Off Algorithm for Solving p-Center Problems with a Graph Convolutional Network
TLDR
A new paradigm is proposed that combines a graph convolutional network with a greedy algorithm to solve the p-center problem through direct training, running faster than the exact algorithm while achieving better accuracy than the heuristic algorithm.
Task-Based Learning via Task-Oriented Prediction Network with Applications in Finance
TLDR
The proposed Task-Oriented Prediction Network (TOPNet) is an end-to-end learning scheme that automatically integrates task-based evaluation criteria into the learning process via a learnable surrogate loss function, which directly guides the model towards the task-based goal.
Task-Based Learning via Task-Oriented Prediction Network
TLDR
The proposed Task-Oriented Prediction Network (TOPNet), an end-to-end learning scheme that automatically integrates task-based evaluation criteria into the learning process via a task-oriented estimator and directly learns a model with respect to the task-based goal, is validated.
PointSpectrum: Equivariance Meets Laplacian Filtering for Graph Representation Learning
TLDR
PointSpectrum is proposed, a spectral method that incorporates a set equivariant network to account for a graph’s structure and enhances the efficiency and expressiveness of spectral methods, while it outperforms or competes with state-of-the-art GRL methods.
An end-to-end predict-then-optimize clustering method for intelligent assignment problems in express systems
TLDR
Results show that this one-stage, end-to-end predict-then-optimize clustering method improves the quality of the optimization results, namely the clustering results.
Neural Algorithms for Graph Navigation
TLDR
This work presents a framework for graph meta-learning, and proposes an agent equipped with external memory and local action priors adapted to the underlying graphs, showing substantial improvement in one-shot performance over baseline agents.
Graph neural network based coarse-grained mapping prediction
TLDR
This work presents a graph neural network based CG mapping predictor called Deep Supervised Graph Partitioning Model (DSGPM) that treats mapping operators as a graph segmentation problem and finds that predicted CG mapping operators indeed result in good CG MD models when used in simulation.
...
...

References

SHOWING 1-10 OF 72 REFERENCES
Learning Combinatorial Optimization Algorithms over Graphs
TLDR
This paper proposes a unique combination of reinforcement learning and graph embedding that behaves like a meta-algorithm that incrementally constructs a solution, where the action is determined by the output of a graph embedding network capturing the current state of the solution.
GAP: Generalizable Approximate Graph Partitioning Framework
TLDR
This work proposes GAP, a Generalizable Approximate Partitioning framework that takes a deep learning approach to graph partitioning, and defines a differentiable loss function that represents the partitioning objective and use backpropagation to optimize the network parameters.
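As a rough illustration of what such a differentiable partitioning loss can look like, here is a sketch in the spirit of GAP's expected normalized cut plus a balance penalty — not its exact formulation. `probs` plays the role of the soft partition assignments a network would output via softmax.

```python
import numpy as np

def gap_style_loss(adj, probs, n_parts):
    """Sketch of a differentiable partitioning objective in the spirit
    of GAP: an expected normalized cut plus a balance penalty.
    probs[i, k] is the probability that node i lands in partition k."""
    degree = adj.sum(axis=1, keepdims=True)   # (n, 1) node degrees
    volume = probs.T @ degree                 # (k, 1) expected partition volumes
    # Expected normalized cut: mass of edges leaving each partition,
    # weighted by the inverse of that partition's volume.
    exp_cut = ((probs / volume.T) * (adj @ (1.0 - probs))).sum()
    # Penalize partitions whose expected size deviates from n / k.
    balance = ((probs.sum(axis=0) - adj.shape[0] / n_parts) ** 2).sum()
    return exp_cut + balance

# Two disjoint triangles: a perfect 2-way split has zero expected cut.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    adj[u, v] = adj[v, u] = 1.0
perfect = np.repeat(np.eye(2), 3, axis=0)  # nodes 0-2 -> part 0, 3-5 -> part 1
mixed = np.full((6, 2), 0.5)               # maximally uncertain assignment
loss_perfect = gap_style_loss(adj, perfect, 2)
loss_mixed = gap_style_loss(adj, mixed, 2)
```

Because every operation is a differentiable tensor expression, the loss can be backpropagated through to train the assignment network, which is the core idea the TLDR describes.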
Hierarchical Graph Representation Learning with Differentiable Pooling
TLDR
DiffPool is proposed, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion.
Attention, Learn to Solve Routing Problems!
TLDR
A model based on attention layers with benefits over the Pointer Network is proposed and it is shown how to train this model using REINFORCE with a simple baseline based on a deterministic greedy rollout, which is more efficient than using a value function.
Learning Role-based Graph Embeddings
TLDR
The Role2Vec framework is introduced, which uses the flexible notion of attributed random walks, and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks.
node2vec: Scalable Feature Learning for Networks
TLDR
In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
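The biased walk at the heart of node2vec can be sketched as follows. `neighbors` is an adjacency-list dict; `p` and `q` are the return and in-out parameters from the paper. The sampling here is naive for clarity, without the alias tables node2vec uses for efficiency.

```python
import random

def node2vec_walk(neighbors, start, length, p=1.0, q=1.0, rng=random):
    """One p/q-biased random walk: the return parameter p discourages
    (p > 1) or encourages (p < 1) revisiting the previous node, and the
    in-out parameter q biases toward BFS-like (q > 1) or DFS-like
    (q < 1) exploration."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = neighbors[cur]
        if not nbrs:
            break                              # dead end: stop early
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))      # first step is unbiased
            continue
        prev = walk[-2]
        weights = []
        for nxt in nbrs:
            if nxt == prev:
                weights.append(1.0 / p)        # distance 0 from prev: return
            elif nxt in neighbors[prev]:
                weights.append(1.0)            # distance 1 from prev: stay close
            else:
                weights.append(1.0 / q)        # distance 2 from prev: move outward
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk
```

Walks sampled this way are then fed to a skip-gram-style objective to learn the node embeddings; large q keeps walks local (structural roles), small q pushes them outward (community structure).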
Melding the Data-Decisions Pipeline: Decision-Focused Learning for Combinatorial Optimization
TLDR
This work focuses on combinatorial optimization problems and introduces a general framework for decision-focused learning, where the machine learning model is directly trained in conjunction with the optimization algorithm to produce high-quality decisions, and shows that decision-focused learning often leads to improved optimization performance compared to traditional methods.
Inductive Representation Learning on Large Graphs
TLDR
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
Learning Deep Representations for Graph Clustering
TLDR
This work proposes a simple method, which first learns a nonlinear embedding of the original graph by stacked autoencoder, and then runs the k-means algorithm on the embedding to obtain the clustering result, which significantly outperforms conventional spectral clustering.
Stochastic Submodular Maximization: The Case of Coverage Functions
TLDR
This model captures situations where the discrete objective arises as an empirical risk, or is given as an explicit stochastic model, and yields solutions that are guaranteed to match the optimal approximation guarantees, while reducing the computational cost by several orders of magnitude, as demonstrated empirically.
...
...