# Exact Representation of Sparse Networks with Symmetric Nonnegative Embeddings

```bibtex
@inproceedings{Chanpuriya2021ExactRO,
  title={Exact Representation of Sparse Networks with Symmetric Nonnegative Embeddings},
  author={Sudhanshu Chanpuriya and Ryan A. Rossi and Anup B. Rao and Tung Mai and Nedim Lipka and Zhao Song and Cameron Musco},
  year={2021}
}
```

Many models for undirected graphs are based on factorizing the graph’s adjacency matrix; these models find a vector representation of each node such that the predicted probability of a link between two nodes increases with the similarity (dot product) of their associated vectors. Recent work has shown that these models are unable to capture key structures in real-world graphs, particularly heterophilous structures, wherein links occur between dissimilar nodes. In contrast, a factorization with…
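As a minimal sketch of the dot-product factorization family the abstract describes (an illustration of the general model class, not the paper's specific method), a logistic link maps the dot product of two node embeddings to an edge probability:

```python
import numpy as np

def edge_probability(x_i, x_j):
    """Dot-product factorization model: P(edge) = sigmoid(x_i . x_j),
    so more similar embeddings yield a higher predicted link probability."""
    return 1.0 / (1.0 + np.exp(-np.dot(x_i, x_j)))

# Two similar nodes score higher than two dissimilar ones.
a = np.array([1.0, 0.5])
b = np.array([0.9, 0.6])    # similar to a
c = np.array([-1.0, -0.5])  # dissimilar to a
print(edge_probability(a, b) > edge_probability(a, c))  # True
```

This monotone similarity-to-probability mapping is exactly what makes heterophilous links (edges between dissimilar nodes) hard for such models to express.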

## References


### Node Embeddings and Exact Low-Rank Representations of Complex Networks

- Computer Science, NeurIPS
- 2020

This work proves that a minor relaxation of their model can generate sparse graphs with high triangle density and gives a simple algorithm based on logistic principal component analysis (LPCA) that succeeds in finding such exact embeddings.
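A hypothetical sketch of the logistic PCA idea mentioned here (fitting factors whose sigmoid-transformed product matches the adjacency matrix; plain gradient descent on logistic loss, not necessarily the authors' exact algorithm):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lpca_fit(A, d, steps=2000, lr=0.1, seed=0):
    """Fit factors X, Y so that sigmoid(X @ Y.T) approximates adjacency A.
    Uses plain gradient descent on the elementwise logistic loss."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = rng.normal(scale=0.1, size=(n, d))
    Y = rng.normal(scale=0.1, size=(n, d))
    for _ in range(steps):
        R = sigmoid(X @ Y.T) - A  # residual = gradient of loss w.r.t. logits
        X, Y = X - lr * (R @ Y), Y - lr * (R.T @ X)
    return X, Y

# Tiny example: a 4-node cycle; thresholding sigmoid(X Y^T) at 0.5
# should recover A if the fit converged.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X, Y = lpca_fit(A, d=2)
print((sigmoid(X @ Y.T) > 0.5).astype(float))
```

An "exact" embedding in the sense of this line of work is one where rounding the predicted probabilities reproduces the adjacency matrix entry for entry.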

### The impossibility of low-rank representations for triangle-rich complex networks

- Computer Science, Proceedings of the National Academy of Sciences
- 2020

It is mathematically proved that low-dimensional embeddings cannot generate graphs with both low average degree and large clustering coefficients; any embedding that achieves both properties must have rank nearly linear in the number of vertices.

### Capacity and Bias of Learned Geometric Embeddings for Directed Graphs

- Computer Science, NeurIPS
- 2021

A novel variant of box embeddings is introduced that uses a learned smoothing parameter to achieve better representational capacity than vector models in low dimensions, while also avoiding performance saturation common to other geometric models in high dimensions.

### An Attract-Repel Decomposition of Undirected Networks

- Computer Science, arXiv
- 2021

The attract-repel (AR) decomposition is demonstrated on real social networks, where it measures the amount of latent homophily and heterophily; it is also applied to co-occurrence networks to discover roles in teams and to find substitutable ingredients in recipes.

### Inductive Representation Learning on Large Graphs

- Computer Science, NIPS
- 2017

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.

### Symmetric Nonnegative Matrix Factorization for Graph Clustering

- Computer Science, SDM
- 2012

Symmetric NMF is proposed as a general framework for graph clustering, which inherits the advantages of NMF by enforcing nonnegativity on the clustering assignment matrix, and serves as a potential basis for many extensions.
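A minimal sketch of the symmetric NMF objective this entry describes, A ≈ H Hᵀ with H ≥ 0, using damped multiplicative updates (a standard stable update rule; the cited paper itself develops a Newton-like solver):

```python
import numpy as np

def symnmf(A, k, iters=500, seed=0, eps=1e-9):
    """Symmetric NMF: find nonnegative H with A approx. H @ H.T.
    Damped multiplicative updates (beta = 1/2) keep H nonnegative
    and stabilize the iteration."""
    rng = np.random.default_rng(seed)
    H = rng.random((A.shape[0], k))
    for _ in range(iters):
        H = H * (0.5 + 0.5 * (A @ H) / (H @ (H.T @ H) + eps))
    return H

# Two disjoint triangles: the argmax over H's columns acts as a
# cluster assignment, which is how symmetric NMF does graph clustering.
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5)]:
    A[i, j] = A[j, i] = 1.0
H = symnmf(A, k=2)
labels = H.argmax(axis=1)
print(labels)  # ideally one shared label per triangle
```

Enforcing nonnegativity on H is what lets its entries be read directly as soft cluster memberships, the key advantage over unconstrained factorizations.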

### Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs

- Computer Science, NeurIPS
- 2020

This work identifies a set of key designs -- ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations -- that boost learning from the graph structure under heterophily and combines them into a graph neural network, H2GCN, which is used as the base method to empirically evaluate the effectiveness of the identified designs.

### Network Embedding as Matrix Factorization: Unifying DeepWalk, LINE, PTE, and node2vec

- Computer Science, WSDM
- 2018

The NetMF method offers significant improvements over DeepWalk and LINE on conventional network mining tasks and establishes theoretical connections between skip-gram-based network embedding algorithms and the theory of graph Laplacians.

### Two Sides of the Same Coin: Heterophily and Oversmoothing in Graph Convolutional Neural Networks

- Computer Science, arXiv
- 2021

This work takes a new unified perspective to understand the performance degradation of GCNs at the node level, and shows the effectiveness of two strategies: degree correction, which learns to adjust degree coefficients, and signed messages, which learn to optionally negate messages and can help under certain conditions.

### NetGAN without GAN: From Random Walks to Low-Rank Approximations

- Computer Science, ICML
- 2020

This paper investigates the implicit bias of NetGAN and finds that the root of its generalization properties lies not in the GAN architecture, but in an inconspicuous low-rank approximation of the logits of the random-walk transition matrix.