Deep Neural Networks for Learning Graph Representations

Shaosheng Cao, Wei Lu, Qiongkai Xu. AAAI Conference on Artificial Intelligence.
In this paper, we propose a novel model for learning graph representations, which generates a low-dimensional vector representation for each vertex by capturing the graph's structural information. The advantages of our approach are illustrated from both theoretical and empirical perspectives.
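As a rough illustration of the structural-information stage such models rely on, the sketch below accumulates vertex co-occurrence statistics by random surfing over a small graph and converts them to a PPMI matrix. The deep-autoencoder stage that would follow is omitted, and the surfing scheme and parameters here are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def random_surf(adj, steps=4, alpha=0.98):
    """Accumulate co-occurrence probabilities by surfing the graph.

    adj: (n, n) adjacency matrix; alpha: probability of continuing
    the surf rather than restarting at the origin vertex.
    """
    n = len(adj)
    trans = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic transitions
    prob = np.eye(n)
    acc = np.zeros((n, n))
    for _ in range(steps):
        prob = alpha * prob @ trans + (1 - alpha) * np.eye(n)
        acc += prob
    return acc

def ppmi(mat):
    """Positive pointwise mutual information of a co-occurrence matrix."""
    total = mat.sum()
    row = mat.sum(axis=1, keepdims=True)
    col = mat.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore"):
        pmi = np.log(mat * total / (row * col))
    return np.maximum(pmi, 0.0)

# 4-cycle graph: surf for co-occurrence statistics, then take PPMI.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
ppmi_mat = ppmi(random_surf(adj))
```

The PPMI matrix, rather than raw counts, is what a downstream autoencoder would compress into vertex embeddings.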


Semi-AttentionAE: An Integrated Model for Graph Representation Learning

This paper integrates a supervised graph attention network, which captures both node features and network structure, with an unsupervised autoencoder that reduces dimensionality while preserving structural information.

Learning Edge Representations via Low-Rank Asymmetric Projections

This work proposes a new method for embedding graphs while preserving directed edge information: it explicitly models an edge as a function of node embeddings, and proposes a novel objective, the graph likelihood, which contrasts information from sampled random walks with non-existent edges.

Multiple Kernel Representation Learning on Networks

A weighted matrix factorization model is proposed that encodes random walk-based information about the nodes of the network; the approach is extended with a multiple kernel learning formulation that makes it possible to use kernel functions without realizing the exact proximity matrix.

Sequence to Sequence Network for Learning Network Representation

  • Qi Liang, Meilin Zhou, Lu Ma, Dan Luo, Peng Zhang, Bin Wang
  • Computer Science
    2019 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom)
  • 2019
A novel method, SSNR, is proposed that not only preserves both local and global network structure but also captures nonlinear information from the network, achieving more discriminative node representations.

Symmetric Graph Convolutional Autoencoder for Unsupervised Graph Representation Learning

A new numerically stable form of Laplacian sharpening is proposed by incorporating signed graphs, and a new cost function that simultaneously finds a latent representation and a latent affinity matrix is devised to boost performance on image clustering tasks.

Nonuniform Hyper-Network Embedding with Dual Mechanism

This article proposes a flexible model called Hyper2vec, combines the features of hyperedges by considering the dual hyper-networks to build a further model called NHNE based on 1D convolutional neural networks, and trains a tuplewise similarity function for the nonuniform relationships in hyper-networks.

Feature-Dependent Graph Convolutional Autoencoders with Adversarial Training Methods

A framework using an autoencoder for graph embedding (GED) and its variational version (VEGD) is proposed; the Graph Convolutional Network (GCN) decoder reconstructs both structural characteristics and node features, naturally capturing the interaction between these two sources of information while learning the embedding.

Interpretable Feature Learning of Graphs using Tensor Decomposition

Two novel tensor decomposition-based node embedding algorithms are presented that can learn node features from arbitrary graph types (undirected, directed, and/or weighted) without relying on eigendecomposition or word embedding-based hyperparameters.

Learning Deep Representations for Graph Clustering

This work proposes a simple method that first learns a nonlinear embedding of the original graph with a stacked autoencoder and then runs the $k$-means algorithm on the embedding to obtain the clustering result; this approach significantly outperforms conventional spectral clustering.
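The two-stage recipe above (nonlinear embedding, then $k$-means) can be sketched in plain NumPy with a single-layer autoencoder standing in for the stacked one; the architecture, learning rate, and epoch count are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def autoencode(x, hidden=2, lr=0.1, epochs=500):
    """Single-layer autoencoder: tanh encoder, linear decoder, MSE loss."""
    n, d = x.shape
    w_enc = rng.normal(0.0, 0.1, (d, hidden))
    w_dec = rng.normal(0.0, 0.1, (hidden, d))
    for _ in range(epochs):
        h = np.tanh(x @ w_enc)                      # encode
        err = h @ w_dec - x                         # reconstruction error
        w_dec -= lr * h.T @ err / n
        w_enc -= lr * x.T @ ((err @ w_dec.T) * (1 - h ** 2)) / n
    return np.tanh(x @ w_enc)                       # low-dimensional embedding

def kmeans(x, k=2, iters=20):
    """Plain Lloyd's algorithm, run on the learned embedding."""
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(axis=0)
    return labels

# Two well-separated groups of points in 5-D.
data = np.vstack([rng.normal(3.0, 0.3, (5, 5)),
                  rng.normal(-3.0, 0.3, (5, 5))])
emb = autoencode(data)     # stage 1: nonlinear embedding
labels = kmeans(emb)       # stage 2: cluster in the embedding space
```

The point of the design is that $k$-means operates in the learned space rather than on the raw graph representation.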

GraRep: Learning Graph Representations with Global Structural Information

A novel model for learning vertex representations of weighted graphs that integrates global structural information of the graph into the learning process, significantly outperforming other state-of-the-art methods on downstream tasks.

DeepWalk: online learning of social representations

DeepWalk is an online learning algorithm that builds useful incremental results and is trivially parallelizable, which makes it suitable for a broad class of real-world applications such as network classification and anomaly detection.
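The walk-generation stage that DeepWalk feeds into skip-gram can be sketched as follows; the skip-gram training itself is omitted, and `num_walks` and `walk_len` are illustrative parameters:

```python
import random

def random_walks(graph, num_walks=10, walk_len=5, seed=42):
    """Build a corpus of truncated random walks to feed skip-gram.

    graph: dict mapping each node to a list of its neighbours.
    """
    rng = random.Random(seed)
    walks = []
    for _ in range(num_walks):
        nodes = list(graph)
        rng.shuffle(nodes)                 # new start order on every pass
        for start in nodes:
            walk = [start]
            while len(walk) < walk_len:
                nbrs = graph[walk[-1]]
                if not nbrs:               # dead end: truncate the walk
                    break
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
corpus = random_walks(graph)
```

Because each pass over the shuffled node list is independent, the passes can run in parallel, which is what makes the algorithm trivially parallelizable.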

Neural Word Embedding as Implicit Matrix Factorization

It is shown that representing words with a sparse Shifted Positive PMI word-context matrix improves results on two word similarity tasks and one of two analogy tasks; it is conjectured that this stems from the weighted nature of SGNS's factorization.
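The shifted positive PMI (SPPMI) construction can be sketched directly; factorizing it with a truncated SVD then yields SGNS-style word vectors. The shift `k` plays the role of the number of negative samples, and the toy counts below are illustrative:

```python
import numpy as np

def sppmi(counts, k=1):
    """Shifted positive PMI: max(PMI(w, c) - log k, 0)."""
    total = counts.sum()
    row = counts.sum(axis=1, keepdims=True)
    col = counts.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore"):
        pmi = np.log(counts * total / (row * col))
    return np.maximum(pmi - np.log(k), 0.0)   # zero counts map to 0

# Toy word-context co-occurrence counts.
counts = np.array([[4.0, 1.0, 0.0],
                   [1.0, 4.0, 2.0],
                   [0.0, 2.0, 3.0]])
m = sppmi(counts, k=2)

# SVD of the (sparse) SPPMI matrix gives SGNS-style word vectors.
u, s, _ = np.linalg.svd(m)
word_vecs = u[:, :2] * np.sqrt(s[:2])
```

Clipping at zero is what keeps the matrix sparse: unobserved pairs contribute nothing instead of negative infinities.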

Relational learning via latent social dimensions

This work proposes to extract latent social dimensions based on network information and then utilize them as features for discriminative learning; the approach outperforms representative relational learning methods based on collective inference, especially when few labeled data are available.

GloVe: Global Vectors for Word Representation

A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.

Extracting and composing robust features with denoising autoencoders

This work introduces and motivates a new training principle for unsupervised learning of representations, based on the idea of making the learned representations robust to partial corruption of the input pattern.
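The denoising criterion can be sketched as masking corruption followed by reconstruction of the clean input; the corruption fraction here is an illustrative choice:

```python
import numpy as np

def mask_corrupt(x, frac=0.3, rng=None):
    """Masking noise: zero out a random fraction of input components.

    Training then minimises ||decode(encode(corrupt(x))) - x||^2 against
    the CLEAN input, forcing features to capture dependencies between
    input components rather than copying them through.
    """
    rng = rng or np.random.default_rng(0)
    keep = rng.random(x.shape) >= frac
    return x * keep

x = np.ones((4, 8))
x_tilde = mask_corrupt(x)   # corrupted view fed to the encoder
```

The key design point is that the reconstruction target is the uncorrupted `x`, not `x_tilde`.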

Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics

The basic idea is to perform nonlinear logistic regression to discriminate between the observed data and artificially generated noise; the new method is shown to strike a competitive trade-off in comparison to other estimation methods for unnormalized models.
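A minimal NCE sketch, assuming a 1-D unnormalized Gaussian model and uniform noise (all parameters and the toy data are illustrative): logistic regression between data and noise samples recovers both the model parameter and the normalizing constant:

```python
import numpy as np

rng = np.random.default_rng(1)

def nce_fit(data, noise, log_pn, steps=3000, lr=0.05):
    """Fit the unnormalised model log f(x) = -(x - mu)^2 / 2 + c by
    logistic regression between data (label 1) and noise (label 0).
    The log-normaliser c is learned like any other parameter."""
    mu, c = 0.0, 0.0
    x = np.concatenate([data, noise])
    y = np.concatenate([np.ones(len(data)), np.zeros(len(noise))])
    for _ in range(steps):
        g = (-(x - mu) ** 2 / 2 + c) - log_pn(x)   # classifier logit
        p = 1.0 / (1.0 + np.exp(-g))               # P(label = 1 | x)
        err = p - y                                 # cross-entropy gradient
        mu -= lr * np.mean(err * (x - mu))          # d logit / d mu = x - mu
        c -= lr * np.mean(err)
    return mu, c

data = rng.normal(2.0, 1.0, 500)     # samples from the true model, mean = 2
noise = rng.uniform(-4.0, 8.0, 500)  # noise with a known, tractable density
mu, c = nce_fit(data, noise, lambda x: np.full_like(x, -np.log(12.0)))
```

Because the noise density is known, the classifier's optimal logit pins down the model's density including its scale, which is why `c` need not be computed analytically.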

Scalable learning of collective behavior based on sparse social dimensions

This work proposes an edge-centric clustering scheme to extract sparse social dimensions that can efficiently handle networks of millions of actors while demonstrating prediction performance comparable to non-scalable methods.

Reducing the Dimensionality of Data with Neural Networks

This work describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data.