Tag2Vec: Learning Tag Representations in Tag Networks

@article{Wang2019Tag2VecLT,
  title={Tag2Vec: Learning Tag Representations in Tag Networks},
  author={Junshan Wang and Zhicong Lu and Guojie Song and Yue Fan and Lun Du and Wei Lin},
  journal={The World Wide Web Conference},
  year={2019}
}
Network embedding is a method to learn low-dimensional representation vectors for nodes in complex networks. In real networks, nodes may have multiple tags, but existing methods ignore the abundant semantic and hierarchical information of tags. This information is useful for many network applications and is usually very stable. In this paper, we propose a tag representation learning model, Tag2Vec, which mixes nodes and tags into a hybrid network. Firstly, for tag networks, we define semantic… 
DANE: Domain Adaptive Network Embedding
TLDR
A novel Domain Adaptive Network Embedding framework, which applies graph convolutional network to learn transferable embeddings and which outperforms other state-of-the-art network embedding baselines in cross-network domain adaptation tasks.
Embedding Heterogeneous Information Network in Hyperbolic Spaces
TLDR
This article proposes a novel HIN embedding model HHNE, which employs the meta-path guided random walk to capture the structure and semantic relations between nodes, and derives an effective optimization strategy to update the hyperbolic embeddings iteratively.
Is a Single Model Enough? MuCoS: A Multi-Model Ensemble Learning Approach for Semantic Code Search
TLDR
This paper proposes MuCoS, a multi-model ensemble learning architecture for semantic code search that combines several individual learners, each of which emphasizes a specific perspective of code snippets.
Understanding and Improvement of Adversarial Training for Network Embedding from an Optimization Perspective
TLDR
This paper explains AdvTNE theoretically from an optimization perspective, considering the power-law property of networks and the optimization objective, and proposes a new activation function to enhance the performance of AdvTNE.
Neuron Campaign for Initialization Guided by Information Bottleneck Theory
TLDR
This work designs two criteria for better DNN initialization and further designs a neuron campaign initialization algorithm to efficiently select a good initialization for a neural network on a given dataset.
Neuron with Steady Response Leads to Better Generalization
  • Qiang Fu, Lun Du, +4 authors Dongmei Zhang • Computer Science • ArXiv • 2021
TLDR
This paper proposes a new regularization method called Neuron Steadiness Regularization to reduce neuron intra-class response variance and conducts extensive experiments on Multilayer Perceptron, Convolutional Neural Network, and Graph Neural Network with popular benchmark datasets, which show that this method consistently outperforms the vanilla version of models with significant gain and low additional overhead.

References

SHOWING 1-10 OF 37 REFERENCES
HIN2Vec: Explore Meta-paths in Heterogeneous Information Networks for Representation Learning
TLDR
Empirical results show that HIN2Vec soundly outperforms the state-of-the-art representation learning models for network data, including DeepWalk, LINE, node2vec, PTE, HINE and ESim, by 6.6% to 23.8% of micro-F1 in multi-label node classification and by 5% to 70.8% in link prediction.
node2vec: Scalable Feature Learning for Networks
TLDR
In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
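The biased random walk summarized above can be sketched as follows. The toy graph, the parameter values, and the function name are illustrative assumptions, not taken from the paper; the return parameter p and in-out parameter q bias the walk toward BFS-like or DFS-like exploration.

```python
import random

# Toy undirected graph as an adjacency list (hypothetical example).
graph = {
    0: [1, 2],
    1: [0, 2, 3],
    2: [0, 1, 3],
    3: [1, 2, 4],
    4: [3],
}

def node2vec_walk(graph, start, length, p=1.0, q=0.5, rng=random):
    """Second-order biased random walk in the spirit of node2vec.

    Weight 1/p favors returning to the previous node; weight 1/q favors
    moving further away (DFS-like when q < 1, BFS-like when q > 1).
    """
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = graph[cur]
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))
            continue
        prev = walk[-2]
        weights = []
        for x in nbrs:
            if x == prev:
                weights.append(1.0 / p)   # distance 0 from previous node
            elif x in graph[prev]:
                weights.append(1.0)       # distance 1 from previous node
            else:
                weights.append(1.0 / q)   # distance 2 from previous node
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk
```

Walks generated this way would then be fed to a skip-gram model to learn node embeddings.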
Network Representation Learning with Rich Text Information
TLDR
By proving that DeepWalk, a state-of-the-art network representation method, is actually equivalent to matrix factorization (MF), this work proposes text-associated DeepWalk (TADW), which incorporates text features of vertices into network representation learning under the framework of matrix factorization.
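The DeepWalk-as-matrix-factorization view can be illustrated with a minimal sketch: build a window-averaged transition matrix from a toy adjacency matrix and factorize it with truncated SVD to obtain embeddings. The graph, window size, and dimension here are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy adjacency matrix of a 4-node undirected graph (hypothetical).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# Row-normalized transition matrix; DeepWalk's implicit matrix involves
# an average of its powers over the skip-gram window.
P = A / A.sum(axis=1, keepdims=True)
M = (P + P @ P) / 2   # window size 2, as a small illustration

# Factorize M with truncated SVD; rows of `emb` serve as node embeddings.
U, s, Vt = np.linalg.svd(M)
k = 2
emb = U[:, :k] * np.sqrt(s[:k])
```

TADW extends this idea by constraining one factor of the decomposition with the vertices' text-feature matrix.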
Label Informed Attributed Network Embedding
TLDR
A novel Label informed Attributed Network Embedding (LANE) framework that can smoothly incorporate label information into the attributed network embedding while preserving their correlations is proposed and achieves significantly better performance compared with the state-of-the-art embedding algorithms.
A General Framework for Content-enhanced Network Representation Learning
TLDR
This paper proposes content-enhanced network embedding (CENE), which is capable of jointly leveraging the network structure and the content information, and shows that its models outperform all existing network embeddedding methods, demonstrating the merits of content information and joint learning.
Dynamic Network Embedding: An Extended Approach for Skip-gram Based Network Embedding
TLDR
This paper proposes a stable dynamic embedding framework for network embedding that can keep the optimality of the objective in the Skip-gram based methods in theory and can update the most affected original vertex representations during the evolvement of the network.
metapath2vec: Scalable Representation Learning for Heterogeneous Networks
TLDR
Two scalable representation learning models, namely metapath2vec and metapath2vec++, are developed that are able to not only outperform state-of-the-art embedding models in various heterogeneous network mining tasks, but also discern the structural and semantic correlations between diverse network objects.
PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks
TLDR
A semi-supervised representation learning method for text data, called predictive text embedding (PTE), which is comparable or more effective, much more efficient, and has fewer parameters to tune.
LINE: Large-scale Information Network Embedding
TLDR
A novel network embedding method called "LINE", which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted, and optimizes a carefully designed objective function that preserves both the local and global network structures.
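The local-structure part of such an objective can be sketched as a first-order proximity loss trained by SGD with negative sampling: embeddings of connected nodes are pulled together, and random node pairs are pushed apart. The toy edge list, learning rate, and sampling scheme are illustrative assumptions, not LINE's exact training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # toy 4-node cycle graph
n, dim = 4, 2
emb = rng.normal(scale=0.1, size=(n, dim))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for _ in range(200):
    for u, v in edges:
        # Positive pair: increase sigmoid(emb[u] . emb[v]).
        g = 1.0 - sigmoid(emb[u] @ emb[v])
        du = lr * g * emb[v]
        dv = lr * g * emb[u]
        emb[u] += du
        emb[v] += dv
        # One random negative sample: decrease similarity to it.
        w = rng.integers(n)
        g = -sigmoid(emb[u] @ emb[w])
        emb[u] += lr * g * emb[w]
        emb[w] += lr * g * emb[u]
```

Second-order proximity would add a separate "context" embedding per node and score neighbors against contexts instead of node embeddings directly.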
GraRep: Learning Graph Representations with Global Structural Information
TLDR
A novel model for learning vertex representations of weighted graphs that integrates global structural information of the graph into the learning process and significantly outperforms other state-of-the-art methods in such tasks.