# Scalable Hypergraph Learning and Processing

```bibtex
@article{Huang2015ScalableHL,
  title   = {Scalable Hypergraph Learning and Processing},
  author  = {Jin Huang and Rui Zhang and Jeffrey Xu Yu},
  journal = {2015 IEEE International Conference on Data Mining},
  year    = {2015},
  pages   = {775-780}
}
```
• Published 2015
• Computer Science
• 2015 IEEE International Conference on Data Mining
A hypergraph allows a hyperedge to connect more than two vertices and thus captures high-order relationships; many hypergraph learning algorithms that exploit this are shown to be highly effective in various applications. When learning on large hypergraphs, converting them to graphs in order to employ distributed graph frameworks is a common approach, yet it results in major efficiency drawbacks, including an inflated problem size, excessive replicas, and unbalanced workloads. To avoid such drawbacks, we take…
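The problem-size inflation mentioned in the abstract is easy to see with clique expansion, one common hypergraph-to-graph conversion (a minimal sketch for illustration; the conversion a given framework uses may differ):

```python
from itertools import combinations

def clique_expand(hyperedges):
    """Convert a hypergraph to a graph by replacing each hyperedge
    with a clique over its vertices."""
    edges = set()
    for he in hyperedges:
        for u, v in combinations(sorted(he), 2):
            edges.add((u, v))
    return edges

# A single 5-way hyperedge becomes C(5, 2) = 10 pairwise edges,
# illustrating the inflated problem size the abstract refers to.
hyperedges = [{1, 2, 3, 4, 5}, {4, 5, 6}]
print(len(clique_expand(hyperedges)))  # 12 edges from just 2 hyperedges
```

Note that the conversion also loses information: a clique over five vertices is indistinguishable from ten separate pairwise relations.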
HyperX: A Scalable Hypergraph Framework
• Computer Science
• IEEE Transactions on Knowledge and Data Engineering
• 2019
This paper proposes HyperX, a general-purpose distributed hypergraph processing framework built on top of Spark that achieves an order of magnitude improvement for running hypergraph learning algorithms compared with graph conversion based approaches in terms of running time, network communication costs, and memory consumption.
Distributed Hypergraph Processing Using Intersection Graphs
This paper proposes converting a hypergraph into an intersection graph before partitioning, leveraging the inherent shared relationships among hyperedges, and designs a distributed processing framework named Hyraph that can directly run hypergraph analysis algorithms on these intersection graphs.
HYPE: Massive Hypergraph Partitioning with Neighborhood Expansion
• Computer Science
• 2018 IEEE International Conference on Big Data (Big Data)
• 2018
HYPE is proposed, a hypergraph partitioner that exploits the neighborhood relations between vertices in the hypergraph using an efficient implementation of neighborhood expansion; it improves partitioning quality and reduces runtime compared to streaming partitioning.
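As a rough illustration of the neighborhood-expansion idea, a block can be grown by repeatedly absorbing fringe vertices (a toy sketch only, not HYPE's actual algorithm, which scores fringe candidates much more carefully and works on hypergraphs):

```python
def neighborhood_expansion_partition(adj, k):
    """Toy partitioner that grows each of k blocks by expanding
    from a seed into the block's neighborhood, under a balance cap."""
    n = len(adj)
    cap = -(-n // k)                     # ceil(n / k): balance bound
    unassigned = set(adj)
    parts = []
    for _ in range(k):
        part, frontier = set(), set()
        while unassigned and len(part) < cap:
            # prefer a neighbor of the growing block, else seed anew
            v = frontier.pop() if frontier else next(iter(unassigned))
            unassigned.discard(v)
            part.add(v)
            frontier |= adj[v] & unassigned
        parts.append(part)
    return parts

# A 6-vertex path split into two balanced blocks.
path = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4, 6}, 6: {5}}
print([len(p) for p in neighborhood_expansion_partition(path, 2)])  # [3, 3]
```

Growing along the neighborhood tends to keep blocks locally connected, which is what reduces the cut compared to assigning vertices in arbitrary (streaming) order.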
Hypercore Maintenance in Dynamic Hypergraphs
• Qi Luo, Dongxiao Yu
• Computer Science
• 2021 IEEE 37th International Conference on Data Engineering (ICDE)
• 2021
The proposed algorithms pinpoint the vertices and hyperedges whose hypercore numbers have to be updated by traversing only a small sub-hypergraph, and experiments demonstrate the superiority of the algorithms in terms of efficiency.
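For intuition, a hypercore can be computed from scratch by iterative peeling; the sketch below uses one simple degree-based definition (hypercore definitions vary across papers), and full recomputation of this kind is exactly what the maintenance algorithms above avoid on updates:

```python
def k_hypercore(vertices, hyperedges, k):
    """Peel to the k-hypercore under a plain degree-based definition:
    every remaining vertex must lie in at least k remaining hyperedges,
    and a hyperedge survives only while all of its vertices survive."""
    V = set(vertices)
    E = [set(e) for e in hyperedges]
    while True:
        E = [e for e in E if e <= V]                  # drop broken hyperedges
        deg = {v: sum(v in e for e in E) for v in V}  # incidence counts
        drop = {v for v in V if deg[v] < k}
        if not drop:
            return V, E
        V -= drop

core, _ = k_hypercore({1, 2, 3, 4}, [{1, 2}, {1, 3}, {2, 3}, {3, 4}], k=2)
print(sorted(core))  # [1, 2, 3]
```

Vertex 4 has degree 1 and is peeled first, which removes hyperedge {3, 4}; the remaining triangle of 2-element hyperedges is stable under k = 2.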
High-Quality Hypergraph Partitioning
KaHyPar, even without the memetic component, computes better solutions than all competing algorithms for both the cut-net and the connectivity metric, while being faster than Zoltan-AlgD and equally fast as hMETIS.
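The two objectives named here are standard in hypergraph partitioning: cut-net counts the nets (hyperedges) spanning more than one block, while connectivity sums λ(n) − 1 over nets, where λ(n) is the number of blocks a net touches. A small sketch (unit net weights for simplicity):

```python
def cut_metrics(nets, part_of):
    """Cut-net and connectivity (lambda - 1) objectives for a
    hypergraph partition; `part_of` maps vertex -> block id."""
    cut_net = 0
    connectivity = 0
    for net in nets:
        blocks = {part_of[v] for v in net}
        if len(blocks) > 1:
            cut_net += 1                  # the net is cut at all
        connectivity += len(blocks) - 1   # extra blocks the net spans
    return cut_net, connectivity

nets = [{1, 2, 3}, {3, 4}, {5, 6}]
part = {1: 0, 2: 0, 3: 1, 4: 1, 5: 0, 6: 1}
print(cut_metrics(nets, part))  # (2, 2)
```

The two metrics coincide on bipartitions but diverge for k > 2 blocks, where connectivity penalizes a net more the more blocks it straddles.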
TR 19-003 MESH: A Flexible Distributed Hypergraph Processing System
With the rapid growth of large online social networks, the ability to analyze large-scale social structure and behavior has become critically important, and this has led to the development of several…
MESH: A Flexible Distributed Hypergraph Processing System
• Computer Science
• 2019 IEEE International Conference on Cloud Engineering (IC2E)
• 2019
MESH provides an easy-to-use and expressive application programming interface that naturally extends the "think like a vertex" model common to many popular graph processing systems, and is competitive in performance to HyperX, another hypergraph processing system based on Spark.
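A hypergraph-flavored "think like a vertex" loop can be sketched as alternating hyperedge-side and vertex-side aggregation (an illustrative toy, not MESH's actual API):

```python
def hyper_iterate(hyperedges, init, e_update, v_update, rounds):
    """Each round, hyperedges combine their members' values, then
    vertices combine the values of their incident hyperedges."""
    vals = dict(init)
    incident = {v: [] for v in vals}
    for i, he in enumerate(hyperedges):
        for v in he:
            incident[v].append(i)
    for _ in range(rounds):
        e_vals = [e_update([vals[v] for v in he]) for he in hyperedges]
        vals = {v: v_update([e_vals[i] for i in incident[v]]) for v in vals}
    return vals

# Propagate the maximum label across overlapping hyperedges.
print(hyper_iterate([{1, 2}, {2, 3}], {1: 1, 2: 2, 3: 3}, max, max, 2))
# {1: 3, 2: 3, 3: 3}
```

Treating hyperedges as first-class compute units, rather than expanding them into cliques, is what keeps the per-round work proportional to the incidence structure.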
Scalable hypergraph partitioning
Hypergraph partitioning is investigated since hypergraphs provide a better level of abstraction than normal graphs, and restreaming approaches are examined because the partitioning results of real-time strategies are often not satisfactory.
How Much and When Do We Need Higher-order Information in Hypergraphs? A Case Study on Hyperedge Prediction
• Computer Science
• WWW
• 2020
This work proposes a method of incrementally representing group interactions using a notion of n-projected graph whose accumulation contains information on up to n-way interactions, and quantifies the accuracy of solving a task as n grows for various datasets.
Augmented Sparsifiers for Generalized Hypergraph Cuts
• Computer Science, Mathematics
• ArXiv
• 2020
A new framework of sparsifying hypergraph-to-graph reductions is introduced, where a hypergraph cut defined by submodular cardinality-based splitting functions is $(1+\varepsilon)$-approximated by a cut on a directed graph.

#### References

Showing 1-10 of 22 references
Hypergraph partitioning for document clustering: a unified clique perspective
• Computer Science
• SIGIR '08
• 2008
The experimental results show that, with shared (reverse) nearest neighbor based hyperedges, the clustering performance can be improved significantly in terms of various external validation measures without the need for fine tuning of parameters.
Learning with Hypergraphs: Clustering, Classification, and Embedding
• Computer Science, Mathematics
• NIPS
• 2006
This paper generalizes the powerful methodology of spectral clustering, which originally operates on undirected graphs, to hypergraphs, and further develops algorithms for hypergraph embedding and transductive classification on the basis of the spectral hypergraph clustering approach.
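The spectral construction in this line of work centers on the normalized hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}; a small sketch of building it from an incidence matrix:

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian of Zhou et al.:
    L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2},
    with H the |V| x |E| incidence matrix and w the hyperedge weights."""
    n, m = H.shape
    w = np.ones(m) if w is None else np.asarray(w, dtype=float)
    dv = H @ w                         # weighted vertex degrees
    de = H.sum(axis=0)                 # hyperedge sizes
    Dv_isqrt = np.diag(1.0 / np.sqrt(dv))
    theta = Dv_isqrt @ H @ np.diag(w / de) @ H.T @ Dv_isqrt
    return np.eye(n) - theta

# Two overlapping hyperedges over four vertices; L is symmetric and
# its smallest eigenvalue is 0 (eigenvector Dv^{1/2} * 1), as for
# ordinary graph Laplacians.
H = np.array([[1, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
L = hypergraph_laplacian(H)
print(np.linalg.eigvalsh(L)[0])        # smallest eigenvalue, ~0
```

Spectral clustering then uses the eigenvectors of the smallest nonzero eigenvalues of L as an embedding, exactly as in the graph case.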
Balanced graph edge partition
• Computer Science, Mathematics
• KDD
• 2014
This paper describes the expected costs of vertex and edge partitions with and without aggregation of messages, and obtains the first approximation algorithms for the balanced edge-partition problem; for the case of no aggregation, the result matches the best known approximation ratio.
Trinity: a distributed graph engine on a memory cloud
• Computer Science
• SIGMOD '13
• 2013
This paper introduces Trinity, a general-purpose graph engine over a distributed memory cloud. Trinity leverages graph access patterns in both online and offline computation to optimize memory and communication for best performance, supporting fast graph exploration as well as efficient parallel computing.
Pregel: a system for large-scale graph processing
A model for processing large graphs that has been designed for efficient, scalable and fault-tolerant implementation on clusters of thousands of commodity computers, and its implied synchronicity makes reasoning about programs easier.
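The superstep model can be illustrated with a toy bulk-synchronous loop (a sketch of the programming model, not Google's implementation): vertices vote to halt by sending nothing and are reactivated by incoming messages.

```python
def pregel(graph, init, compute, max_supersteps=30):
    """Toy bulk-synchronous superstep loop in the spirit of Pregel."""
    state = dict(init)
    inbox = {v: [] for v in graph}
    active = set(graph)                      # every vertex starts active
    for _ in range(max_supersteps):
        if not active:
            break                            # all vertices voted to halt
        outbox = {v: [] for v in graph}
        for v in active:
            state[v], sends = compute(v, state[v], inbox[v])
            for u, m in sends:
                outbox[u].append(m)
        # a vertex runs in the next superstep iff it received a message
        inbox, active = outbox, {v for v in graph if outbox[v]}
    return state

g = {1: [2], 2: [1, 3], 3: [2]}

def max_compute(v, val, msgs):
    """Classic demo: flood the maximum vertex id through the graph."""
    new = max([val] + msgs)
    if new != val or not msgs:               # improved, or first superstep
        return new, [(u, new) for u in g[v]]
    return new, []                           # vote to halt: send nothing

print(pregel(g, {v: v for v in g}, max_compute))  # {1: 3, 2: 3, 3: 3}
```

The barrier between supersteps is the "implied synchronicity" the summary mentions: a message sent in superstep s is only visible in superstep s + 1, so there are no data races to reason about.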
Hypergraph with sampling for image retrieval
• Mathematics, Computer Science
• Pattern Recognit.
• 2011
A new transductive learning framework for image retrieval is proposed, in which images are taken as vertices in a weighted hypergraph and the task of image search is formulated as the problem of hypergraph ranking.
Spectral Analysis for Billion-Scale Graphs: Discoveries and Implementation
• Computer Science
• PAKDD
• 2011
The proposed HEIGEN algorithm is carefully designed to be accurate, efficient, and able to run on the highly scalable MapReduce (Hadoop) environment, which enables HEIGEN to handle matrices more than 1000× larger than those that can be analyzed by existing algorithms.
PowerGraph: Distributed Graph-Parallel Computation on Natural Graphs
• Computer Science
• OSDI
• 2012
This paper describes the challenges of computation on natural graphs in the context of existing graph-parallel abstractions and introduces the PowerGraph abstraction which exploits the internal structure of graph programs to address these challenges.
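PowerGraph's key observation is that many vertex programs decompose into a commutative, associative gather step plus an apply step, which lets the work of a high-degree vertex be split across machines. PageRank written in that shape (a shared-memory sketch for illustration, not the distributed system):

```python
def pagerank_gas(graph, iters=20, d=0.85):
    """PageRank split into gather/apply phases; the gather (a sum over
    in-neighbors) is commutative and associative, which is what allows
    a vertex-cut system to partition a high-degree vertex's edges."""
    n = len(graph)
    out_deg = {v: len(nbrs) for v, nbrs in graph.items()}
    in_nbrs = {v: [] for v in graph}
    for u, nbrs in graph.items():
        for v in nbrs:
            in_nbrs[v].append(u)
    rank = {v: 1.0 / n for v in graph}
    for _ in range(iters):
        # gather: accumulate neighbor contributions
        acc = {v: sum(rank[u] / out_deg[u] for u in in_nbrs[v])
               for v in graph}
        # apply: mix with the teleport term
        rank = {v: (1 - d) / n + d * acc[v] for v in graph}
    return rank

# On a 3-cycle every vertex keeps rank 1/3.
print(round(pagerank_gas({1: [2], 2: [3], 3: [1]})[1], 4))  # 0.3333
```

Because the partial sums in the gather phase can be computed on different machines and merged, no single machine needs to hold all edges of a power-law hub vertex.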
Modeling video hyperlinks with hypergraph for web video reranking
• Computer Science
• ACM Multimedia
• 2008
Experiments show that hypergraph reranking can improve web video retrieval by up to 45% over the initial ranked result from the video sharing websites, and by 8.3% over the pairwise graph reranking, in mean average precision (MAP).
Partitioning graphs into balanced components
• Computer Science, Mathematics
• SODA
• 2009
This work considers the k-balanced partitioning problem, where the goal is to partition the vertices of an input graph G into k equally sized components while minimizing the total weight of the edges connecting different components, and presents a (bi-criteria) approximation algorithm achieving an approximation of O(log n log k), which matches or improves over previous algorithms for all relevant values of k.