• Corpus ID: 233481923

Triangle Centrality

@article{Burkhardt2021TriangleC,
  title={Triangle Centrality},
  author={Paul Burkhardt},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.00110}
}
Triangle centrality is introduced for finding important vertices in a graph based on the concentration of triangles surrounding each vertex. An important vertex in triangle centrality is at the center of many triangles, and therefore it may be in many triangles or none at all. Given a simple, undirected graph G = (V, E), with n = |V| vertices and m = |E| edges, where N(v) is the neighborhood set of v, N_△(v) is the set of neighbors that are in triangles with v, and N_△^+(v) is the closed set… 
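
The abstract is truncated above, but the definitions it gives are enough to sketch the measure. The Python sketch below assumes the paper's formula in which v and the neighbors that share a triangle with it (the closed set N_△^+(v)) contribute one third of their per-vertex triangle counts, neighbors outside N_△(v) contribute their full counts, and the sum is normalized by the total number of triangles in G; the function and variable names are illustrative, not taken from the paper.

```python
# Illustrative sketch (not the paper's code) of triangle centrality for a
# simple undirected graph given as a dict mapping each vertex to its set
# of neighbors. Assumed formula: v and neighbors sharing a triangle with v
# contribute 1/3 of their triangle counts, remaining neighbors contribute
# their full counts, normalized by the total number of triangles in G.

def triangle_counts(adj):
    """Per-vertex triangle counts and the total number of triangles."""
    tri = {}
    for v in adj:
        # Each triangle at v is seen once through each of its two other
        # vertices, so halve the summed intersection sizes.
        tri[v] = sum(len(adj[v] & adj[u]) for u in adj[v]) // 2
    return tri, sum(tri.values()) // 3  # each triangle has 3 vertices

def triangle_centrality(adj):
    tri, total = triangle_counts(adj)
    if total == 0:
        return {v: 0.0 for v in adj}
    tc = {}
    for v in adj:
        in_tri = {u for u in adj[v] if adj[v] & adj[u]}  # neighbors sharing a triangle with v
        core = tri[v] + sum(tri[u] for u in in_tri)      # closed triangle neighborhood
        rest = sum(tri[w] for w in adj[v] - in_tri)      # neighbors outside any shared triangle
        tc[v] = (core / 3.0 + rest) / total
    return tc

# Example: two triangles sharing vertex "a"; the shared vertex scores highest.
g = {"a": {"b", "c", "d", "e"}, "b": {"a", "c"}, "c": {"a", "b"},
     "d": {"a", "e"}, "e": {"a", "d"}}
print(triangle_centrality(g))  # a -> 1.0, the others -> ~0.67
```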

A GraphBLAS Implementation of Triangle Centrality

  • Fuhuan Li, D. Bader
  • Computer Science
    2021 IEEE High Performance Extreme Computing Conference (HPEC)
  • 2021
TLDR
This paper describes a rapid implementation of triangle centrality using GraphBLAS, an API specification for describing graph algorithms in the language of linear algebra, implementing the algebraic formulation of triangle centrality with the SuiteSparse GraphBLAS library.
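
As a rough illustration of that algebraic route (not the paper's GraphBLAS kernels), the dense NumPy sketch below uses the Hadamard product A ∘ (A·A) to obtain per-edge triangle counts, from which per-vertex counts and the same assumed centrality formula as in the sketch above follow; a GraphBLAS implementation would express the same steps with sparse, masked matrix operations rather than dense arrays.

```python
import numpy as np

# NumPy stand-in for the algebraic formulation (illustrative only):
# with adjacency matrix A, the elementwise product A * (A @ A) gives
# per-edge triangle counts, from which the per-vertex and total
# triangle counts follow.

def triangle_centrality_dense(A):
    """A: symmetric 0/1 adjacency matrix with zero diagonal (numpy array)."""
    T = A * (A @ A)                  # T[u, v] = number of triangles on edge (u, v)
    t = T.sum(axis=1) / 2.0          # per-vertex triangle counts
    total = t.sum() / 3.0            # each triangle counted at its 3 vertices
    if total == 0:
        return np.zeros(A.shape[0])
    T_pat = (T > 0).astype(A.dtype)  # pattern: u and v share at least one triangle
    core = T_pat @ t + t             # closed triangle-neighborhood contribution
    rest = (A - T_pat) @ t           # neighbors sharing no triangle with v
    return (core / 3.0 + rest) / total

# Example: two triangles sharing vertex 0 (same graph as the set-based sketch).
A = np.zeros((5, 5))
for u, v in [(0, 1), (0, 2), (1, 2), (0, 3), (0, 4), (3, 4)]:
    A[u, v] = A[v, u] = 1
print(triangle_centrality_dense(A))  # vertex 0 -> 1.0, the others -> ~0.67
```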

Triangle Centrality in Arkouda

TLDR
This work presents an implementation of triangle centrality in Arkouda with several different triangle counting methods, which are compared against each other and against another shared-memory implementation.
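
For context on the counting step, the snippet below shows one standard exact method that such comparisons commonly include, degree-ordered ("forward") neighbor intersection; it is only an illustrative reference version in Python, not the Arkouda implementation.

```python
# Degree-ordered (forward) triangle counting: orient each edge from the
# lower- to the higher-degree endpoint (ties broken by label) so every
# triangle is counted exactly once at its lowest-ranked vertex.

def count_triangles_degree_ordered(adj):
    """adj: dict mapping each vertex to a set of neighbors (undirected, simple)."""
    rank = {v: (len(adj[v]), v) for v in adj}
    out = {v: {u for u in adj[v] if rank[u] > rank[v]} for v in adj}
    return sum(len(out[u] & out[v]) for u in adj for v in out[u])

# Same example graph as above: two triangles sharing vertex "a".
g = {"a": {"b", "c", "d", "e"}, "b": {"a", "c"}, "c": {"a", "b"},
     "d": {"a", "e"}, "e": {"a", "d"}}
print(count_triangles_degree_ordered(g))  # -> 2
```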

Arachne: An Arkouda Package for Large-Scale Graph Analytics

TLDR
A novel graph package, Arachne, is proposed to make large-scale graph analytics easier and more efficient. Built on the open-source Arkouda framework, it allows users to perform massively parallel computations on distributed data through an interface similar to NumPy.

References

SHOWING 1-10 OF 79 REFERENCES

Introduction to parallel algorithms

The Anatomy of a Large-Scale Hypertextual Web Search Engine

Fast sparse matrix multiplication

TLDR
The new algorithm is obtained by a surprisingly straightforward combination of a simple combinatorial idea with existing fast matrix multiplication algorithms and, for sufficiently sparse matrices, is faster than the best known matrix multiplication algorithms for dense matrices.

Parallelism in random access machines

TLDR
A model of computation based on random access machines operating in parallel and sharing a common memory is presented; in their nondeterministic form, these machines accept in polynomial time exactly the sets accepted by nondeterministic exponential-time-bounded Turing machines.

A Graph-theoretic perspective on centrality

Bounds and algorithms for graph trusses

TLDR
A simplified and faster algorithm based on the approach discussed in Wang & Cheng (2012) is presented, together with a theoretical algorithm based on fast matrix multiplication that converts a triangle-generation algorithm of Björklund et al. (2014) into a dynamic data structure.

A Refined Laser Method and Faster Matrix Multiplication

TLDR
This paper presents a refinement of the laser method that improves the resulting value bound for most sufficiently large tensors, and obtains the best bound on ω to date.

Distributed non-negative matrix factorization with determination of the number of latent features

TLDR
This paper introduces a distributed NMF algorithm coupled with distributed custom clustering followed by a stability analysis on dense data, called DnMFk, which determines the number of latent variables.
...