Corpus ID: 233481923

# Triangle Centrality

```bibtex
@article{Burkhardt2021TriangleC,
  title={Triangle Centrality},
  author={Paul Burkhardt},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.00110}
}
```
Triangle centrality is introduced for finding important vertices in a graph based on the concentration of triangles surrounding each vertex. An important vertex in triangle centrality is at the center of many triangles, and therefore it may be in many triangles or none at all. Given a simple, undirected graph G = (V, E), with n = |V| vertices and m = |E| edges, where N(v) is the neighborhood set of v, N_△(v) is the set of neighbors that are in triangles with v, and N_△^+(v) is the closed set…
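The sets defined above can be made concrete with a short sketch (illustrative only, not the paper's algorithm): for each vertex v it builds N_△(v), the neighbors of v that close a triangle with it, together with the per-vertex triangle count △(v).

```python
# Sketch (illustrative, not the paper's reference implementation): for each
# vertex v of a simple undirected graph, compute N_tri(v), the set of
# neighbors that are in a triangle with v, and the number of triangles
# containing v.
from itertools import combinations

def triangle_neighbors(adj):
    """adj: dict mapping vertex -> set of neighbors (simple, undirected)."""
    n_tri = {v: set() for v in adj}
    tri_count = {v: 0 for v in adj}
    for v in adj:
        for u, w in combinations(sorted(adj[v]), 2):
            if w in adj[u]:              # edge u-w closes the triangle {v, u, w}
                n_tri[v].update((u, w))
                tri_count[v] += 1
    return n_tri, tri_count

# Toy graph: a triangle {0, 1, 2} plus a pendant vertex 3 attached to 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
n_tri, tri = triangle_neighbors(adj)
print(n_tri[0], tri[0])   # {1, 2} 1
print(n_tri[3], tri[3])   # set() 0
```

Note how vertex 3 has a nonempty neighborhood N(3) = {0} but an empty N_△(3), the distinction the abstract draws between being near triangles and being in them.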

## Citations of this paper

### A GraphBLAS Implementation of Triangle Centrality

• Computer Science
• 2021 IEEE High Performance Extreme Computing Conference (HPEC)
• 2021
This paper describes a rapid implementation of triangle centrality using GraphBLAS, an API specification for expressing graph algorithms in the language of linear algebra, implementing triangle centrality's algebraic formulation with the SuiteSparse GraphBLAS library.
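The algebraic style of triangle computation referenced here can be illustrated with a small dense-NumPy analogue (a sketch only; the cited implementation uses sparse, masked operations via SuiteSparse GraphBLAS): the Hadamard product A ∘ (A·A) gives, for each edge, the number of common neighbors of its endpoints.

```python
import numpy as np

# Illustrative dense analogue of algebraic (GraphBLAS-style) triangle
# counting: for adjacency matrix A, (A @ A)[i, j] counts length-2 walks
# from i to j, so masking by A keeps only entries where (i, j) is an edge.
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])

T = A * (A @ A)                    # T[i, j] = triangles through edge (i, j)
per_vertex = T.sum(axis=1) // 2    # each triangle at v is counted twice in row v
total = int(np.trace(A @ A @ A)) // 6   # each triangle counted 6 times on the diagonal

print(per_vertex)   # [1 1 1 0]
print(total)        # 1
```

In GraphBLAS the mask is applied during the multiplication itself, so the full product A·A is never materialized; the dense version above is only for readability.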

### Triangle Centrality in Arkouda

• Computer Science
• 2022
This work presents an implementation of triangle centrality in Arkouda using several different triangle-counting methods, which are compared against each other and against another shared-memory implementation.

### Arachne: An Arkouda Package for Large-Scale Graph Analytics

• Computer Science
• 2022
A novel graph package, Arachne, built on the open-source Arkouda framework, is proposed to make large-scale graph analytics easier and more efficient, allowing users to perform massively parallel computations on distributed data through a NumPy-like interface.

## References

Showing 1–10 of 79 references

### Introduction to parallel algorithms

• Computer Science
• Wiley series on parallel and distributed computing
• 1998

### Fast sparse matrix multiplication

• Computer Science
• TALG
• 2005
The new algorithm is obtained through a surprisingly straightforward combination of a simple combinatorial idea and existing fast matrix multiplication algorithms, and, for sufficiently sparse matrices, it is faster than the best known matrix multiplication algorithms for dense matrices.

### Parallelism in random access machines

• Computer Science
• STOC
• 1978
A model of computation based on random access machines operating in parallel and sharing a common memory is presented; it accepts in polynomial time exactly the sets accepted by nondeterministic exponential-time-bounded Turing machines.

### Bounds and algorithms for graph trusses

• Computer Science
• J. Graph Algorithms Appl.
• 2020
A simplified and faster algorithm, based on the approach discussed in Wang & Cheng (2012), is presented, along with a theoretical algorithm based on fast matrix multiplication that converts a triangle-generation algorithm of Bjorklund et al. (2014) into a dynamic data structure.
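As background on trusses (a sketch assuming the standard definition): the support of an edge {u, v} is the number of triangles containing it, and a k-truss is the maximal subgraph in which every edge has support at least k − 2.

```python
# Minimal sketch (illustrative) of edge support, the quantity truss
# decompositions are built on: support({u, v}) = |N(u) ∩ N(v)|, the number
# of triangles containing the edge {u, v}.
def edge_support(adj):
    """adj: dict mapping vertex -> set of neighbors (simple, undirected)."""
    support = {}
    for u in adj:
        for v in adj[u]:
            if u < v:                          # each undirected edge once
                support[(u, v)] = len(adj[u] & adj[v])
    return support

adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(edge_support(adj))
# {(0, 1): 1, (0, 2): 1, (0, 3): 0, (1, 2): 1}
```

A truss algorithm then repeatedly removes edges whose support falls below the threshold, recomputing support as it goes; only the support computation is shown here.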

### A Refined Laser Method and Faster Matrix Multiplication

• Computer Science
• SODA
• 2021
This paper presents a refinement of the laser method that improves the resulting value bound for most sufficiently large tensors, obtaining the best bound on the matrix multiplication exponent $\omega$ to date.

### Distributed non-negative matrix factorization with determination of the number of latent features

• Computer Science
• The Journal of Supercomputing
• 2020
This paper introduces a distributed NMF algorithm, called DnMFk, coupled with distributed custom clustering and a stability analysis on dense data to determine the number of latent features.
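A minimal single-node sketch of NMF via the classic multiplicative updates of Lee & Seung (illustrative only; DnMFk distributes this computation and adds the clustering-based stability analysis used to select the number of latent features, neither of which is shown):

```python
import numpy as np

# Single-node NMF with multiplicative updates: factor nonnegative X (m x n)
# into W (m x k) @ H (k x n) with W, H >= 0. The rank k is fixed here;
# choosing it automatically is the problem DnMFk addresses.
rng = np.random.default_rng(0)
X = rng.random((8, 6))            # nonnegative data matrix
k = 2                             # assumed number of latent features
W = rng.random((8, k)) + 0.1
H = rng.random((k, 6)) + 0.1

eps = 1e-9                        # guards against division by zero
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(round(err, 3))              # relative reconstruction error
```

The updates preserve nonnegativity because they only multiply by nonnegative ratios, which is why no projection step is needed.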