Learning Hierarchical Graph Neural Networks for Image Clustering

@article{Xing2021LearningHG,
  title={Learning Hierarchical Graph Neural Networks for Image Clustering},
  author={Yifan Xing and Tong He and Tianjun Xiao and Yongxin Wang and Yuanjun Xiong and Wei Xia and David Wipf and Zheng Zhang and Stefano Soatto},
  journal={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={3447-3457}
}
We propose a hierarchical graph neural network (GNN) model that learns how to cluster a set of images into an unknown number of identities using a training set of images annotated with labels belonging to a disjoint set of identities. Our hierarchical GNN uses a novel approach to merge connected components predicted at each level of the hierarchy to form a new graph at the next level. Unlike fully unsupervised hierarchical clustering, the choice of grouping and complexity criteria stems… 
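
As an illustration of the per-level merge described in the abstract, the sketch below keeps edges whose predicted linkage score passes a threshold, takes connected components of the kept edges as the clusters of the current level, and aggregates each component into a node of the next-level graph. The function name merge_level, the keep_prob edge scores, the threshold, and the mean aggregation are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def merge_level(features, edges, keep_prob, threshold=0.5):
    """One level of the hierarchical merge (sketch).

    features  : (N, D) node features at the current level
    edges     : (E, 2) candidate edges, e.g. from a k-NN graph
    keep_prob : (E,) per-edge linkage probabilities, assumed to come from a
                trained GNN edge predictor (not shown here)
    Returns the features of the next, coarser level and a node-to-cluster label.
    """
    n = features.shape[0]
    kept = edges[keep_prob >= threshold]            # edges the model decides to keep
    adj = csr_matrix((np.ones(len(kept)), (kept[:, 0], kept[:, 1])), shape=(n, n))
    # Connected components of the kept edges define the clusters at this level.
    n_comp, labels = connected_components(adj, directed=False)
    # Each component becomes one node of the next-level graph; its feature is
    # the mean of its members' features (one simple aggregation choice).
    next_features = np.zeros((n_comp, features.shape[1]))
    np.add.at(next_features, labels, features)
    counts = np.bincount(labels, minlength=n_comp)[:, None]
    next_features /= counts
    return next_features, labels
```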

Citations of this paper

Neural Trees for Learning on Graphs

It is proved that any continuous G-invariant/equivariant function can be approximated by a nonlinear combination of such probability distribution functions over G, and it is shown that the neural tree architecture can approximate any smooth probability distribution function over an undirected graph.

Equivariant Graph Hierarchy-Based Neural Networks

Equivariant Hierarchy-based Graph Networks (EGHNs) are proposed, consisting of three key components: generalized Equivariant Matrix Message Passing (EMMP), E-Pool, and E-UnPool, which together improve the expressivity of conventional equivariant message passing.

Face Clustering via Adaptive Aggregation of Clean Neighbors

The proposed method, Adaptive Aggregation of Clean Neighbors (AACN), prepares the graph in two stages before it is fed to the GCN, enabling nodes to learn more robust features through the GCN module.

FaceMap: Towards Unsupervised Face Clustering via Map Equation

Inspired by observations on the ranked transition probabilities in the affinity graph constructed from facial images, an outlier-detection strategy is developed that adaptively adjusts the transition probabilities among images.

PSS: Progressive Sample Selection for Open-World Visual Representation Learning

Experiments indicate that the proposed progressive approach outperforms state-of-the-art semi-supervised learning and novel-class-discovery methods on natural image retrieval and face verification benchmarks.

Comprehensive Relationship Reasoning for Composed Query Based Image Retrieval

This work proposes a comprehensive relationship reasoning network that fully explores the four types of information for CQBIR; its two key designs are a memory-augmented cross-modal attention module and a multi-scale matching strategy that optimizes the network.

A Meta-learning based Graph-Hierarchical Clustering Method for Single Cell RNA-Seq Data

MeHi-SCC, a method that uses a meta-learning protocol and incorporates information from multiple scRNA-seq datasets to assist the graph-based hierarchical sub-clustering process, outperformed prevailing scRNA-seq clustering methods and successfully identified cell subtypes in two large-scale cell atlases.

References

Showing 1-10 of 65 references

Hierarchical Graph Representation Learning with Differentiable Pooling

DiffPool is proposed, a differentiable graph pooling module that can generate hierarchical representations of graphs and can be combined with various graph neural network architectures in an end-to-end fashion.
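
A minimal numpy sketch of a single DiffPool coarsening step, assuming a learned soft-assignment matrix produced by a one-layer linear GNN; the weight names w_embed and w_assign are illustrative stand-ins for the paper's deeper GNN blocks.

```python
import numpy as np

def diffpool_layer(adj, x, w_embed, w_assign):
    """One DiffPool coarsening step (sketch).
    adj: (N, N) adjacency, x: (N, D) features,
    w_embed: (D, D') embedding weights, w_assign: (D, C) assignment weights."""
    z = np.maximum(adj @ x @ w_embed, 0.0)                      # node embeddings (GNN_embed)
    s_logits = adj @ x @ w_assign                               # cluster-assignment scores (GNN_pool)
    s = np.exp(s_logits - s_logits.max(axis=1, keepdims=True))  # row-wise softmax
    s /= s.sum(axis=1, keepdims=True)
    x_coarse = s.T @ z                                          # features of the pooled clusters
    adj_coarse = s.T @ adj @ s                                  # coarsened adjacency
    return adj_coarse, x_coarse
```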

Hierarchical Graph Convolutional Networks for Semi-supervised Node Classification

A novel deep Hierarchical Graph Convolutional Network (H-GCN) for semi-supervised node classification, which first repeatedly aggregates structurally similar nodes to hyper-nodes and then refines the coarsened graph to the original to restore the representation for each node.

Hierarchical Representation Learning in Graph Neural Networks With Node Decimation Pooling

The Node Decimation Pooling (NDP), a pooling operator for GNNs that generates coarser graphs while preserving the overall graph topology, is proposed and it is shown that it is possible to remove many edges without significantly altering the graph structure.

HARP: Hierarchical Representation Learning for Networks

HARP is a general meta-strategy to improve all of the state-of-the-art neural algorithms for embedding graphs, including DeepWalk, LINE, and Node2vec, and it is demonstrated that applying HARP’s hierarchical paradigm yields improved implementations for all three of these methods.

Learning to Cluster Faces via Confidence and Connectivity Estimation

This paper proposes a fully learnable clustering framework that does not require a large number of overlapping subgraphs; it transforms the clustering problem into two sub-problems, estimating the confidence of vertices and the connectivity of edges, respectively.
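
The decoding step implied by this summary can be sketched as follows: each vertex links to its most strongly connected neighbor of higher confidence, and connected components of the resulting graph are the clusters. The inputs neighbors, confidence, and connectivity, and the threshold tau, are assumed interfaces for illustration rather than the paper's exact API.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_from_confidence_and_connectivity(neighbors, confidence, connectivity, tau=0.5):
    """Sketch: neighbors (N, k) index array, confidence (N,) per-vertex scores,
    connectivity (N, k) per-edge scores; tau is a hypothetical linkage threshold."""
    n, k = neighbors.shape
    rows, cols = [], []
    for i in range(n):
        best_j, best_score = -1, tau
        for idx in range(k):
            j = neighbors[i, idx]
            # only link to neighbors that are more confident than the vertex itself
            if confidence[j] > confidence[i] and connectivity[i, idx] >= best_score:
                best_j, best_score = j, connectivity[i, idx]
        if best_j >= 0:
            rows.append(i)
            cols.append(best_j)
    adj = csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return labels  # cluster label per vertex
```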

Linkage Based Face Clustering via Graph Convolution Network

This paper presents an accurate and scalable approach to the face clustering task, and shows that the proposed method does not need the number of clusters as a prior, is aware of noises and outliers, and can be extended to a multi-view version for more accurate clustering.

Semi-Supervised Classification with Graph Convolutional Networks

A scalable approach for semi-supervised learning on graph-structured data, based on an efficient variant of convolutional neural networks that operate directly on graphs, which outperforms related methods by a significant margin.
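
For reference, the graph convolution introduced by this paper can be written as H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W); a dense numpy sketch, with dense matrices used only for clarity:

```python
import numpy as np

def gcn_layer(adj, h, w):
    """One graph convolution layer: H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W).
    adj: (N, N) adjacency, h: (N, D) features, w: (D, D') weights."""
    a_hat = adj + np.eye(adj.shape[0])             # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))  # symmetric degree normalization
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ h @ w, 0.0)
```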

AttPool: Towards Hierarchical Feature Representation in Graph Convolutional Networks via Attention Mechanism

AttPool, a novel graph pooling module based on an attention mechanism, is proposed; it adaptively selects nodes that are significant for the graph representation and generates hierarchical features by aggregating the attention-weighted information of the selected nodes.
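
A rough sketch of attention-based pooling in this spirit, assuming a sigmoid attention score per node, a hypothetical projection w_att, and a keep ratio; this is a simplification for illustration, not the paper's exact module.

```python
import numpy as np

def att_pool(adj, x, w_att, ratio=0.5):
    """Score each node, keep the top `ratio` fraction, and weight the retained
    features by their attention scores.
    adj: (N, N), x: (N, D), w_att: (D, 1) illustrative attention projection."""
    scores = 1.0 / (1.0 + np.exp(-(x @ w_att).squeeze(-1)))  # sigmoid attention per node
    k = max(1, int(ratio * x.shape[0]))
    keep = np.argsort(scores)[-k:]                           # indices of the top-k nodes
    x_pooled = x[keep] * scores[keep, None]                  # attention-weighted features
    adj_pooled = adj[np.ix_(keep, keep)]                     # induced subgraph of kept nodes
    return adj_pooled, x_pooled, keep
```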

FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling

Enhanced with importance sampling, FastGCN is not only efficient for training but also generalizes well at inference time; it is orders of magnitude more efficient while its predictions remain comparably accurate.
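
The layer-wise importance sampling can be sketched as below, assuming nodes are drawn with probability proportional to the squared norm of their column of the normalized adjacency and the Monte-Carlo estimate is rescaled to remain unbiased; function and variable names are illustrative.

```python
import numpy as np

def sample_layer_nodes(a_norm, n_samples, seed=0):
    """Sample source nodes for one layer with importance distribution
    q(u) proportional to ||a_norm[:, u]||^2 (sketch)."""
    rng = np.random.default_rng(seed)
    q = (a_norm ** 2).sum(axis=0)
    q = q / q.sum()                                     # importance distribution over nodes
    idx = rng.choice(a_norm.shape[0], size=n_samples, replace=True, p=q)
    scale = 1.0 / (n_samples * q[idx])                  # rescaling keeps the estimate unbiased
    return idx, scale

def fastgcn_layer(a_norm, h, w, n_samples=64):
    """One graph convolution evaluated on a sampled set of source nodes (sketch)."""
    idx, scale = sample_layer_nodes(a_norm, n_samples)
    h_est = (a_norm[:, idx] * scale[None, :]) @ h[idx]  # Monte-Carlo estimate of a_norm @ h
    return np.maximum(h_est @ w, 0.0)
```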

Inductive Representation Learning on Large Graphs

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
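
A minimal sketch of one GraphSAGE layer with the mean aggregator, assuming separate illustrative weights for the self and neighbor terms; the paper's LSTM and pooling aggregators are not shown.

```python
import numpy as np

def graphsage_mean_layer(features, neighbor_lists, w_self, w_neigh, n_samples=10, seed=0):
    """Sample a fixed number of neighbors per node, average their features, and
    combine with the node's own representation (sketch).
    features: (N, D), neighbor_lists: list of neighbor index lists,
    w_self / w_neigh: (D, D') illustrative weights."""
    rng = np.random.default_rng(seed)
    out = np.empty((features.shape[0], w_self.shape[1]))
    for v, nbrs in enumerate(neighbor_lists):
        if len(nbrs):
            sampled = rng.choice(nbrs, size=min(n_samples, len(nbrs)), replace=False)
            neigh_mean = features[sampled].mean(axis=0)   # mean aggregation of sampled neighbors
        else:
            neigh_mean = np.zeros(features.shape[1])
        out[v] = features[v] @ w_self + neigh_mean @ w_neigh
    out = np.maximum(out, 0.0)                            # ReLU nonlinearity
    return out / np.maximum(np.linalg.norm(out, axis=1, keepdims=True), 1e-12)  # l2 normalize
```
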
...