CMIB: Unsupervised Image Object Categorization in Multiple Visual Contexts

@article{Yan2020CMIBUI,
  title={CMIB: Unsupervised Image Object Categorization in Multiple Visual Contexts},
  author={Xiaoqiang Yan and Yangdong Ye and Xueying Qiu and Milos Manic and Hui Yu},
  journal={IEEE Transactions on Industrial Informatics},
  year={2020},
  volume={16},
  pages={3974--3986}
}
Object categorization in images is fundamental to various industrial areas, such as automated visual inspection, fast image retrieval, and intelligent surveillance. Most existing methods treat visual features (e.g., scale-invariant feature transform) as content information of the objects, while regarding image tags as their contextual information. However, the image tags can hardly be acquired in completely unsupervised settings, especially when the image volume is too large to be marked. In… 
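CMIB itself couples content features with contextual information in an information-bottleneck framework, but the basic setting it addresses — grouping unlabeled visual feature vectors into categories with no tags — can be illustrated with a plain k-means pass over toy descriptors. This is a minimal sketch, not the paper's method; the data, dimensionality, and farthest-point initialization are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20):
    # Farthest-point initialization: start from X[0], then repeatedly add
    # the sample farthest from all centroids chosen so far.
    centroids = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centroids], axis=0)
        centroids.append(X[d.argmax()])
    centroids = np.array(centroids)
    for _ in range(iters):
        # Assign every sample to its nearest centroid, then re-estimate.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# Two well-separated Gaussian blobs stand in for image feature vectors
# (e.g., bag-of-SIFT descriptors) from two object categories.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.5, (20, 8)), rng.normal(5.0, 0.5, (20, 8))])
labels = kmeans(X, k=2)
```

Unlike CMIB, this sketch uses content features alone; the paper's contribution is precisely to fold multiple visual contexts into such a clustering without supervision.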

Deep Mutual Information Maximin for Cross-Modal Clustering

A novel deep mutual information maximin (DMIM) method for cross-modal clustering is proposed to maximally preserve the shared information of multiple modalities while eliminating the superfluous information of individual modalities in an end-to-end manner.
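DMIM is a deep, end-to-end method, but the quantity it trades off — mutual information between modalities — has a simple discrete form. The sketch below (a hand-rolled estimator over two label sequences, not the DMIM network) shows the shared-information measure in its most basic setting.

```python
import numpy as np

def mutual_information(x, y):
    """I(X;Y) in nats for two discrete label sequences of equal length."""
    x, y = np.asarray(x), np.asarray(y)
    # Joint distribution from co-occurrence counts.
    joint = np.zeros((x.max() + 1, y.max() + 1))
    for a, b in zip(x, y):
        joint[a, b] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)   # marginal of X
    py = joint.sum(axis=0, keepdims=True)   # marginal of Y
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

# Perfectly aligned modalities share all information (I = H(X));
# a constant second modality shares none.
a = [0, 0, 1, 1, 0, 1]
print(mutual_information(a, a))        # ln 2, since the labels are balanced
print(mutual_information(a, [0] * 6))  # 0.0
```

Maximizing this quantity across modality pairs, while penalizing modality-specific information, is the intuition behind the maximin objective.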

Machine vision-based intelligent manufacturing using a novel dual-template matching: a case study for lithium battery positioning

A novel dual-template matching algorithm is proposed to properly locate and segment each battery for fast and precise mass production; the positioning accuracy of the proposed method is significantly increased, and matching robustness is improved even at large battery inclination angles.
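The dual-template algorithm itself is more involved, but the underlying primitive — locating a part by sliding a template over the scene and scoring each position — can be sketched with single-template normalized cross-correlation in NumPy. The synthetic "battery" patch and its position are illustrative assumptions.

```python
import numpy as np

def match_template(image, template):
    """Return (row, col) of the best normalized-cross-correlation match."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * np.linalg.norm(t)
            # Correlation is invariant to local brightness and contrast.
            score = (wc * t).sum() / denom if denom else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# A patterned 4x4 patch embedded at (5, 7) in a mildly noisy scene.
rng = np.random.default_rng(0)
scene = rng.normal(0.0, 0.01, (20, 20))
patch = np.tile([0.0, 1.0], (4, 2))
scene[5:9, 7:11] += patch
print(match_template(scene, patch))  # (5, 7)
```

Production systems would use an FFT-based or pyramid search rather than this brute-force scan, and the cited paper additionally handles rotation via its second template.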
