Corpus ID: 237532694

Learning to Aggregate and Refine Noisy Labels for Visual Sentiment Analysis

  • Wei Zhu, Zihe Zheng, Haitian Zheng, Hanjia Lyu, Jiebo Luo
  • Published 15 September 2021
  • Computer Science
  • arXiv
Visual sentiment analysis has received increasing attention in recent years. However, dataset quality is a concern because the sentiment labels are crowd-sourced, subjective, and prone to mistakes. This poses a severe threat to data-driven models, including deep neural networks, which generalize poorly on test cases if they are trained to over-fit samples with noisy sentiment labels. Inspired by recent progress on learning with noisy labels, we propose a…

Figures and Tables from this paper


NLWSNet: a weakly supervised network for visual sentiment analysis in mislabeled web images
Quantitative and qualitative evaluations on well- and mislabeled web image datasets demonstrate that the proposed algorithm outperforms the related methods.
Weakly Supervised Coupled Networks for Visual Sentiment Analysis
This paper presents a weakly supervised coupled convolutional network with two branches that leverages localized information, integrates the sentiment detection and classification branches into a unified deep framework, and optimizes the network in an end-to-end manner.
WSCNet: Weakly Supervised Coupled Networks for Visual Sentiment Classification and Detection
This paper introduces a weakly supervised coupled convolutional network (WSCNet) dedicated to automatically selecting relevant soft proposals given weak annotations, thereby significantly reducing the annotation burden.
Robust Image Sentiment Analysis Using Progressively Trained and Domain Transferred Deep Networks
The proposed CNN achieves better performance in image sentiment analysis than competing algorithms and improves performance on Twitter images by inducing domain transfer with a small number of manually labeled Twitter images.
How does Disagreement Help Generalization against Label Corruption?
A robust learning paradigm called Co-teaching+ bridges the "Update by Disagreement" strategy with the original Co-teaching, and the resulting trained models are much more robust than those of many state-of-the-art methods.
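The summary above compresses the Co-teaching+ recipe into one sentence; the core per-batch step restricts training to samples on which the two networks disagree, then applies the usual small-loss selection so each network picks the samples used to update its peer. A minimal sketch (all names and the keep ratio are illustrative, not the authors' code):

```python
def coteaching_plus_select(losses_a, losses_b, preds_a, preds_b, keep_ratio=0.7):
    """One Co-teaching+ selection step (sketch).

    losses_a/losses_b: per-sample losses of the two networks.
    preds_a/preds_b: their predicted class labels on the same batch.
    Returns (indices to train net A, indices to train net B).
    """
    # 1) "Update by Disagreement": keep only samples where the nets disagree.
    dis = [i for i, (pa, pb) in enumerate(zip(preds_a, preds_b)) if pa != pb]
    k = max(1, int(keep_ratio * len(dis)))
    # 2) Small-loss selection with cross-update: each network picks its
    #    low-loss samples among the disagreements to train the *other* net.
    for_b = sorted(dis, key=lambda i: losses_a[i])[:k]  # A selects -> trains B
    for_a = sorted(dis, key=lambda i: losses_b[i])[:k]  # B selects -> trains A
    return for_a, for_b
```

The disagreement filter keeps the two networks from converging to the same mistakes, which is what makes the small-loss criterion keep working late in training.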
Robust Curriculum Learning: from clean label detection to noisy label self-correction
This paper starts with learning from clean data and then gradually moves to learning from noisy-labeled data with pseudo labels produced by a time-ensemble of the model and data augmentations, resulting in more precise detection of both clean labels and correct pseudo labels.
Training Deep Neural Networks on Noisy Labels with Bootstrapping
A generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency is proposed, which considers a prediction consistent if the same prediction is made given similar percepts, where the notion of similarity is between deep network features computed from the input data.
Symmetric Cross Entropy for Robust Learning With Noisy Labels
The proposed Symmetric cross entropy Learning (SL) approach simultaneously addresses both the under-learning and over-fitting problems of cross entropy (CE) in the presence of noisy labels, and empirically shows that SL outperforms state-of-the-art methods.
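The symmetric loss adds a reverse cross-entropy term in which prediction and label swap roles, with log(0) on the label side clipped to a constant A. A minimal sketch (the hyperparameter values are illustrative):

```python
import math

def symmetric_ce(probs, onehot, alpha=0.1, beta=1.0, A=-4.0):
    """Symmetric cross entropy: SL = alpha * CE + beta * RCE.

    The reverse term (RCE) swaps prediction and label; since the one-hot
    label contains zeros, log(0) is clipped to the constant A.
    """
    eps = 1e-12
    ce = -sum(y * math.log(q + eps) for y, q in zip(onehot, probs))
    rce = -sum(q * (math.log(y) if y > 0 else A)
               for y, q in zip(onehot, probs))
    return alpha * ce + beta * rce
```

The RCE term is bounded, which tempers CE's tendency to over-fit noisy labels while alpha keeps enough of the CE gradient to avoid under-learning.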
DeepSentiBank: Visual Sentiment Concept Classification with Deep Convolutional Neural Networks
Performance evaluation shows the newly trained deep CNN model SentiBank 2.0 (also called DeepSentiBank) is significantly improved in both annotation accuracy and retrieval performance compared to its predecessors, which mainly use binary SVM classification models.
DivideMix: Learning with Noisy Labels as Semi-supervised Learning
This work proposes DivideMix, a novel framework for learning with noisy labels by leveraging semi-supervised learning techniques, which models the per-sample loss distribution with a mixture model to dynamically divide the training data into a labeled set with clean samples and an unlabeled set with noisy samples.
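The clean/noisy split described above can be sketched as a two-component 1-D Gaussian mixture fit to the per-sample losses by EM, with the posterior of the low-mean component serving as each sample's probability of being clean. A self-contained toy version for illustration (a real pipeline would use a library GMM on normalized losses):

```python
import math

def gmm2_clean_prob(losses, iters=50):
    """Fit a 2-component 1-D Gaussian mixture to per-sample losses via EM;
    return each sample's posterior for the low-mean ("clean") component."""
    mu = [min(losses), max(losses)]   # init means at the loss extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    eps = 1e-8
    for _ in range(iters):
        # E-step: responsibility of each component for each loss value.
        resp = []
        for x in losses:
            d = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 for k in range(2)]
            s = d[0] + d[1] + eps
            resp.append([d[0] / s, d[1] / s])
        # M-step: update mixing weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp) + eps
            pi[k] = nk / len(losses)
            mu[k] = sum(r[k] * x for r, x in zip(resp, losses)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, losses)) / nk + eps
    clean = 0 if mu[0] < mu[1] else 1  # low-mean component = clean samples
    return [r[clean] for r in resp]
```

Thresholding these posteriors splits the batch into a labeled (clean) set and an unlabeled (noisy) set, after which standard semi-supervised training takes over.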