• Corpus ID: 237420906

Deep Saliency Prior for Reducing Visual Distraction

@article{Aberman2021DeepSP,
  title={Deep Saliency Prior for Reducing Visual Distraction},
  author={Kfir Aberman and Junfeng He and Yossi Gandelsman and Inbar Mosseri and David E. Jacobs and Kai Kohlhoff and Yael Pritch and Michael Rubinstein},
  year={2021}
}
Using only a model that was trained to predict where people look at images, and no additional training data, we can produce a range of powerful editing effects for reducing distraction in images. Given an image and a mask specifying the region to edit, we backpropagate through a state-of-the-art saliency model to parameterize a differentiable editing operator, such that the saliency within the masked region is reduced. We demonstrate several operators, including: a recoloring operator, which… 
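The core optimization described above can be sketched in a few lines: freeze a saliency predictor, parameterize a differentiable edit applied inside the mask, and descend on the mean saliency of the masked region. The snippet below is a minimal illustration, not the paper's implementation: the tiny conv net stands in for the pretrained state-of-the-art saliency model, and the per-channel color shift is a far simpler operator than the paper's recoloring operator.

```python
import torch

torch.manual_seed(0)

# Placeholder for a pretrained saliency model (the paper uses a frozen
# state-of-the-art network; this tiny conv net is only a stand-in).
saliency_model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, padding=1),
    torch.nn.ReLU(),
    torch.nn.Conv2d(8, 1, 3, padding=1),
    torch.nn.Sigmoid(),
)
for p in saliency_model.parameters():
    p.requires_grad_(False)  # the saliency model stays frozen

image = torch.rand(1, 3, 32, 32)   # input image
mask = torch.zeros(1, 1, 32, 32)
mask[..., 8:24, 8:24] = 1.0        # region whose saliency should drop

# Differentiable "recoloring" edit: a learnable per-channel shift,
# applied only inside the masked region.
shift = torch.zeros(1, 3, 1, 1, requires_grad=True)
opt = torch.optim.Adam([shift], lr=0.05)

def edited():
    return (image + mask * shift).clamp(0.0, 1.0)

def masked_saliency():
    return (saliency_model(edited()) * mask).sum() / mask.sum()

before = float(masked_saliency())
for _ in range(100):
    opt.zero_grad()
    loss = masked_saliency()  # mean saliency inside the mask
    loss.backward()           # backpropagate through the frozen model
    opt.step()                # update only the edit parameters
after = float(masked_saliency())
print(before, after)
```

Note that gradients flow through the frozen saliency model into the edit parameters, so no training data is needed beyond the single image being edited.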


Look here! A parametric learning based approach to redirect visual attention
This paper introduces an automatic method to make an image region more attention-capturing via subtle image edits that maintain realism and fidelity to the original through the use of the GazeShiftNet model.
Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet
This work presents a novel way of reusing existing neural networks pretrained for object recognition as components of fixation-prediction models; the resulting model significantly outperforms all state-of-the-art models on the MIT Saliency Benchmark.
SALICON: Saliency in Context
A mouse-contingent, multi-resolution paradigm based on neurophysiological and psychophysical studies of peripheral vision is designed to simulate natural human viewing behavior, enabling large-scale data collection.
Attention Retargeting by Color Manipulation in Images
This paper proposes a method that modifies the color of a selected region in an image to increase its saliency and draw attention towards it, and applies the method to a set of natural images, confirming via eye tracking its effectiveness in guiding attention.
Paying More Attention to Saliency: Image Captioning with Saliency and Context Attention
This work proposes an image captioning approach in which a generative recurrent neural network can focus on different parts of the input image during caption generation, by exploiting the conditioning given by a saliency prediction model on which parts of the image are salient and which are contextual.
Guiding human gaze with convolutional neural networks
A new model for manipulating images to change the distribution of human fixations in a controlled fashion is presented: a state-of-the-art fixation-prediction model is used to train a convolutional neural network that transforms images so that they satisfy a given fixation distribution.
Saliency Prediction in the Deep Learning Era: Successes and Limitations
  • A. Borji
  • Computer Science, Medicine
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2021
A large number of image and video saliency models are reviewed and compared over two image benchmarks and two large scale video datasets and factors that contribute to the gap between models and humans are identified.
Saliency-based image editing for guiding visual attention
The most important part of an information system that assists human activities is a natural interface with human beings; gaze information strongly reflects human interest and attention.
EML-NET: An Expandable Multi-Layer NETwork for Saliency Prediction
  • Sen Jia
  • Computer Science
    Image Vis. Comput.
  • 2020
A scalable system that leverages multiple powerful deep CNN models to better extract visual features for saliency prediction is proposed; it achieves state-of-the-art results on the public saliency benchmarks SALICON, MIT300, and CAT2000.
Saliency Driven Image Manipulation
An approach is proposed that considers the internal color and saliency properties of the image via an optimization framework relying on patch-based manipulation, using only patches from within the same image so as to maintain its appearance characteristics.