Controllable Continuous Gaze Redirection
@article{Xia2020ControllableCG,
  title   = {Controllable Continuous Gaze Redirection},
  author  = {Weihao Xia and Yujiu Yang and Jing-Hao Xue and Wensen Feng},
  journal = {Proceedings of the 28th ACM International Conference on Multimedia},
  year    = {2020}
}
In this work, we present interpGaze, a novel framework for controllable gaze redirection that achieves both precise redirection and continuous interpolation. Given two gaze images with different attributes, our goal is to redirect the eye gaze of one person into any gaze direction depicted in the reference image or to generate continuous intermediate results. To accomplish this, we design a model including three cooperative components: an encoder, a controller and a decoder. The encoder maps …
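The abstract names three cooperative components (encoder, controller, decoder) but is truncated before describing them. The sketch below is a minimal illustration of how such a pipeline could be wired up in PyTorch. All layer choices, tensor shapes, and the latent-interpolation rule used by the controller are assumptions made for illustration, not the paper's actual architecture; it only shows how blending a source latent toward a reference latent with a scalar strength could produce continuous intermediate gaze images.

```python
# Hypothetical encoder-controller-decoder sketch (not the paper's architecture).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps an eye image to a latent code."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Controller(nn.Module):
    """Blends source and reference latents with a strength in [0, 1];
    sweeping the strength yields continuous intermediate gazes."""
    def forward(self, z_src, z_ref, strength):
        return z_src + strength * (z_ref - z_src)

class Decoder(nn.Module):
    """Reconstructs an eye image from the (possibly interpolated) latent code."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 8, 8)
        return self.net(h)

# Usage: redirect the source gaze half-way toward the reference gaze.
enc, ctrl, dec = Encoder(), Controller(), Decoder()
src = torch.randn(1, 3, 32, 32)   # source eye image
ref = torch.randn(1, 3, 32, 32)   # reference eye image with the target gaze
out = dec(ctrl(enc(src), enc(ref), strength=0.5))
```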
7 Citations
CUDA-GR: Controllable Unsupervised Domain Adaptation for Gaze Redirection
- Computer Science, ArXiv
- 2021
This paper proposes an unsupervised domain adaptation framework, called CUDA-GR, that learns to disentangle gaze representations from the labeled source domain and transfers them to an unlabeled target domain, and shows that the generated image-label pairs in the target domain are effective for knowledge transfer and can boost the performance of downstream tasks.
CUDA-GHR: Controllable Unsupervised Domain Adaptation for Gaze and Head Redirection
- Computer Science
- 2021
The proposed CUDA-GHR framework simultaneously learns to adapt to new domains and disentangle image attributes such as appearance, gaze direction, and head orientation by utilizing a label-rich source domain and an unlabeled target domain.
GazeChat: Enhancing Virtual Conferences with Gaze-aware 3D Photos
- Computer Science, UIST
- 2021
GazeChat is introduced, a remote communication system that visually represents users as gaze-aware 3D profile photos, satisfying users' privacy needs while keeping online conversations engaging and efficient and providing a greater level of engagement than audio conferencing.
TediGAN: Text-Guided Diverse Image Generation and Manipulation
- Computer Science, ArXiv
- 2020
This work proposes TediGAN, a novel framework for multi-modal image generation and manipulation with textual descriptions, and introduces Multi-Modal CelebA-HQ, a large-scale dataset consisting of real face images with corresponding semantic segmentation maps, sketches, and textual descriptions.
Towards Open-World Text-Guided Face Image Generation and Manipulation
- Computer Science, ArXiv
- 2021
This work proposes a unified framework for both face image generation and manipulation that produces diverse and high-quality images at a resolution of 1024 from multimodal inputs, and supports open-world scenarios, covering both image and text, without any re-training, fine-tuning, or post-processing.
Art Creation with Multi-Conditional StyleGANs
- Computer Science, ArXiv
- 2022
This paper introduces a multi-conditional Generative Adversarial Network approach trained on large amounts of human paintings to synthesize realistic-looking paintings that emulate human art.
GAN Inversion: A Survey
- Computer Science, ArXiv
- 2021
This paper provides a survey of GAN inversion with a focus on its representative algorithms and its applications in image restoration and image manipulation, and discusses the trends and challenges for future research.
References
Showing 1-10 of 46 references
Photo-Realistic Monocular Gaze Redirection Using Generative Adversarial Networks
- Computer Science, 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
This work presents a novel method by leveraging generative adversarial training to synthesize an eye image conditioned on a target gaze direction that outperforms state-of-the-art approaches in terms of both image quality and redirection precision.
DeepWarp: Photorealistic Image Resynthesis for Gaze Manipulation
- Computer Science, ECCV
- 2016
In this work, we consider the task of generating highly-realistic images of a given face with a redirected gaze. We treat this problem as a specific instance of conditional image generation and …
GazeDirector: Fully Articulated Eye Gaze Redirection in Video
- Computer Science, Comput. Graph. Forum
- 2018
GazeDirector allows changing where people are looking without person-specific training data, and with full articulation, i.e., new gaze directions can be precisely specified in 3D.
Improving Few-Shot User-Specific Gaze Adaptation via Gaze Redirection Synthesis
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
This work addresses the problem of person-specific gaze model adaptation from only a few reference training samples by generating additional training samples through the synthesis of gaze-redirected eye images from the existing reference samples, and designs a gaze redirection framework trained on synthetic data.
Deep Pictorial Gaze Estimation
- Computer Science, ECCV
- 2018
This paper introduces a novel deep neural network architecture specifically designed for the task of gaze estimation from single eye input that achieves higher accuracies than the state-of-the-art and is robust to variation in gaze, head pose and image quality.
Learning an appearance-based gaze estimator from one million synthesised images
- Computer Science, ETRA
- 2016
The UnityEyes synthesis framework combines a novel generative 3D model of the human eye region with a real-time rendering framework and shows that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles.
Gaze manipulation for one-to-one teleconferencing
- Computer Science, Proceedings Ninth IEEE International Conference on Computer Vision
- 2003
This work presents a novel algorithm for the temporal maintenance of a background model to enhance the rendering of occlusions and reduce temporal artefacts (flicker), together with a cost aggregation algorithm that acts directly on the three-dimensional matching cost space.
Rendering of Eyes for Eye-Shape Registration and Gaze Estimation
- Computer Science, 2015 IEEE International Conference on Computer Vision (ICCV)
- 2015
The benefits of the synthesized training data (SynthesEyes) are demonstrated by out-performing state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.
Monocular Neural Image Based Rendering With Continuous View Control
- Computer Science, 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
The experiments show that both proposed components, the transforming encoder-decoder and depth-guided appearance mapping, lead to significantly improved generalization beyond the training views and in consequence to more accurate view synthesis under continuous 6-DoF camera control.
Learning-by-Synthesis for Appearance-Based 3D Gaze Estimation
- Computer Science, 2014 IEEE Conference on Computer Vision and Pattern Recognition
- 2014
This paper presents a learning-by-synthesis approach to accurate image-based gaze estimation that is person- and head-pose-independent and outperforms existing methods that use low-resolution eye images.