• Corpus ID: 148571665

OpenEDS: Open Eye Dataset

@article{Garbin2019OpenEDSOE,
  title={OpenEDS: Open Eye Dataset},
  author={Stephan J. Garbin and Yiru Shen and Immo Schuetz and Robert Cavin and Gregory Hughes and Sachin S. Talathi},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.03702}
}
We present a large-scale dataset, OpenEDS: Open Eye Dataset, of eye images captured using a virtual-reality (VR) head-mounted display fitted with two synchronized eye-facing cameras at a frame rate of 200 Hz under controlled illumination. This dataset is compiled from video capture of the eye region collected from 152 individual participants and is divided into four subsets: (i) 12,759 images with pixel-level annotations for key eye regions: iris, pupil and sclera; (ii) 252,690 unlabelled eye… 
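
For concreteness, the snippet below sketches how the pixel-annotated subset might be loaded. The directory layout, file formats, and label ordering are assumptions for illustration only and should be checked against the actual dataset release.

from pathlib import Path

import numpy as np
from PIL import Image

CLASS_NAMES = ["background", "sclera", "iris", "pupil"]  # assumed label order

def load_pair(image_path: Path, mask_path: Path):
    """Load one eye image and its per-pixel label map."""
    image = np.asarray(Image.open(image_path).convert("L"))  # grayscale eye image
    mask = np.load(mask_path)  # assumed .npy integer label map
    assert image.shape == mask.shape, "image and mask must align pixel for pixel"
    return image, mask

root = Path("openeds/semantic_segmentation")  # hypothetical layout
for img_p, msk_p in zip(sorted((root / "images").glob("*.png")),
                        sorted((root / "labels").glob("*.npy"))):
    image, mask = load_pair(img_p, msk_p)
    counts = {name: int((mask == i).sum()) for i, name in enumerate(CLASS_NAMES)}
    print(img_p.name, counts)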

Semi-Supervised Learning for Eye Image Segmentation

This work presents two semi-supervised learning frameworks that identify eye parts by taking advantage of unlabeled images where labeled datasets are scarce, leveraging domain-specific augmentations and novel spatially varying transformations for image segmentation.
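
As a rough illustration of the idea, the sketch below implements a generic consistency-based semi-supervised training step in which the model is encouraged to be equivariant to a spatial transformation of unlabeled images. A horizontal flip stands in for the paper's richer spatially varying transformations, and the loss weighting is an assumption, not the paper's exact method.

import torch
import torch.nn.functional as F

def semi_supervised_step(model, labeled, labels, unlabeled, lam=0.5):
    # Supervised term on the scarce labeled images.
    sup_loss = F.cross_entropy(model(labeled), labels)

    # Consistency term: the prediction for a transformed unlabeled image
    # should match the transformed prediction for the original image.
    flipped = torch.flip(unlabeled, dims=[-1])
    with torch.no_grad():
        target = torch.flip(model(unlabeled), dims=[-1]).softmax(dim=1)
    unsup_loss = F.mse_loss(model(flipped).softmax(dim=1), target)

    return sup_loss + lam * unsup_loss  # lam is an assumed weighting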

EyeNet: Attention Based Convolutional Encoder-Decoder Network for Eye Region Segmentation

  • Priya Kansal, S. Nathan
  • Computer Science
    2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
  • 2019
The present work proposes EyeNet, a robust and computationally efficient attention-based convolutional encoder-decoder network for segmenting all the eye regions, which demonstrates superior results compared to the baseline methods.
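
A minimal sketch of an attention-gated skip connection, the general mechanism such attention-based encoder-decoder networks rely on, follows. The layer sizes are illustrative and not EyeNet's actual configuration.

import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Weights encoder (skip) features by a map computed from both streams."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, skip: torch.Tensor, decoder: torch.Tensor) -> torch.Tensor:
        attn = self.gate(torch.cat([skip, decoder], dim=1))  # (N, 1, H, W)
        return skip * attn  # suppress irrelevant encoder features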

Periocular Biometrics in Head-Mounted Displays: A Sample Selection Approach for Better Recognition

A new normalization scheme to align the ocular images and a new reference-sample selection protocol to achieve higher verification accuracy are proposed and exemplified using two handcrafted feature-extraction methods and two deep-learning strategies.

Semantic Segmentation of the Eye With a Lightweight Deep Network and Shape Correction

This paper presents a method to address the multi-class eye segmentation problem, an essential step for gaze tracking or applying a biometric system in a virtual-reality environment. The system is deployed in three major stages: obtain a grayscale image from the input, divide the image into three distinct eye regions with a deep network, and refine the results with image-processing techniques.
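
The three stages map naturally onto a small pipeline. The sketch below uses OpenCV morphology as a stand-in shape-correction step; segment is a hypothetical placeholder for the paper's lightweight network, not its actual implementation.

import cv2
import numpy as np

def shape_correct(mask: np.ndarray) -> np.ndarray:
    """Stage 3: close small holes and remove speckle noise per class."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    out = np.zeros_like(mask)
    for label in np.unique(mask):
        if label == 0:  # skip background
            continue
        region = (mask == label).astype(np.uint8)
        region = cv2.morphologyEx(region, cv2.MORPH_CLOSE, kernel)
        region = cv2.morphologyEx(region, cv2.MORPH_OPEN, kernel)
        out[region > 0] = label
    return out

def segment_eye(bgr: np.ndarray, segment) -> np.ndarray:
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # stage 1: grayscale input
    mask = segment(gray)                          # stage 2: deep-network labels
    return shape_correct(mask)                    # stage 3: refinement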

A High-Level Description and Performance Evaluation of Pupil Invisible

It is demonstrated that Pupil Invisible glasses, without the need for calibration, provide gaze estimates that are robust to perturbations, including outdoor lighting conditions and slippage of the headset.

Eye-MMS: Miniature Multi-Scale Segmentation Network of Key Eye-Regions in Embedded Applications

This work presents a miniature multi-scale segmentation network consisting of inter-connected convolutional modules and modifies it to reduce its parameter count by more than 80 times while reducing its accuracy by less than 3%, resulting in the Eye-MMS model, which contains only 80k parameters.

On Benchmarking Iris Recognition within a Head-mounted Display for AR/VR Applications

A new iris quality metric, termed Iris Mask Ratio (IMR), is defined to quantify iris recognition performance and is proposed for continuous authentication of users in a non-collaborative capture setting in HMDs.
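
A plausible reading of such a metric is the fraction of segmented iris pixels that survive occlusion masking. The sketch below computes that ratio, with the caveat that the paper's exact formulation may differ.

import numpy as np

def iris_mask_ratio(iris_region: np.ndarray, occlusion_mask: np.ndarray) -> float:
    """Fraction of segmented iris pixels not occluded by eyelids or
    reflections; both inputs are boolean maps of the same shape."""
    total = int(iris_region.sum())
    if total == 0:
        return 0.0
    usable = int((iris_region & ~occlusion_mask).sum())
    return usable / total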

An Eye Tracking System for a Virtual Reality Headset

A novel method determines the correspondence between the corneal reflections and the LEDs using a fully convolutional neural network based on the U-Net architecture, which correctly identifies and matches 91% of reflections in tests.

EyeSeg: Fast and Efficient Few-Shot Semantic Segmentation

EyeSeg is proposed, an encoder-decoder architecture designed for accurate pixel-wise few-shot semantic segmentation with limited annotated data; results demonstrate state-of-the-art performance while preserving low latency.

BiOcularGAN: Bimodal Synthesis and Annotation of Ocular Images

The experimental results show that BiOcularGAN is able to produce high-quality matching bimodal images and annotations that can be used to train highly competitive (deep) segmentation models that perform well across multiple real-world datasets.

References

Labelled pupils in the wild: a dataset for studying pupil detection in unconstrained environments

Labelled Pupils in the Wild (LPW), a novel dataset of 66 high-quality, high-speed eye-region videos for the development and evaluation of pupil detection algorithms, provides valuable insights into the general pupil detection problem and allows the identification of key challenges for robust pupil detection on head-mounted eye trackers.

Rendering of Eyes for Eye-Shape Registration and Gaze Estimation

The benefits of the synthesized training data (SynthesEyes) are demonstrated by outperforming state-of-the-art methods for eye-shape registration as well as cross-dataset appearance-based gaze estimation in the wild.

The iBUG Eye Segmentation Dataset

This work is the first to focus on the low-resolution images that can be expected from a consumer-grade camera under conventional human-computer interaction and/or video-chat scenarios, and it demonstrates promising capabilities on the low-resolution eye segmentation task.

NVGaze: An Anatomically-Informed Dataset for Low-Latency, Near-Eye Gaze Estimation

This work creates a synthetic dataset using anatomically-informed eye and face models with variations in face shape, gaze direction, pupil and iris, skin tone, and external conditions, and trains neural networks that perform gaze estimation with sub-millisecond latency.

An eye tracking dataset for point of gaze detection

A set of videos recording the eye motion of human test subjects as they were looking at, or following, a set of predefined points of interest on a computer visual display unit is presented.

Eye gaze estimation from a single image of one eye

The two key contributions are showing that it is possible to find the unique eye-gaze direction from a single image of one eye, and that better accuracy can be obtained as a consequence.

Learning an appearance-based gaze estimator from one million synthesised images

The UnityEyes synthesis framework combines a novel generative 3D model of the human eye region with a real-time rendering framework and shows that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles.

Appearance-based gaze estimation in the wild

An extensive evaluation of several state-of-the-art image-based gaze estimation algorithms is presented on three current datasets, including the MPIIGaze dataset, which contains 213,659 images collected from 15 participants during natural everyday laptop use over more than three months.

EYEDIAP: a database for the development and evaluation of gaze estimation algorithms from RGB and RGB-D cameras

This paper intends to overcome the lack of a common benchmark for the evaluation of the gaze estimation task from RGB and RGB-D data by introducing a novel database along with a common framework for the training and evaluation of gaze estimation approaches.

SSERBC 2017: Sclera segmentation and eye recognition benchmarking competition

The aim of this competition was to record the recent developments in sclera segmentation and eye recognition in the visible spectrum (using iris, sclera and peri-ocular regions, and their fusion), and also to draw the attention of researchers to this subject.