Ensuring Safety in Augmented Reality from Trade-off Between Immersion and Situation Awareness

@article{Jung2018EnsuringSI,
  title={Ensuring Safety in Augmented Reality from Trade-off Between Immersion and Situation Awareness},
  author={Jinki Jung and Hyeopwoo Lee and Jeehye Choi and Abhilasha Nanda and Uwe Gruenefeld and Tim Claudius Stratmann and Wilko Heuten},
  journal={2018 IEEE International Symposium on Mixed and Augmented Reality (ISMAR)},
  year={2018},
  pages={70-79}
}
Although the mobility and emerging technology of augmented reality (AR) have brought significant entertainment and convenience to everyday life, the use of AR is becoming a social problem, as accidents caused by a lack of situation awareness during AR immersion are increasing. [...] Key Method: From an RGB image sequence, VPE efficiently estimates the relative 3D position between a user and a car using a generated convolutional neural network (CNN) model with a region-of-interest based scheme.
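The key-method sentence above is terse, so here is a minimal sketch of what ROI-based relative-position regression can look like. It is not the authors' VPE network: the architecture, input size, the crop_roi helper, and the detector assumed to supply the car bounding box are all illustrative assumptions.

```python
# Minimal sketch of ROI-based relative 3D position regression (not the authors' VPE model).
# Assumes a detector has already produced a car bounding box in the RGB frame;
# the network architecture, input size, and coordinate units are illustrative only.
import torch
import torch.nn as nn

class RelativePositionCNN(nn.Module):
    """Regresses the (x, y, z) offset between the user (camera) and a car
    from a cropped region of interest."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 3)  # relative (x, y, z), e.g. in metres

    def forward(self, roi):           # roi: (N, 3, 64, 64) cropped car patches
        f = self.features(roi).flatten(1)
        return self.head(f)

def crop_roi(frame, box, size=64):
    """Crop a detected car box from an RGB frame and resize it for the network.
    `box` = (x1, y1, x2, y2) in pixels; the detector producing it is assumed."""
    x1, y1, x2, y2 = box
    patch = frame[:, y1:y2, x1:x2]                      # (3, H, W) tensor
    return nn.functional.interpolate(patch[None], size=(size, size),
                                     mode="bilinear", align_corners=False)

if __name__ == "__main__":
    # Usage sketch with one synthetic frame and box, purely to show the data flow.
    frame = torch.rand(3, 480, 640)                     # stand-in RGB frame
    roi = crop_roi(frame, (200, 150, 360, 300))
    model = RelativePositionCNN()
    rel_xyz = model(roi)                                # (1, 3) relative position
    print(rel_xyz)
```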
A Research Agenda for Mixed Reality in Automated Vehicles
TLDR
This paper presents a research agenda for using mixed reality technology for automotive user interfaces (UIs), identifying opportunities and challenges.
Comparing Non-Visual and Visual Guidance Methods for Narrow Field of View Augmented Reality Displays
TLDR
It is shown that although audio-tactile guidance is generally slower than the well-performing EyeSee360 in terms of search times, it is on a par regarding hit rate; moreover, the audio-tactile method provides a significant improvement in situation awareness compared to the visual approach.
Evaluating Mixed and Augmented Reality: A Systematic Literature Review (2009-2019)
TLDR
A systematic review of 458 papers that report on evaluations in mixed and augmented reality (MR/AR) published in ISMAR, CHI, IEEE VR, and UIST over a span of 11 years is presented to provide guidance for future evaluations of MR/AR approaches.
SafeXR: alerting walking persons to obstacles in mobile XR environments
TLDR
A safety assistance system for walking smartphone users is presented; it utilizes only the smartphone’s built-in sensors, detects obstacles by analyzing feature points extracted from the input camera images, and then alerts the user to the danger ahead.
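As a rough illustration of the feature-point idea mentioned in this summary, the sketch below counts ORB keypoints inside a lower-central "path ahead" band of a frame and raises an alert when the count is high. The region, threshold, and obstacle_alert helper are illustrative assumptions and do not reproduce SafeXR's actual analysis of point motion.

```python
# Rough sketch of feature-point-based obstacle checking in the walking path
# (a simplification; SafeXR's actual motion analysis is not reproduced here).
import cv2
import numpy as np

def obstacle_alert(frame_bgr, density_threshold=40):
    """Return True if many ORB feature points fall inside the lower-central
    'path ahead' region of the frame. Region and threshold are illustrative."""
    h, w = frame_bgr.shape[:2]
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    keypoints = cv2.ORB_create(nfeatures=500).detect(gray, None)
    # The lower-central band of the image roughly corresponds to the path ahead.
    in_path = [kp for kp in keypoints
               if w * 0.25 < kp.pt[0] < w * 0.75 and kp.pt[1] > h * 0.5]
    return len(in_path) > density_threshold

if __name__ == "__main__":
    dummy = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # stand-in frame
    print(obstacle_alert(dummy))
```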
OmniView: An Exploratory Study of 360 Degree Vision using Dynamic Distortion based on Direction of Interest
TLDR
OmniView, an exploratory study to determine an optimized 360° FOV vision using dynamic distortion methods for reducing distortion and enlarging the area of the direction of interest, is introduced.
Obstacle Detection and Alert System for Smartphone AR Users
TLDR
An obstacle detection and alert system for pedestrians who use smartphone AR applications; it analyzes the input camera image to extract feature points and determines whether the feature points come from obstacles ahead in the path.
ARCHIE: A User-Focused Framework for Testing Augmented Reality Applications in the Wild
TLDR
It is demonstrated that ARCHIE provides no significant overhead for AR applications, and introduces at most 2% processing overhead when switching among large groups of testable profiles.
SafeAR: AR Alert System Assisting Obstacle Avoidance for Pedestrians
TLDR
An obstacle alert system for users who walk while using AR applications; it analyzes the input camera image to detect obstacles and is integrated into an AR application named SafeAR.

References

Showing 1–10 of 47 references
EyeSee360: designing a visualization technique for out-of-view objects in head-mounted augmented reality
TLDR
This work designed a lo-fi prototype of the EyeSee360 system and, based on user feedback, subsequently implemented it; the technique was evaluated against well-known 2D off-screen object visualization techniques adapted for head-mounted augmented reality, and EyeSee360 resulted in the lowest error for direction estimation of out-of-view objects.
Visualizing out-of-view objects in head-mounted augmented reality
TLDR
This work adapted and tested well-known 2D off-screen object visualization techniques (Arrow, Halo, Wedge) for head-mounted AR and found that Halo resulted in the lowest error for direction estimation while Wedge was subjectively perceived as best.
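For a concrete feel of what such off-screen visualizations compute, here is a simplified 2D sketch of the geometry behind an Arrow-style cue: the angle toward an off-screen target and the point on the screen border where an indicator would be drawn. Halo and Wedge differ in what they render at that border point; the arrow_anchor helper and its coordinate conventions are assumptions for illustration only.

```python
# Simplified sketch of the 'Arrow' idea for a 2D screen: compute the angle to an
# off-screen target and the border point where an arrow indicator would be drawn.
import math

def arrow_anchor(target_xy, screen_w, screen_h):
    """Return (angle_deg, border_point) pointing from the screen centre
    toward an off-screen target given in the same pixel coordinate system."""
    cx, cy = screen_w / 2.0, screen_h / 2.0
    dx, dy = target_xy[0] - cx, target_xy[1] - cy
    angle = math.atan2(dy, dx)
    # Scale the direction vector so it touches the nearer screen edge.
    scale = min(cx / abs(dx) if dx else float("inf"),
                cy / abs(dy) if dy else float("inf"))
    border = (cx + dx * scale, cy + dy * scale)
    return math.degrees(angle), border

if __name__ == "__main__":
    # Target well to the right of a 640x480 view -> anchor on the right edge.
    print(arrow_anchor((1200, 240), 640, 480))
```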
Deep 360 Pilot: Learning a Deep Agent for Piloting through 360° Sports Videos
TLDR
Deep 360 Pilot, a deep learning-based agent for piloting through 360° sports videos automatically; domain-specific agents were trained and achieved the best performance on viewing-angle selection accuracy and users' preference compared to [53] and other baselines.
Camera-based vehicle velocity estimation from monocular video
TLDR
It is found that light-weight trajectory-based features outperform depth and motion cues extracted from deep ConvNets, especially for far-distance predictions where current disparity and optical flow estimators are significantly challenged.
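As a toy example of a light-weight trajectory-based cue, the sketch below fits a line to a tracked bounding-box centre over time and reports its pixel-space speed. The paper's actual feature set and the mapping from pixel speed to metric velocity are not reproduced; the pixel_speed helper and its frame-rate argument are illustrative.

```python
# Minimal sketch of a trajectory-based cue: pixel-space speed of a tracked
# bounding-box centre, obtained by a least-squares line fit over time.
import numpy as np

def pixel_speed(centers_xy, fps=30.0):
    """centers_xy: (T, 2) array of tracked box centres (pixels), one row per frame.
    Returns speed in pixels/second from a linear fit of position over time."""
    centers = np.asarray(centers_xy, dtype=float)
    t = np.arange(len(centers)) / fps
    vx = np.polyfit(t, centers[:, 0], 1)[0]   # slope of x(t)
    vy = np.polyfit(t, centers[:, 1], 1)[0]   # slope of y(t)
    return float(np.hypot(vx, vy))

if __name__ == "__main__":
    # A box drifting 2 px right and 1 px down per frame at 30 fps (~67 px/s).
    track = np.stack([np.arange(30) * 2.0, np.arange(30) * 1.0], axis=1)
    print(pixel_speed(track))
```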
Pedestrian Inattention Blindness While Playing Pokémon Go as an Emerging Health-Risk Behavior: A Case Report
TLDR
Mobile video games that require movement to play are an effective way to increase physical activity, especially in adolescents and young adults, but cases like the one presented here point out that these games could pose a significant risk to users who play while walking, cycling, or driving in unsafe areas such as city streets, because players become distracted and may ignore surrounding hazards.
Sim4CV: A Photo-Realistic Simulator for Computer Vision Applications
TLDR
A photo-realistic training and evaluation simulator with extensive applications across various fields of computer vision, built on top of the Unreal Engine, which provides extensive synthetic data variety through its ability to reconfigure synthetic worlds on the fly using an automatic world generation tool.
Crash To Not Crash: Playing Video Games To Predict Vehicle Collisions
TLDR
This work collects a large accident data set using a popular video game named GTA V, and develops efficient prediction algorithms based on modern CNN architectures that can identify the source of danger when a collision is predicted.
Vision meets robotics: The KITTI dataset
TLDR
A novel dataset captured from a VW station wagon for use in mobile robotics and autonomous driving research, using a variety of sensor modalities such as high-resolution color and grayscale stereo cameras and a high-precision GPS/IMU inertial navigation system.
Augmented reality: a class of displays on the reality-virtuality continuum
In this paper we discuss augmented reality (AR) displays in a general sense, within the context of a reality-virtuality (RV) continuum, encompassing a large class of "mixed reality" (MR) displays. [...]
LookUp: Enabling Pedestrian Safety Services via Shoe Sensing
TLDR
The results from these experiments show that the shoe-mounted inertial sensors used in this work can accurately determine transitions between sidewalk and street locations to identify pedestrian risk.
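To make the shoe-sensing idea concrete, the toy sketch below flags spikes in a vertical-acceleration trace as candidate step-down events (e.g., stepping off a curb at a sidewalk/street boundary). LookUp's actual classifier, sensor placement, and calibration are not reproduced; the threshold and the curb_step_indices helper are illustrative assumptions.

```python
# Toy sketch of detecting a step-down (e.g., a curb at a sidewalk/street boundary)
# from a vertical-acceleration trace; the threshold below is purely illustrative.
import numpy as np

def curb_step_indices(vert_accel_g, threshold_g=1.8):
    """Return sample indices where vertical acceleration exceeds `threshold_g`,
    a crude proxy for the harder heel strike of stepping down from a curb."""
    accel = np.asarray(vert_accel_g, dtype=float)
    return np.flatnonzero(accel > threshold_g)

if __name__ == "__main__":
    # Synthetic trace: normal gait around 1 g with one exaggerated step-down spike.
    trace = np.concatenate([np.full(50, 1.0), [2.3], np.full(50, 1.0)])
    print(curb_step_indices(trace))  # -> [50]
```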