PenSight: Enhanced Interaction with a Pen-Top Camera

@inproceedings{Matulic2020PenSightEI,
  title={PenSight: Enhanced Interaction with a Pen-Top Camera},
  author={Fabrice Matulic and Riku Arakawa and Brian K. Vogel and Daniel Vogel},
  booktitle={Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
  year={2020}
}
We propose mounting a downward-facing camera above the top end of a digital tablet pen. This creates a unique and practical viewing angle for capturing the pen-holding hand and the immediate surroundings, which can include the other hand. The fabrication of a prototype device is described and the enabled interaction design space is explored, including dominant and non-dominant hand pose recognition, tablet grip detection, hand gestures, capturing physical content in the environment, and…

Citations

FacialPen: Using Facial Detection to Augment Pen-Based Interaction
TLDR
This work proposes FacialPen, a prototype that uses facial gestures to trigger commands for pen-based manipulation, and facilitates an elicitation study to identify natural and user-defined gestures for interactions with facial expressions.
Flashpen: A High-Fidelity and High-Precision Multi-Surface Pen for Virtual Reality
TLDR
This paper presents Flashpen, a digital pen for VR whose sensing principle affords accurately digitizing handwriting and intricate drawing, including small and quick turns, and demonstrates that Flashpen's fidelity matches the performance of state-of-the-art digitizers and approaches the fidelity of analog pens, while adding the flexibility of supporting a wide range of flat surfaces.
Portalware: Exploring Free-Hand AR Drawing with a Dual-Display Smartphone-Wearable Paradigm
TLDR
Through an autobiographical design process, three authors explored free-hand drawing and expanded the design space from a single-display smartphone format to a dual-display smartphone-wearable format (Portalware).
vMirror: Enhancing the Interaction with Occluded or Distant Objects in VR with Virtual Mirrors
Interacting with out of reach or occluded VR objects can be cumbersome. Although users can change their position and orientation, such as via teleporting, to help observe and select, doing so…
PoVRPoint: Authoring Presentations in Mobile Virtual Reality
TLDR
The results indicate that the wide field of view afforded by VR results in significantly faster target slide identification times compared to a tablet-only interface for visually salient targets; and the three-dimensional view in VR enables significantly faster object reordering in the presence of occlusion compared to two baseline interfaces.
SoloFinger: Robust Microgestures while Grasping Everyday Objects
TLDR
It is established that single-finger movements are rare in everyday hand-object actions, informing a single-finger input technique resilient to false activation; the work validates that simple SoloFinger gestures can relieve the need for complex finger configurations or delimiting gestures and that SoloFinger is applicable to diverse hand-object actions.
Pen-based Interaction with Spreadsheets in Mobile Virtual Reality
TLDR
A tool-set for enhancing spreadsheet interaction on tablets using immersive VR headsets and pen-based input is presented, and tools for the efficient creation and editing of spreadsheet functions, such as off-the-screen layered menus, visualization of sheet dependencies, and gaze-and-touch-based switching between spreadsheet tabs, are proposed.
ProxiMic: Convenient Voice Activation via Close-to-Mic Speech Detected by a Single Microphone
TLDR
ProxiMic is a close-to-mic (within 5 cm) speech sensing technique using only one microphone, in which a user keeps a microphone-embedded device close to the mouth and speaks directly to the device without wake-up phrases or button presses.
Independent Control of Supernumerary Appendages Exploiting Upper Limb Redundancy
TLDR
A novel approach for controlling supernumerary appendages by exploiting upper limb redundancy, presenting a headphone-style visual sensing device and a recognition system to estimate shoulder movement; results indicate that participants are able to intentionally give commands through their shoulder motions.
A Review of Recent Deep Learning Approaches in Human-Centered Machine Learning
TLDR
This review paper presents an overview and analysis of existing work in HCML related to DL, and analyzes the topology of the HCML landscape by identifying research gaps, highlighting conflicting interpretations, addressing current challenges, and presenting future HCML research opportunities.
