A Framework for Assisted Exploration with Collaboration


We approach the problem of exploring a virtual space by exploiting positional and camera-model constraints on navigation to provide extra assistance that focuses the user's exploratory wanderings on the task objectives. Our specific design incorporates not only task-based constraints on the viewer's location, gaze, and viewing parameters, but also a personal “guide” that serves two important functions: keeping the user oriented in the navigation space, and “pointing out” interesting subject areas as they are approached. The user may ignore the guide's cues by continuing to move, but if the user stops, the gaze shifts automatically toward whatever the guide was interested in. This design has the serendipitous feature that it automatically supports a nested collaborative paradigm: any given viewer can be seen as the “guide” of one or more viewers following behind. The leading automated guide (we tend to select a guide dog for this avatar) can remind the lead human guide of interesting sites to point out, while each human collaborator down the chain chooses whether to follow the local leader's hints. We chose VRML as our initial development medium primarily for its portability, and we have implemented a variety of natural modes for leading and collaborating, including ways for collaborators to attach to and detach from a particular leader.
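The leader-follower behavior described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration of the stop-triggers-gaze rule and the nested guide chain, not the authors' implementation (which was built in VRML); all class and field names here are our own illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float, float]

@dataclass
class Viewer:
    """A viewer in the guide chain. Any viewer may act as the 'guide'
    (leader) of the viewers attached behind it. Names are illustrative."""
    moving: bool = True
    gaze_target: Point = (0.0, 0.0, 1.0)   # where this viewer is looking
    leader: Optional["Viewer"] = None      # attach/detach by setting/clearing
    poi: Optional[Point] = None            # site this viewer is cueing to followers

    def update(self) -> None:
        # While the viewer keeps moving, the leader's cue is only advisory
        # and can be ignored.
        if self.moving:
            return
        # Once the viewer stops, gaze shifts automatically toward whatever
        # the leader was interested in.
        if self.leader is not None and self.leader.poi is not None:
            self.gaze_target = self.leader.poi

# Nested chain: automated guide -> human leader -> human follower.
guide = Viewer(poi=(5.0, 1.0, 2.0))       # guide cues an interesting site
leader = Viewer(leader=guide)
follower = Viewer(leader=leader)

leader.moving = False                     # leader stops; gaze snaps to the cue
leader.update()
leader.poi = leader.gaze_target           # leader now points followers there

follower.moving = False
follower.update()
print(follower.gaze_target)               # → (5.0, 1.0, 2.0)

follower.leader = None                    # detaching frees the follower
```

Detaching is modeled simply as clearing the `leader` reference, mirroring the attach/detach modes mentioned in the abstract.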


Cite this paper

@inproceedings{Wernert1999AFF,
  title     = {A Framework for Assisted Exploration with Collaboration},
  author    = {Eric A. Wernert and Andrew J. Hanson},
  booktitle = {IEEE Visualization},
  year      = {1999}
}