Cliplets: juxtaposing still and dynamic imagery

@inproceedings{Joshi2012ClipletsJS,
  title={Cliplets: juxtaposing still and dynamic imagery},
  author={Neel Joshi and Sisil Mehta and Steven Mark Drucker and Eric J. Stollnitz and Hugues Hoppe and Matthew Uyttendaele and Michael F. Cohen},
  booktitle={Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology},
  year={2012}
}
We explore creating "cliplets", a form of visual media that juxtaposes still image and video segments, both spatially and temporally, to expressively abstract a moment. Much as in "cinemagraphs", the tension between static and dynamic elements in a cliplet reinforces both aspects, strongly focusing the viewer's attention. Creating this type of imagery is challenging without professional tools and training. We develop a set of idioms, essentially spatiotemporal mappings, that characterize…
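A rough sketch of the simplest such mapping (a "loop" idiom) makes the idea concrete: pixels inside a chosen spatial mask are drawn from a repeated temporal segment of the source video, while pixels outside the mask are frozen to a single still frame. The sketch below is only an illustration of the concept, not the paper's authoring tool; the function and parameter names are invented.

```python
import numpy as np

def make_cliplet(frames, mask, still_index=0, loop_start=0, loop_end=None, n_out=None):
    """Composite a looping video region over a still frame (toy "loop" idiom).

    frames      : (T, H, W, C) uint8 array of video frames
    mask        : (H, W) boolean array, True where motion should be kept
    still_index : index of the frame used as the static background
    loop_start, loop_end : temporal segment [loop_start, loop_end) that repeats
    n_out       : number of output frames to generate
    """
    T = frames.shape[0]
    loop_end = T if loop_end is None else loop_end
    n_out = (loop_end - loop_start) if n_out is None else n_out

    still = frames[still_index]
    out = np.empty((n_out,) + still.shape, dtype=frames.dtype)
    for i in range(n_out):
        # Map output time onto the repeated input segment (the temporal mapping).
        t = loop_start + (i % (loop_end - loop_start))
        frame = still.copy()
        frame[mask] = frames[t][mask]   # dynamic region comes from the video
        out[i] = frame
    return out

if __name__ == "__main__":
    # Tiny synthetic demo: a bright moving column over a flat gray background.
    T, H, W = 12, 32, 32
    frames = np.full((T, H, W, 3), 128, dtype=np.uint8)
    for t in range(T):
        frames[t, :, t % W:t % W + 2] = 255
    mask = np.zeros((H, W), dtype=bool)
    mask[:, :16] = True                 # keep motion only in the left half
    cliplet = make_cliplet(frames, mask, still_index=0, loop_start=2, loop_end=10)
    print(cliplet.shape)                # (8, 32, 32, 3)
```

A real cliplet additionally needs blending at the mask boundary and temporal alignment at the loop point, which the paper's idioms and interface handle; this sketch ignores both.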

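Citations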

A Mixed-Initiative Interface for Animating Static Pictures
TLDR
An interactive tool to animate the visual elements of a static picture, based on simple sketch-based markup, that effectively allows illustrators and animators to add life to still images in a broad range of visual styles is presented.
Selecting Interesting Image Regions to Automatically Create Cinemagraphs
TLDR
A novel framework for automatically creating cinemagraphs from video sequences is presented, with specific emphasis on determining the composition of masks and layers that yield aesthetically pleasing cinemagraphs.
Personalized Cinemagraphs Using Semantic Understanding and Collaborative Learning
TLDR
A new technique is presented that uses object recognition and semantic segmentation as part of an optimization method to automatically create cinemagraphs from videos that are both visually appealing and semantically meaningful.
LACES: live authoring through compositing and editing of streaming video
TLDR
This work introduces LACES, a tablet-based system enabling simple video manipulations in the midst of filming, allowing greater spontaneity and exploration of video creation.
Selectively De-animating and Stabilizing Videos
TLDR
A user-assisted video stabilization algorithm that is able to stabilize challenging videos when state-of-the-art automatic algorithms fail to generate a satisfactory result is presented.
An approach to automatic creation of cinemagraphs
TLDR
This paper views cinemagraph construction as a constrained optimization problem that seeks a sub-volume in video with the maximum cumulative flow fields, and concludes that the problem can be efficiently solved by a branch-and-bound search scheme.
Responsive Action-based Video Synthesis
TLDR
This work converts static-camera videos into loopable sequences, synthesizing them in response to simple end-user requests, and proposes a human-in-the-loop system where adding effort gives the user progressively more creative control.
Automatic Cinemagraph Portraits
TLDR
This work presents a completely automatic algorithm for generating portrait cinemagraphs from a short video captured with a hand-held camera that uses a combination of face tracking and point tracking to segment face motions into two classes: gross, large-scale motions that should be removed from the video, and dynamic facial expressions that should be preserved.
Dynamic Image Stacks
TLDR
This work presents dynamic image stacks, an interactive image viewer exploring what photography can become when this constraint is relaxed, and turns photograph viewing into an interactive, exploratory experience that is engaging, evocative, and fun.
Tools for Live 2D Animation
TLDR
An interactive system that addresses the problem of triggering artwork swaps in live settings is presented, along with a framework that augments the primary motion of a character by adding secondary motion: subtle movement of parts like hair, foliage, or cloth that complements and emphasizes the primary motion.

References

SHOWING 1-10 OF 38 REFERENCES
Dynamosaicing: Mosaicing of Dynamic Scenes
TLDR
This paper proposes aligning dynamic scenes using a new notion of "dynamics constancy," which is more appropriate for this task than the traditional assumption of "brightness constancy"; it formulates the problem of finding optimal time-front geometry as one of finding a minimal cut in a 4D graph, and solves it using max-flow methods.
Panoramic video textures
This paper describes a mostly automatic method for taking the output of a single panning video camera and creating a panoramic video texture (PVT): a video that has been stitched into a single, wide…
Animating pictures with stochastic motion textures
TLDR
A semi-automatic approach is used, in which a human user segments the scene into a series of layers to be individually animated, and a "stochastic motion texture" is automatically synthesized using a spectral method, i.e., the inverse Fourier transform of a filtered noise spectrum.
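The "spectral method" mentioned here is easy to illustrate: shaping white noise with a band-pass filter in the frequency domain and taking the inverse FFT yields a smooth, quasi-periodic displacement signal that could drive a layer's oscillation. The sketch below is a toy illustration of that filtered-noise idea, not the paper's actual synthesis; the function name and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_motion_texture(n_frames, fps=30.0, peak_freq=0.5, bandwidth=0.2, amplitude=2.0):
    """Toy 1D displacement signal: inverse FFT of band-limited filtered noise."""
    freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)                  # frequencies in Hz
    noise = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
    shape = np.exp(-0.5 * ((freqs - peak_freq) / bandwidth) ** 2)   # Gaussian band-pass
    displacement = np.fft.irfft(noise * shape, n=n_frames)          # smooth random motion
    return amplitude * displacement / (np.abs(displacement).max() + 1e-12)

offsets = stochastic_motion_texture(n_frames=120)   # per-frame offset for one layer
print(offsets.shape)                                 # (120,)
```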
Selectively de-animating video
TLDR
A semi-automated technique is presented for selectively de-animating video, removing the large-scale motions of one or more objects so that other motions are easier to see; this enables a number of applications such as clearer motion visualization, simpler creation of artistic cinemagraphs, and new ways to edit appearance and complicated motion paths in video by manipulating a de-animated representation.
Parallax photography: creating 3D cinematic effects from stills
TLDR
A GPU-accelerated, temporally coherent rendering algorithm is described that allows users to create more complex camera moves interactively, while experimenting with effects such as focal length, depth of field, and selective, depth-based desaturation or brightening.
Towards Moment Imagery: Automatic Cinemagraphs
TLDR
This work creates a cinemagraph authoring tool combining video motion stabilization, segmentation, interactive motion selection, motion loop detection and selection, and cinemagraph rendering, to push toward the easy and versatile creation of moments that cannot be represented with still imagery.
Interactive digital photomontage
TLDR
The framework makes use of two techniques primarily: graph-cut optimization, to choose good seams within the constituent images so that they can be combined as seamlessly as possible; and gradient-domain fusion, a process based on Poisson equations, to further reduce any remaining visible artifacts in the composite.
Video textures
TLDR
This paper presents techniques for analyzing a video clip to extract its structure, and for synthesizing a new, similar looking video of arbitrary length, and combines video textures with view morphing techniques to obtain 3D video textures.
Video object annotation, navigation, and composition
We explore the use of tracked 2D object motion to enable novel approaches to interacting with video. These include moving annotations, video navigation by direct manipulation of objects, and creating…
Graphcut textures: image and video synthesis using graph cuts
TLDR
A new algorithm for image and video texture synthesis is presented, in which patch regions from a sample image or video are transformed and copied to the output and then stitched together along optimal seams to generate a new (and typically larger) output.