Colin J. Dalton

Recently, there have been several attempts at creating 'video textures', that is, synthesising new (potentially infinitely long) video clips based on existing ones. One way to do this is to transform each frame of the video into an eigenspace using Principal Components Analysis so that the original sequence can be viewed as a signature through this …
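As an illustration of the idea sketched in this abstract, the short Python fragment below projects a set of flattened video frames into a PCA eigenspace so that the clip becomes a trajectory (a 'signature') through that space. It is a sketch under my own assumptions, not the authors' implementation; the frame size, component count and random data are purely illustrative.

import numpy as np

def pca_signature(frames, n_components=10):
    """frames: (n_frames, n_pixels) array of flattened frames.
    Returns each frame's coordinates in the space spanned by the
    top principal components of the clip."""
    mean = frames.mean(axis=0)
    centred = frames - mean
    # SVD of the centred data yields the principal directions in vt.
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    basis = vt[:n_components]          # (n_components, n_pixels)
    return centred @ basis.T           # (n_frames, n_components)

# Illustrative usage with random stand-in frames.
frames = np.random.rand(120, 64 * 64)
signature = pca_signature(frames)      # the clip as a path through eigenspace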
We present an integrated system that enables the capture and synthesis of 3D motions of small-scale dynamic creatures, typically insects and arachnids, in order to drive computer-generated models. The system consists of a number of stages: initially, the acquisition of a multi-view calibration scene and synchronised video footage of a subject …
This paper presents a novel technique for the generation of 'video textures' to display human emotion. This is achieved by a method which uses existing video footage to synthesise new sequences of coherent facial expression and head motions. An 'expression space', which is defined by sets of emotion models, is constructed using principal components …
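A rough sketch of how such an 'expression space' might be assembled is given below, under my own assumptions rather than the paper's actual method: frames from several labelled emotion clips are projected into one shared PCA space, and each emotion is summarised there by a simple model (here just a mean and covariance). The emotion labels, frame sizes and scikit-learn usage are illustrative only.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
clips = {                                  # flattened frames per labelled emotion clip
    "happy": rng.random((80, 32 * 32)),
    "sad":   rng.random((80, 32 * 32)),
}

# One shared eigenspace built from all frames.
all_frames = np.vstack(list(clips.values()))
space = PCA(n_components=10).fit(all_frames)

# Per-emotion model in that space: mean and covariance of the projected frames.
emotion_models = {
    name: (space.transform(frames).mean(axis=0),
           np.cov(space.transform(frames), rowvar=False))
    for name, frames in clips.items()
}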
We describe a system which is designed to assist in extracting high-level information from sets or sequences of images. We show that the method of principal components analysis followed by a neural network learning phase is capable of feature extraction or motion tracking, even through occlusion. Given a minimal amount of user direction for the learning …
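To make the 'principal components analysis followed by a neural network learning phase' concrete, here is a small hedged sketch rather than the authors' code: each frame is reduced to a few eigenspace coordinates, and a small network is trained on a handful of user-labelled frames to predict a feature position in the remaining ones. The scikit-learn classes, network size and synthetic data are my own illustrative choices.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
frames = rng.random((200, 32 * 32))       # flattened greyscale frames
positions = rng.random((200, 2))          # (x, y) feature positions, synthetic

pca = PCA(n_components=15)
coords = pca.fit_transform(frames)        # eigenspace coordinates per frame

# 'Minimal user direction': train only on a small labelled subset.
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(coords[:20], positions[:20])

tracked = net.predict(coords[20:])        # predicted feature positions elsewhere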
We present a novel approach to motion synthesis. It is shown that by splitting sequences into segments, new sequences can be created with a similar look and feel to the original. Copying segments of the original data generates a sequence which maintains detailed characteristics. By modelling each segment using an autoregressive process we can introduce new …
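The autoregressive idea can be illustrated with the short sketch below, which is my own minimal example rather than the paper's implementation: AR coefficients are fitted to one motion segment by least squares, then a new, similar-looking segment is generated by running the model forward with noise. The segment itself is a stand-in sine curve.

import numpy as np

def fit_ar(x, order=2):
    # Least-squares fit of AR(order) coefficients to a 1-D signal x.
    X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    noise_std = np.std(y - X @ coeffs)
    return coeffs, noise_std

def synthesise_ar(coeffs, noise_std, seed, length):
    # Run the AR model forward, adding noise, to create a new segment.
    out = list(seed)
    for _ in range(length):
        out.append(float(np.dot(coeffs, out[-len(coeffs):]) + np.random.normal(0, noise_std)))
    return np.array(out)

segment = np.sin(np.linspace(0, 6 * np.pi, 200))    # stand-in motion curve
coeffs, noise_std = fit_ar(segment, order=2)
new_segment = synthesise_ar(coeffs, noise_std, segment[:2], 200)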
We describe a system which is designed to assist animators in extracting high-level information from sequences of images. The system is not meant to replace animators, but to be a tool to assist them in creating the first 'rough-cut' of a sequence quickly and easily. Using the system, short animations have been created in a very short space of time. We show …
Recently, there have been several attempts at creating 'video textures', that is, synthesising new (potentially infinitely long) video clips based on existing ones. One method for achieving this is to transform each frame of the video into an eigenspace using Principal Components Analysis so that the original sequence can be viewed as a signature through a …
We present two approaches for the generation of novel video textures which portray a human expressing different emotions. Here, training data is provided by video sequences of an actress expressing specific emotions such as 'angry', 'happy' and 'sad'. The main challenge of modelling these video texture sequences is the high variance in head position and facial …