ViDaPe: Visual Dance Performance for Interactive Characters
Duration: 30 months (2012 - 2014)
Start: 1/06/2012
Project Website: https://vidape.cs.ucy.ac.cy/
Principal Investigator: Γιώργος Χρυσάνθου
Main Funding Source: Cyprus Research Promotion Foundation
Total Cost: 120,680
In this project, a novel application is proposed that will be pioneering in both the computer animation and dance communities: for the first time, a dance performer will interact with a virtual animated character, in real time, to compose a contemporary dance show. The proposed research will explore innovative topics of special interest in computer animation, including methods that smoothly combine optical motion capture (mocap) data with kinematic techniques, human figure modelling, a novel methodology for motion classification, and partial-body motion synthesis. Human motions will be classified and notated using Labanotation (a system for analysing human movement, used mainly in choreography).
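As a loose illustration only, the sketch below classifies a motion segment by comparing simple kinematic features against labelled exemplars. The features, labels and nearest-neighbour rule are hypothetical stand-ins; the project's actual classification is based on Labanotation rather than these ad hoc features.

```python
# Illustrative sketch: nearest-neighbour classification of a motion segment
# using crude kinematic features (hypothetical stand-ins for Laban-based ones).
import numpy as np

def motion_features(positions):
    """positions: (frames x joints x 3) joint positions.
    Returns [mean joint speed, mean vertical extent of the pose]."""
    velocities = np.diff(positions, axis=0)
    mean_speed = np.linalg.norm(velocities, axis=2).mean()
    vertical_extent = (positions[..., 1].max(axis=1) - positions[..., 1].min(axis=1)).mean()
    return np.array([mean_speed, vertical_extent])

def classify(segment, exemplars):
    """exemplars: list of (label, feature_vector); pick the nearest in feature space."""
    f = motion_features(segment)
    return min(exemplars, key=lambda e: np.linalg.norm(f - e[1]))[0]

# Hypothetical usage with random stand-in data and two exemplar classes.
rng = np.random.default_rng(0)
exemplars = [("jump", np.array([0.30, 1.8])), ("turn", np.array([0.05, 1.5]))]
segment = rng.normal(size=(60, 20, 3)) * 0.1 + 1.0   # 60 frames, 20 joints
print(classify(segment, exemplars))
```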
The language of the human body is complex, and a convincing simulation is not possible if the motion notation relies on rough simplifications or the motion classification is not properly structured from the outset. Laban's theory will also be used to establish a similarity function that automatically composes different parts of the character, with a view to creating new actions from existing movements. Additional control will be enforced by incorporating an anatomically and physiologically constrained version of the FABRIK algorithm (a fast, iterative technique for manipulating postures in a linked chain); the humanoid skeleton will be hierarchically structured into individual chains, as noted in Laban's theory, and FABRIK will be applied sequentially according to pre-specified weights.
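For context, the following is a minimal sketch of the basic, unconstrained FABRIK iteration for a single chain of 3D joint positions; the anatomical and physiological constraints and the per-chain weighting described above are omitted, and all names are placeholders.

```python
# Minimal sketch of unconstrained FABRIK for one kinematic chain.
import numpy as np

def fabrik(joints, target, tol=1e-4, max_iter=50):
    """Move a chain of joint positions (n x 3) towards a 3D target."""
    p = np.asarray(joints, dtype=float).copy()
    t = np.asarray(target, dtype=float)
    d = np.linalg.norm(np.diff(p, axis=0), axis=1)   # fixed bone lengths

    if np.linalg.norm(t - p[0]) > d.sum():
        # Target unreachable: stretch the chain towards it.
        for i in range(len(d)):
            lam = d[i] / np.linalg.norm(t - p[i])
            p[i + 1] = (1 - lam) * p[i] + lam * t
        return p

    root = p[0].copy()
    for _ in range(max_iter):
        if np.linalg.norm(p[-1] - t) < tol:
            break
        # Forward reaching: pin the end effector to the target.
        p[-1] = t
        for i in range(len(p) - 2, -1, -1):
            lam = d[i] / np.linalg.norm(p[i + 1] - p[i])
            p[i] = (1 - lam) * p[i + 1] + lam * p[i]
        # Backward reaching: pin the root back to its original position.
        p[0] = root
        for i in range(len(p) - 1):
            lam = d[i] / np.linalg.norm(p[i + 1] - p[i])
            p[i + 1] = (1 - lam) * p[i] + lam * p[i + 1]
    return p

# Example: a three-bone arm reaching for a nearby point.
arm = [[0, 0, 0], [0, 1, 0], [0, 2, 0], [0, 3, 0]]
print(fabrik(arm, [1.5, 1.5, 0]))
```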
A smooth transition between different mocap clips using kinematics is important for overcoming gaps in the motion data, unifying and synthesising different captured motions, and composing new actions to enrich the motion collection. Non-trivial issues will also be investigated, such as motion modelling, interaction modelling and multiple-character synthesis, in an attempt to provide novel solutions to the problem. The problem of automatically synthesising and synchronising motion for interactive characters that perform asynchronously is still at an early stage and needs further investigation. A symbolic notation system will be designed in which directors can script their creative vision, leading to a semantic model of the director's creative intent. A 3D visualisation tool will also be implemented that automatically creates animated movies from the semantic model, allowing script-writers to visualise their notations as they write.
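To illustrate the kind of kinematic interpolation involved, the sketch below cross-fades per-joint rotations between the last pose of one clip and the first pose of the next using quaternion slerp over a short transition window. The joint names and the blending schedule are illustrative assumptions, not the project's actual transition method.

```python
# Sketch: bridging a gap between two mocap clips with per-joint quaternion slerp.
import numpy as np

def slerp(q0, q1, u):
    """Spherical linear interpolation between two unit quaternions."""
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = np.dot(q0, q1)
    if dot < 0.0:                     # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                  # nearly parallel: fall back to lerp
        q = q0 + u * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - u) * theta) * q0 + np.sin(u * theta) * q1) / np.sin(theta)

def blend_transition(last_pose_a, first_pose_b, n_frames):
    """Generate n_frames of transition; each pose maps joint name -> unit quaternion."""
    frames = []
    for f in range(n_frames):
        u = (f + 1) / (n_frames + 1)  # ease linearly from clip A into clip B
        frames.append({j: slerp(last_pose_a[j], first_pose_b[j], u) for j in last_pose_a})
    return frames

# Example with a single (hypothetical) elbow joint rotating 90 degrees about Z.
last_a = {"elbow": np.array([1.0, 0.0, 0.0, 0.0])}                      # identity (w, x, y, z)
first_b = {"elbow": np.array([np.cos(np.pi / 4), 0, 0, np.sin(np.pi / 4)])}
for pose in blend_transition(last_a, first_b, 3):
    print(pose["elbow"])
```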
The proposed 3D visualisation tool will also be able to suggest possible future moves for the corresponding avatar, based on its previous and current state or on feedback from other interactive characters. Contemporary dance has been chosen to demonstrate the proposed techniques since it is not based on a prior scenario, but on the performers' improvisation, chemistry and interlinked interactions. The system will adjust dynamically to the performers' actions and responses, offering the maximum possible interaction between the live and virtual performer. Similar techniques can be adapted to the games industry, for example for military or law-enforcement training simulators, or to other virtual character animations.