Project Details
Description
Synthesizing close interactions between articulated characters and objects is an important research topic with a wide range of applications in film, character animation, computer games, and robotics. Despite the high demand, automatically synthesizing such movements with existing methods remains impractical. The main problem lies in the conventional representation of character states, which is based on joint angles and positions. Because this representation does not encode the spatial relations between body parts, switching from one posture to another with existing methods (such as motion interpolation and motion planning) requires intensive random sampling and collision detection in the state space. As a result, most close-interaction scenes are still designed manually by experienced animators, and because such scenes are edited by hand, reusing the movements under different conditions is difficult, inefficient, and costly.
In this project, we propose a new framework for systems that automatically analyze and synthesize movements involving close interactions between the body parts of one or more articulated characters (such as dancing and wrestling) or between characters and objects (such as carrying luggage). We address the problems described above with a new representation based on the spatial relationships between body parts: the posture of each character is described by its relative position and orientation with respect to adjacent entities in the scene. This representation offers strong abstraction power and a high degree of adaptability. Motion captured from one subject interacting with other entities can easily be applied to characters and objects of different sizes, geometries, and topologies.
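To make the idea concrete, below is a minimal illustrative sketch (not the project's actual implementation) of storing a body part's pose relative to an adjacent entity's local frame; the function names, the NumPy types, and the hand/box example are our own assumptions.

```python
# Sketch of a relationship-based representation: a body part's state is
# stored as a position and orientation *relative to* the local frame of
# an adjacent entity, rather than as absolute joint angles/positions.
# All names here are illustrative, not the project's actual API.
import numpy as np

def world_to_relative(p_world, R_world, frame_origin, frame_R):
    """Express a world-space pose in an adjacent entity's local frame."""
    p_rel = frame_R.T @ (p_world - frame_origin)   # relative position
    R_rel = frame_R.T @ R_world                    # relative orientation
    return p_rel, R_rel

def relative_to_world(p_rel, R_rel, frame_origin, frame_R):
    """Map the relative pose back to world space for a (possibly moved,
    resized, or reshaped) target entity: reattaching the same spatial
    relationship to new geometry is what makes the motion adaptable."""
    p_world = frame_R @ p_rel + frame_origin
    R_world = frame_R @ R_rel
    return p_world, R_world

# Example: a hand pose captured next to one box is re-expressed relative
# to that box, then reattached to a box at a different location.
hand_p, hand_R = np.array([0.3, 1.0, 0.2]), np.eye(3)
box_origin, box_R = np.array([0.5, 0.9, 0.0]), np.eye(3)
p_rel, R_rel = world_to_relative(hand_p, hand_R, box_origin, box_R)
new_box_origin = np.array([2.0, 0.5, 1.0])
new_hand_p, _ = relative_to_world(p_rel, R_rel, new_box_origin, box_R)
```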
The motion data obtained from different subjects and objects can be interpolated and concatenated as long as the spatial relationships are similar. Using this new representation, we will build a system that automatically synthesizes novel movements for new conditions from the training data. Compared with existing methods based on joint angles, the system will learn and synthesize movements involving close interactions far more efficiently and without artifacts such as penetrations and body parts passing through one another. As a result, the system is applicable to a variety of domains, such as computer animation and computer games. It can also advance research on imitation learning, where robots mimic human movements and adapt them to their own body size, topology, and geometry.
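As a hedged sketch of why interpolation is well behaved in this representation: two relative poses can be blended entirely within the local frame of the interacted-with entity, so the blended posture stays anchored to that entity, which a naive joint-angle blend does not guarantee. The quaternion slerp below is the standard formulation; the function names are hypothetical.

```python
# Sketch of blending two captured interactions in the relative
# representation: positions interpolate linearly and orientations via
# slerp, all expressed in the adjacent entity's local frame.
import numpy as np

def quat_slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions."""
    dot = np.dot(q0, q1)
    if dot < 0.0:                 # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:              # nearly parallel: lerp and renormalize
        q = (1.0 - t) * q0 + t * q1
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - t) * theta) * q0
            + np.sin(t * theta) * q1) / np.sin(theta)

def blend_relative_pose(p_a, q_a, p_b, q_b, t):
    """Blend two relative poses (position + orientation quaternion) at weight t."""
    return (1.0 - t) * p_a + t * p_b, quat_slerp(q_a, q_b, t)
```

Concatenation follows the same logic: two clips can be joined wherever their relative poses are similar, with the transition blended in the entity's frame rather than in joint-angle space.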
| Status | Finished |
| --- | --- |
| Effective start/end date | 1/11/13 → 31/10/16 |