Publications

Take a closer look at our breakthroughs

December 15, 2025

Comparing 3D Data Representations for Keypoint Estimation in Humanoid Characters

JE. Lefèvre, T. Cheynel, T. Daniel

This work investigates how different 3D data representations affect the performance of learning-based models at predicting keypoints, used for rigging, from humanoid character meshes.

February 28, 2025

ReConForM: Real-time Contact-aware Motion Retargeting for more Diverse Character Morphologies

T. Cheynel, T. Rossi, B. Bellot-Gurlet, D. Rohmer, MP. Cani

Preserving semantics, in particular in terms of contacts, is a key challenge when retargeting motion between characters of different morphologies.

December 2, 2024

Rethinking motion keyframe extraction: A greedy procedural approach using a neural control rig

T. Cheynel, O. El Khalifi, B. Bellot-Gurlet, D. Rohmer, MP. Cani

3D animators traditionally use a "pose to pose" approach, whereas motion capture (MoCap) tools generate a pose for every frame, making the motion challenging to edit.

November 15, 2023

Sparse Motion Semantics for Contact-Aware Retargeting

T. Cheynel, T. Rossi, B. Bellot-Gurlet, D. Rohmer, MP. Cani

This paper presents a method for retargeting motion onto a character with a completely different skeleton and mesh.