Deformer: Dynamic Fusion Transformer for Robust Hand Pose Estimation

Qichen Fu1
Xingyu Liu1
Ran Xu2
Juan Carlos Niebles2
Kris M. Kitani1

1Carnegie Mellon University
2Salesforce Research

ICCV 2023

[Paper]
[GitHub]

Input Video | Baseline (TCMR) | Ours (Deformer)

Deformer: a method that robustly estimates 3D hand pose in video by learning hand deformation and visual accountability.



Abstract

Accurately estimating 3D hand pose is crucial for understanding how humans interact with the world. Despite remarkable progress, existing methods often struggle to generate plausible hand poses when the hand is heavily occluded or blurred. In videos, the movements of the hand allow us to observe various parts of the hand that may be occluded or blurred in a single frame. To adaptively leverage the visual cues before and after the occlusion or blurring for robust hand pose estimation, we propose the Deformer: a framework that implicitly reasons about the relationship between hand parts within the same image (spatial dimension) and across different timesteps (temporal dimension). We show that a naive application of the transformer self-attention mechanism is not sufficient because motion blur or occlusions in certain frames can lead to heavily distorted hand features and generate imprecise keys and queries. To address this challenge, we incorporate a Dynamic Fusion Module into Deformer, which predicts the deformation of the hand and warps the hand mesh predictions from nearby frames to explicitly support the current frame estimation. Furthermore, we have observed that errors are unevenly distributed across different hand parts, with vertices around fingertips having disproportionately higher errors than those around the palm. We mitigate this issue by introducing a new loss function called maxMSE that automatically adjusts the weight of every vertex to focus the model on critical hand parts. Extensive experiments show that our method significantly outperforms state-of-the-art methods by 10%, and is more robust to occlusions (over 14%).
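The maxMSE idea can be pictured as a weighted MSE whose per-vertex weights grow with that vertex's error, so high-error regions such as fingertips dominate the objective. The PyTorch sketch below is an illustrative reading of this idea, not the paper's exact formulation; the function name max_mse_loss and the error-normalized weighting scheme are assumptions.

    import torch

    def max_mse_loss(pred, target, eps=1e-8):
        """Weighted MSE where vertices with larger error receive larger weight.

        NOTE: an assumed reading of the maxMSE idea, not the released code.
        pred, target: (B, V, 3) predicted and ground-truth mesh vertices.
        """
        sq_err = ((pred - target) ** 2).sum(dim=-1)  # (B, V) per-vertex squared error
        # Normalize by the largest per-sample error and detach, so the weights act as
        # constants that emphasize the hardest vertices (e.g. around the fingertips).
        weights = (sq_err / (sq_err.max(dim=-1, keepdim=True).values + eps)).detach()
        return (weights * sq_err).mean()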


Method

Overview of the Deformer Architecture. Our approach uses transformers to reason about spatial and temporal relationships between hand parts in an image sequence and outputs frame-wise hand pose and motion. To overcome the challenge of frames where the hand is heavily occluded or blurred, the Dynamic Fusion Module explicitly deforms the hand poses from neighboring frames and fuses them into a robust hand pose estimate.
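As a rough picture of the fusion step, each neighboring frame's prediction is warped toward the current timestep by a predicted deformation and then averaged with learned confidence weights. The sketch below illustrates that idea under assumed tensor shapes; poses, deltas, conf, and the softmax weighting are hypothetical and not taken from the released implementation.

    import torch

    def fuse_neighbor_predictions(poses, deltas, conf):
        """Confidence-weighted fusion of deformed neighbor predictions (sketch).

        poses:  (T, D) per-frame hand pose predictions (e.g. MANO parameters).
        deltas: (T, D) predicted deformation from each frame to the target frame.
        conf:   (T,)   predicted per-frame confidence (lower for blurred/occluded frames).
        """
        warped = poses + deltas                       # warp every prediction toward the target frame
        w = torch.softmax(conf, dim=0).unsqueeze(-1)  # (T, 1) normalized fusion weights
        return (w * warped).sum(dim=0)                # (D,) fused hand pose for the target frame

Under this picture, blurred or occluded frames receive low confidence, so the fused estimate is dominated by clearly visible frames, matching the behavior visualized in the Analysis section.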


Results

Comparison with the SOTA Video-based Method (TCMR)

Given a video (left column) where the hand is occluded or blurred in some frames, the existing state-of-the-art video-based method TCMR (middle column) fails to predict accurate hand poses. Our method (right column) captures the hand dynamics and leverages neighboring frames to robustly produce plausible hand pose estimates. For each method, we visualize the 3D hand mesh with colors indicating the Mean Per Joint Position Error (MPJPE) in millimeters (mm) w.r.t. the ground truth, where red indicates higher error and blue indicates lower error.
Input Video | Baseline (TCMR) | Ours (Deformer)
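For reference, MPJPE is the average Euclidean distance between predicted and ground-truth 3D joints. The small sketch below computes it in millimeters, assuming joint coordinates are given in meters; any root alignment usually applied before evaluation is omitted for brevity.

    import torch

    def mpjpe_mm(pred_joints, gt_joints):
        """Mean Per Joint Position Error in millimeters.

        pred_joints, gt_joints: (J, 3) 3D joint positions, assumed to be in meters.
        Root alignment (if used) should be applied before calling this function.
        """
        return (pred_joints - gt_joints).norm(dim=-1).mean() * 1000.0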

Qualitative Results on the HO3D Dataset

Qualitative results on the HO3D dataset. As the HO3D test set annotations are not publicly released, we only show the predicted 3D hand meshes for exemplary test sequences. Despite heavy hand-object occlusions, our method still generates stable and plausible 3D hand pose estimates.

Qualitative Results on the DexYCB Dataset

Qualitative results on the DexYCB dataset. Our method can generalize to different hand shapes and poses, even under diverse and complicated hand-object interactions.


Analysis

We visualize the scatter plot and the mean ± standard deviation of MPJPE on DexYCB test samples grouped by hand-object occlusion level. Compared to the baseline, our method significantly reduces the hand pose estimation error across all occlusion levels, especially when the hand is heavily occluded.
We visualize the confidence scores predicted by the Dynamic Fusion Module. The model implicitly learns visual accountability and assigns lower confidence to frames where the hand is blurred or occluded.


Paper and Supplementary Material

Qichen Fu, Xingyu Liu, Ran Xu, Juan Carlos Niebles, Kris M. Kitani
Deformer: Dynamic Fusion Transformer for Robust Hand Pose Estimation
(hosted on arXiv)

[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.