Despite the recent strides in video generation, state-of-the-art methods still struggle with elements of visual detail. One particularly challenging case is the class of egocentric instructional videos, in which the intricate motion of the hand, coupled with a mostly stable and non-distracting environment, is necessary to convey the appropriate visual action instruction. To address these challenges, we introduce a new method for instructional video generation. Our diffusion-based method incorporates two distinct innovations. First, we propose an automatic method to generate the expected region of motion, guided by both the visual context and the action text. Second, we introduce a critical hand structure loss to guide the diffusion model toward smooth and consistent hand poses. We evaluate our method on augmented instructional datasets based on EpicKitchens and Ego4D, demonstrating significant improvements over state-of-the-art methods in instructional clarity, especially of the hand motion in the target region, across diverse environments and actions.
Illustration of our proposed Hand-Centric Text-and-Image Conditioned Video Generation (HANDI). Given a context image and an action text prompt, our method generates video frames with precisely refined hand appearance and motion, overcoming challenges such as implausible motion in the background. Unlike baselines, which spread unnecessary motion into the background and produce rough hand structure, our approach confines motion to a more plausible region, as shown in the motion-flow visualization and the refined hand details.
Our method addresses image-and-text-to-video generation for instructional content with a two-stage, backbone-shared approach. In Stage 1, the model automatically predicts the Region of Motion (RoM), the spatial area of the input image where task-relevant motion should occur. In Stage 2, conditioned on this RoM, the model generates instructional video frames that focus on the action and avoid distraction from cluttered backgrounds. In addition, we introduce a hand structure loss that encourages accurate and precise hand motion, which is critical for capturing subtle, task-specific fingertip movements and improves the quality and clarity of the generated instructional videos.
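The overview above describes the two components only at a high level. As a minimal, non-authoritative sketch, the PyTorch-style snippet below illustrates one plausible form of (a) the backbone-shared two-stage inference and (b) a keypoint-based hand structure loss; the differentiable hand-pose estimator, the `predict_rom`/`generate_video` method names, and the optional RoM masking are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def hand_structure_loss(pred_frames, target_frames, hand_detector, rom_mask=None):
    """Sketch of a hand structure loss (assumption, not the paper's exact formulation).

    Penalizes discrepancies between hand keypoints estimated on generated frames
    and on reference frames. `hand_detector` is a hypothetical differentiable
    hand-pose estimator mapping frames (B, T, C, H, W) -> keypoints (B, T, K, 2).
    """
    pred_kpts = hand_detector(pred_frames)      # (B, T, K, 2)
    target_kpts = hand_detector(target_frames)  # (B, T, K, 2)
    loss = F.smooth_l1_loss(pred_kpts, target_kpts, reduction="none")
    if rom_mask is not None:
        # Optionally weight keypoints by whether they fall inside the predicted
        # Region of Motion; rom_mask is assumed to be (B, T, K) in [0, 1].
        loss = loss * rom_mask.unsqueeze(-1)
    return loss.mean()


def two_stage_generation(model, context_image, action_text):
    """Sketch of backbone-shared two-stage inference.

    Stage 1 predicts a Region-of-Motion mask from the context image and the
    action text; Stage 2 generates video frames conditioned on that mask.
    `predict_rom` and `generate_video` are hypothetical method names.
    """
    with torch.no_grad():
        rom_mask = model.predict_rom(context_image, action_text)   # (1, H, W) in [0, 1]
        frames = model.generate_video(
            context_image, action_text, rom_mask=rom_mask
        )                                                          # (T, C, H, W)
    return rom_mask, frames
```

In this sketch, sharing the backbone between the RoM predictor and the video generator means the same conditioning pathway (image plus action text) serves both stages, with the RoM mask acting as an extra spatial condition in Stage 2.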
Quantitative results on EpicKitchens, Ego4D, and a Motion-Intensive subset of EpicKitchens. Our method outperforms all baselines across all metrics on at least one dataset.
@article{li2024handi,
  title={HANDI: Hand-Centric Text-and-Image Conditioned Video Generation},
  author={Li, Yayuan and Cao, Zhi and Corso, Jason J},
  journal={arXiv preprint arXiv:2412.04189},
  year={2024}
}