AID: Adapting Image2Video Diffusion Models for Instruction-guided Video Prediction

Zhen Xing 1 Qi Dai 2 Zejia Weng 1 Zuxuan Wu 1 Yu-Gang Jiang 1

 1 Fudan University  2 Microsoft Research Asia


   

Abstract

Text-guided video prediction (TVP) involves predicting the motion of future frames from the initial frame according to an instruction, which has wide applications in virtual reality, robotics, and content creation. Previous TVP methods made significant breakthroughs by adapting Stable Diffusion for this task. However, they struggle with frame consistency and temporal stability, primarily due to the limited scale of video datasets. We observe that pretrained Image2Video diffusion models possess good priors for video dynamics but lack textual control. Transferring Image2Video models to leverage their video-dynamics priors while injecting instruction control is therefore both a meaningful and challenging task. To achieve this, we introduce a Multi-Modal Large Language Model (MLLM) to predict future video states based on initial frames and text instructions. More specifically, we design a dual query transformer (DQFormer) architecture, which integrates the instructions and frames into the conditional embeddings for future frame prediction. Additionally, we develop Long-Short Term Temporal Adapters and Spatial Adapters that can quickly transfer general video diffusion models to specific scenarios with minimal training costs. Experimental results show that our method significantly outperforms state-of-the-art techniques on four datasets: Something-Something V2, Epic-Kitchens-100, BridgeData, and UCF-101. Notably, AID achieves 91.2% and 55.5% FVD improvements on BridgeData and SSv2, respectively, demonstrating its effectiveness across domains.
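The abstract describes DQFormer as fusing the text instruction and the initial frames into conditional embeddings for the diffusion model. The snippet below is a minimal PyTorch sketch of that dual-query idea, written only to illustrate the interface: two sets of learnable queries separately cross-attend to text-token and frame-token embeddings and are then fused into conditioning tokens. All module names, dimensions, and the fusion layer are illustrative assumptions, not the released AID implementation.

# Hypothetical sketch of a dual-query conditioning module; not the authors' code.
import torch
import torch.nn as nn

class DualQueryConditioner(nn.Module):
    def __init__(self, dim=1024, num_queries=64, num_heads=8):
        super().__init__()
        # Learnable query tokens for the text branch and the frame branch.
        self.text_queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.frame_queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.text_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.frame_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Sequential(nn.LayerNorm(2 * dim), nn.Linear(2 * dim, dim))

    def forward(self, text_emb, frame_emb):
        # text_emb:  (B, L_text, dim) instruction tokens from a text/MLLM encoder
        # frame_emb: (B, L_img, dim)  patch tokens of the initial frame(s)
        b = text_emb.size(0)
        tq = self.text_queries.expand(b, -1, -1)
        fq = self.frame_queries.expand(b, -1, -1)
        t_tokens, _ = self.text_attn(tq, text_emb, text_emb)    # queries attend to text
        f_tokens, _ = self.frame_attn(fq, frame_emb, frame_emb)  # queries attend to frames
        # Concatenate the two query streams and project back to the model width.
        return self.fuse(torch.cat([t_tokens, f_tokens], dim=-1))

# Example with dummy tensors: the output (B, num_queries, dim) would serve as the
# conditional embedding consumed by the video diffusion U-Net's cross-attention.
cond = DualQueryConditioner()(torch.randn(2, 77, 1024), torch.randn(2, 256, 1024))
print(cond.shape)  # torch.Size([2, 64, 1024])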



Method

The pipeline consists of a 3D U-Net for diffusion and a DQFormer for text conditioning. The parameters of the original 3D U-Net are frozen; during training we fine-tune only the parameters of the newly added adapters. A minimal sketch of this training setup is shown below.
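As a concrete illustration of this setup, the following PyTorch sketch freezes every parameter of a pretrained 3D U-Net and leaves only newly inserted adapter modules trainable. The bottleneck adapter here is a generic stand-in for the spatial and long-short term temporal adapters; all class and function names are hypothetical placeholders, not taken from the AID codebase.

# Hypothetical sketch of parameter-efficient adapter tuning; not the released code.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Residual bottleneck adapter inserted after a frozen U-Net block."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # zero-init so the adapter starts as identity
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

def prepare_for_adapter_tuning(unet: nn.Module) -> list:
    """Freeze every original parameter; keep only adapter parameters trainable."""
    for p in unet.parameters():
        p.requires_grad_(False)
    trainable = []
    for module in unet.modules():
        if isinstance(module, BottleneckAdapter):
            for p in module.parameters():
                p.requires_grad_(True)
                trainable.append(p)
    return trainable

# Usage (assuming a U-Net into which adapters have already been inserted):
# trainable_params = prepare_for_adapter_tuning(unet_with_adapters)
# optimizer = torch.optim.AdamW(trainable_params, lr=1e-4)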







Text-conditioned Video Prediction (SSv2)

Each example shows the input frames, the text instruction, the real video, and the video synthesized by AID (ours). Text instructions:

"Lifting flashlight up completely without letting it drop down"
"Moving camera closer to dry erase board"
"Taking remote out of pen stand"
"Moving glue stick and diskette away from each other"
"Showing lemon to the camera"
"Moving charger up"
"Pretending or failing to wipe ink off of a dry erase board"

Text-conditioned Video Prediction (BridgeData)

Each example shows the input frames, the text instruction, the real video, AID (ours), and the Seer baseline. Text instructions:

"Put can in pot"
"Open brown1fbox flap" (two examples)

Text-conditioned Video Prediction (Epic-Kitchens-100)

Each example shows the input frames, the text instruction, the real video, AID (ours), and the Seer baseline. Text instructions:

"Open fridge"
"Wash meat"
"Put plate on counter"
"Continue stirring food"

Text-conditioned Video Prediction (UCF-101)

Each example shows the input frames, the text instruction, the real video, and AID (ours). Text instructions:

"A person is blow-drying hair"
"A person is applying eye makeup"
"A person is doing push-ups"

Bibtex

@article{AID,
  title={AID: Adapting Image2Video Diffusion Models for Instruction-guided Video Prediction},
  author={Zhen Xing and Qi Dai and Zejia Weng and Zuxuan Wu and Yu-Gang Jiang},
  journal={arXiv preprint arXiv:2406.06465},
  year={2024}
}