DreamFrame: Enhancing Video Understanding via Automatically Generated QA and Style-Consistent Keyframes

1Embedded DL Lab, Fudan University 2AGI Lab, Westlake University
(✦ Corresponding Author)

Consistent Key Frames From DreamFrame

DreamFrame generates consistent key frames with an immobilized style across various scenes.

Examples of generated video instruction data. We use GPT-4 and guided text-to-image generation models to generate consistent key frames of movie-level videos with coherent storylines, together with corresponding question-answer pairs. These data are used to train multimodal large language models for video understanding.

Abstract

Recent large vision-language models (LVLMs) for video understanding are primarily fine-tuned on videos scraped from online platforms. Existing datasets, such as ActivityNet, require considerable human labor for structuring and annotation before they can be effectively used to tune LVLMs. Moreover, while current LVLMs are primarily trained on such datasets in broad, general-purpose settings, adapting them to specific downstream scenarios remains challenging, as collecting and annotating task-specific videos is highly labor-intensive and time-consuming. To address this issue, we propose a three-stage framework named DreamFrame for automatically generating style-consistent keyframes and corresponding question-answer (QA) pairs to support LVLM instruction tuning. DreamFrame generates datasets in a movie-like manner. First, we use an LLM to generate a structured movie plot, including movie prior information (such as an overview and style), frame descriptions, and plot-related QA pairs, with a story expansion strategy to mitigate context length limitations. Then, to ensure visual consistency across generated frames, we design a Style Immobilization Process that maintains a consistent style through an embedding learning strategy. Finally, the frame descriptions and style embeddings are integrated to produce coherent keyframes. Using DreamFrame, we construct a dataset comprising approximately 1k stylized keyframe-like videos and 100k diverse QA pairs. Extensive fine-tuning experiments on various LVLM architectures demonstrate the effectiveness of the proposed dataset. Furthermore, based on the proposed dataset, we fine-tune a new LVLM named DreamFrame-7B, which significantly surpasses previous similar-sized LVLMs across different benchmarks (+2.2 over Video-LLaVA-7B on MVBench).
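To make the first stage concrete, the sketch below shows how an LLM could be prompted to produce movie prior information, per-act frame descriptions, and QA pairs from a single theme phrase, expanding the story act by act so each call stays within the context window. The prompt wording, JSON schema, and model name are illustrative assumptions, not DreamFrame's exact prompts.

```python
# Minimal sketch of stage (a): LLM-driven movie plot and QA generation with
# a story expansion strategy. Prompt wording, JSON fields, and the model
# name are illustrative assumptions, not DreamFrame's exact prompts.
import json
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def generate_movie(theme: str, num_acts: int = 4) -> dict:
    # 1) Movie prior information: an overview and visual style keywords.
    movie = json.loads(ask(
        f"Write a short movie overview and 3-5 visual style keywords for the "
        f"theme '{theme}'. Return JSON with keys 'overview' and 'style'."
    ))

    # 2) Story expansion: generate the plot act by act so that no single
    #    call exceeds the LLM's context window.
    movie["acts"], summary = [], movie["overview"]
    for i in range(num_acts):
        act = json.loads(ask(
            f"Story so far: {summary}\n"
            f"Write act {i + 1} of {num_acts}: 5 key-frame descriptions and "
            f"3 question-answer pairs about this act. "
            f"Return JSON with keys 'frames' and 'qa_pairs'."
        ))
        movie["acts"].append(act)
        summary += " " + " ".join(act["frames"])  # carry a running summary forward

    return movie
```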

Pipeline

Our proposed pipeline for generating video instruction tuning datasets. From merely a simple thematic description, our pipeline can generate the key frames of an entire film. The pipeline can be roughly divided into three stages: (a) movie plot generation, where we generate the whole movie plot based on a theme phrase; (b) the style immobilization process, where we learn a style embedding that immobilizes the style-related keywords generated from the plot into the latent space of the diffusion model, guiding it to generate frames with a fixed style; and (c) video instruction data generation, where we integrate all the previously obtained information to produce the final consistent key frames.
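The following sketch, built on Hugging Face diffusers, illustrates how a style embedding learned in stage (b) (e.g., via a textual-inversion-style procedure) could be attached to a text-to-image pipeline and combined with the frame descriptions from stage (a) to render style-consistent keyframes in stage (c). The embedding path, placeholder token, and base model are assumptions for illustration, not the exact artifacts released with DreamFrame.

```python
# Minimal sketch of stages (b)-(c): reuse a learned style embedding to
# render style-consistent keyframes. The embedding file, placeholder
# token, and base model below are illustrative assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the style embedding produced by the Style Immobilization Process
# (learned once per movie, e.g. with a textual-inversion-like objective).
pipe.load_textual_inversion("style_embeddings/noir_city.bin", token="<movie-style>")

frame_descriptions = [
    "a detective steps out of a rain-soaked taxi at night",
    "the detective studies a wall of photographs in a dim office",
]

# Appending the same learned token to every prompt keeps the visual style
# fixed while the content follows each frame description.
for i, desc in enumerate(frame_descriptions):
    image = pipe(f"{desc}, in the style of <movie-style>",
                 num_inference_steps=30).images[0]
    image.save(f"keyframe_{i:03d}.png")
```

Because every frame description is conditioned on the same frozen style token, the diffusion model varies the scene content while the overall look stays consistent across the generated keyframes.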

More Results