DiCoDe: Diffusion-Compressed Deep Tokens for Autoregressive Video Generation with Language Models

1The University of Hong Kong, 2ARC Lab, Tencent PCG

By leveraging the prior knowledge of video diffusion models, DiCoDe compresses a 2-second, 16-frame video clip into just 16 tokens. Despite this extremely high compression ratio, DiCoDe reconstructs the video clips with minimal degradation.

Abstract

Videos are inherently temporal sequences. In this work, we explore the potential of modeling videos in a chronological and scalable manner with autoregressive (AR) language models, inspired by their success in natural language processing. We introduce DiCoDe, a novel approach that leverages Diffusion-Compressed Deep Tokens to generate videos with a language model in an autoregressive manner.

Unlike existing methods that employ low-level representations with limited compression rates, DiCoDe utilizes deep tokens with a considerable compression rate (a 1000× reduction in token count). This significant compression is made possible by a tokenizer trained by leveraging the prior knowledge of video diffusion models.
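To put the 1000× figure in perspective, the following back-of-the-envelope sketch compares DiCoDe's 16 deep tokens per 2-second clip against a conventional low-level tokenizer. The 32×32 per-frame latent grid is an illustrative assumption for the baseline, not the paper's exact setup:

```python
# Hypothetical token-count comparison for a 2-second, 16-frame clip.
# A conventional low-level tokenizer (e.g. a frame-wise VQ model with
# a 32x32 latent grid per frame -- an assumed figure for illustration)
# emits one token per latent cell.
frames = 16
latent_grid = 32 * 32            # assumed per-frame latent resolution
low_level_tokens = frames * latent_grid

deep_tokens = 16                 # DiCoDe's reported token count per clip

reduction = low_level_tokens / deep_tokens
print(low_level_tokens, deep_tokens, reduction)  # 16384 16 1024.0
```

Under this assumed baseline, the sequence a language model must predict shrinks by roughly three orders of magnitude, which is what makes vanilla AR training over long videos tractable.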

Deep tokens enable DiCoDe to employ vanilla AR language models for video generation, akin to translating one visual "language" into another. By treating videos as temporal sequences, DiCoDe fully harnesses the capabilities of language models for autoregressive generation. DiCoDe scales with readily available AR architectures and can generate videos ranging from a few seconds to one minute while training on only 4 A100 GPUs.

We evaluate DiCoDe both quantitatively and qualitatively, demonstrating that it performs comparably to existing methods in terms of quality while training efficiently. To showcase its scalability, we release a series of DiCoDe configurations with varying parameter sizes and observe a consistent improvement in performance as the model size increases from 100M to 3B parameters.

We believe that DiCoDe, as an academic exploration, represents a promising first step toward scalable video modeling with AR language models, paving the way for larger and more powerful video generation models.

Framework

[Figure] The overall framework of DiCoDe: a video diffusion model serves as the tokenizer to extract highly compressed deep tokens, and an autoregressive language model predicts the sequence of deep tokens by modeling their distributions.
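The two-module design described above can be sketched as a simple autoregressive rollout over clips. Everything here is a stand-in: the token dimensionality, frame resolution, and module interfaces are illustrative assumptions, not the released implementation:

```python
import numpy as np

# Sketch of the DiCoDe pipeline: a diffusion-based tokenizer maps each
# 16-frame clip to 16 deep tokens, and an AR model predicts the next
# clip's tokens from all tokens so far. All shapes are assumptions.
rng = np.random.default_rng(0)

def diffusion_tokenizer_encode(clip):
    """Stand-in for the tokenizer: 16-frame clip -> 16 deep tokens."""
    return rng.standard_normal((16, 256))   # (tokens, dim) -- assumed dim

def ar_language_model(prev_tokens):
    """Stand-in for the AR model: predicts the next clip's 16 tokens
    conditioned on all previously generated deep tokens."""
    return rng.standard_normal((16, 256))

def diffusion_tokenizer_decode(tokens):
    """Stand-in for the decoder: renders deep tokens back to frames."""
    return rng.standard_normal((16, 64, 64, 3))  # (frames, H, W, C)

# Autoregressive rollout: each 2-second clip is one generation step.
context = diffusion_tokenizer_encode(rng.standard_normal((16, 64, 64, 3)))
video = []
for _ in range(3):                          # generate 3 more clips (~6 s)
    next_tokens = ar_language_model(context)
    video.append(diffusion_tokenizer_decode(next_tokens))
    context = np.concatenate([context, next_tokens], axis=0)

print(len(video), context.shape)  # 3 (64, 256)
```

The key design point is that the AR model never sees pixels: it only models the distribution of compact deep tokens, while the diffusion model handles encoding and rendering at both ends.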

Text to Video Generation

Long Video Generation

BibTeX

@misc{li2024dicodediffusioncompresseddeeptokens,
  title={DiCoDe: Diffusion-Compressed Deep Tokens for Autoregressive Video Generation with Language Models},
  author={Yizhuo Li and Yuying Ge and Yixiao Ge and Ping Luo and Ying Shan},
  year={2024},
  eprint={2412.04446},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2412.04446},
}