🚀 DiCache: Let Diffusion Model Determine Its Own Cache

Jiazi Bu1,6* Pengyang Ling2,6* Yujie Zhou1,6* Yibin Wang3,7

Yuhang Zang6 Tong Wu4 Dahua Lin5,6,8 Jiaqi Wang6,7†

1 Shanghai Jiao Tong University    2 University of Science and Technology of China    3 Fudan University
4 Stanford University    5 The Chinese University of Hong Kong    6 Shanghai AI Laboratory
7 Shanghai Innovation Institute    8 CPII under InnoHK
(* Equal Contribution   † Corresponding Author)

[Paper]      [Code]


Abstract

Recent years have witnessed the rapid development of acceleration techniques for diffusion models, especially caching-based acceleration methods. These studies seek to answer two fundamental questions: "When to cache" and "How to use the cache", typically relying on predefined empirical laws or dataset-level priors to determine the timing of caching, and on handcrafted rules for leveraging multi-step caches. However, given the highly dynamic nature of the diffusion process, such methods often exhibit limited generalizability and fail on outlier samples. In this paper, we reveal a strong correlation between the variation patterns of shallow-layer feature differences in the diffusion model and those of the final model outputs. Moreover, we observe that features from different model layers form similar trajectories. Based on these observations, we present DiCache, a novel training-free adaptive caching strategy for accelerating diffusion models at runtime, answering both when and how to cache within a unified framework. Specifically, DiCache is composed of two principal components: (1) the Online Probe Profiling Scheme, which leverages a shallow-layer online probe to obtain a stable prior on the caching error in real time, enabling the model to autonomously determine its caching schedule; and (2) Dynamic Cache Trajectory Alignment, which combines multi-step caches based on the shallow-layer probe feature trajectory to better approximate the current feature, yielding higher visual quality. Extensive experiments validate that DiCache achieves higher efficiency and improved visual fidelity over state-of-the-art methods on various leading diffusion models, including WAN 2.1 and HunyuanVideo for video generation, and Flux for image generation. Our code is available at DiCache Repo.

Motivation

In this paper, we uncover two key correlations regarding diffusion transformers (DiT):
(1) Shallow-layer feature differences of diffusion models exhibit dynamics highly correlated with those of the final output, enabling them to serve as an accurate proxy for model output evolution. Since the optimal moment to reuse cached features is governed by the difference between model outputs at consecutive timesteps, it is possible to employ an online shallow-layer probe to efficiently obtain a prior of output changes at runtime, thereby adaptively adjusting the caching strategy.

(2) Features from different DiT blocks form similar trajectories, which allows for dynamic combination of multi-step caches based on the shallow-layer probe information, facilitating better approximation of the current feature.
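Observation (1) can be illustrated with a minimal toy sketch. All names here (`relative_change`, the synthetic trajectories) are hypothetical and only demonstrate the idea: when probe and output features evolve along proportional paths, the cheap shallow-probe difference curve tracks the expensive full-output difference curve almost perfectly, so the probe can stand in as a runtime proxy.

```python
import numpy as np

def relative_change(prev, curr):
    """Mean L1 difference between consecutive features, relative to magnitude."""
    return np.abs(curr - prev).mean() / (np.abs(prev).mean() + 1e-8)

# Toy trajectories: a shallow-probe feature and a final output that
# evolve together, mimicking the correlation DiCache exploits.
rng = np.random.default_rng(0)
base = rng.standard_normal(64)
probe_traj = [base * (1.0 + 0.1 * t) for t in range(10)]
output_traj = [base * (2.0 + 0.2 * t) for t in range(10)]

probe_diffs = [relative_change(a, b) for a, b in zip(probe_traj, probe_traj[1:])]
output_diffs = [relative_change(a, b) for a, b in zip(output_traj, output_traj[1:])]

# The two step-to-step difference curves are (near-)perfectly correlated,
# so monitoring the probe suffices to anticipate output changes.
corr = np.corrcoef(probe_diffs, output_diffs)[0, 1]
```

In a real DiT, the probe is the output of the first few transformer blocks, which costs only a small fraction of a full forward pass.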

Methodology

DiCache consists of an Online Probe Profiling Scheme and Dynamic Cache Trajectory Alignment. The former dynamically determines the caching timing with an online shallow-layer probe at runtime, while the latter combines multi-step caches based on the probe feature trajectory to adaptively approximate the feature at the current timestep. By integrating these two techniques, DiCache intrinsically answers when and how to cache in a unified framework.
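The two components can be sketched as follows. This is a simplified illustration, not the authors' implementation: `should_reuse_cache` stands in for the probe-profiling decision (reuse the cache when the probe barely changes between steps), and `align_cache` stands in for one plausible reading of trajectory alignment, where a least-squares weight fitted on the cheap probe trajectory is transferred to the expensive deep-feature caches, relying on the similar-trajectory observation. All function names and the threshold `tau` are assumptions.

```python
import numpy as np

def should_reuse_cache(probe_prev, probe_now, tau=0.05):
    """Probe profiling (sketch): skip the deep blocks and reuse the
    cache when the shallow probe changed by less than `tau`."""
    change = np.abs(probe_now - probe_prev).mean() / (np.abs(probe_prev).mean() + 1e-8)
    return change < tau

def align_cache(probe_now, probe_c1, probe_c2, cache_c1, cache_c2):
    """Trajectory alignment (sketch): fit alpha on the probe trajectory with
    probe_now ≈ alpha * probe_c1 + (1 - alpha) * probe_c2, then apply the
    same alpha to combine the two cached deep features."""
    d = probe_c1 - probe_c2
    alpha = float(np.dot(probe_now - probe_c2, d) / (np.dot(d, d) + 1e-8))
    return alpha * cache_c1 + (1.0 - alpha) * cache_c2
```

For example, if the deep features follow the same trajectory shape as the probes (here, a toy factor of 3), the weight fitted on the probes reconstructs the current deep feature from two older caches:

```python
p1, p2 = np.ones(8), np.arange(8, dtype=float)
p_now = 0.3 * p1 + 0.7 * p2
approx = align_cache(p_now, p1, p2, 3 * p1, 3 * p2)  # ≈ 3 * p_now
```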

Qualitative Comparison

Qualitative comparisons with existing caching-based methods. DiCache consistently outperforms the baselines in terms of both visual quality and similarity to the original results across diverse scenarios and generation backbones.

Quantitative Evaluation

Quantitative assessments of the proposed DiCache and other baselines. Unlike existing methods, DiCache dynamically determines its caching timings and effectively utilizes multi-step caches based on online probes, achieving a unification of rapid inference speed and high visual fidelity. "OOM" indicates CUDA out of memory on the A800 80GB GPU.

BibTex

If you find this work helpful, please cite the following paper:

    @article{bu2025dicache,
      title={DiCache: Let Diffusion Model Determine Its Own Cache},
      author={Bu, Jiazi and Ling, Pengyang and Zhou, Yujie and Wang, Yibin and Zang, Yuhang and Wu, Tong and Lin, Dahua and Wang, Jiaqi},
      journal={arXiv preprint arXiv:2508.17356},
      year={2025}
    }
  

Project page template is borrowed from FreeScale.