View-Consistent Diffusion Representations for 3D-Consistent Video Generation

1University of Edinburgh, 2University of Bristol
[TL;DR]
  • We uncover a strong correlation between 3D consistency of video generation and multi-view consistency of video diffusion representations.
  • We propose ViCoDR, a method for improving 3D consistency of video generation via multi-view consistent diffusion representations.


CameraCtrl shows significant visual artifacts on the front wheel and frame of the bike. In contrast, when trained with our method ViCoDR, its output is more 3D-consistent, as shown by the MEt3R reprojection error map.

Abstract

Video generation models have made significant progress in generating realistic content, enabling applications in simulation, gaming, and filmmaking. However, current generated videos still contain visual artifacts arising from 3D inconsistencies, e.g., objects and structures deforming under changes in camera pose, which can undermine user experience and simulation fidelity. Motivated by recent findings on representation alignment for diffusion models, we hypothesize that improving the multi-view consistency of video diffusion representations will yield more 3D-consistent video generation. Through detailed analysis of multiple recent camera-controlled video diffusion models, we reveal strong correlations between 3D-consistent representations and 3D-consistent videos. We also propose ViCoDR, a new approach for improving the 3D consistency of video models by learning multi-view consistent diffusion representations. We evaluate ViCoDR on camera-controlled image-to-video, text-to-video, and multi-view generation models, demonstrating significant improvements in the 3D consistency of the generated videos.

Analysis of 3D Consistency

We analyze seven recent camera-controlled video diffusion models and observe a strong correlation between the 3D consistency of the generated videos (measured by MEt3R) and the multi-view consistency of the video diffusion model (VDM) representations (measured by geometric correspondence).
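As a minimal sketch of what "view consistency of representations via geometric correspondence" can mean, the function below scores how similar a model's features are at geometrically corresponding pixels of two frames. The function name, the `(H, W, C)` feature-map layout, and the correspondence format are our own assumptions for illustration; the paper's exact protocol and MEt3R's reprojection-based metric differ.

```python
import numpy as np

def view_consistency(feat_a, feat_b, corr):
    """Mean cosine similarity of features at corresponding pixels.

    feat_a, feat_b : (H, W, C) feature maps extracted from two frames
                     of the same scene (hypothetical extraction step).
    corr           : (N, 4) int array of matched pixel coordinates
                     (ya, xa, yb, xb), e.g. obtained by reprojecting
                     frame-a pixels into frame b using known depth and
                     camera poses (assumed available).
    Returns a scalar in [-1, 1]; higher = more view-consistent features.
    """
    fa = feat_a[corr[:, 0], corr[:, 1]]  # (N, C) features in frame a
    fb = feat_b[corr[:, 2], corr[:, 3]]  # (N, C) features in frame b
    # L2-normalize so the dot product is a cosine similarity
    fa = fa / np.linalg.norm(fa, axis=1, keepdims=True)
    fb = fb / np.linalg.norm(fb, axis=1, keepdims=True)
    return float((fa * fb).sum(axis=1).mean())
```

A perfectly view-consistent representation scores 1.0 when compared against itself under identity correspondences, which makes the metric easy to sanity-check.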

ViCoDR

During video diffusion training, our proposed method ViCoDR additionally supervises internal diffusion representations extracted from frame pairs with a 3D correspondence loss, encouraging the model to learn view-consistent representations.
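To make the training setup concrete, here is a hedged sketch of one plausible form of such a correspondence loss: pull features at corresponding pixels of a frame pair together, and add that term to the usual diffusion objective with a weight. The loss form (1 minus cosine similarity), the weight `lam`, and the data layout are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def correspondence_loss(feat_a, feat_b, corr):
    """1 - cosine similarity at corresponding pixels (illustrative form).

    feat_a, feat_b : (H, W, C) internal diffusion features of two frames.
    corr           : (N, 4) matched pixel coords (ya, xa, yb, xb).
    """
    fa = feat_a[corr[:, 0], corr[:, 1]]
    fb = feat_b[corr[:, 2], corr[:, 3]]
    fa = fa / np.linalg.norm(fa, axis=1, keepdims=True)
    fb = fb / np.linalg.norm(fb, axis=1, keepdims=True)
    # 0 when corresponding features align, up to 2 when they oppose
    return float((1.0 - (fa * fb).sum(axis=1)).mean())

def total_loss(diffusion_loss, feat_a, feat_b, corr, lam=0.1):
    """Diffusion objective plus weighted correspondence term (sketch)."""
    return diffusion_loss + lam * correspondence_loss(feat_a, feat_b, corr)
```

In an actual training loop the features would be intermediate activations of the VDM and the loss would be backpropagated jointly with the denoising objective; this numpy version only illustrates the shape of the supervision signal.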

Video Demo

BibTeX


@article{danier2025view,
  author  = {Danier, Duolikun and Gao, Ge and McDonagh, Steven and Li, Changjian and Bilen, Hakan and Mac Aodha, Oisin},
  title   = {View-Consistent Diffusion Representations for 3D-Consistent Video Generation},
  journal = {arXiv preprint arXiv:2511.18991},
  year    = {2025},
}

Acknowledgments

Funding was provided by ELIAI (the Edinburgh Laboratory for Integrated Artificial Intelligence) and EPSRC (grant no. EP/W002876/1).