Analysis of 3D Consistency
Video generation models have made significant progress in generating realistic content, enabling applications in simulation, gaming, and filmmaking. However, current generated videos still contain visual artifacts arising from 3D inconsistencies, e.g., objects and structures deforming under changes in camera pose, which can undermine user experience and simulation fidelity. Motivated by recent findings on representation alignment for diffusion models, we hypothesize that improving the multi-view consistency of video diffusion representations will yield more 3D-consistent video generation. Through a detailed analysis of multiple recent camera-controlled video diffusion models, we reveal a strong correlation between the view consistency of their internal representations and the 3D consistency of their generated videos. We also propose ViCoDR, a new approach for improving the 3D consistency of video models by learning multi-view consistent diffusion representations. We evaluate ViCoDR on camera-controlled image-to-video, text-to-video, and multi-view generation models, demonstrating significant improvements in the 3D consistency of the generated videos.
We analyze seven recent camera-controlled video diffusion models (VDMs) and observe a strong correlation between the 3D consistency of their generated videos (measured by MEt3R) and the view consistency of their internal representations (measured by geometric correspondence).
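To make the representation-side measurement concrete, below is a minimal sketch of a geometric-correspondence probe: nearest-neighbour feature matching between two frames is scored against ground-truth 3D correspondences. The function name, tensor shapes, and match threshold are hypothetical illustrations, not the exact protocol used in the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def correspondence_accuracy(feat_a, feat_b, pix_a, pix_b, thresh=0.05):
    """Fraction of ground-truth correspondences recovered by nearest-neighbour
    feature matching between two frames (hypothetical probe).

    feat_a, feat_b : (C, H, W) diffusion features of frames a and b.
    pix_a, pix_b   : (N, 2) normalized (x, y) coordinates in [-1, 1] of N
                     ground-truth correspondences; thresh is the tolerance
                     in the same normalized units.
    """
    C, H, W = feat_b.shape
    # Query features at the ground-truth locations in frame a.
    qa = F.grid_sample(feat_a[None], pix_a.view(1, 1, -1, 2),
                       align_corners=True).squeeze(0).squeeze(1).t()  # (N, C)
    qa = F.normalize(qa, dim=-1)
    # Dense candidate features from frame b.
    fb = F.normalize(feat_b.flatten(1).t(), dim=-1)                   # (H*W, C)
    # Nearest-neighbour match in frame b for every query from frame a.
    idx = (qa @ fb.t()).argmax(dim=-1)                                # (N,)
    ys = torch.div(idx, W, rounding_mode="floor")
    xs = idx % W
    # Convert matched pixel indices back to normalized coordinates.
    pred = torch.stack([2 * xs / (W - 1) - 1,
                        2 * ys / (H - 1) - 1], dim=-1).float()
    # A correspondence counts as recovered if the predicted match lies
    # close to the ground-truth location in frame b.
    return ((pred - pix_b).norm(dim=-1) < thresh).float().mean().item()
```

Averaging such a score over frame pairs gives a per-model measure of representation view consistency that can be plotted against MEt3R.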
During video diffusion training, our proposed method ViCoDR additionally supervises the internal diffusion representations extracted from frame pairs with a 3D correspondence loss, encouraging the model to learn view-consistent representations.
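The following is a minimal sketch of such a correspondence loss on frame-pair features, here written as an InfoNCE-style contrastive objective. The function name, feature shapes, temperature, and loss weighting are assumptions for illustration and may differ from the actual ViCoDR formulation.

```python
import torch
import torch.nn.functional as F

def correspondence_loss(feat_a, feat_b, pix_a, pix_b, temperature=0.07):
    """Pull together features at 3D-corresponding pixels of two frames
    (illustrative sketch, not the paper's exact loss).

    feat_a, feat_b : (C, H, W) internal diffusion features of frames a and b.
    pix_a, pix_b   : (N, 2) normalized (x, y) coordinates in [-1, 1] of N
                     ground-truth 3D correspondences between the frames.
    """
    # Sample per-correspondence feature vectors with bilinear interpolation.
    fa = F.grid_sample(feat_a[None], pix_a.view(1, 1, -1, 2),
                       align_corners=True)                 # (1, C, 1, N)
    fb = F.grid_sample(feat_b[None], pix_b.view(1, 1, -1, 2),
                       align_corners=True)
    fa = F.normalize(fa.squeeze(0).squeeze(1).t(), dim=-1)  # (N, C)
    fb = F.normalize(fb.squeeze(0).squeeze(1).t(), dim=-1)

    # InfoNCE-style objective: each feature in frame a should match its
    # corresponding feature in frame b better than any other sampled one.
    logits = fa @ fb.t() / temperature                      # (N, N)
    targets = torch.arange(fa.shape[0], device=fa.device)
    return F.cross_entropy(logits, targets)

# During training this term would be added to the usual diffusion objective,
# e.g. total_loss = diffusion_loss + lambda_corr * correspondence_loss(...),
# where lambda_corr is a hypothetical weighting hyperparameter.
```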
@article{danier2025view,
  author  = {Danier, Duolikun and Gao, Ge and McDonagh, Steven and Li, Changjian and Bilen, Hakan and Mac Aodha, Oisin},
  title   = {View-Consistent Diffusion Representations for 3D-Consistent Video Generation},
  journal = {arXiv preprint arXiv:2511.18991},
  year    = {2025},
}
Funding was provided by ELIAI (the Edinburgh Laboratory for Integrated Artificial Intelligence) and EPSRC (grant no. EP/W002876/1).