*equal contribution



Abstract

Traditional 3D content creation tools empower users to bring their imagination to life by giving them direct control over a scene's geometry, appearance, motion, and camera path. Creating computer-generated videos, however, is a tedious manual process, which can be automated by emerging text-to-video diffusion models. Despite great promise, video diffusion models are difficult to control, hindering users from applying their own creativity rather than amplifying it. To address this challenge, we present a novel approach that combines the controllability of dynamic 3D meshes with the expressivity and editability of emerging diffusion models. Our approach takes an animated, low-fidelity rendered mesh as input and injects the ground-truth correspondence information obtained from the dynamic mesh into various stages of a pre-trained text-to-image generation model to output high-quality and temporally consistent frames. We demonstrate our approach on various examples where motion is obtained by animating rigged assets or changing the camera path.


Method Overview

Our system takes as input a set of UV and depth maps rendered from an animated 3D scene. We use a depth-conditioned ControlNet to generate the corresponding frames, while the UV correspondences are used to preserve consistency. We initialize the noise in the UV space of each object and render it into each frame. At each diffusion step, we first apply extended attention to a set of keyframes and extract their pre- and post-attention features. The post-attention features are projected into UV space and unified. Finally, all frames are generated using a weighted combination of the output of extended attention with the keyframes' pre-attention features and the UV-composed post-attention features from the keyframes.
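
To make two of these steps concrete, below is a minimal PyTorch sketch (not the authors' implementation) of rendering a per-object UV-space noise texture into each frame and of the weighted feature combination. The tensor shapes, the use of grid_sample, and the blend weight alpha are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def render_uv_noise(uv_noise, uv_maps, mask):
        """Render a shared UV-space noise texture into every frame.

        uv_noise: (1, C, H_uv, W_uv) noise texture initialized once per object
        uv_maps:  (T, H, W, 2)       per-frame UV coordinates in [0, 1]
        mask:     (T, 1, H, W)       foreground mask of the rendered object
        """
        T = uv_maps.shape[0]
        grid = uv_maps * 2.0 - 1.0                # grid_sample expects coordinates in [-1, 1]
        texture = uv_noise.expand(T, -1, -1, -1)  # same noise texture for all frames
        # Nearest-neighbor sampling keeps each texel's noise value intact across frames.
        foreground = F.grid_sample(texture, grid, mode="nearest", align_corners=False)
        background = torch.randn_like(foreground)  # independent noise outside the object
        return mask * foreground + (1.0 - mask) * background

    def blend_features(extended_attn_out, uv_composed_out, alpha=0.5):
        """Weighted combination of the extended-attention output and the
        UV-composed keyframe features; alpha is an assumed weight."""
        return alpha * extended_attn_out + (1.0 - alpha) * uv_composed_out

    # Example: 16 frames at 64x64 latent resolution with 4-channel latents.
    uv_noise = torch.randn(1, 4, 128, 128)
    uv_maps = torch.rand(16, 64, 64, 2)
    mask = (torch.rand(16, 1, 64, 64) > 0.5).float()
    init_latents = render_uv_noise(uv_noise, uv_maps, mask)  # (16, 4, 64, 64)

Because the noise is sampled from a single texture shared across frames, corresponding surface points receive the same initial noise in every frame, which is what ties the generated frames together.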


Gallery

We show a diverse set of generated results below. The rendered UV map (left) defines the structure of the generated video clips, while a text prompt defines their style and appearance.








Rotations

Our method can perform rotations of a static object or of the camera while keeping the background constant.

Qualitative Comparisons

We compare with per-frame editing, adapted versions of the state-of-the-art video editing methods Pix2Video and TokenFlow, and the state-of-the-art video diffusion model Gen-1.

a basketball bouncing in a chamber under light

a Swarovski blue fox running

 

Bibtex


@inproceedings{cai2023genren,
    author={Cai, Shengqu and Ceylan, Duygu and Gadelha, Matheus and Huang, Chun-Hao and Wang, Tuanfeng and Wetzstein, Gordon},
    title={Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models},
    booktitle={CVPR},
    year={2024}
}