Shengqu Cai 「蔡盛曲」
I'm a first-year PhD student in Computer Science at Stanford, working in the Computational Imaging Lab,
advised by/rotating with Prof. Gordon Wetzstein and Prof. Leonidas Guibas.
I am partly supported by a Stanford School of Engineering Fellowship.
Before Stanford, I was a CS master's student at ETH Zürich
supervised by Prof. Luc Van Gool.
I obtained my Bachelor's degree in Computer Science with first-class honours from King's College London in the United Kingdom, working on information theory.
In 2022, I spent a wonderful half-year working on scene extrapolation with Eric Chan and Songyou Peng.
I started my research career back in 2021, working on single-view novel view synthesis with Anton Obukhov. I consider them my mentors and try my best to learn from them.
I am interested in solving graphics and inverse graphics tasks that are fundamentally ill-posed for traditional methods: slaying the unslayable.
I have been working primarily on neural rendering, including but not limited to
generative models, inverse rendering, unsupervised learning,
and scene representations. I like making cool theory, videos, and demos.
Email / CV / Google Scholar / Semantic Scholar / Github / Twitter / Linkedin
* This is me pre-COVID. Since then I've gained 40 pounds and lost my cool ;(
News Saga!
- 2024-02: Generative Rendering is accepted to CVPR 2024, see you in Seattle!
- 2023-09: I joined Stanford University as a PhD student in Computer Science!
- 2023-07: DiffDreamer is accepted to ICCV 2023, looking forward to Paris!
- 2023-05: I graduated from ETH Zürich!
- 2023-01: I will be working as a research intern at Adobe this summer!
- 2022-03: Pix2NeRF is accepted to CVPR 2022. First submission, first accept!
- 2022-03: Started my master's thesis at Stanford University!
Publications
* indicates equal contribution
Generative Rendering: Controllable 4D-Guided Video Generation with 2D Diffusion Models
Shengqu Cai,
Duygu Ceylan*,
Matheus Gadelha*,
Chun-Hao Paul Huang,
Tuanfeng Y. Wang,
Gordon Wetzstein
In CVPR, 2024
[Project Page][Paper]
Renders low-fidelity animated meshes directly into animations using pre-trained 2D diffusion models, without any further training or distillation.
DiffDreamer: Towards Consistent Unsupervised Single-view Scene Extrapolation with Conditional Diffusion Models
Shengqu Cai,
Eric Ryan Chan,
Songyou Peng,
Mohamad Shahbazi,
Anton Obukhov,
Luc Van Gool,
Gordon Wetzstein
In ICCV, 2023
[Project Page][Paper][Code]
A diffusion-based unsupervised framework that synthesizes novel views along a long camera trajectory flying into an input image.
Pix2NeRF: Unsupervised Conditional π-GAN for Single Image to Neural Radiance Fields Translation
Shengqu Cai,
Anton Obukhov,
Dengxin Dai,
Luc Van Gool
In CVPR, 2022
[Paper][Code]
Unsupervised single-view, NeRF-based novel view synthesis without 3D supervision, via conditional NeRF-GAN training and inversion.
Misc
Conference Review: CVPR, ICCV, ECCV, NeurIPS, ICLR, ICML, Eurographics, SIGGRAPH
Journal Review: IJCV, Computing Surveys