My research interests lie at the intersection of computer vision and computer graphics. I am particularly interested in relighting and inverse rendering.
I completed my master's in Machine Learning at the University of Tübingen, where I worked in the Autonomous Vision Group for my master's thesis, supervised by Andreas Geiger. My thesis focused on improving 3D reconstruction of static urban environments by exploring the benefits of depth information for radiance fields.
GenLit: Reformulating Single Image Relighting as Video Generation
Shrisha Bharadwaj*, Haiwen Feng*, Victoria Fernandez Abrevaya, and Michael J. Black
arXiv preprint, 2025
Manipulating the illumination within a single image is a fundamental challenge in computer vision and graphics. This problem has traditionally been addressed with inverse rendering techniques, which require explicit 3D asset reconstruction and costly ray-tracing simulations. Meanwhile, recent advances in visual foundation models suggest that a new paradigm could soon be practical: one that replaces explicit physical models with networks trained on massive amounts of image and video data. In this paper, we explore the potential of video diffusion models, in particular Stable Video Diffusion (SVD), to understand the physical world well enough to perform relighting from a single image. Specifically, we introduce GenLit, a framework that distills a graphics engine's ability to manipulate light into a video generation model, enabling users to insert and manipulate a point light in the 3D world within a given image and generate the result directly as a video sequence. We find that a model fine-tuned on only a small synthetic dataset (270 objects) generalizes to real images, enabling single-image relighting with realistic ray-tracing effects and cast shadows. These results reveal the ability of video foundation models to capture rich information about lighting, material, and shape. Our findings suggest that such models, with minimal training, can be used for physically-based rendering without explicit physical asset reconstruction or complex ray tracing. This further suggests the potential of such models for controllable and physically accurate image synthesis tasks.
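For readers curious about the general recipe behind this, here is a minimal, hypothetical sketch of fine-tuning an image-conditioned video denoiser so that point-light parameters control the generated relighting sequence. The module names, shapes, and conditioning scheme (VideoDenoiser, encode_light) are illustrative placeholders, not GenLit's actual architecture or code.

```python
# Hypothetical sketch (not the released GenLit code): fine-tune an SVD-style
# latent video denoiser conditioned on the input-image latents plus a
# point-light control signal, using synthetic relit video clips as targets.
import torch
import torch.nn as nn

class VideoDenoiser(nn.Module):
    """Stand-in for a latent video denoiser; a real model would be a 3D U-Net."""
    def __init__(self, latent_ch=4, cond_ch=4 + 6):
        super().__init__()
        # conditioning channels: encoded input frame + broadcast light parameters
        self.net = nn.Conv3d(latent_ch + cond_ch, latent_ch, 3, padding=1)

    def forward(self, noisy_latents, cond):
        return self.net(torch.cat([noisy_latents, cond], dim=1))

def encode_light(light_params, shape):
    """Broadcast (x, y, z, intensity, ...) light parameters over T x H x W."""
    b, t, h, w = shape
    return light_params.view(b, -1, 1, 1, 1).expand(b, light_params.shape[1], t, h, w)

# toy shapes: batch of 2 clips, 8 frames, 4-channel latents at 32x32
image_latents = torch.randn(2, 4, 8, 32, 32)   # encoded input image, repeated per frame
clean_latents = torch.randn(2, 4, 8, 32, 32)   # encoded ground-truth relit video
light_params  = torch.randn(2, 6)              # point-light position / intensity

model = VideoDenoiser()
opt = torch.optim.AdamW(model.parameters(), lr=1e-5)

for step in range(10):  # in practice: many steps over the small synthetic dataset
    noise = torch.randn_like(clean_latents)
    noisy = clean_latents + noise              # simplified; real diffusion uses a noise schedule
    cond = torch.cat([image_latents, encode_light(light_params, (2, 8, 32, 32))], dim=1)
    pred = model(noisy, cond)
    loss = nn.functional.mse_loss(pred, noise) # standard noise-prediction objective
    opt.zero_grad(); loss.backward(); opt.step()
```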
@article{bharadwaj2025genlit,title={{GenLit}: Reformulating Single Image Relighting as Video Generation},author={Bharadwaj*, Shrisha and Feng*, Haiwen and Abrevaya, Victoria Fernandez and Black, Michael J.},journal={arXiv preprint},year={2025},note={*Equal contribution, listed alphabetically},}
SPARK: Self-supervised Personalized Real-time Monocular Face Capture
Kelian Baert, Shrisha Bharadwaj, Fabien Castan, and 4 more authors
In SIGGRAPH Asia 2024 Conference Proceedings, Dec 2024
Feedforward monocular face capture methods seek to reconstruct posed faces from a single image of a person. Current state-of-the-art approaches can regress parametric 3D face models in real time across a wide range of identities, lighting conditions, and poses by leveraging large image datasets of human faces. These methods, however, suffer from a clear limitation: the underlying parametric face model provides only a coarse estimate of the face shape, which limits their practical applicability in tasks that require precise 3D reconstruction (aging, face swapping, digital make-up, etc.).
In this paper, we propose a method for high-precision 3D face capture that takes advantage of a collection of unconstrained videos of a subject as prior information. Our proposal builds on a two-stage approach. We start by reconstructing a detailed 3D face avatar of the person, capturing both precise geometry and appearance from the video collection. We then take the encoder from a pre-trained monocular face reconstruction method, substitute its decoder with our personalized model, and perform transfer learning on the video collection. Using our pre-estimated image formation model, we obtain a more precise self-supervision objective, enabling improved expression and pose alignment. The result is a trained encoder capable of efficiently regressing pose and expression parameters in real time from previously unseen images, which, combined with our personalized geometry model, yields more accurate and higher-fidelity mesh inference.
Through extensive qualitative and quantitative evaluation, we showcase the superiority of our final model compared to state-of-the-art baselines and demonstrate its ability to generalize to unseen poses, expressions, and lighting.
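As a rough illustration of the second (transfer-learning) stage described above, the sketch below swaps the decoder of a pretrained face-capture encoder for a frozen personalized avatar model and fine-tunes the encoder with a photometric self-supervision loss. The class names (PretrainedFaceEncoder, PersonalizedAvatar) and all shapes are hypothetical placeholders, not SPARK's actual API.

```python
# Hypothetical sketch of personalized transfer learning with photometric self-supervision.
import torch
import torch.nn as nn

class PretrainedFaceEncoder(nn.Module):
    """Stand-in for the encoder of a feedforward monocular face capture network."""
    def __init__(self, n_params=150):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 7, stride=4), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(16, n_params))
    def forward(self, img):
        return self.backbone(img)              # pose + expression parameters

class PersonalizedAvatar(nn.Module):
    """Stand-in for the per-subject geometry/appearance model from stage 1 (kept frozen)."""
    def __init__(self, n_params=150):
        super().__init__()
        self.decoder = nn.Linear(n_params, 3 * 64 * 64)
    def forward(self, params):
        # returns a rendered image of the avatar under the estimated pose/expression
        return self.decoder(params).view(-1, 3, 64, 64)

encoder = PretrainedFaceEncoder()              # initialized from a generic pretrained model
avatar  = PersonalizedAvatar().eval()          # personalized image-formation model, frozen
for p in avatar.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
frames = torch.rand(4, 3, 64, 64)              # frames from the subject's video collection

for step in range(10):
    params = encoder(frames)
    rendered = avatar(params)
    # photometric self-supervision: the personalized model provides a precise
    # reconstruction target, so the encoder learns better pose/expression alignment
    loss = nn.functional.l1_loss(rendered, frames)
    opt.zero_grad(); loss.backward(); opt.step()
```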
@inproceedings{baert2024spark,title={{SPARK}: Self-supervised Personalized Real-time Monocular Face Capture},author={Baert, Kelian and Bharadwaj, Shrisha and Castan, Fabien and Maujean, Benoit and Christie, Marc and Abrevaya, Victoria and Boukhayma, Adnane},booktitle={SIGGRAPH Asia 2024 Conference Proceedings},month=dec,year={2024},doi={10.1145/3680528.3687704},isbn={979-8-4007-1131-2/24/12},url={https://kelianb.github.io/SPARK/},}
FLARE: Fast Learning of Animatable and Relightable Mesh Avatars
Shrisha Bharadwaj, Yufeng Zheng, Otmar Hilliges, and 2 more authors
In ACM Transactions on Graphics, Dec 2023
Our goal is to efficiently learn personalized animatable 3D head avatars from videos that are geometrically accurate, realistic, relightable, and compatible with current rendering systems. While 3D meshes enable efficient processing and are highly portable, they lack realism in terms of shape and appearance. Neural representations, on the other hand, are realistic but lack compatibility and are slow to train and render. Our key insight is that it is possible to efficiently learn high-fidelity 3D mesh representations via differentiable rendering by exploiting highly-optimized methods from traditional computer graphics and approximating some of the components with neural networks. To that end, we introduce FLARE, a technique that enables the creation of animatable and relightable mesh avatars from a single monocular video. First, we learn a canonical geometry using a mesh representation, enabling efficient differentiable rasterization and straightforward animation via learned blendshapes and linear blend skinning weights. Second, we follow physically-based rendering and factor observed colors into intrinsic albedo, roughness, and a neural representation of the illumination, allowing the learned avatars to be relit in novel scenes. Since our input videos are captured on a single device with a narrow field of view, modeling the surrounding environment light is non-trivial. Based on the split-sum approximation for modeling specular reflections, we address this by approximating the pre-filtered environment map with a multi-layer perceptron (MLP) modulated by the surface roughness, eliminating the need to explicitly model the light. We demonstrate that our mesh-based avatar formulation, combined with learned deformation, material, and lighting MLPs, produces avatars with high-quality geometry and appearance, while also being efficient to train and render compared to existing approaches.
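To make the split-sum idea above concrete, here is a minimal sketch of a pre-filtered environment map replaced by a small MLP queried with the reflection direction and modulated by surface roughness. The module name, network sizes, and the plain concatenation of roughness are illustrative assumptions, not FLARE's actual implementation.

```python
# Hypothetical sketch: an MLP stands in for the pre-filtered environment map
# in the split-sum approximation, so the environment light never has to be
# modeled explicitly.
import torch
import torch.nn as nn

class PrefilteredEnvMLP(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        # input: 3D reflection direction + scalar roughness
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Softplus(),   # non-negative RGB radiance
        )

    def forward(self, refl_dir, roughness):
        # rougher surfaces should see a blurrier (lower-frequency) environment;
        # here roughness is simply concatenated so the MLP can learn that behaviour
        x = torch.cat([refl_dir, roughness], dim=-1)
        return self.mlp(x)

# toy query: per-pixel reflection directions and roughness values
refl = nn.functional.normalize(torch.randn(1024, 3), dim=-1)
rough = torch.rand(1024, 1)
env = PrefilteredEnvMLP()
specular_light = env(refl, rough)              # (1024, 3); multiplied by the BRDF term downstream
```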
@article{bharadwaj2023flare,title={{FLARE}: Fast Learning of Animatable and Relightable Mesh Avatars},author={Bharadwaj, Shrisha and Zheng, Yufeng and Hilliges, Otmar and Black, Michael J. and Abrevaya, Victoria Fernandez},journal={ACM Transactions on Graphics},volume={42},number={6},pages={204:1-204:15},article_number={204},month=dec,year={2023},doi={10.1145/3618401},url={https://dl.acm.org/doi/10.1145/3618401},}