Mono-HDR-3D: High Dynamic Range Novel View Synthesis with Single Exposure

Kaixuan Zhang1, Hu Wang2, Minxian Li1, Mingwu Ren1, Mao Ye2, Xiatian Zhu3
1Nanjing University of Science and Technology, China, 2University of Electronic Science and Technology of China, China, 3Surrey University, UK
Accepted at ICML 2025.

Overview of our proposed Mono-HDR-3D. (a) Given single-exposure LDR training images with camera poses, we learn an LDR 3D scene model (e.g., NeRF or 3DGS). (b) Importantly, this LDR model is lifted to an HDR counterpart via a camera-imaging-aware LDR-to-HDR Color Converter (L2H-CC). (c) Further, a closed loop is formed by converting HDR images back to their LDR counterparts with a latent HDR-to-LDR Color Converter (H2L-CC). This enables optimizing the HDR model even with only LDR training images, which is particularly useful when no HDR training data are available. During inference, only the HDR or LDR 3D scene model is needed; it takes a novel camera view as input and outputs the corresponding image rendering.

Abstract

High Dynamic Range Novel View Synthesis (HDR-NVS) aims to build an HDR 3D scene model from Low Dynamic Range (LDR) imagery. Typically, multiple-exposure LDR images are employed to capture a wider range of brightness levels, since a single LDR image cannot represent both the brightest and darkest regions of a scene simultaneously. While effective, this multiple-exposure approach has significant limitations, including susceptibility to motion artifacts (e.g., ghosting and blurring) and high capture and storage costs. To overcome these challenges, we introduce, for the first time, the single-exposure HDR-NVS problem, where only single-exposure LDR images are available during training. We further propose a novel approach, Mono-HDR-3D, featuring two dedicated modules grounded in LDR image formation principles: one converts LDR colors to their HDR counterparts, and the other transforms HDR images back to LDR format, enabling unsupervised learning in a closed loop. Designed as a meta-algorithm, our approach can be seamlessly integrated with existing NVS models. Extensive experiments show that Mono-HDR-3D significantly outperforms previous methods. Source code is released at https://github.com/prinasi/Mono-HDR-3D.


Method

Overview of our proposed Mono-HDR-3D.
Structure of our camera imaging aware LDR-to-HDR Color Converter (L2H-CC).
Structure of our camera imaging aware HDR-to-LDR Color Converter (H2L-CC).
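To give a feel for the closed-loop design described above, here is a minimal toy sketch (not the paper's actual learned converters): the L2H-CC role is played by a simple inverse camera response that lifts LDR colors to HDR, and the H2L-CC role by the forward response mapping HDR back to LDR, so a reconstruction loss against the input LDR image can supervise the HDR branch without any HDR ground truth. The gamma value and function names are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration of the closed loop (not the paper's networks):
# lift LDR -> HDR with an inverse camera response, map HDR -> LDR with the
# forward response, and supervise with an LDR reconstruction loss.

GAMMA = 2.2  # assumed camera response exponent (illustrative only)

def l2h_cc(ldr: np.ndarray, exposure: float = 1.0) -> np.ndarray:
    """Toy stand-in for L2H-CC: invert gamma and exposure to get HDR radiance."""
    return np.clip(ldr, 0.0, 1.0) ** GAMMA / exposure

def h2l_cc(hdr: np.ndarray, exposure: float = 1.0) -> np.ndarray:
    """Toy stand-in for H2L-CC: apply exposure and camera response to get LDR."""
    return np.clip(hdr * exposure, 0.0, 1.0) ** (1.0 / GAMMA)

rng = np.random.default_rng(0)
ldr = rng.random((4, 4, 3))        # stand-in for a rendered LDR image in [0, 1)
hdr = l2h_cc(ldr)                  # lifted HDR estimate
ldr_rec = h2l_cc(hdr)              # closed-loop LDR reconstruction
loss = float(np.mean((ldr - ldr_rec) ** 2))  # loss computable from LDR data only
```

Because the two toy converters are exact inverses here, the reconstruction loss is essentially zero; in Mono-HDR-3D the converters are learned modules, and this loss is what lets the HDR model be optimized from single-exposure LDR supervision alone.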

Results

Quantitative results on the synthetic datasets. All results are averaged over all scenes.

Quantitative results on the real datasets. We report the results averaged across all scenes and exposure times.

Visualizations

Comparison of HDR-NVS on both (a/b) synthetic and (c) real datasets.

HDR reconstruction comparison on synthetic datasets.


BibTeX

@inproceedings{zhang2025high,
  title={High Dynamic Range Novel View Synthesis with Single Exposure},
  author={Zhang, Kaixuan and Wang, Hu and Li, Minxian and Ren, Mingwu and Ye, Mao and Zhu, Xiatian},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025}
}