Image harmonization aims to adjust the appearance of the foreground so that it is compatible with the background. Without exploring the background illumination and its effects on foreground elements, existing works are incapable of generating realistic foreground shading. In this paper, we decompose the image harmonization task into two sub-problems: 1) illumination estimation of the background image, and 2) re-rendering of foreground objects under the background illumination. Before solving these two sub-problems, we first learn a shading-aware illumination descriptor via a well-designed neural rendering framework, whose key component is a shading bases module that generates multiple shading bases from the foreground image. Then we design a background illumination estimation module to extract the illumination descriptor from the background. Finally, the shading-aware illumination descriptor is used in conjunction with the neural rendering framework (SIDNet) to produce the harmonized foreground image with novel harmonized shading. Moreover, we construct a large-scale photo-realistic synthetic image harmonization dataset, IllumHarmony-Dataset, that contains numerous shading variations. Extensive experiments on both synthetic and real data demonstrate the superiority of the proposed method, especially in dealing with foreground shadings.
@article{hu2024sidnet,
title={{SIDNet}: Learning Shading-Aware Illumination Descriptor for Image Harmonization},
author={Hu, Zhongyun and Nsampi, Ntumba Elie and Wang, Xue and Wang, Qing},
journal={IEEE Transactions on Emerging Topics in Computational Intelligence},
year={2024},
volume={8},
number={2},
pages={1290--1302}
}