In image processing, computer graphics, and photography, exposure fusion is a technique for blending multiple exposures of the same scene (bracketing) into a single image. As in high dynamic range imaging (HDRI or just HDR), the goal is to capture a scene with a higher dynamic range than the camera can record in a single exposure.[1][2]
By using different exposure parameters on the same scene, a wider range of intensities can be captured and later merged into an image with better dynamic range. After correcting for small shifts that may inadvertently happen with hand-held devices, the full image can be fused in two ways:[3]
- by reconstructing the scene's original intensities (radiance) from the exposures into a true HDR image, which is then tone mapped back into the displayable range; or
- by directly merging the exposures into a single image, taking the best-exposed parts of each.
The former method assumes a linear response from the camera, which may be provided by DNG or other raw formats. Some variants can take developed images instead, but inverting the camera's response curve to reconstruct the original intensities is complicated and noisy, compromising the effective dynamic range.[5]
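As an illustration of this first route, the following is a minimal sketch using OpenCV's HDR module (the file names and exposure times are hypothetical). It recovers the camera response curve with Debevec's method, the calibration step needed when working from developed images rather than linear raw data, merges the exposures into a radiance map, and tone maps the result back to a displayable image:

```python
import cv2
import numpy as np

# Bracketed shots and their exposure times in seconds
# (file names and times are hypothetical).
images = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]
times = np.array([1 / 200, 1 / 50, 1 / 12.5], dtype=np.float32)

# Recover the camera response curve from the exposures (Debevec & Malik),
# then merge them into a single floating-point radiance map.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# Tone map the radiance map back into the displayable 8-bit range.
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```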
The latter method [Mertens–Kautz–Van Reeth (MKVr)] only cares about aligning features and taking the best parts of each exposure, either automatically (by weighting contrast, saturation, and proper exposure) or manually, so it is immune to this drawback. However, it cannot be considered a true HDR technique because no HDR image is ever created: the result looks better on ordinary displays, but its bit depth equals that of the inputs, whereas a true HDR image uses a greater bit depth to store more finely graded intensity changes.[1] This flexibility is also a strength: the same method can be extended to perform focus stacking by using contrast as the sole criterion.[7]
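A minimal sketch of this second route, using OpenCV's implementation of MKVr exposure fusion (the file names are hypothetical); the alignment step handles the small hand-held shifts mentioned above:

```python
import cv2
import numpy as np

# A bracketed exposure sequence (file names are hypothetical).
images = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]

# Correct the small shifts of hand-held shooting (median threshold bitmaps).
cv2.createAlignMTB().process(images, images)

# Mertens-Kautz-Van Reeth fusion: per-pixel weights from contrast,
# saturation, and exposure quality, blended across an image pyramid.
fused = cv2.createMergeMertens().process(images)

# The result is float32 in roughly [0, 1]; scale to 8 bits for saving.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype(np.uint8))
```

Note that no radiance map is ever built: the output has the same 8-bit depth as the inputs, consistent with the caveat above.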
In photomicrography, exposure fusion is often the only way to acquire properly exposed images from stereomicroscopes. One software solution designed for photomicrography is the HDR module for the QuickPHOTO microscopy software. It can also be combined with the Deep Focus focus-stacking module to address another limitation of stereomicroscopes: their shallow depth of field.
Similar imaging techniques are used in other fields. For example, in THz computational imaging, the weak signal of THz radiation calls for synchronous (lock-in) amplifiers coupled to a detector. The spatial distribution of THz radiation reflected from the object under study has brightness differences too large to be registered by a single ADC. To solve this problem, two ADCs with complementary sensitivity settings are connected to the synchronous amplifier, allowing two sets of data to be received simultaneously: one ADC measures weak signals down to the noise level on the periphery of the registration area, while the second, whose settings allow it to register strong signals, is used in the central areas where intense THz radiation reflected from the object prevails.[8]
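A minimal sketch of how two such ADC channels might be combined; the function name, gain ratio, and saturation threshold are all hypothetical illustration, not a published algorithm:

```python
import numpy as np

def fuse_adc(high_gain, low_gain, gain_ratio, saturation=0.95):
    """Combine two simultaneously sampled ADC channels with complementary
    sensitivities into one wide-dynamic-range signal.

    high_gain, low_gain: samples normalized to [0, 1].
    gain_ratio: sensitivity of the high-gain channel relative to the
    low-gain one (a hypothetical calibration constant).
    """
    high = np.asarray(high_gain, dtype=float)
    low = np.asarray(low_gain, dtype=float)
    # Trust the sensitive channel for weak signals; fall back to the
    # low-gain channel wherever the sensitive one saturates.
    return np.where(high < saturation, high / gain_ratio, low)

# Weak periphery resolved by the high-gain channel; the intense center,
# where that channel saturates, is taken from the low-gain channel.
high = np.array([0.02, 0.40, 1.00, 1.00])
low = np.array([0.00, 0.04, 0.30, 0.80])
print(fuse_adc(high, low, gain_ratio=10.0))
```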
HDRMerge merges raw images directly, without development; since the raw data have not been through a tone curve, it can safely assume a linear response function of the camera.