HDRI, or High Dynamic Range Imaging, is the process of combining multiple exposures of a scene (often panoramic) to increase the output image's dynamic range of color and luminance.
The goal of HDRI is to capture levels of luminance closer to what the human eye perceives. The eye works by continuously adjusting and adapting to the broad range of luminance present in a given environment.
HDRI vs Regular Images (LDR)
Most, if not all, digital photographs you encounter are saved in file formats like JPG, BMP, and PNG. These are bitmap formats and are considered LDR, or Low Dynamic Range, images.
Despite being considered low-range, bitmaps can store a lot of data. For instance, if each pixel is represented by 24 bits, then any given pixel can take one of 16,777,216 colors. While that sounds like a lot, it is actually quite small compared to the range of brightness we can actually perceive.
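The arithmetic behind that figure is simply two raised to the bit depth; a quick sketch (the `color_count` helper is illustrative, not part of any imaging library):

```python
# Colors representable at a given bit depth: 2 ** bits.
# (Illustrative helper, not part of any imaging library.)
def color_count(bits_per_pixel):
    return 2 ** bits_per_pixel

print(color_count(24))   # 16777216 -- the full 24-bit LDR palette
print(color_count(8))    # 256 shades per single 8-bit channel
```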
In contrast, HDR images can store up to 32 bits per channel as floating-point values, letting them record true scene luminance, the "dynamic range" in the name.
HDRIs are essentially snapshots of the real world. They carry detailed lighting information with which you can blend CG objects into an environment. Because their luminance is accurate, they can serve as backgrounds and drive reflections, which makes renders more immersive.
In short, an HDRI is essentially a panoramic photograph, or a series of them, that covers the entire field of view (FOV) and contains a large amount of brightness data.
Humans See in a High Dynamic Range
To illustrate this concept, let’s look at a few examples that everyone has experienced.
Assume you're in a dark house and decide to step outside. As you walk out, you're greeted by the sun, and at first you have a hard time seeing objects because the bright light washes out their details. Only once your eyes adjust can you resolve those shapes.
The same happens in reverse when you move from outside to inside. As you go from light to dark, your iris opens and the rods and cones in your eyes adapt to the new level of light.
An LDR image, like your eyes at any single moment, can't capture both the light and dark areas of a scene with the same level of clarity.
An HDRI, however, captures the environment with both the indoor and outdoor light levels intact, meaning you can see the details and luminance of both.
Why Does This Matter?
If you take a regular bitmap image and raise its brightness, the details in the highlights wash out to white. Likewise, if you lower the brightness, the dark areas crush toward black and detail is lost there.
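A minimal NumPy sketch of that clipping behavior (the pixel values here are made up for illustration):

```python
import numpy as np

# An 8-bit pixel row with shadows, midtones, and near-white highlights.
ldr = np.array([10, 120, 200, 250], dtype=np.uint8)

# Doubling exposure: highlights clip at 255 and their detail is lost.
brightened = np.clip(ldr.astype(np.int32) * 2, 0, 255).astype(np.uint8)
print(brightened)  # [ 20 240 255 255] -- 200 and 250 are now indistinguishable

# A float HDR pixel row keeps the distinction; values above 1.0 survive.
hdr = np.array([0.04, 0.47, 0.78, 0.98], dtype=np.float32)
print(hdr * 2)     # every value stays distinct
```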
When you render a 3D image, your goal is to make it look photorealistic. Getting there means understanding the many differences between photography and 3D rendering, down to the minute details.
How HDRI Improves 3D Renders
- Accurate representation of shadows and colors
- Proper contrast is maintained for low-exposure light sources
- Light source contrast is translated into surface reflections
- Every reflective surface benefits from the increased dynamic range
- Depth-of-field effects benefit from the range increase
- Sharper bokeh due to higher contrast
- Mesh lighting is improved by the increase in dynamic range
- Dynamic range remains unchanged despite gamma level modifications
- More realistic highlights when HDRIs drive mesh lights and skydomes
- HDRI helps bridge the gap between CGI and the real world because it represents light and color the way the human eye perceives them
Tonal Values and Tone Mapping
To display HDR images, which otherwise look flat on a conventional screen, renderers use tone mapping. Tone mapping is the process of adjusting the tonal values in an image so they fit within the range a digital screen can show.
Keep in mind that a similar result can be produced starting from LDR images: you take several photographs at different exposure levels, then combine them so that every part of the scene is properly exposed somewhere in the set.
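That merge step can be sketched in NumPy. The `merge_exposures` helper below is hypothetical and deliberately simplified; real mergers (e.g. OpenCV's `createMergeDebevec`) also calibrate the camera's response curve, which this sketch skips:

```python
import numpy as np

def merge_exposures(images, times):
    """Merge bracketed LDR exposures (0-255) into one HDR radiance map.

    Each pixel's radiance estimate is value / exposure_time, weighted by
    a 'hat' function that trusts mid-range pixels over clipped ones.
    (Hypothetical helper; real mergers also calibrate the camera curve.)
    """
    acc = np.zeros(np.shape(images[0]), dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, times):
        v = np.asarray(img, dtype=np.float64)
        w = 1.0 - np.abs(v / 255.0 - 0.5) * 2.0   # 0 at 0/255, 1 at mid-gray
        acc += w * (v / t)
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# Simulated bracket: one "true" radiance map shot at three shutter times.
radiance = np.array([0.5, 2.0, 8.0])
times = [1.0, 4.0, 16.0]
shots = [np.clip(radiance * 10.0 * t, 0.0, 255.0) for t in times]
hdr = merge_exposures(shots, times)
print(hdr / hdr[0])   # ratios 1:4:16 recover the scene's dynamic range
```

Each shot clips a different part of the scene, but because clipped pixels get zero weight, the merged map recovers the full range.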
This mapping is accomplished with algorithms, and the two standard families are global and local operators.
With a global operator, every pixel in the image is mapped through the same curve based on the image's overall characteristics. The pixel's position in a dark or light portion of the image is not taken into consideration. This type of tone mapping is fast, but it can leave you with a flat picture.
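As a concrete example of a global operator, Reinhard's simple curve sends every pixel through the same function; a minimal NumPy sketch:

```python
import numpy as np

def reinhard_global(luminance):
    """Reinhard's simple global operator: L_out = L / (1 + L).

    Every pixel goes through the same curve regardless of where it
    sits in the image -- fast, but local contrast can end up flat.
    """
    return luminance / (1.0 + luminance)

hdr = np.array([0.05, 0.5, 2.0, 20.0])   # scene luminances spanning 400:1
print(reinhard_global(hdr))              # all compressed into [0, 1)
```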
Local operators, on the other hand, track each pixel's position within the image's dark and light areas. Pixels are treated according to their spatial neighborhood, which preserves more detail in the image. Local tone mapping is usually the preferred option because of that added detail; however, it requires a longer render time.
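A toy local operator can be sketched by compressing each pixel against a neighborhood average instead of one global curve; this 1-D `local_tonemap` helper is illustrative only:

```python
import numpy as np

def local_tonemap(lum, radius=2):
    """Toy local operator on a 1-D luminance scanline (illustrative only).

    Each pixel is compressed against its neighborhood average (a box
    blur), so dark and bright regions each adapt to their own local
    level -- preserving detail at the cost of extra computation.
    """
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    local_avg = np.convolve(lum, kernel, mode="same")
    return lum / (lum + local_avg + 1e-8)

# A scanline half in deep shadow, half in bright light (100x apart):
scan = np.concatenate([np.full(10, 0.1), np.full(10, 10.0)])
out = local_tonemap(scan)
# Away from the boundary, both regions land near mid-gray (~0.5),
# so detail in each survives on a low-dynamic-range display.
```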