Making HDR Photos with Luminance HDR

This article describes how to create high dynamic range (HDR) photos from a set of photos of a scene taken at different exposures, and shows that it is possible to obtain very good results even with a simple consumer point-and-shoot camera. I first got into the whole concept of HDR photography when I discovered that my relatively cheap and simple point-and-shoot Panasonic DMC-FX37 has a bracketing mode, meaning it takes three successive photos: one at normal exposure, one at a preset offset below it, and one at the same offset above it. While trying this out, I figured that it should be possible to do more with the bracketed images than just pick the best one of the three. When I discovered Luminance HDR, I became very glad that this simple camera had that pretty advanced feature.

My first true HDR image

I will first explain the whole idea behind HDR photography. Next, I will describe how I use Luminance HDR (and some external editing tools) to produce very good-looking HDR photos.

Exposure and Dynamic Range

When one takes a photo with a bog-standard digital camera in automatic mode, it will try to set the exposure such that, on average, every subject within the scene is exposed properly. This mostly boils down to estimating the brightness (luminance) measured by every pixel on the image sensor, and adjusting the exposure such that the average value of all pixels lies in the middle of the brightness range that can be represented. In short, the camera will simply try to make the photo look neither under- nor overexposed.
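
As an aside, this centre-the-average metering is simple enough to express in a few lines of code. The following Python sketch is purely my own illustration (not anything from a real camera's firmware), assuming an 8-bit greyscale image stored as a NumPy array:

    import numpy as np

    def exposure_correction_ev(image):
        """Estimate the EV correction a naive averaging meter would apply.

        image: 8-bit greyscale photo as a NumPy array.
        Returns the EV adjustment: positive means 'expose brighter'.
        """
        average = image.mean()        # average brightness of all pixels
        target = 128.0                # middle of the 0..255 range
        # Exposure acts multiplicatively on brightness, so the
        # correction is expressed as a base-2 log of the ratio.
        return float(np.log2(target / max(average, 1e-6)))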

Plain simple photo

For normal scenes like the one in the above photo this works well, because the sensitivity range of the camera's CCD sensor is able to capture the relevant dynamic range of such scenes. To avoid exceeding this range, there are the usual golden rules of photography, like ensuring that bright light sources (e.g. the sun) are always behind the photographer. In cases where the intensity differences are too extreme, it may be possible to reduce the dynamic range by employing additional lighting (most often a flash) to brighten up the underexposed objects in the scene. Of course, this is a hack that alters the appearance of the scene. What if using a flash is impossible, or we want to capture a scene with a very large dynamic range without altering it?

Underexposed vs. overexposed

When photographing a scene with a dynamic range that exceeds what the sensor can capture, it is inevitable that parts of the photo will be underexposed (too dark), overexposed (too bright), or both. The photos at the right show a typical example: it is impossible to simultaneously capture the colours in the sky of this sundown and the details on the ground, because this photo was taken looking straight at the spot where the sun went down just a minute earlier. If we adjust the exposure to capture the sky, the much darker ground becomes a dark blob, and if we try to capture the ground, the sky becomes a washed-out white. If we simply let the camera do its thing, the automatic exposure will produce something in between, with both the sky washed out and the ground too dark. My first ever attempt at combining the best of the under- and overexposed photos was to simply cut the sky out of the underexposed photo and paste it into the overexposed photo. This looked vaguely OK, but it did not feel right, and it is highly impractical even when the under- and overexposed parts are not too intermingled.

The proper way to combine the information from both the under- and overexposed photos is to decide for each pixel in which of the two photos it is represented best, and then collect those pixels, with proper intensity scaling, into a single high-dynamic-range photo. This can be extended to any number of photos; the only requirement is that the exposure value (EV) of each photo must be known. If a pixel is properly exposed in multiple photos, an average can be taken to obtain an even better result. There has been a lot of research into ways of doing this kind of combining and weighting, and this has resulted in an open-source project that offers many of the resulting algorithms in a relatively easy-to-use interface. The project originally had the awkward name “Qtpfsgui” but was later renamed to Luminance HDR.
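
To make the idea concrete, here is a minimal Python sketch of such a weighted combination. It is my own simplified illustration, not Luminance HDR's actual algorithm, and it assumes a perfectly linear sensor response and greyscale images:

    import numpy as np

    def merge_to_hdr(images, exposure_times):
        """Merge differently exposed photos into one HDR radiance map.

        images: list of float arrays scaled to 0..1, one per exposure.
        exposure_times: the exposure time of each photo (derived from
        its EV value). Assumes a linear sensor response.
        """
        numerator = np.zeros_like(images[0])
        denominator = np.zeros_like(images[0])
        for img, t in zip(images, exposure_times):
            # 'Hat' weight: trust mid-range pixels most, and distrust
            # the extremes where the photo is under- or overexposed.
            w = 1.0 - np.abs(2.0 * img - 1.0)
            numerator += w * img / t   # undo the exposure scaling
            denominator += w
        return numerator / np.maximum(denominator, 1e-6)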

Luminance HDR

I will not go into the details here of how to use Luminance HDR. The interface is in a state of constant improvement, so any detailed description would quickly become outdated. I will stick to general principles. You should easily find many tutorials on the web with specific instructions for the interface of a particular release. Even without a tutorial, I find the interface quite self-explanatory.

Luminance HDR offers a pipeline of operations that turns a set of photos with different exposures into an HDR image. This image can optionally be ‘tonemapped’ into a classic low-dynamic-range format like JPEG. The steps in this pipeline are the following:

  1. Collect the multiple-exposure photos.
  2. Align the photos such that they match pixel-by-pixel (in case the camera was moved between photos).
  3. Combine the photos into a single HDR photo.
  4. Tonemap the HDR photo into a low-dynamic-range format.
Pipeline
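
For those who prefer scripting, the same four steps can be sketched with OpenCV's photo module. This is a rough equivalent under my own assumptions (placeholder file names and exposure times), not what Luminance HDR does internally:

    import cv2
    import numpy as np

    # 1. Collect the multiple-exposure photos (exposure times in seconds).
    images = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]
    times = np.array([1 / 250, 1 / 60, 1 / 15], dtype=np.float32)

    # 2. Align the photos (median threshold bitmap alignment).
    cv2.createAlignMTB().process(images, images)

    # 3. Combine the photos into a single HDR radiance map.
    hdr = cv2.createMergeDebevec().process(images, times)

    # 4. Tonemap the HDR photo into a low-dynamic-range image.
    ldr = cv2.createTonemapMantiuk().process(hdr)
    cv2.imwrite("result.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))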

The first two steps are easy. In case the EV values are not stored inside the photos, you can enter them manually. The alignment can be done automatically; I recommend using Hugin's align_image_stack. Of course, the best way to ensure alignment is to take your photos with a tripod; in that case you can skip the alignment altogether.
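
For reference, align_image_stack can also be run by hand; a command along these lines (the file names are placeholders, and options may differ between versions) writes aligned copies with the given prefix, ready to be imported:

    align_image_stack -v -a aligned_ photo_ev-1.jpg photo_ev0.jpg photo_ev+1.jpg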

The catch with Luminance HDR, as compared to expensive commercial software, is that the last two steps are not an exact science. There is a multitude of ways in which they can be performed, and Luminance HDR does not restrict the user to one particular method or try to automate the whole process at the risk of making the wrong choices. This makes it a powerful tool for people who have an idea of how each method works, but it can make the program look daunting for people who just want quick results without knowing what's under the hood. If you follow my workflow as explained further on, however, you should get good results without having to delve too deep into all the possibilities of Luminance HDR.

As for the combination method, I recommend sticking with Profile 5, which for me works best overall, although Profile 2 often works better for shots with bright light sources. For tonemapping, however, it is not that clear-cut. First, a little information on what tonemapping is in the first place.

Tonemapping

Tonemapping is the operation of mapping an HDR photo to a reduced dynamic range such that it can be displayed on a low-dynamic-range device like a computer screen or photo paper, while still giving a sensation similar to the original high-dynamic-range scene. In a certain sense, squeezing a large dynamic range into a low-dynamic-range medium is similar to applying dynamic range compression to a sound recording. When done right, the recording still gives a very good impression of the live act. When done wrong (as is often the case nowadays), the recording is a poor shadow of the original. The same applies to tonemapping: it will only preserve the impression of the photographed scene if it is done right.

HDR without tonemapping

It may seem trivial: just rescale the intensity values in the HDR image to the smaller LDR range. Unfortunately, that does not work. An example is shown at the right: although it does contain the full range of intensities from the two previous images, the sky is still too bright and the ground too dark. Any attempt to correct this by tweaking gamma curves will cause the colours to become dull and washed out. This is perfectly normal given how the human eye perceives colour, but I won't go into detail about that here. The bottom line is that something special must be done to map the full range of intensities to a smaller range without losing the overall impression of the original scene.
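
For clarity, this is what the failing ‘trivial’ approach looks like in code (my own illustration): all relative intensities are preserved, which is exactly the problem, because one bright light source pushes everything else into a handful of dark values:

    import numpy as np

    def naive_tonemap(hdr, gamma=2.2):
        """Linearly rescale an HDR radiance map to 8 bits (plus display gamma).

        This produces results like the example image: the sky dominates
        the range, so the ground is crushed into near-black values.
        """
        scaled = hdr / hdr.max()                   # squeeze into 0..1
        return (255.0 * scaled ** (1.0 / gamma)).astype("uint8")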

There has been quite a bit of research on this ‘something special’, resulting in a plethora of quite different tonemapping methods. Luminance HDR offers implementations of many of those methods, with names like Mantiuk, Fattal, Drago, and Ashikhmin. Each method seems to optimise a specific aspect, but the only way to figure out what it really does is to read its scientific paper or experiment with it. The series of thumbnails below shows what each method produces with standard settings: as you can see, the results vary wildly. My overall conclusion is that none of these methods works perfectly on its own, but very good results can be obtained by mixing their outputs.

Tonemapped thumbnail images
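
To give a feel for what even the simplest of these operators does beyond naive rescaling, here is a sketch of the classic global operator by Reinhard et al. (one of the operators Luminance HDR offers, although this is my own simplified version, not its implementation). It compresses highlights progressively while leaving shadows nearly linear:

    import numpy as np

    def reinhard_global(hdr, key=0.18):
        """Simplified global tonemapping after Reinhard et al.

        key: the desired overall brightness; 0.18 is the customary
        'average grey' choice.
        """
        # Scale the image around its log-average (geometric mean) luminance.
        log_average = np.exp(np.mean(np.log(hdr + 1e-6)))
        scaled = key * hdr / log_average
        # Map luminance L to L / (1 + L): bright values approach 1
        # asymptotically, dark values stay almost unchanged.
        return scaled / (1.0 + scaled)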

My Workflow

Take photos → (Preprocess →) Create HDR → Tonemap → Postprocess

Taking the Photos

Having a camera that can do automatic bracketing is not essential for making HDR photos, but it makes things much easier and reduces problems with the scene changing in between the shots.

Bracketing on FX37

The bracketing mode on my camera can take three photos within about one second in normal lighting conditions. More professional cameras may be able to take five or even seven photos, and do it faster as well. I always use a step size of 1 EV. Of course you can manually take photos at different EV values, but it will be much slower, and if anything in your scene is moving, the differences between the photos may be large. If I want a larger range than the ±1 EV my camera offers, I take two sets of bracketed shots at different base EV values, for instance one set at a base of -2 EV (covering -3, -2, and -1 EV) and one at +2 EV (covering +1, +2, and +3 EV). This can give me a whopping total range of -3 EV to +3 EV, which is overkill in most situations.

If you can fix the aperture of your lens while taking a set of bracketed photos, do it. Otherwise your camera will likely take the photos at different apertures to obtain the different EVs, and because the aperture also affects depth of field, this could cause slight or even major differences between the photos.

Using more photos may help to reduce noise; however, it also increases the risk of artefacts caused by small differences between the photos. If two photos look very similar despite their different exposures, there is little point in using both. I generally use all photos, but sometimes it helps to drop some of them to reduce artefacts.

Thanks to Luminance HDR's automatic alignment, it is not essential to keep your camera perfectly steady while taking the different exposures. Nevertheless, using a tripod will let you skip the often slow and not always entirely correct alignment procedure. Especially if you move forward or backward while taking the photos, there will be subtle scaling differences between them that the alignment procedure often cannot compensate for.

Preprocessing

Preprocessing

This step is optional and intended to avoid problems due to noise and non-linearity of the CCD's response. In other words, for a high-end camera it may never be necessary at all, except perhaps for photos taken with extreme settings. For a cheap camera with a small, noisy sensor however (like my Panasonic DMC-FX37), the step is often required. Without this preprocessing step, ugly artefacts may end up in the HDR result, manifesting themselves as coloured noise, stains in dark areas, or halos around light sources. The problem in my case appears to be that the sensor is noisy and deviates from a perfectly linear response at the lowest and highest intensities. When the HDR algorithms try to connect those distorted responses together, they often end up with a distorted overall curve. The solution is to clip the intensity range of the photos such that each photo only contains pixels in the range that is best represented by that particular exposure.

This requires a graphics program like GIMP or Photoshop. The image below shows how I do it: for the lowest exposure image, use the ‘curves’ dialog to clip the lowest values to black (leftmost graph). For the highest exposure image, clip the highest values to white (rightmost graph). For all other photos, do both (middle graph). If you're lazy and don't mind losing the very lowest and highest intensities, just apply the middle curve to all photos. If you look carefully at the histograms in these graphs, you'll see peaks in the regions that are clipped away: those peaks are the noise we want to remove. The goal of this procedure is to prevent any rubbish from ending up in the overlapping zones between intensity ranges. This might be a useful feature to add to Luminance's workflow.

Preprocessing curves
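
If you want to script this step instead of clicking through the curves dialog for every photo, something along these lines reproduces the middle curve. It is a sketch using Pillow and NumPy; the clip points are example values and depend on how noisy your sensor is:

    import numpy as np
    from PIL import Image

    def clip_levels(in_path, out_path, low=8, high=247):
        """Clip the darkest values to black and the brightest to white.

        Values below `low` become 0, values above `high` become 255,
        and the rest is stretched linearly, like the middle curve.
        For the lowest exposure, pass high=255 to only clip the bottom;
        for the highest exposure, pass low=0 to only clip the top.
        """
        img = np.asarray(Image.open(in_path), dtype=np.float32)
        img = (img - low) / (high - low) * 255.0
        Image.fromarray(np.clip(img, 0, 255).astype(np.uint8)).save(out_path)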

Creating the HDR image

Next, choose “New HDR image” in Luminance HDR and import the (optionally preprocessed) photos. If Exif data was lost during preprocessing, copy the EV values from the originals using the Tools menu. Enable autoalign with Hugin unless you know the photos are already aligned. Next, choose a profile. In earlier versions I often had problems with all profiles except 5. In the newer versions, however, these problems seem to have been fixed and it is safe to simply stick with either profile 1 or 2. The only major difference I can now find between the profiles is that the even-numbered profiles (Gamma response) produce a darker and more saturated result than the odd-numbered ones, so this is mostly a matter of preference.

Save icon

It is possible to save the intermediate images, which lets you experiment with different profiles without having to go through the alignment step multiple times. To do this, enable the “Advanced editing tools” (in versions before 2.4 these are always enabled). After the images have been aligned, you will be shown an interface where you can further tweak the alignment. Use the “Save Images” button and enter a name prefix.

Ghost icon

The most recent versions of Luminance have an “anti-ghosting” feature (older versions had this too, but it never worked). The idea is to mask parts of photos where an object moved relative to the other photos. Dropping these masked parts avoids the typical “ghosted” appearance of moving objects. The object should be left visible in the photo where it has the best exposure, and be erased from all the others. The automatic mode tries to do this all by itself and seems to work well for simple cases (a few moving parts in a stationary scene). Unfortunately, at this time (version 2.4.0) the anti-ghosting algorithm has a bug that messes up the exposure of the entire photo, so we'll have to wait a little longer until this feature becomes truly usable.

Now the actual HDR photo will be generated and you'll see a preview. It will probably look weird and washed out, because your computer screen is unable to represent the true dynamic range of the photo. That is why we need tonemapping. If you wish, you can save the HDR photo itself in a format like OpenEXR, but this is unnecessary if you're only interested in making a tonemapped image.

Tonemapping and Postprocessing

As I said before, there are many tonemapping operators, each with many settings. I found that to get the results I want, I need to mix the results of two or three of the operators. Which ones to use depends a bit on the specific scene that was photographed and the desired effect. Photography, and especially HDR photography, is not an exact science; it borders on art. The following is therefore only a recipe that will get you OK results most of the time; you will probably need to add your own personal touch to get really great results.

I noticed that the Mantiuk operators are generally very good at getting the overall intensities right, but they make the colours look dull. The Ashikhmin operator, on the other hand, has excellent colour reproduction but has trouble with the intensities. Combining these two is therefore the basis of most of my HDR photos.

For most daytime shots, I generally use Mantiuk '08 with default settings (LCD office, saturation 1.0, contrast 1.0), and Ashikhmin ‘Eqn 4’ with the contrast threshold set to 0. Sometimes better results can be obtained with higher threshold values, if they do not lead to a washed-out result. Save both these images to a lossless format (e.g. PNG) and open them in a graphics program. If necessary, stretch the intensity range of the Ashikhmin image such that it goes all the way to the maximum (sometimes, what should be pure white is light grey). Overlay the Ashikhmin image on the Mantiuk image in ‘multiply’ mode and blend at about 67%, depending on how vivid you want the colours to be. If you went for an even-numbered profile to generate the HDR image, it is often necessary to reduce the contribution of the Ashikhmin image to 50% or less to keep the result from looking too garish.

Mixing the tonemapped images
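
The blend itself can also be scripted. Here is a minimal Pillow sketch of the ‘multiply at 67%’ overlay described above; the file names are placeholders, and autocontrast is only an approximation of the manual stretch (it works per channel and stretches both ends of the range):

    from PIL import Image, ImageChops, ImageOps

    base = Image.open("mantiuk08.png").convert("RGB")
    overlay = Image.open("ashikhmin.png").convert("RGB")

    # Stretch the Ashikhmin image so its brightest values reach pure white.
    overlay = ImageOps.autocontrast(overlay)

    # Multiply blend at 67% opacity: keep 33% of the plain Mantiuk image
    # and 67% of the multiplied result. Lower the 0.67 for subtler colours.
    multiplied = ImageChops.multiply(base, overlay)
    Image.blend(base, multiplied, 0.67).save("mixed.png")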

For some scenes, the Mantiuk '06 operator is better at obtaining extra local contrast and lightening up dark areas. It produces ‘grittier’ photos. It can be used as-is, but I often blend it with the '08 operator at 20% to 40%.

For nighttime shots or shots with extremely bright light sources, it is often better to work directly with the non-tonemapped HDR preview instead of the Mantiuk operator. First tweak the gamma and histogram range such that the preview looks slightly too bright. Save the preview with “Save HDR image preview” in the File menu. (Beware! There is a bug in versions 2.2.x that ignores the gamma setting. Use version 2.1 or 2.3 to avoid this bug.) Run Ashikhmin Eqn 4 as above. In your graphics program, overlay the Ashikhmin image onto the preview image with multiply at about 50%. Sometimes it can help to add a bit of Mantiuk '08 to the mix to brighten up areas that are too dark.

An alternative to this entire procedure is the ‘Fattal’ operator. In older versions it was unreliable, but since Luminance 2.3.0 the Fattal operator has been upgraded to produce consistent results regardless of resolution, and it is often good enough to use as-is. For less demanding scenes it can produce instant results that are very close to those of the manual procedure described above.

Generally it will still be necessary to do some brightness, contrast, and/or gamma tweaking to get the result really right. If there is one thing you should remember from this, it is that there is no single recipe that will consistently produce awesome HDR photos. Also, the way you want the tonemapped HDR photo to look will depend on how you personally perceived the scene. You will need to tweak and test, and sometimes try something new for a specific photo, but the above workflow should be a good starting point.

Examples and Conclusion

On my blog you can find some examples of results I obtained with the above method. Remember that they were all made with a Panasonic Lumix DMC-FX37, a tiny compact camera whose CCD is so small that even photos in broad daylight are sometimes noisy. In the meantime I have bought a Panasonic Lumix DMC-GX1, which has a more advanced bracketing function and a much less noisy sensor, allowing me to skip the preprocessing step in almost all cases. On my blog you can find some HDR photos created with this camera as well.

Other people's HDR shots often look either washed out or garishly oversaturated, and hence unrealistic. Apparently the software or method they use has a tendency to flatten all intensities and compress them to the same range, making the scene appear as if it were illuminated with a big-ass flash or, worse, with a set of multicoloured floodlights. Some people diss HDR, probably because of these kinds of photos. The goal of the workflow described above, however, is to produce realistic impressions of what the photographed scene really looked like.

Mind that HDR is not a miracle tool that will make all your photos look stunning. It is useless for perhaps 95% of all photographs. Only when photographing something with extreme intensity differences can it be useful or downright essential. There have been moments where I thought “this will look awesome in HDR” but the result hardly looked different from a plain photo, and moments where I thought “this will work fine without HDR” but it proved impossible to take a normal photo without severely under- or overexposed parts.

Some recent cameras have an HDR function built in. The cheaper ones (like those in smartphones and tablets) do nothing more than apply a heavily simplified version of the entire procedure described above. This means two things: first, they will also produce smeared-out or ‘ghosted’ images if the camera or subject moves while the successive photos are taken. Second, they use a fixed algorithm that will not always produce good results. Often it is not possible to store the raw HDR data, only the final tonemapped image. More advanced cameras like the ones from Sony have a sensor that can capture true HDR in a single shot. This avoids the risk of ghosting and even makes it possible to film in HDR. Still, I would not trust this for anything other than producing the initial raw high-dynamic-range image; the tonemapping step seems to have too many subtleties to be consistently done right by an automated algorithm. As with all types of photography, what sets a good photographer apart from an amateur is the skill, not the tool.

If you like Luminance HDR and use it on a regular basis (or just want to support its development), a donation will encourage its developers to further improve it. If you liked this article, you may also consider a donation to encourage me to maintain this website.

Go to the blog post that announces this article if you would like to comment.

©2012/06-2014/02 Alexander Thomas