These are some hints and tips on how to ensure that images in Adobe Photoshop Lightroom 2.1 are displayed on the monitor with good quality that also matches the way Photoshop renders images. It is based on my personal experiences and problems with the equipment I have used.
[Harald E Brandt, Stockholm, November 2008 (updated December 2008)]
If you have a profiled monitor and you find that Adobe Photoshop Lightroom 2.1 renders the image in its Develop module significantly differently from the way Photoshop renders it, or that the Library and Slideshow modules render the image very differently from the Develop module, chances are that this can be solved by re-profiling your monitor and saving the new profile as a matrix-based profile rather than a LUT-based profile.
I have a SpectraView monitor, a high-end monitor that is hardware calibratable. This means that calibration tables — so-called lookup tables (LUTs) — can be downloaded into the actual monitor (which operates in more than 8 bits per channel). All of the gray balance and white point settings are done inside the monitor, leaving the full 8 bits per channel from the computer to be used for "real" image data.
As a result, this monitor can really be calibrated to D50 with good quality. Since the monitor is hardware calibratable, most of the "work" is done by the LUTs downloaded into the monitor, so the ICC profile in the computer doesn't have to do much compensation. The profiling software that accompanies this monitor first calibrates the monitor and downloads the LUTs into it, and then creates an ICC profile that is used by the computer and all its color managed applications. In my case, the computer runs Mac OS X 10.5.
That profiling software is SpectraView Profiler, a version of basICColor. It offers a choice to create either 16-bit LUT-based profiles or matrix-based profiles. The vendor recommends LUT-based profiles, claiming they are the most precise and accurate type. LUT-based profiles can be quite large, whereas matrix-based profiles are simpler and much smaller (only a few kB). The vendor claims that most common applications support LUT-based profiles, which is the default type in SpectraView Profiler 4.1.9. Most advanced profiling packages offer to create either LUT-based or matrix-based profiles, but which one is the default varies. (For instance, Eye-One Match uses matrix as its default.)
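For intuition about the difference, here is a minimal sketch of the transform model a matrix-based display profile describes: one tone curve per channel followed by a 3x3 matrix to XYZ. The gamma value and the matrix (the standard sRGB/D65 primaries) are stand-in example numbers, not values from any actual SpectraView profile; a LUT-based profile instead stores large sampled tables for these steps.

```python
# Sketch of a matrix-based display profile: per-channel tone curve,
# then a 3x3 matrix from linear RGB to XYZ. Example values only.

GAMMA = 2.2  # a real profile stores the measured tone curves

# Standard sRGB-primaries-to-XYZ (D65) matrix, used here as stand-in data
M = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def rgb_to_xyz(r8, g8, b8):
    """8-bit device RGB -> XYZ, as a matrix-based profile models it."""
    lin = [(c / 255.0) ** GAMMA for c in (r8, g8, b8)]  # tone curves
    return [sum(M[row][i] * lin[i] for i in range(3)) for row in range(3)]

# White (255,255,255) lands on the white point encoded by the matrix
print(rgb_to_xyz(255, 255, 255))
```

The whole profile is three short curves plus nine matrix coefficients, which is why matrix-based profiles are only a few kB and are simple for any application to apply correctly.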
Shadows differed between Photoshop and Lightroom's Develop module. In addition, the Slideshow module performed very badly: deep shadows often got weird, nasty, blotchy colors.
A recent re-calibration made the situation even worse: shadows in Lightroom, even in the Develop module, became pitch black! As an alternative, I tried Eye-One Match with LUT-based profiles, but that could make the shadows completely corrupt!
Another problem was computer sleep: if my computer (a MacBook Pro) went to sleep while Lightroom was running and I then woke it up, colors in the Library module would show up super-saturated! The only solution was to restart Lightroom.
As it turns out, Lightroom evidently cannot fully handle these LUT-based profiles, whereas Photoshop can. The solution is simply to re-profile the monitor and save the new profile as a matrix-based profile rather than a LUT-based profile!
Although this is the solution, it does not necessarily explain everything by itself, since other things complicate the issue: Lightroom always uses the perceptual rendering intent when rendering to the display profile, whereas Photoshop always uses relative colorimetric. So the two programs do not use the profile in the same way.
With SpectraView Profiler, I can validate the profile it creates (which means it measures test patches and compares them with a reference file). The matrix-based profile turned out to be just as good as the LUT-based profile, so I really don't think the claimed accuracy of LUT-based profiles has any significance in this case.
I recommend installing background images that contain special gamma test patterns plus a grayscale; download them from Color Solutions, Karl Koch. They let you check that the gamma is correct: you should be able to see L = 0, 1 and 2 as distinct, equidistant steps, and likewise L = 100, 99 and 98.
When I used LUT-based profiles, I could not see a difference between L = 0 and 1, which means that not even the native PDF-rendering engine in Mac OS X could render the PDF background image fully correctly! This supports my recommendation to stay away from LUT-based profiles.
With SpectraView Profiler 4.1.9 and a matrix-based profile, the gamma pattern is perfect, and also the grayscale is clean without color variations. I also experimented with Eye-One Match, but it was not anywhere near as good in making the grayscale neutral and clean.
There is one remaining issue, however, though it is nothing you can do much about: since the Develop module in Lightroom renders the image from a linear 16-bit version of ProPhoto RGB, whereas the Library and Slideshow modules render from 8-bit jpg-compressed previews, there will be a difference. In addition, when the Preview Quality in Catalog Settings is set to High, the previews use ProPhoto RGB, whereas Medium uses Adobe RGB.
Images rendered by the Library module usually look very similar to the same images rendered by the Develop module, but since previews in the Library module are jpg-compressed, based on only 8 bits, and might even be in the ProPhoto RGB color space, they can become blotchy and posterized, especially in deep shadows. Because of this limitation, the blotches can also appear to have a slightly different color than in the Develop module: in Develop the "blotch area" contains a variety of pixel values (almost like dithering), whereas in Loupe or Slideshow the pixels all have the same value. The net effect can sometimes appear as an off-neutral "black": sometimes ugly with severe posterization, sometimes not.
With the preview quality set to Medium, the posterization is primarily caused by much too strong jpg compression in the deep shadow areas, but the colors will usually be fairly okay. With the preview quality set to High, the posterization is primarily caused by 8 bits being far too little for the extremely wide gamut of ProPhoto RGB. It is made even worse by the fact that ProPhoto RGB has a gamma of 1.8, which makes the bit steps in the deep shadows of this wide-gamut space even larger; this generally causes colors in deep shadows to be way off, in addition to the posterization.
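The size of those bit steps is easy to check with a few lines of arithmetic. This sketch (my own illustration, not anything from Lightroom) compares the linear-light quantization near black for a pure gamma 1.8 curve, a pure gamma 2.2 curve, and the sRGB curve:

```python
# Linear-light size of the first 8-bit code steps under different
# tone curves. Larger steps near black mean coarser shadow rendering.

def decode_gamma(code, gamma):
    """8-bit code value -> linear light, pure-gamma curve."""
    return (code / 255.0) ** gamma

def decode_srgb(code):
    """8-bit code value -> linear light, sRGB curve (linear toe)."""
    c = code / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

for name, fn in [("gamma 1.8", lambda c: decode_gamma(c, 1.8)),
                 ("gamma 2.2", lambda c: decode_gamma(c, 2.2)),
                 ("sRGB     ", decode_srgb)]:
    first_step = fn(1) - fn(0)
    # how many of the 256 codes land below 1% of full luminance
    codes_below = sum(1 for c in range(256) if fn(c) < 0.01)
    print(f"{name} first step {first_step:.2e}, "
          f"codes below 1% luminance: {codes_below}")
```

Running this shows that gamma 1.8 devotes only about 20 of the 256 codes to the bottom 1% of luminance, versus about 32 for gamma 2.2, so each shadow step covers a larger luminance range — and that is before the wide ProPhoto gamut stretches the steps further.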
Lightroom's Develop module renders deep shadows extremely similar to what Photoshop produces, but in my system, really dark pixels are slightly darker in Photoshop than in Develop, so a small difference in contrast can be perceived in tricky images.
The tonal response curve is popularly called gamma, although gamma is actually just one type of tonal response curve. SpectraView Profiler (and thereby basICColor) strongly recommends L* as the tonal response curve. It is somewhat similar to gamma 2.2 but follows a different curve, primarily in the low end near black, so as to make more levels available for rendering blacks, thereby improving shadow definition. L* is said to match human perception: grayscales appear visually equidistant when the RGB values are changed in equal steps.
Most images and animations published on the web, images used in various applications, and regular film use a tonal response curve called sRGB. In the shadows it resembles an L* curve, whereas the midtones and highlights follow the gamma 2.2 curve. Media in this category that do not use sRGB usually use gamma 2.2 instead.
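For reference, both curves can be written out exactly. This sketch uses the standard CIE L* and sRGB decoding formulas, with the input treated as a normalized 0..1 signal value:

```python
# The two tone curves above as decoding functions: normalized signal
# (0..1) -> linear luminance (0..1). L* is the CIE lightness function;
# sRGB is the IEC 61966-2-1 curve (linear toe plus a 2.4-exponent
# segment that approximates gamma 2.2 overall).

def lstar_to_linear(v):
    """Signal interpreted as L*/100 -> linear luminance."""
    L = 100.0 * v
    return ((L + 16.0) / 116.0) ** 3 if L > 8.0 else L / 903.3

def srgb_to_linear(v):
    """Signal with the sRGB curve -> linear luminance."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

# Near black the two curves are of the same order; they diverge
# in the midtones (e.g. mid-signal 0.5 decodes to ~0.18 vs ~0.21).
for v in (0.004, 0.02, 0.1, 0.5, 1.0):
    print(f"{v:5.3f}  L*: {lstar_to_linear(v):.5f}  "
          f"sRGB: {srgb_to_linear(v):.5f}")
```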
While L* probably is a much superior tonal response curve, I suspect that the advantages mainly show in a completely "closed" color managed workflow from start to end. A photographer will usually only use Adobe RGB (gamma 2.2), ProPhoto RGB (gamma 1.8), a "Lightroom version" of ProPhoto (gamma 1.0), plus the sRGB space for some output, which has its own curve. A conversion to L* for the monitor might reduce the risk of banding, since the monitor only receives 8 bits per channel, whereas the source file may be in 16 bits. Personally, I have used L*, 2.2 and sRGB, and I can't say that any of them has solved problems with banding or other artifacts.
The strongest argument to stick with sRGB, however, is the fact that images on the web are almost always in sRGB (or alternatively gamma 2.2), and if they are not tagged with a profile, you had better use sRGB for your monitor (or alternatively gamma 2.2). Even if they are tagged with a profile, they might be displayed by Flash, which by default does not color manage the image (not even version 10). The same holds for many images used inside applications.
Many programs and manuals still say that Mac uses a gamma of 1.8. For historical reasons, that was a very good gamma before ColorSync (i.e. before color management), since it roughly matched the appearance on laser printers. Although there is nothing wrong with that gamma, it doesn't play well with the internet, since almost all images are in sRGB or gamma 2.2 these days. So I recommend staying away from gamma 1.8 for general usage.
So, if your profiling package allows you to choose sRGB as the tonal response curve, which SpectraView Profiler does (and thereby basICColor), then choose sRGB. Otherwise, choose gamma 2.2. Alternatively, if your profiling package accepts custom tone curves, you can install a curve that mimics sRGB; 21st century shoebox offers such a curve for download.
The issue of color temperature is, strangely enough, a bit controversial. The professional recommendation is clearly D50, and the motivations can be found at the European Color Initiative (click Downloads and choose Digipix3_V300_en.pdf). Since almost all illumination devices for color critical work, such as light tables and photo booths, are based on D50/5000 Kelvin, D50 is the only option if you want to compare a print directly with what you see on the monitor. I use Philips TLD 95, a 5-band fluorescent tube at 5000 Kelvin with Ra = 98.
However, if the monitor is not hardware calibratable, much of the 8 bits per channel in the interface will be wasted in pulling the monitor's native white point towards 5000 Kelvin (the native white point usually lies between 6500 and 7000 Kelvin for good monitors, and around 9000 Kelvin for old, lousy ones). The result is a loss of levels available to render the image. In addition, even when the monitor is adjusted to D50 (i.e. something that corresponds to 5000 Kelvin), it may still look wrong! Low-end monitors simply can't achieve a D50 white that appears white; instead it often looks pink or somewhat yellowish, even though the colorimeter measures it as D50.
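The loss of levels is simple to quantify. In this sketch the channel multipliers are purely hypothetical illustration values (the real ones depend on the monitor's native primaries and white point), but the counting logic is general: moving a bluish native white towards D50 in the 8-bit video path means scaling the green and blue channels down, and scaled codes collapse onto fewer distinct output levels.

```python
# Illustration only: how scaling a channel in an 8-bit video path to
# shift the white point reduces the number of distinct output levels.

def remaining_levels(scale, bits=8):
    """Count distinct output codes after scaling one channel."""
    top = 2 ** bits - 1
    return len({round(c * scale) for c in range(top + 1)})

# Hypothetical multipliers for pulling ~6500 K native white toward D50;
# real values depend on the specific monitor.
for channel, scale in (("R", 1.00), ("G", 0.93), ("B", 0.80)):
    print(f"{channel}: {remaining_levels(scale)} of 256 levels left")
```

With a blue channel scaled to 80%, only about 205 of the 256 levels survive, which is exactly the kind of loss that hardware calibration avoids by doing the scaling inside the monitor's high-bit LUTs.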
So, for monitors that are not hardware calibratable, it may be better to use a white point somewhat closer to D65. In practice, only expensive high-end monitors are hardware calibratable, and no laptop is good enough for any color critical work. For my MacBook Pro, I have found that a reasonable compromise is a target white point of either 5500 or 6000 Kelvin. The overall perception and the profile validation seem best there. Also, the volume of the gamut increases somewhat as I go from D65 towards D50, while the whole color space turns so that the blue vertex becomes somewhat better aligned with sRGB than it is at D65.
As for luminance, I use 120 cd/m2 on the SpectraView in a dim environment. For the MacBook Pro, I have found that the best performance (a combination of contrast and the validation done by SpectraView Profiler) is achieved at one step below maximum, which results in 120 cd/m2.
This article is discussed in this post at Adobe Forums.