The color rendering of a light source is its ability to reveal the colors of various objects faithfully (i.e., to produce an illuminant metameric match) in comparison with an ideal or natural light source. Light sources with good color rendering are desirable in color-critical applications such as neonatal care[1] and art restoration. The International Commission on Illumination (CIE) defines it as follows:[2]
Effect of an illuminant on the color appearance of objects by conscious or subconscious comparison with their color appearance under a reference illuminant.
A wide variety of quantitative measures have been devised to assess the color rendering of a light source, whether for the human eye or for the camera. Notable ones include:
Researchers used daylight as the benchmark against which to compare the color rendering of electric lights. In 1948, daylight was described as the ideal source of illumination for good color rendering because "it (daylight) displays (1) a great variety of colours, (2) makes it easy to distinguish slight shades of colour, and (3) the colours of objects around us obviously look natural".[6]
Around the middle of the 20th century, color scientists took an interest in assessing the ability of artificial lights to accurately reproduce colors. European researchers attempted to describe illuminants by measuring the spectral power distribution (SPD) in "representative" spectral bands, whereas their North American counterparts studied the colorimetric effect of the illuminants on reference objects.[7]
The color rendering index (CRI) of 1974 is the product of a CIE committee's study on the topic of color rendering. It uses the American colorimetric approach, computed from spectrophotometric data rather than judged by a panel of human observers: eight samples of varying hue are lit alternately by the test and reference illuminants, and their color appearances compared. Since no color appearance model existed at the time, the evaluation was based on color differences in a suitable color space, CIEUVW. Any residual difference in chromaticity is resolved with a chromatic adaptation transform before the comparison against the reference illuminant. Each color difference is translated into a sub-score, and the eight sub-scores are averaged to produce the final score, Ra.[8]
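The final scoring step can be sketched as follows. The linear mapping Ri = 100 − 4.6·ΔE(UVW) and the averaging over eight samples follow the published CRI formulas, but the ΔE values below are made-up inputs for illustration:

```python
def cri_special_index(delta_e_uvw: float) -> float:
    """Special color rendering index for one sample: Ri = 100 - 4.6 * dE(UVW)."""
    return 100.0 - 4.6 * delta_e_uvw

def cri_general_index(delta_es: list[float]) -> float:
    """General color rendering index Ra: the mean of the eight special indices."""
    assert len(delta_es) == 8, "Ra is defined over the first eight test samples"
    return sum(cri_special_index(de) for de in delta_es) / 8.0

# Hypothetical CIEUVW color differences for the eight samples:
sample_deltas = [1.2, 0.8, 2.1, 1.5, 0.9, 1.7, 2.4, 1.0]
print(round(cri_general_index(sample_deltas), 1))  # → 93.3
```

A source that rendered all eight samples identically to the reference (all ΔE = 0) would score exactly 100.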
As early as 1971, an analogue of the CRI for television had been devised by workers at the BBC.[9] At that time, the relatively broad-band nature of light sources meant that the CRI still approximated the color rendering seen by television cameras, an assumption quickly broken by the advent of LED lighting. As a result, the European Broadcasting Union revived the concept with a television lighting consistency index (TLCI) in 2012, followed by a television luminaire matching factor (TLMF) in 2013 for mixed lights.[3]
To calculate a TLCI, a full measurement of the light source's spectral power distribution (SPD) is first taken. From this SPD a correlated color temperature (CCT) is found, which determines the reference illuminant. Under the test and reference illuminants, an image of the ColorChecker chart is simulated using known reflectances and the color response curves of an average HDTV camera and display. The differences are then calculated in CIEDE2000. With the TLMF, the reference illuminant is not derived from a CCT but is specified directly by the user.[10]
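The first step, finding a CCT from the measured SPD, amounts to reducing the SPD to a CIE 1931 chromaticity coordinate and estimating the nearest blackbody temperature. As an illustration, McCamy's approximation is a common closed-form shortcut for that estimate; the TLCI specification itself may prescribe a more exact method:

```python
def mccamy_cct(x: float, y: float) -> float:
    """Approximate CCT in kelvin from CIE 1931 (x, y) chromaticity (McCamy's formula)."""
    n = (x - 0.3320) / (0.1858 - y)  # inverse slope of the line through the epicenter
    return 449.0 * n**3 + 3525.0 * n**2 + 6823.3 * n + 5520.33

# D65 white point: expect roughly 6500 K
print(round(mccamy_cct(0.3127, 0.3290)))  # ≈ 6505 K
```

For Illuminant A (x = 0.4476, y = 0.4074) the same formula returns approximately 2856 K, matching its nominal temperature.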
The spectral similarity index (SSI) of 2016 is a scale that forgoes the comparison of color samples entirely, instead directly comparing the SPD of a light source to that of the reference.[4] Its developers argue that differences among cameras mean that the TLCI can only describe three-chip television cameras, not the more varied spectral sensitivities of single-chip digital cinema cameras, still cameras, or film.[11] (In theory, color gels also introduce variations that are hard for the TLCI to capture.)
The SSI is calculated by taking two integrated, normalized SPDs in 5-nm intervals from 375 to 675 nm and finding a weighted relative difference between them. This weighted relative difference is convolved with a smoothing kernel, and the magnitude of the result is translated into a 100-point value. A low SSI only warns of potential color-rendering issues; it neither confirms that one exists nor indicates what errors are likely to occur.[11]
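A minimal sketch of that pipeline is below. It follows the normalize → weighted-relative-difference → convolve → score sequence described above, but the weighting, kernel taps, and scale factor are placeholder values, not the constants published in the SSI specification:

```python
def ssi_sketch(test_spd, ref_spd):
    """Illustrative SSI-style score for two SPDs binned at 5 nm over 375-675 nm."""
    # Normalize each SPD so its bins sum to 1.
    t = [v / sum(test_spd) for v in test_spd]
    r = [v / sum(ref_spd) for v in ref_spd]
    # Weighted relative difference per bin; the added constant guards near-zero bins.
    diff = [(ti - ri) / (ri + 1.0 / len(r)) for ti, ri in zip(t, r)]
    # Convolve neighbouring bins with a short smoothing kernel (placeholder taps).
    kernel = [0.25, 0.5, 0.25]
    smoothed = []
    for i in range(len(diff)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - 1
            if 0 <= j < len(diff):
                acc += w * diff[j]
        smoothed.append(acc)
    # Translate the magnitude of the residual into a 100-point value (placeholder scale).
    magnitude = sum(v * v for v in smoothed) ** 0.5
    return 100.0 - 30.0 * magnitude

flat = [1.0] * 61              # 61 five-nm bins from 375 to 675 nm
print(ssi_sketch(flat, flat))  # identical SPDs → 100.0
```

Spectrally identical SPDs score 100 by construction; any spectral mismatch, such as a narrow LED peak, lowers the score.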
TM-30 is the current (as of 2021) CIE-recommended measure of color rendering as perceived by humans. It generates a large set of outputs, including an overall fidelity index (Rf), an overall gamut index (Rg) for changes in chroma, a gamut-shape graph, detailed chroma, hue, and color-fidelity values for each of 16 hue ranges, and color-fidelity scores for each of the 99 sample colors. It uses the CAM02-UCS color space. Rf has been adopted by the CIE as the "color fidelity index" (CFI) of CIE 224:2017.[12]
As with the other newer scales, TM-30 is calculated from an SPD with reference to an SPD of the same CCT.[12] What sets TM-30 apart is that it goes beyond fidelity (accuracy of color reproduction) to describe other aspects of color rendering. This extra information allows, for example, fidelity to be traded for vividness of skin tones under a given design criterion. Three reference design intents and priority levels are defined in TM-30 Annex E.[13]
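As an illustration of how TM-30 turns color differences into its fidelity score: the mean CAM02-UCS color difference over the 99 samples is scaled linearly and then passed through a smooth flooring function, so that very poor sources cannot score below zero. The sketch below uses the scaling factor 6.73 published for recent versions (TM-30-18 / CIE 224:2017), but treat the constants as indicative rather than normative:

```python
import math

def tm30_rf(mean_delta_e: float, cf: float = 6.73) -> float:
    """Overall fidelity index Rf from the mean CAM02-UCS color difference."""
    rf_linear = 100.0 - cf * mean_delta_e                     # linear fidelity score
    return 10.0 * math.log(math.exp(rf_linear / 10.0) + 1.0)  # smooth floor at 0

print(round(tm30_rf(0.0), 1))  # → 100.0 (a spectrally perfect match)
print(round(tm30_rf(3.0), 1))  # → 79.8
```

The logarithmic mapping leaves good scores essentially unchanged while compressing large color differences toward zero instead of letting the score go negative.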
Before the aforementioned scales were devised to replace the CRI, a number of other measures had been proposed. None of them has seen wide use, however: