Color Theory

Light and Matter

The modern understanding of color originated with Isaac Newton's discovery of the spectral nature of light in the 1600s. Newton's famous prism experiments demonstrated that white light is composed of components of different wavelengths. We now know that the eye is sensitive to a broad band of wavelengths in the approximate range 350-750 nm. The visible spectrum represents only a small fraction of the full electromagnetic spectrum. Within the visible spectrum certain wavelengths give rise to certain visual sensations; for example, the shorter wavelengths are perceived as violet and blue.

When light strikes a surface there are two things that can happen:
(i) the change in refractive index can cause light to be reflected by the surface and this surface-reflected light is called specular reflection;
(ii) light that is not reflected at the surface can penetrate the body of the material although as it passes through the surface the change in refractive index will cause the light to be refracted.
Light may pass completely through a material, in which case we say that it has been transmitted. Alternatively the light may be absorbed by the material or it may be scattered. Light that is scattered or reflected may eventually pass out of the front, back, or side of the material.

Light can be absorbed by materials according to a number of mechanisms that include atomic vibrations and rotations, ligand-field effects, molecular orbitals, and charge transfer. A given material usually absorbs light of specific energies, and thus the light-absorption properties of materials are usually wavelength selective.
The energy that is absorbed by molecules can be dissipated as kinetic and heat energy, but sometimes the energy can be re-emitted. Fluorescence and phosphorescence are phenomena that result from the re-emission of absorbed light energy: in both cases the re-emitted energy is at a longer wavelength than the light originally absorbed.

When light strikes particles it may be scattered. When the scattering particles are very small compared with the wavelength of light, the light is scattered according to a simple law proposed by Rayleigh: short wavelengths are scattered more than long wavelengths. For particles that are large compared with the wavelength, the amount of scattering is described by Fresnel's equations: the amount of scattering depends upon the difference between the refractive index of the particle and that of the medium in which it is dispersed, and this difference is wavelength dependent. Light that is scattered evenly in all directions is said to be scattered isotropically, but this is rarely the case. The absorption and scattering properties of particles are complex and a number of theories exist to describe them, including the Kubelka-Munk theory.
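As an illustration, the simplest (single-constant) form of Kubelka-Munk theory relates the reflectance of an opaque layer to the ratio of its absorption coefficient K and scattering coefficient S. A minimal sketch, with illustrative names and values:

def kubelka_munk_ks(reflectance):
    # Single-constant Kubelka-Munk relation for an opaque layer:
    # K/S = (1 - R)^2 / (2R), where R is the reflectance (0 < R <= 1).
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

# Example: a layer reflecting 20% of the incident light at some wavelength
print(kubelka_munk_ks(0.20))   # K/S = 1.6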

There are many reasons why substances appear colored but for most physical materials it is because the absorption and scattering properties of the material are different for different wavelengths of light. Thus a substance that appears yellow may do so because it absorbs most strongly in the blue part of the spectrum and scatters most strongly in the red and green parts of the spectrum. It is often the case that a pigment scatters light most efficiently in one region of the spectrum whilst having its main absorption band in another. This explains why translucent and transparent colored films can have different hues when viewed by reflected as opposed to transmitted light.

Color Vision

Color is more than just a property of objects, although this is contrary to the way in which we use color in everyday language. The association of color with objects in our language, seen in statements such as "this object is red", is misleading, for the color that we perceive exists only in the brain.
It is commonly stated that color vision is the result of the nature of the physical world, the physiological response of the eye (more strictly the retina) to light, and the neural processing of the retinal response by the brain. The identification of three separate processes in this way is probably artificial, and does little justice to the complex nature of color perception, but the idea is useful and appealing; it turns out that the number “three” has an almost magical association with color vision.

Almost the whole of the interior of the spherically shaped eyeball is lined with a layer of photosensitive cells known collectively as the retina and it is this structure that is the sense organ of vision. The eyeball, though no mean feat of engineering itself, is simply a structure to house the retina and to supply it with sharp images of the outside world. Light enters the eye through the cornea and the pupil (the aperture in the iris) and then passes through the lens before striking the retina. The retina receives a small inverted image of the outside world that is focussed jointly by the cornea and the lens. The lens changes shape to achieve focus but hardens with age so that we gradually lose our power of accommodation. The eye is able to partially adapt to different levels of illumination since the iris can change shape to give the pupil a diameter between about 2 mm (in bright light) and 8 mm (in dim light).
The retina translates light into nerve signals and consists of three layers of nerve-cell bodies. Surprisingly the photosensitive cells, known as rods and cones, form the layer of cells at the back of the retina. Thus, light must pass through the other two layers of cells to stimulate the rods and cones. The reasons for this backward design of the retina are not fully understood, but one possibility is that the position of the light-sensitive cells at the back of the retina allows any stray unabsorbed light to be taken care of by cells immediately behind the retina that contain a black pigment known as melanin. The melanin-containing cells also help to chemically restore the light-sensitive visual pigment in the rods and cones after it has been bleached by light.
The middle layer of the retina contains three types of nerve cells: bipolar cells, horizontal cells, and amacrine cells. The connectivity of the rods and cones to these three sets of cells is complex but signals eventually pass to the front of the retina and to the third layer of cells known as retinal ganglion cells. The axons from retinal ganglion cells collect in a bundle and leave the eye to form the optic nerve. The backward design of the retina means that the optic nerve must pass through the retina in order to leave the eye and this results in the so-called blind spot.
The rods and cones contain visual pigments. Visual pigments are much like any other pigments in that they absorb light and have absorption sensitivities that are wavelength dependent. The visual pigments have a special property, however, in that when a visual pigment absorbs a photon of light it changes molecular shape and at the same time releases energy. The pigment in this changed molecular form absorbs light less well than before and thus is often said to have been bleached. The release of energy by the pigment and the change in shape of the molecule together cause the cell to fire, that is to release an electrical signal, by a mechanism that is still not completely understood.

Rods are sensitive to very low levels of illumination and are responsible for our ability to see in dim light (scotopic vision). They contain a pigment with a maximum sensitivity at about 510 nm, in the green part of the spectrum. The rod pigment is often called visual purple since when it is extracted by chemists in sufficient quantities the pigment has a purple appearance. Scotopic vision is completely lacking in color; a single spectral sensitivity function is color-blind and thus scotopic vision is monochromatic.
Color vision is provided by the cones, of which there are three distinct classes each containing a different photosensitive pigment. The three pigments have maximum absorptions at about 430, 530, and 560 nm and the cones are often called "blue", "green", and "red". The cones are not named after the appearance of the cone pigments but are named after the color of light to which the cones are optimally sensitive. This terminology is unfortunate since monochromatic lights at 430, 530, and 560 nm are not blue, green, and red respectively but violet, blue-green, and yellow-green. The use of short-, medium-, and long-wavelength cones is a more logical nomenclature.
The existence of three spectral-sensitivity functions provides a basis for color vision since light of each wavelength will give rise to a unique ratio of short-, medium-, and long-wavelength cone responses. The cones therefore provide us with color vision (photopic vision) that can distinguish remarkably fine wavelength changes.

The eye cannot simultaneously focus on the three regions of the spectrum where the cone-pigment absorptions peak since refraction at the cornea and lens is greater for short wavelengths than it is for long wavelengths. Thus, it is said that the eye is not corrected for chromatic aberration. The medium- and long-wavelength peaks are quite close together and therefore the lens optimally focuses light of about 560 nm on the retina. Since the short-wavelength cones receive a slightly blurred image it is not necessary for them to provide the same spatial resolution that is provided by the other two sets of cones. The retina contains approximately 40 long-wavelength cones and 20 medium-wavelength cones for every single short-wavelength cone.
The rods and cones are not evenly distributed on the retina. The central part of the retina, the fovea, contains only cones whereas at greater eccentricities there is a greater preponderance of rods. In the fovea the cones are densely packed and it is this part of the retina that provides the greatest spatial resolution under normal viewing conditions.

Since the retina contains four different types of receptor it might be thought that the neural pathways would carry four different signals to the brain, and more precisely to the primary visual cortex at the rear of the brain. It is generally believed, however, that color information is coded by the retinal and post-retinal neural structures as just three types of signals that are often called "channels".
The idea of "channels" in the brain is central to the way in which the operation of the brain can be viewed as an information- or signal-processing task. A channel is a conceptual processing route, and thus for the visual system we can say that the information from the cones is processed in three separate channels. Color perception is only one function of the visual system; other channels provide further information about the outside world, enabling the perception of form, motion, and distance, for example. The existence of channels for the processing of color information helps to reconcile the two seemingly contradictory theories of color vision that were prevalent during the 19th century: the trichromatic theory and the opponent-colors theory.
The trichromatic theory was postulated by Young and later by Helmholtz and was based upon color-matching experiments carried out by Maxwell. Maxwell's experiments demonstrated that most colors can be matched by superimposing three separate light sources known as primaries; a process known as additive mixing. Although any light sources could be used as primaries, the use of monochromatic sources of radiation enables the widest gamut of colors to be obtained by additive mixing. The Young-Helmholtz theory of color vision was built around the assumption of there being three classes of receptors, although direct proof of this was not obtained until 1964 when microspectrophotometric recordings of single cone cells were obtained. The roots of trichromacy are firmly understood to be in the receptoral stage of color vision. It is important to realize that a yellow stimulus produced by the additive mixture of appropriate red and green lights does not merely resemble monochromatic yellow light but is indistinguishable from it. Thus, the trichromatic nature of vision is essential for the operation of many color-reproduction processes such as television, photography, and three-color printing.

The opponent-colors theory of color vision, proposed by Hering, seemingly contradicts the Young-Helmholtz trichromatic theory. It was advanced to explain various phenomena that could not be adequately accounted for by trichromacy. Examples of such phenomena are the after-image effect (if the eye is adapted to a yellow stimulus the removal of the stimulus leaves a blue sensation or after-effect) and the non-intuitive fact that an additive mixture of red and green light gives yellow and not a reddish-green. Hering proposed that yellow-blue and red-green represent opponent signals; this also went some way towards explaining why there were four psychophysical color primaries red, green, yellow, and blue and not just three. Hering also proposed a white-black opponency but this third opponent channel has been abandoned in most modern versions of the theory. It is now accepted that both the trichromatic theory and the opponent colors theory describe essential features of our color vision with the latter theory describing the perceptual qualities of color vision that derive from the neural processing of the receptor signals in two opponent chromatic channels and an achromatic channel.

The perceptual attributes brightness, hue, and colorfulness have been defined by Professor R.W.G. Hunt as follows:
* Brightness: attribute of a visual sensation according to which an area appears to exhibit more or less light.
* Hue: attribute of a visual sensation according to which an area appears to be similar to one, or to proportions of two, of the perceived colors red, yellow, green, and blue.
* Colorfulness: attribute of a visual sensation according to which an area appears to exhibit more or less of its hue.

CIE Color Specification

Additive color mixing refers to the mixing of different colored lights and can be easily demonstrated by the superposition of lights (primaries) on a white projection screen. When this is done using red, green, and blue primaries, the colors yellow, cyan, and magenta are produced where two of the primaries overlap. Where all three primaries overlap the sensation of white is produced if the spectral distributions and intensities of the three primaries are carefully chosen.

In 1931 the CIE (Commission Internationale de l’Eclairage) developed a system for specifying color stimuli using tristimulus values for three imaginary primaries. The basis of this system was the CIE 1931 standard observer.

According to the trichromatic theory of color vision an observer can match a color stimulus with an additive mixture of three primaries. Therefore any color stimulus can be specified by the amounts of the primaries that an observer would use in order to match the stimulus. The CIE standard observer resulted from experiments where observers were asked to match monochromatic wavelengths of light with mixtures of three primaries. The standard observer is in fact a table showing how much of each primary would be used (by an average observer) to match each wavelength of light. Tristimulus values are the amounts of three primaries that specify a color stimulus. The CIE 1931 tristimulus values are called X, Y, and Z.

It is impossible to choose three real primaries such that all possible colors can be matched with additive mixtures of those primaries. Thus, in a real additive color-reproduction system such as color television only a limited gamut of colors can be displayed. In 1931, when the CIE system was specified, it was decided to use three imaginary primaries such that the tristimulus values X, Y, and Z are always positive for all real color stimuli. The concept of imaginary primaries is complex but it is not strictly necessary to understand this concept in order to understand and use the CIE system of color specification. In fact, the CIE could have used three real primaries, such as red, green, and blue lights, in which case the tristimulus values would be represented by R, G, and B.

There were several reasons for the adoption of imaginary primaries. Firstly, the primaries were chosen such that X, Y, and Z would be positive for all possible real stimuli. Although this might not seem particularly important today, the elimination of negative tristimulus values was an important consideration in pre-computer days. Secondly, the imaginary primaries were chosen such that the Y tristimulus value is directly proportional to the luminance of the additive mixture. Thirdly, X = Y = Z for a match to the equi-energy stimulus SE (a stimulus that has equal energy at each wavelength).

Tristimulus values can be calculated if the reflectance spectrum of a sample is known. The reflectance spectrum can be measured using a reflectance spectrophotometer.
CIE XYZ tristimulus values can be calculated by the integration of the reflectance values R(λ), the relative spectral energy distribution of the illuminant E(λ), and the standard observer functions x(λ), y(λ), and z(λ). The integration is approximated by summation, thus:

X = k ∑ R(λ) E(λ) x(λ),
Y = k ∑ R(λ) E(λ) y(λ),
Z = k ∑ R(λ) E(λ) z(λ),

   where k = 100 / ∑ E(λ) y(λ), λ denotes wavelength, and the sums are taken over the visible spectrum.

The normalizing constant k is introduced such that Y = 100 for a sample that reflects 100% at all wavelengths: recall that Y is proportional to the luminance of the stimulus. The introduction of this normalization is convenient since it means that relative, rather than absolute, spectral energy distributions for the illuminant can be used (thus the units in which they are expressed are unimportant).
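A minimal numerical sketch of this summation (assuming the reflectance, illuminant, and observer data are supplied as lists sampled at the same wavelength intervals; the function and variable names are illustrative):

def tristimulus_xyz(R, E, xbar, ybar, zbar):
    # R: sample reflectance (0-1) at each wavelength interval
    # E: relative spectral energy distribution of the illuminant
    # xbar, ybar, zbar: CIE standard observer functions
    k = 100.0 / sum(e * y for e, y in zip(E, ybar))   # so that Y = 100 for a perfect reflector
    X = k * sum(r * e * x for r, e, x in zip(R, E, xbar))
    Y = k * sum(r * e * y for r, e, y in zip(R, E, ybar))
    Z = k * sum(r * e * z for r, e, z in zip(R, E, zbar))
    return X, Y, Z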

There are two main types of instruments that are used for measuring the color of opaque surfaces: reflectance spectrophotometers and colorimeters. Reflectance spectrophotometers measure the amount of light reflected by a sample at many narrowband wavelength intervals resulting in a reflectance spectrum. By contrast, tristimulus colorimeters employ three broadband filters to obtain three numbers that can be converted directly to tristimulus values.

Reflectance spectrophotometers measure the amount of light reflected by a surface as a function of wavelength to produce a reflectance spectrum. The reflectance spectrum of a sample can be used, in conjunction with the CIE standard observer functions and the relative spectral energy distribution of an illuminant, to calculate the CIE XYZ tristimulus values for that sample under that illuminant.

The operation of a spectrophotometer is basically to illuminate the sample with white light and to calculate the amount of light that is reflected by the sample at each wavelength interval. Typically data are measured for 31 wavelength intervals centred at 400 nm, 410 nm, 420 nm, …, 700 nm. This is done by passing the reflected light through a monochromating device that splits the light up into separate wavelength intervals. The instrument is calibrated using a white tile whose reflectance at each wavelength is known relative to a perfect diffuse reflecting surface. The reflectance of a sample is expressed between 0 and 1 (as a fraction) or between 0 and 100 (as a percentage). It is important to realize that the reflectance values obtained are relative values and, for non-fluorescent samples, are independent of the quality and quantity of the light used to illuminate the sample. It is only when tristimulus values, such as CIE XYZ, are computed that the measurements become illuminant specific.
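The calibration arithmetic can be sketched as follows (the readings and the white-tile reflectance below are hypothetical):

def sample_reflectance(sample_reading, white_reading, tile_reflectance):
    # Raw detector signals for the sample and the white tile at one wavelength
    # interval; tile_reflectance is the calibrated reflectance of the tile
    # relative to a perfect diffuse reflector.
    return (sample_reading / white_reading) * tile_reflectance

# e.g. at 550 nm: sample signal 812, white-tile signal 950, tile reflectance 0.92
print(sample_reflectance(812, 950, 0.92))   # approximately 0.79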

The optical geometry of the instrument is important. In some instruments an integrating sphere is used that enables the sample to be illuminated diffusely (from all angles equally) and the reflected light to be collected at an angle roughly perpendicular to the surface of the sample. Alternatively, other instruments illuminate the sample at a certain angle and collect light at another angle. For example, typically the sample may be illuminated at 45 degrees to the surface and the reflected light measured at 0 degrees – this is known as 45/0 geometry. The converse of this is 0/45. The sphere-based geometries are known as D/0 and 0/D. It is extremely difficult, if not impossible, to correlate measurements made on instruments whose optical geometries are not identical.

The four CIE standard geometries are:
* diffuse illumination and light collection at the normal, D/0;
* normal illumination and diffuse light collection, 0/D;
* illumination at 45 degrees and light collection at the normal, 45/0;
* normal illumination and light collection at 45 degrees, 0/45.

Colorimeters measure tristimulus values and operate using three broad-band filters. Consequently, colorimeters cannot provide spectral reflectance data but historically they have been preferred to spectrophotometers because of their low cost of manufacture and portability.

When light strikes a surface some of the light penetrates where it can then be absorbed, scattered, or even transmitted if the layer is sufficiently thin. Nevertheless, because of the change in refractive index between air and most substances, a certain proportion of the incident light is reflected directly from the surface. The angular distribution of this light depends upon the nature of the surface but light that is reflected at the opposite angle to the incident light is called specular reflectance. Light that is reflected by the substance itself is called body reflectance.

Sphere-based spectrophotometers often incorporate a so-called gloss trap which allows the specular component of the reflected light to be either included or excluded.

The terms light source and illuminant have precise and different meanings. A light source is a physical emitter of radiation such as a candle, a tungsten bulb, and natural daylight. An illuminant is the specification for a potential light source. All light sources can be specified as an illuminant, but not all illuminants can be physically realized as a light source.

Illuminants are normally specified in terms of relative energy tabulated for each wavelength or wavelength band. There are several illuminants that are widely used by the color industry and these include A, C, D65, and TL84. Illuminants A and C were defined by the CIE in 1931 to represent tungsten light and natural daylight respectively. Illuminant C was found to be a poor representation of daylight in that it contains insufficient energy at the shorter wavelengths, and it has generally been replaced by a class of illuminants known as the D illuminants.

The D class of illuminants specify relative energy distributions that closely correspond to the radiation emitted by a so-called black body. As the temperature of a black body is increased there is a shift in the emitted radiation towards shorter wavelengths. A specific D illuminant is therefore notated with reference to the temperature (in Kelvin) of the black body which it most closely matches. For example, the illuminant D65 has a spectral energy distribution that closely matches that of a black body at 6500 K. Illuminant D65 also closely resembles the relative spectral energy distribution of north-sky daylight and is accordingly important for color specification in northern Europe. Other D illuminants, notably D55, are important in other parts of the world.

There are a number of illuminants that specify light sources used in specific industries and sometimes by specific companies. An example of this is illuminant TL84.

The 1931 standard observer data were derived from color-matching experiments with an arrangement that meant that the stimuli activated an area of the retina subtending 2 degrees of visual angle. The distribution of rods and cones is not uniform over the surface of the retina and this implies that the tristimulus values obtained from the 1931 data are strictly only valid for observations made under 2 degree viewing conditions. This is equivalent to viewing a small coin held at arm's length and does not correspond particularly well with the viewing conditions often used in the coloration industry.

Because the 1931 2 degree observer is not really appropriate for large-field visual color judgements the CIE defined a second set of observer functions in 1964 known as the supplementary observer data based upon color-matching experiments with a field of 10 degrees. Since the 2 degree data are still in use the 10 degree data are often differentiated from the original 1931 data by the use of subscripts.

There is often a need for an intuitive interpretation of color specification in terms of tristimulus values. This is one reason why the three-dimensional color space defined by X, Y, and Z is often transformed and plotted in terms of a chromaticity diagram. Chromaticity coordinates x, y, and z are derived by calculating the fractional components of the tristimulus values thus:

x = X/(X + Y + Z),
y = Y/(X + Y + Z),
z = Z/(X + Y + Z).

Since by definition x + y + z = 1, if two of the chromaticity coordinates are known then the third is redundant. Thus, all possible sets of tristimulus values can be represented in a two-dimensional plot of two of these chromaticity coordinates and by convention x and y are always used. A plot of this type is referred to as a chromaticity diagram. The use of chromaticity diagrams enables three-dimensional data to be compressed into two-dimensional data but at a cost. Consider two samples A and B having specification

Sample A: X = 10, Y = 20, Z = 30
Sample B: X = 20, Y = 40, Z = 60

Samples A and B have identical chromaticity coordinates but different tristimulus values. The difference between the two samples is one of luminance and B would probably appear brighter than A if the two samples were viewed together. A complete specification using chromaticity coordinates therefore requires two chromaticity coordinates and one of the tristimulus values.
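A short sketch, using samples A and B above (the function name is illustrative):

def chromaticity(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s   # x and y; z = 1 - x - y is redundant

print(chromaticity(10, 20, 30))   # sample A: approximately (0.167, 0.333)
print(chromaticity(20, 40, 60))   # sample B: the same chromaticity, but higher luminance (Y)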

There are perhaps two problems with the specification of colors in terms of tristimulus values and chromaticity space. Firstly, this specification is not easily interpreted in terms of the psychophysical dimensions of color perception namely, brightness, hue, and colorfulness. Secondly, the XYZ system and the associated chromaticity diagrams are not perceptually uniform. The second of these points is a problem if we wish to estimate the magnitude of the difference between two color stimuli. The need for a uniform color space led to a number of non-linear transformations of the CIE 1931 XYZ space and finally resulted in the specification of one of these transformations as the CIE 1976 (L* a* b*) color space.
In fact in 1976 the CIE specified two color spaces; one of these was intended for use with self-luminous colors and the other was intended for use with surface colors. These notes are principally concerned with the latter known as CIE 1976 (L* a* b*) color space or CIELAB.

CIELAB allows the specification of color perceptions in terms of a three-dimensional space. The L* axis is known as the lightness and extends from 0 (black) to 100 (white). The other two coordinates a* and b* represent redness-greenness and yellowness-blueness respectively. Samples for which a* = b* = 0 are achromatic and thus the L* axis represents the achromatic scale of greys from black to white.

The quantities L*, a*, and b* are obtained from the tristimulus values according to the following transformations:

L* = 116 (Y/Yn)^(1/3) - 16,
a* = 500 [(X/Xn)^(1/3) - (Y/Yn)^(1/3)],
b* = 200 [(Y/Yn)^(1/3) - (Z/Zn)^(1/3)],

   where Xn, Yn, and Zn are the values of X, Y, and Z for the illuminant that was used for the calculation of X, Y, and Z of the sample, and the quotients X/Xn, Y/Yn, and Z/Zn are all greater than 0.008856. Note: When any of the quotients are less than or equal to 0.008856 a slightly different set of equations is used.
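A minimal sketch of the full transformation, including the alternative expression used when a quotient falls at or below 0.008856 (the linear-segment constants below are the commonly quoted values; names are illustrative):

def xyz_to_lab(X, Y, Z, Xn, Yn, Zn):
    # Xn, Yn, Zn are the tristimulus values of the illuminant (the reference white).
    def f(t):
        # Cube root above the 0.008856 threshold, linear segment below it.
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    L = 116.0 * fy - 16.0
    a = 500.0 * (fx - fy)
    b = 200.0 * (fy - fz)
    return L, a, b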

It is often convenient to consider a single slice through color space at constant L*. Although it is possible to represent a point in the a*-b* plane by its cartesian coordinates a* and b*, it is often better to specify the polar coordinates C* and H*, where the chroma C* = {(a*)^2 + (b*)^2}^(1/2) and the hue angle H* is the angle between the positive a* axis and the line joining the origin to the point (a*, b*).

It is dangerous to attempt to interpret the qualitative color difference between two samples using the a* b* representation. For example, even though the a* axis is the red axis, a sample with a large a* value would not necessarily appear redder than a sample with a smaller a* value. Hue is not uniquely defined by either a* or b*. The use of C* and H* leads to a more intuitive representation of color.
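A sketch of the conversion from the cartesian coordinates a* and b* to the polar chroma and hue angle (the hue angle is returned here in degrees; names are illustrative):

import math

def ab_to_polar(a, b):
    C = math.hypot(a, b)                         # chroma C* = {(a*)^2 + (b*)^2}^(1/2)
    h = math.degrees(math.atan2(b, a)) % 360.0   # hue angle, 0-360 degrees from the +a* axis
    return C, h

print(ab_to_polar(20.0, 20.0))   # C* approximately 28.3, hue angle 45 degrees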

Color Difference Evaluation

CIE 1976 (L* a* b*) color space provides a three-dimensional representation for the perception of color stimuli. If two points in space, representing two stimuli, are coincident then the color difference between the two stimuli is zero. As the distance in space between two points increases it is reasonable to assume that the perceived color difference between the stimuli that the two points represent increases accordingly. One measure of the difference in color between two stimuli is therefore the Euclidean distance ΔE* between the two points in the three-dimensional space.
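Written out, this distance is ΔE* = {(ΔL*)^2 + (Δa*)^2 + (Δb*)^2}^(1/2). A minimal sketch (the sample values are illustrative):

import math

def delta_e_lab(lab1, lab2):
    # Euclidean distance between two (L*, a*, b*) points.
    return math.dist(lab1, lab2)

print(delta_e_lab((50.0, 10.0, 10.0), (52.0, 12.0, 9.0)))   # 3.0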

Unfortunately several evaluations of CIELAB have shown that ΔE* is not a particularly good measure of the magnitude of the perceptual color difference between two stimuli. The relatively poor ability of ΔE* to predict the magnitude of perceptual color differences has led to more complicated ways of computing a color difference from the CIELAB coordinates of two samples and some of these measures have been shown to be more reliable than ΔE*.

The L* C* H* representation is useful if qualitative color differences are required. Differences can be calculated thus:

ΔL* = L*btx - L*std,
ΔC* = C*btx - C*std,
ΔH* = {(Δa*)^2 + (Δb*)^2 - (ΔC*)^2}^(1/2),

   where the subscripts std and btx refer to standard and batch respectively.

If ΔL* is positive the batch is lighter than the standard, but if ΔL* is negative the batch is darker than the standard.

If ΔC* is positive the batch is stronger than the standard, but if ΔC* is negative the batch is weaker than the standard.

The hue descriptor is more difficult to determine: the direction of the hue change from the standard to the batch gives two hue descriptors (e.g. redder/yellower), which are derived from the first two axes that are crossed in the a*-b* plane of color space when moving from the standard to the batch in the direction of hue.
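These calculations, together with the lightness and strength descriptors, can be sketched as follows (the hue-descriptor logic is omitted for brevity; values and names are illustrative):

import math

def batch_vs_standard(std, btx):
    # std and btx are (L*, a*, b*) triples for the standard and the batch.
    dL = btx[0] - std[0]
    da, db = btx[1] - std[1], btx[2] - std[2]
    dC = math.hypot(btx[1], btx[2]) - math.hypot(std[1], std[2])
    dH = math.sqrt(max(da ** 2 + db ** 2 - dC ** 2, 0.0))
    dE = math.sqrt(dL ** 2 + da ** 2 + db ** 2)
    lightness = "lighter" if dL > 0 else "darker"
    strength = "stronger" if dC > 0 else "weaker"
    return dL, dC, dH, dE, lightness, strength

print(batch_vs_standard((60.0, 30.0, 20.0), (62.0, 28.0, 23.0)))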

The term ΔE is derived from the German word for sensation Empfindung. ΔE therefore literally means difference in sensation. The superscript asterisk is sometimes used to denote a CIELAB difference thus, ΔE*.

It is established that the CIELAB color difference equation is inadequate for many purposes – equal sizes of ΔE* correspond to different perceptual differences in color. There is strong evidence to show that most of the modern optimized equations (such as CMC, M&S, BFD, and CIE 94) are more uniform than CIELAB. It is not clear, however, whether any one of these new equations is significantly better than the others. The CMC equation is a British Standard (BS 6923) and is being considered as an ISO standard.

The CMC color difference formula allows the calculation of tolerance ellipsoids around the target standard, where the dimensions of the ellipsoid are a function of the position of the target in color space. The formula includes two user-definable coefficients l and c and is thus normally specified as CMC(l:c). The values of l and c modify the relative importance that is given to differences in lightness and chroma respectively. The CMC(2:1) version of the formula has been shown to be useful for estimating the acceptability of color differences.

The CMC(2:1) equation is a British Standard (BS 6923) for the assessment of small color differences and is currently being considered as an ISO standard.

A refinement of the CMC formula led to the introduction of the BFD formula. Recent research suggests that the BFD equation performs marginally better than the CMC equation.

A simplification of the CMC(l:c) equation has recently been considered by the CIE. It is too early to state whether this new equation, sometimes referred to as the CIE 94 color difference equation, is significantly better than its predecessor.

In the 1980s Marks & Spencer, in conjunction with Instrumental Color Systems, developed their own in-house equations that are used in the textile industry. Research shows that there is little to choose between the CMC and M&S equations in terms of overall performance. The fact that the M&S equations have never been published has restricted their use.

The pass/fail limit depends upon the equation that is used, but more importantly it also depends upon the application. The correct pass/fail value can only be determined from experience – pragmatically, the correct pass/fail limit is the one for which all pairs of samples with a color difference smaller than the limit will be accepted by the customer.

Miscellaneous Topics

Color constancy is the phenomenon whereby most colored surfaces appear to retain their approximate daylight appearance even when viewed under light sources that differ markedly from daylight. Color constancy is surprising since the spectral distribution of the light entering the eye from a surface can vary markedly from one light source to another. The phenomenon of color constancy is only approximate, however, and surfaces do not retain their daylight colors when viewed under certain fluorescent light sources or under monochromatic radiation. Certain surfaces appear to change markedly from one light source to another and such surfaces are said to lack color constancy; this phenomenon must not be confused with metamerism, which is a phenomenon associated with at least two samples.

Metamerism refers to the situation where two color samples appear to match under one condition but not under another; the match is said to be conditional. Metamerism is usually discussed in terms of two illuminants (illuminant metamerism) whereby two samples may match under one illuminant but not under another. Other types of metamerism include geometrical metamerism and observer metamerism. Two samples that conditionally match are said to be a metameric pair. If two samples have identical reflectance spectra then they cannot be metameric – they are an unconditional match.

Whiteness is a complex perceptual phenomenon that depends not only on the luminance of a sample but also on the chromaticity. To promote uniformity of practice in the evaluation of whiteness the CIE has recommended that the formula for whiteness, W or W10, be used for comparisons of the whiteness of samples evaluated for CIE Standard Illuminant D65:

W = Y + 800(xn – x) + 1700(yn – y),
or
W10 = Y10 + 800(xn,10 – x10) + 1700(yn,10 – y10),

   where xn and yn refer to the chromaticities of the illuminant (D65), and the subscript 10 distinguishes the 10 degree observer data from the 2 degree data.

The higher the value of W or W10 the greater the whiteness: the formulas, however, are only valid for samples that would commercially be considered white and within certain other constraints.
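A minimal sketch of the 2 degree formula (the chromaticity coordinates of D65 are assumed here to be xn = 0.3127 and yn = 0.3290; other names and values are illustrative):

def cie_whiteness(Y, x, y, xn=0.3127, yn=0.3290):
    # CIE whiteness under illuminant D65; xn, yn are the chromaticity
    # coordinates of the illuminant.
    return Y + 800.0 * (xn - x) + 1700.0 * (yn - y)

# A near-white sample that is slightly bluer than D65 scores above its Y value
print(cie_whiteness(90.0, 0.305, 0.315))   # approximately 120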

If the measurement of whiteness is important, and if samples may be fluorescent, then it is very important that the light source in the spectrophotometer be a close approximation to the D65 illuminant.

The preferential absorption of light in the short wavelength region (380-440nm) by a nominally white substance usually causes an appearance of yellowness. A number of yellowness scales have been developed over the years.

It must be realized that a spectrophotometer (or a colorimeter) spatially averages the light reflected by a sample: thus it is possible that a uniform grey sample and a black-and-white checkerboard sample could give rise to identical measurements. The CIE system is strictly limited to the measurement of uniform color stimuli.

Digital color camera devices are beginning to be used for color measurement – especially for textured samples. They measure color at many thousands of spatial locations on the sample but currently provide relatively poor color resolution and precision.

There is an increasing need to be able to communicate color from one device (such as a VDU screen) to another (such as a color printer) with no loss in color fidelity. One way to achieve this is if all devices are calibrated in terms of a device-independent color space. The CIE color specification is being adopted by industry as a device-independent space.