Color appearance model

A '''uniform color space''' ('''UCS''') is a color model that seeks to make the color-making attributes perceptually uniform, i.e. identical spatial distances between two colors correspond to identical amounts of perceived color difference. A CAM under a fixed viewing condition results in a UCS; a UCS extended with a model of variable viewing conditions results in a CAM. A UCS without such modelling can still be used as a rudimentary CAM.
 
==Background==
 
===Color appearance===
{{further|Color perception}}
 
[[Color]] originates in the mind of the observer; “objectively”, there is only the [[spectral power distribution]] of the light that meets the eye. In this sense, ''any'' color perception is subjective. However, successful attempts have been made to map the spectral power distribution of light to human sensory response in a quantifiable way. In 1931, using [[Psychophysics|psychophysical]] measurements, the [[International Commission on Illumination|International Commission on Illumination (CIE)]] created the [[CIE 1931 color space|XYZ color space]]<ref>“XYZ” refers to a color ''model'' and a color ''space'' at the same time, because the XYZ color space is the only color space that uses the XYZ color model. This differs from e.g. the RGB color model, which many color spaces (such as [[sRGB]] or [[Adobe RGB color space|Adobe RGB (1998)]]) use.</ref> which successfully models human color vision on this basic sensory level.
 
However, the XYZ color model presupposes specific viewing conditions (such as the retinal locus of stimulation, the luminance level of the light that meets the eye, the background behind the observed object, and the luminance level of the surrounding light). Only if all these conditions stay constant will two identical stimuli with thereby identical XYZ [[CIE 1931 color space#Tristimulus values|tristimulus]] values create an identical '''color appearance''' for a human observer. If some conditions change in one case, two identical stimuli with thereby identical XYZ tristimulus values will create {{em|different}} color ''appearances'' (and vice versa: two different stimuli with thereby different XYZ tristimulus values might create an {{em|identical}} color ''appearance'').
 
Therefore, if viewing conditions vary, the XYZ color model is not sufficient, and a color appearance model is required to model human color perception.
 
===Color appearance parameters===
The basic challenge for any color appearance model is that human color perception does not work in terms of XYZ tristimulus values, but in terms of '''appearance parameters''' ([[hue]], [[lightness]], [[brightness]], [[colorfulness|chroma, colorfulness and saturation]]). So any color appearance model needs to provide transformations (which factor in viewing conditions) from the XYZ tristimulus values to these appearance parameters (at least hue, lightness and chroma).
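
As a rough sketch of this structure (the function and parameter names below are hypothetical and not taken from any standardized model), a color appearance model can be thought of as a function from tristimulus values and viewing-condition data to appearance correlates:

<syntaxhighlight lang="python">
# Hypothetical sketch of the interface a color appearance model provides;
# the names are illustrative only, not from CIECAM02, CAM16 or any standard.

def cam_forward(xyz, white_xyz, adapting_luminance_cd_m2,
                background_luminance_factor, surround="average"):
    """Map an XYZ stimulus plus viewing conditions to appearance correlates.

    Returns at least lightness (J), chroma (C) and hue angle (h);
    a concrete model such as CIECAM02 or CAM16 would supply the body.
    """
    raise NotImplementedError("substitute a concrete model, e.g. CIECAM02 or CAM16")
</syntaxhighlight>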
 
===Color appearance phenomena===
This section describes some of the color appearance phenomena that color appearance models try to deal with.
 
====Chromatic adaptation====
{{Main|Chromatic adaptation}}
[[Chromatic adaptation]] describes the ability of human color perception to abstract from the [[white point]] (or [[color temperature]]) of the illuminating light source when observing a reflective object. For the human eye, a piece of white paper looks white no matter whether the illumination is blueish or yellowish. This is the most basic and most important of all color appearance phenomena, and therefore a '''chromatic adaptation transform''' ('''CAT''') that tries to emulate this behavior is a central component of any color appearance model.
 
This allows for an easy distinction between simple tristimulus-based color models and color appearance models. A simple tristimulus-based color model ignores the white point of the illuminant when it describes the surface color of an illuminated object; if the white point of the illuminant changes, so does the color of the surface as reported by the simple tristimulus-based color model. In contrast, a color appearance model takes the white point of the illuminant into account (which is why a color appearance model requires this value for its calculations); if the white point of the illuminant changes, the color of the surface as reported by the color appearance model remains the same.

Chromatic adaptation is a prime example of a case in which two different stimuli with thereby different XYZ tristimulus values create an ''identical'' color ''appearance''. If the color temperature of the illuminating light source changes, so do the spectral power distribution and thereby the XYZ tristimulus values of the light reflected from the white paper; the color ''appearance'', however, stays the same (white).
 
====Hue appearance====
Several effects change the perception of hue by a human observer:
 
* '''[[Bezold–Brücke shift|Bezold–Brücke hue shift]]:''' The hue of a stimulus of constant chromaticity changes with [[luminance]].
* '''[[Abney effect]]:''' The hue of monochromatic light changes with the addition of white light (which would be expected to be color-neutral).
 
====Contrast appearance====
[[File:Bartleson-Breneman effect.png|thumb|200px|Bartleson–Breneman effect]]
Several effects change the perception of [[Contrast (vision)|contrast]] by a human observer:
* '''Bartleson–Breneman effect:''' Image contrast (of emissive images such as images on an LCD display) increases with the luminance of surround lighting.
 
====Colorfulness appearance====
{{further|Colorfulness#In color appearance models}}
 
There is an effect which changes the perception of colorfulness by a human observer:
 
* '''[[Hunt effect (color)|Hunt effect]]:''' Colorfulness increases with luminance.
 
====Brightness appearance====
Several effects change the perception of brightness by a human observer:
 
* '''[[Helmholtz–Kohlrausch effect]]:''' Brightness increases with saturation. Not modeled by CIECAM02.
* Contrast appearance effects (see above), modeled by CIECAM02.
 
====Spatial phenomena====
Spatial phenomena only affect colors at a specific ___location of an image, because the human brain interprets this ___location in a specific contextual way (e.g. as a shadow instead of gray color). These phenomena are also known as [[optical illusion#Color and brightness constancies|optical illusions]]. Because of their contextuality, they are especially hard to model; color appearance models that try to do this are referred to as [[ICAM (Color Appearance Model)|image color appearance models (iCAM)]].
 
==Color appearance models==
 
===CIELAB===
{{Main|CIELAB color space}}
In 1976, the [[International Commission on Illumination|CIE]] set out to replace the many existing, incompatible color difference models by a new, universal model for color difference. They tried to achieve this goal by creating a ''perceptually uniform'' color space (UCS), i.e. a color space where identical spatial distance between two colors equals identical amount of perceived color difference. Though they succeeded only partially, they thereby created the [[CIELAB color space|CIELAB (“L*a*b*”)]] color space, which had all the necessary features to become the first color appearance model. While CIELAB is a very rudimentary color appearance model, it is one of the most widely used because it has become one of the building blocks of [[color management]] with [[ICC profile]]s. Therefore, it is basically omnipresent in digital imaging.
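
In such a space the perceived difference between two colors can be estimated simply as the Euclidean distance between their coordinates, which for CIELAB gives the CIE 1976 color difference:

<math display="block">\Delta E^*_{ab} = \sqrt{(L^*_2 - L^*_1)^2 + (a^*_2 - a^*_1)^2 + (b^*_2 - b^*_1)^2}</math>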
 
One of the limitations of CIELAB is that it does not offer a full-fledged chromatic adaptation, in that it performs the [[von Kries transform]] directly in the XYZ color space (often referred to as the “wrong von Kries transform”), instead of first converting to the [[LMS color space]] for more precise results. ICC profiles circumvent this shortcoming by using the [[LMS color space#CIECAM97s, LLAB|Bradford transformation matrix]] to the LMS color space (which had first appeared in the [[#LLAB|LLAB color appearance model]]) in conjunction with CIELAB.
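
A minimal sketch of the difference, assuming NumPy and the linearized Bradford matrix used with ICC profiles (the original Bradford transform's non-linear blue correction is omitted, as it is in ICC color management): the “wrong” von Kries transform scales X, Y and Z directly, whereas the Bradford-style transform scales in a sharpened cone-like space and maps back.

<syntaxhighlight lang="python">
import numpy as np

# Linearized Bradford matrix (XYZ -> sharpened cone-like responses),
# as used in ICC color management.
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def wrong_von_kries(xyz, white_src, white_dst):
    """'Wrong von Kries': scale the XYZ components directly by the ratio of
    the destination and source white points (what CIELAB effectively does)."""
    xyz, white_src, white_dst = map(np.asarray, (xyz, white_src, white_dst))
    return xyz * (white_dst / white_src)

def bradford_adapt(xyz, white_src, white_dst):
    """Von Kries scaling performed in the Bradford space instead of XYZ,
    i.e. the chromatic adaptation transform used with ICC profiles."""
    xyz, white_src, white_dst = map(np.asarray, (xyz, white_src, white_dst))
    lms = BRADFORD @ xyz
    gain = (BRADFORD @ white_dst) / (BRADFORD @ white_src)
    return np.linalg.inv(BRADFORD) @ (gain * lms)

# Example: adapt a stimulus from a D65 white to a D50 white (XYZ with Y = 1).
d65 = np.array([0.95047, 1.00000, 1.08883])
d50 = np.array([0.96422, 1.00000, 0.82521])
stimulus = np.array([0.30, 0.40, 0.50])
print(wrong_von_kries(stimulus, d65, d50))
print(bradford_adapt(stimulus, d65, d50))
</syntaxhighlight>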
 
Due to the "wrong" transform, CIELAB is known to perform poorly when a non-reference illuminantwhite XYZ valuepoint is used, making it a poor CAM even for its limited inputs. The wrong transform also seems responsible for its irregular blue hue, which bends towards purple as L changes, making it also a non-perfect UCS.
 
===Nayatani et al. model===

===RLAB===
RLAB tries to improve upon the significant limitations of [[#CIELAB|CIELAB]] with a focus on image reproduction. It performs well for this task and is simple to use, but not comprehensive enough for other applications.
 
Unlike CIELAB, RLAB uses a proper von Kries step. It also allows the degree of adaptation to be tuned through a customizable ''D'' value; "discounting-the-illuminant" behavior can still be obtained by fixing ''D'' at 1.0.<ref>{{cite book | doi=10.1002/9781118653128.ch13 | chapter=The RLAB Model | title=Color Appearance Models | date=2013 | pages=243–255 | isbn=9781119967033 }}</ref>
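
A generic illustration of how such a degree-of-adaptation parameter is typically used, as a blend between no adaptation (''D'' = 0) and complete adaptation to the new white point (''D'' = 1); this is a simplified von Kries-style sketch, not RLAB's exact formulation.

<syntaxhighlight lang="python">
import numpy as np

def partial_von_kries_gain(white_src, white_dst, degree=1.0):
    """Per-channel gains for a von Kries-style adaptation with a degree of
    adaptation D in [0, 1]. D = 1 fully adapts to the destination white
    ("discounting the illuminant"); D = 0 leaves the stimulus unchanged.
    Generic sketch only; RLAB defines its own exact formulation."""
    white_src = np.asarray(white_src, dtype=float)
    white_dst = np.asarray(white_dst, dtype=float)
    full_gain = white_dst / white_src
    return degree * full_gain + (1.0 - degree)

# D = 1.0 reproduces the complete-adaptation case mentioned above.
print(partial_von_kries_gain([0.95047, 1.0, 1.08883], [0.96422, 1.0, 0.82521], degree=1.0))
print(partial_von_kries_gain([0.95047, 1.0, 1.08883], [0.96422, 1.0, 0.82521], degree=0.5))
</syntaxhighlight>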
 
===LLAB===
 
===CIECAM97s===
After starting the evolution of color appearance models with [[#CIELAB|CIELAB]], the CIE followed up in 1997 with a comprehensive color appearance model. The result was CIECAM97s, which was comprehensive, but also complex and partly difficult to use. It gained widespread acceptance as a standard color appearance model until [[#CIECAM02|CIECAM02]] was published.
 
===IPT===
{{Main|ICtCp#In IPT}}
Ebner and Fairchild addressed the issue of non-constant lines of hue in their color space dubbed ''IPT''.<ref>
{{Citation
| series = Proc. IS&T 6th Color Imaging Conference
| place = Scottsdale, AZ
| pages = 8–13
| year = 1998
}}
| first = Christopher
| title = US Patent 8,437,053, Gamut mapping using hue-preserving color space
| url = https://patents.google.com/patent/US8437053
| access-date = 9 February 2016
}}
The IPT color appearance model excels at providing a formulation for hue where a constant hue value equals a constant perceived hue independent of the values of lightness and chroma (which is the general ideal for any color appearance model, but hard to achieve). It is therefore well-suited for [[Color management#Gamut mapping|gamut mapping]] implementations.
 
===ICtCp===
{{Main|ICtCp}}
ITU-R BT.2100 includes a color space called ''[[ICtCp]]'', which improves on the original IPT by targeting higher dynamic range and larger colour gamuts.<ref>
| year = 2016
}}
</ref> ICtCp can be transformed into an approximately uniform color space by scaling Ct by 0.5. This transformed color space is the basis of the Rec. 2124 wide gamut color difference metric ΔE<sub>ITP</sub>.<ref>{{cite web |title=Recommendation ITU-R BT.2124-0 Objective metric for the assessment of the potential visibility of colour differences in television |url=https://www.itu.int/dms_pubrec/itu-r/rec/bt/R-REC-BT.2124-0-201901-I!!PDF-E.pdf |date=January 2019}}</ref>
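
A minimal sketch of the resulting calculation, assuming two colors already expressed as ICtCp triples and using the scaling constant of 720 specified in BT.2124:

<syntaxhighlight lang="python">
import math

def delta_e_itp(ictcp_1, ictcp_2):
    """Colour difference between two ICtCp triples per the ITP metric:
    I stays as-is, T is Ct scaled by 0.5, P is Cp, and the Euclidean
    distance is multiplied by the scale factor from Rec. BT.2124."""
    i1, ct1, cp1 = ictcp_1
    i2, ct2, cp2 = ictcp_2
    d_i = i1 - i2
    d_t = 0.5 * (ct1 - ct2)
    d_p = cp1 - cp2
    return 720.0 * math.sqrt(d_i * d_i + d_t * d_t + d_p * d_p)

# Example usage with two arbitrary ICtCp values.
print(delta_e_itp((0.50, 0.01, 0.02), (0.52, 0.00, 0.03)))
</syntaxhighlight>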
 
===CIECAM02===
{{Main|CIECAM02}}
After the success of [[#CIECAM97s|CIECAM97s]], the CIE developed [[CIECAM02]] as its successor and published it in 2002. It performs better and is simpler at the same time. Apart from the rudimentary [[#CIELAB|CIELAB]] model, CIECAM02 comes closest to an internationally agreed upon “standard” for a (comprehensive) color appearance model.
 
Both CIECAM02 and CIECAM16 have some undesirable numerical properties when implemented to the letter of the specification.<ref>{{cite book |last1=Schlömer |first1=Nico |title=Algorithmic improvements for the CIECAM02 and CAM16 color appearance models |year=2018 |arxiv=1802.06067}}</ref>
 
===iCAM06===
 
===CAM16===
CAM16 is a successor of CIECAM02 with various fixes and improvements. It also comes with a color space called CAM16-UCS. It is published by a CIE workgroup, but is not itself a CIE standard.<ref>{{cite journal |last1=Li |first1=Changjun |last2=Li |first2=Zhiqiang |last3=Wang |first3=Zhifeng |last4=Xu |first4=Yang |last5=Luo |first5=Ming Ronnier |last6=Cui |first6=Guihua |last7=Melgosa |first7=Manuel |last8=Brill |first8=Michael H. |last9=Pointer |first9=Michael |title=Comprehensive color solutions: CAM16, CAT16, and CAM16-UCS |journal=Color Research & Application |date=December 2017 |volume=42 |issue=6 |pages=703–718 |doi=10.1002/col.22131}}</ref> The CIECAM16 standard was released in 2022 and is slightly different.<ref>{{Cite web |title=The CIE 2016 Colour Appearance Model for Colour Management Systems: CIECAM16 {{!}} CIE |url=https://cie.co.at/publications/cie-2016-colour-appearance-model-colour-management-systems-ciecam16 |access-date=2022-09-16 |website=cie.co.at |language=en}}</ref><ref>{{Cite web |title=PR: Implement support for "CIECAM16" colour appearance model. by KelSolaar · Pull Request #1015 · colour-science/colour |url=https://github.com/colour-science/colour/pull/1015 |access-date=2022-09-16 |website=GitHub |language=en}}</ref>
 
CAM16 is used in the [[Material Design]] color system in a cylindrical version called "HCT" (hue, chroma, tone). The hue and chroma values are identical to those of CAM16; the "tone" value is CIELAB L*.<ref>{{cite web |last1=O'Leary |first1=James |title=The Science of Color & Design |url=https://material.io/blog/science-of-color-design |website=Material Design |language=en}} [https://github.com/material-foundation/material-color-utilities source code]</ref>
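
As an illustration, the tone channel can be computed from the CIE relative luminance ''Y'' alone (with the white point normalized to ''Y'' = 1), since it is simply the standard CIELAB lightness function; the hue and chroma channels require a full CAM16 implementation such as the utilities linked in the reference above.

<syntaxhighlight lang="python">
def hct_tone(y, y_n=1.0):
    """CIELAB L* (the HCT "tone" channel) from relative luminance y,
    using the standard CIE 1976 lightness function."""
    t = y / y_n
    delta = 6.0 / 29.0
    f = t ** (1.0 / 3.0) if t > delta ** 3 else t / (3.0 * delta ** 2) + 4.0 / 29.0
    return 116.0 * f - 16.0

print(hct_tone(0.184))   # mid grey (about 18% reflectance) is roughly L* = 50
</syntaxhighlight>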
 
===Oklab===
{{Main|Oklab color space}}
Oklab is a 2020 UCS designed for normal dynamic range color. It has the same structure as CIELAB, but is fitted with improved data (CAM16 output for lightness and chroma; IPT data for hue). It is meant to be easy to implement and use (especially from sRGB), just like CIELAB and IPT were, but with improvements to uniformity.<ref>{{cite web |last1=Ottosson |first1=Björn |title=A perceptual color space for image processing |date=23 December 2020 |url=https://bottosson.github.io/posts/oklab/ |language=en}}</ref>
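
A minimal sketch of that structure (matrix, cube root, matrix), with the coefficients reproduced from the cited post; the input is linear-light sRGB, so gamma-encoded sRGB values must be linearized first.

<syntaxhighlight lang="python">
import numpy as np

# Coefficients as given in the cited post by Ottosson.
M1 = np.array([  # linear sRGB -> approximate cone responses (LMS)
    [0.4122214708, 0.5363325363, 0.0514459929],
    [0.2119034982, 0.6806995451, 0.1073969566],
    [0.0883024619, 0.2817188376, 0.6299787005],
])
M2 = np.array([  # non-linear LMS -> L, a, b
    [0.2104542553,  0.7936177850, -0.0040720468],
    [1.9779984951, -2.4285922050,  0.4505937099],
    [0.0259040371,  0.7827717662, -0.8086757660],
])

def linear_srgb_to_oklab(rgb):
    """Convert a linear-light sRGB triple to Oklab: matrix, cube root, matrix."""
    lms = M1 @ np.asarray(rgb, dtype=float)
    return M2 @ np.cbrt(lms)

print(linear_srgb_to_oklab([1.0, 1.0, 1.0]))  # white maps to approximately (1, 0, 0)
</syntaxhighlight>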
 
As of September 2023, it is part of the [[CSS color]] level 4 draft<ref>{{cite web |title=CSS Color Module Level 4 |url=https://www.w3.org/TR/css-color-4/#resolving-oklab-oklch-values |website=www.w3.org}}</ref> and it is supported by recent versions of all major browsers.<ref>{{cite web |title=oklab() (Oklab color model) |url=https://caniuse.com/mdn-css_types_color_oklab |website=Can I use... |access-date=27 September 2023}}</ref>
 
===Other models===
;[[OSA-UCS]]
: A 1947 UCS with generally good properties and a conversion from CIEXYZ defined in 1974. The conversion to CIEXYZ, however, has no closed-form expression, making it hard to use in practice.
;SRLAB2
:A 2009 modification of the CIELAB UCS in the spirit of RLAB (but with discounting-the-illuminant). Uses the CIECAM02 chromatic adaptation matrix to fix the blue hue issue.<ref name=Levien>{{cite web |title=An interactive review of Oklab |url=https://raphlinus.github.io/color/2021/01/18/oklab-critique.html |first1=Raph |last1=Levien |language=en |date=18 January 2021}}</ref>
;{{vanchor|JzAzBz}}
:A 2017 UCS designed for HDR color. Has J (lightness) and two chromaticities.<ref>{{cite journal |last1=Safdar |first1=Muhammad |last2=Cui |first2=Guihua |last3=Kim |first3=Youn Jin |last4=Luo |first4=Ming Ronnier |title=Perceptually uniform color space for image signals including high dynamic range and wide gamut |journal=Optics Express |date=26 June 2017 |volume=25 |issue=13 |pages=15131–15151 |doi=10.1364/OE.25.015131|pmid=28788944 |bibcode=2017OExpr..2515131S |doi-access=free }}</ref>
;XYB
:A family of UCS used in [[Guetzli]] and [[JPEG XL]], designed primarily for image compression. Better uniformity than CIELAB.<ref name=Levien/>
 
==Notes==
[[Category:Visual perception]]
[[Category:Cognitive modeling]]
[[Category:Color appearance phenomena| ]]
[[Category:Color space]]