{{Talk header}}
{{WikiProject banner shell|class=C|collapsed=y|
{{WikiProject Computing|importance=Low}}
{{WikiProject Film |Filmmaking=yes}}
{{WikiProject Television |importance=Low}}
{{WikiProject Video games |class=C |importance=Low}}
{{WikiProject Technology }}
{{WikiProject Computer graphics|importance=Mid}}
}}
{{User:MiszaBot/config
|archiveheader = {{Talk archive}}
|algo = old(365d)
|maxarchivesize = 125K
|minthreadsleft = 5
|minthreadstoarchive = 1
|counter = 1
|archive = Talk:High-dynamic-range rendering/Archive %(counter)d
}}
==Eye iris adaptation (size changing) is a rudiment==
::A weakness of this algorithm is that it maps, for example, RGB(255:255:255) to RGB(121:121:121) while RGB(255:0:0) stays RGB(255:0:0). Likewise RGB(128:0:0) becomes RGB(205:0:0), but RGB(128:128:128) becomes RGB(97:97:97). Another example: RGB(128:128:0) becomes RGB(129:129:0), RGB(255:255:0) becomes RGB(159:159:0), RGB(64:64:64) becomes RGB(78:78:78), RGB(64:0:0) becomes RGB(168:0:0), and RGB(64:64:0) becomes RGB(104:104:0). The good news is that we can multiply by about 1.5, so a single channel stays roughly the same and for two channels the effect is very positive: 159*1.5=238.5. Hence another step:
:7) 1.5*(sample.r / c)*46.9/255; 1.5*(sample.g / c)*46.9/255; 1.5*(sample.b / c)*46.9/255; if a colour channel exceeds 1 it is clamped to 1, so the maximum is 1 and the minimum 0.
:Here is a "shaders.pak" file, http://www.megaupload.com/?d=2URCLOQY, which needs to be put (replacing the original) in the "C:\Program Files\Electronic Arts\Crytek\Crysis SP Demo\Game" directory, or in "\Crysis\Game" for the full version. Besides the main HDR code, the original Crysis code has many combinations of HDR code that add HDR effects to the main code, such as gamma, colour matrices and light shafts. I think bloom, glare, light shafts and the main HDR are the only necessary parts, plus perhaps the bright-pass filter found in the tutorial demo, which is similar to the glare or glow of bright objects. So for now this pak removes many original lines of non-essential HDR; the main HDR is changed to "vSample.xyz =3*(vSample.rgb-fAdaptedLum)+0.5;", and the "SkyHDR.cfx" file is corrected with "Color.xyz = pow(2, log(min(Color.xyz, (float3) 16384.0)));", where log means the natural logarithm (ln). This replaces the division by 2.5 and repairs very dark colours, although dark colours of the blue sky now look a little more grey; since this runs besides the [main] HDR in the "PostProcess.cfx" file, the grey appears only in dark places with a dark horizon (early morning, for example). If the code I describe for the sky HDR were also used with lights, it would make perfect HDR without white and black areas when a small range is selected from a big range. But this HDR (if applied only to added lights)
:Assume moonlight is RGB(5:5:8) (out of a 255 maximum), room light [at 2 metres from the lamp, on white paper] is RGB(55:50:40), and sunlight on white paper is RGB(230:225:210). After the algorithm, moonlight becomes RGB(<math>2^{\ln(5)} : 2^{\ln(5)}: 2^{\ln(8)}</math>)=RGB(3.05:3.05:4.23); multiplying by 5.47 gives RGB(17:17:23). (Moonlight [unless you play video games at night] is an exception, and night lights should be simulated by changing the ambient lighting; otherwise, if you change <math>\ln()</math> to <math>\log_{10}()</math>, you will get shadows from a flashlight that are too bright. Or you can pick moonlight stronger than it really is, like RGB(15:15:15), which becomes RGB(36:36:36), and I guarantee it will not affect flashlight shadows.) After the algorithm, the room lamp light on white paper at 2 metres distance becomes RGB(<math>2^{\ln(55)} : 2^{\ln(50)}: 2^{\ln(40)}</math>)=RGB(16.08:15.05:12.9); multiplying by 5.4759 gives RGB(88:82:71) (room light should perhaps be chosen a little stronger, like RGB(100:100:100), which the algorithm maps to RGB(133:133:133)). Sunlight on white paper without specularity becomes RGB(<math>2^{\ln(230)} : 2^{\ln(225)}: 2^{\ln(210)}</math>)=RGB(43.35:42.7:40.7), and multiplying by 5.475876 gives RGB(237:234:223). For a stronger HDR effect, instead of raising <math>2^{\ln()}</math> we can choose <math>1.5^{\ln()}</math> and decrease the ambient light (the light under shadow, i.e. how bright an object is in the shadow of sun light, flashlight or lamp light). So this algorithm makes a weak light strong on its own, while a weak light added to a strong one changes the overall lighting without noticeable difference. If you plan to use no lights except the sun in a video game, you do not need this algorithm.
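The <math>2^{\ln(x)}</math> compression described above can be sketched in Python (a minimal illustration of the formula as stated, not the original shader code; function and variable names are mine):

```python
import math

def compress(channel):
    """Map a 0-255 channel value through 2**ln(x), then rescale.

    5.4759 is approximately 255 / 2**ln(255), so 255 maps back to 255.
    """
    if channel <= 0:
        return 0.0
    return 5.4759 * 2.0 ** math.log(channel)

# Moonlight RGB(5:5:8) is lifted to roughly (17:17:23)
moon = [round(compress(c)) for c in (5, 5, 8)]
# Sunlight RGB(230:225:210) stays roughly (237:234:223)
sun = [round(compress(c)) for c in (230, 225, 210)]
```

This reproduces the worked examples in the paragraph: weak lights are boosted strongly while values near 255 are almost unchanged.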
Roughly speaking, in this algorithm the sum of all lights, passed through the algorithm, must be multiplied with the diffuse [lighting] term and with the texture colours; but the texture must first be multiplied with the ambient lighting, which should bring the texture's brightest colours to about 10-50, and the diffuse lighting from 0 to 1 is multiplied in after the texture's brightest values of 255 are mapped to 10-50. This means that at the end of the algorithm everything (the final colour result) must be divided by 10-50. But actually ambient lighting is just another light without intensity falloff, so it is better first to multiply each light with the diffuse term (N·L), which ranges from 0 to 1 depending on the angle between surface and light, and then add all lights. Ambient lighting usually does not need to be multiplied with the diffuse term, because the sky shines from all sides. Ambient lighting should be roughly 10 to 100, depending on how strong an HDR effect you want (<math>1.5^{\ln()}</math> or <math>2^{\ln()}</math>; ambient 10-20 if 1.5). Once all lights including ambient are added, we pass the sum through the algorithm: <math>2^{\ln(ambient+diffuse*light1+diffuse*light2+diffuse*light3)}</math>. The result is then multiplied with the texture colours, which range from 0 to 1. And if the texture with lighting needs to be clamped to 0-1 values, divide by 255.
:A semi-official and faster way to do a similar thing is <math>(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*2/(1+(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>, but all lights must be between 0 and 1, and preferably no single light should exceed 0.8 (especially not sunlight). For stronger HDR the formula becomes <math>texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*4/(1+3*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>, which increases a very weak light almost 4 times while leaving the intensity of strong lights almost unchanged. But the official formula multiplies the texture first, and I suggest not doing that, because dark and mid-dark colours then lose saturation and become more grey. So the texture should be multiplied after the algorithm, not inside the lights sum as in <math>(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*4/(1+3*texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>.
:So why, in general, is the formula <math>texture*(5.4759*2^{\ln(255*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))})/255</math> better than <math>texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*2/(1+(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>? The answer is that there is almost no difference. In the first formula a weak light loses some colour, e.g. from RGB(192:128:64) to RGB(209:158:98); in the second formula it also loses colour, slightly differently, from RGB(192:128:64) to RGB(219:170:102). For weak colours the difference is bigger: the first algorithm converts RGB(20:10:5) to RGB(43.7:27: <math>5.4759\cdot 2^{\ln(5)}</math>)=RGB(43.7:27: <math>5.4759\cdot 2^{1.6094}</math>) =RGB(43.7:27: <math>5.4759\cdot 3.05133</math>)=RGB(43.7:27:16.7)=RGB(44:27:17); the second algorithm converts RGB(20:10:5) to RGB(255*0.145:255*0.07547: <math>255\cdot 2\cdot (5/255)/(1+(5/255))</math>)=RGB(37:19.2: <math>255\cdot 2\cdot 0.0196/(1+0.0196)</math>)=RGB(37:19.2: <math>255\cdot 0.0392/1.0196</math>)=RGB(37:19:255*0.03846)=RGB(37:19:9.8)=RGB(37:19:10). <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 17:03, 27 October 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
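The two tone curves compared above can be checked side by side in Python (a sketch of the two formulas only, with my own function names; lights and texture are left out):

```python
import math

def log_curve(c):
    """First formula: 5.4759 * 2**ln(c), for a channel c in 0..255."""
    return 5.4759 * 2.0 ** math.log(c) if c > 0 else 0.0

def reinhard_curve(c):
    """Second formula: 255 * 2x/(1+x), with x = c/255."""
    x = c / 255.0
    return 255.0 * 2.0 * x / (1.0 + x)

weak = (20, 10, 5)
a = [round(log_curve(c)) for c in weak]       # first algorithm
b = [round(reinhard_curve(c)) for c in weak]  # second algorithm
```

Running this reproduces the comparison in the text: the first curve gives (44:27:17) and the second (37:19:10) for RGB(20:10:5).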
::According to my experiments, the adaptation time from lamp light [a lit room] to very, very weak light is 20-25 seconds, and the adaptation time between average and strong lights is about 0.4 seconds. So the adaptation time is long only for very, very weak light; it is really not 20 minutes, and not even a minute. Eye adaptation from very, very weak light to stronger, average, or even very strong lighting is also about 0.4 ''s''. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 02:35, 28 October 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot--> It appears that the 20-25 second adaptation to very weak light is caused by the lingering bloom-glow of the strong light; according to my experiments, if only part of the view contains a bright light, adaptation in the other part is instant. Thus I come to the only logical explanation: there is an adaptation similar to colour adaptation, based not on iris size but on some after-effect of the previous light. It is obvious that if one part of the field of view is adapted and another part needs adaptation time, and after turning your head or eyes you find you either see or you do not, then this really cannot be caused by the iris changing size when everything around is black. So the iris is really a rudiment and can play a role only as a pain-causing factor before adaptation to stronger light, for gauging the difference in scene luminance. In the best case, iris adaptation matters only for adapting to weakly lit objects, unless my experiments contain errors due to very strong radiosity (endless ray tracing), which removes the sense of transition from strong light to weak and vice versa, or due to a possibly wider human dynamic range or some colour-filtering mystery of the brain.
In any case, humans see as if they have a very wide dynamic range, and iris size plays hardly any role in human vision; there is only a small chance that the iris matters for adaptation to weak colours.
==Look how I kill HDR==
::final.rgb=(color.rgb/average)/1.3333; 0<color.rgb<1, 0.25<average<0.75, 0<final<1;
::changes everything. Using only division, you cannot change a natural colour into another. The disadvantage of this algorithm compared with mine (which uses subtraction of 0.3333) is that it does not adapt to bright light; but if the bright light is strong (the average is big), the image is unchanged, which can even be better. And if dark colours dominate, brighter colours turn to white, as in the previous algorithms. At the minimum average=0.25 all colours become 4/1.3333=3 times stronger; at average=0.5 all colours become 2/1.3333=1.5 times stronger; at average 0.75 and above the image is normal, as without the algorithm. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 11:20, 10 November 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
:There is an even better way than light compression: luminance compression, like this:
:final.rgb=(color.rgb/average)/1.3333; 0<color.rgb<1, 0.25<average<0.75, 0<final<1;
:and it would help very much here if the average were calculated by taking the largest of the 3 RGB channels of each pixel and summing these strongest channels over all pixels, without dividing by 3. That way there is no wrong adaptation to bright grass when only the green channel dominates (a colour like RGB(0:200:0) should not be treated as RGB(0:200/3:0)=RGB(0:67:0), which would increase the overall luminance dramatically and push green far beyond 255, to about 300-400 after adaptation). <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 08:17, 17 November 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
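The average-based adaptation discussed above, with the max-channel average, can be sketched in Python (an illustration under my own assumptions: the average is clamped to the [0.25, 0.75] range the text gives, and the output is clamped to 1):

```python
def adapt(pixels):
    """final = (color/average)/1.3333, where the average is taken over each
    pixel's strongest RGB channel, as suggested in the text.

    pixels: list of (r, g, b) tuples with channels in 0..1.
    """
    avg = sum(max(p) for p in pixels) / len(pixels)
    avg = min(max(avg, 0.25), 0.75)          # clamp, per 0.25 < average < 0.75
    return [[min(c / avg / 1.3333, 1.0) for c in p] for p in pixels]

# A dark scene is boosted about 3x: the average clamps to 0.25, so 0.1 -> ~0.3
dark = adapt([(0.1, 0.1, 0.1)] * 4)
# Bright green grass is left nearly unchanged, because the max-channel
# average is already 0.75 (no false adaptation to a green-only scene)
grass = adapt([(0.0, 0.75, 0.0)] * 4)
```

Using the per-pixel channel maximum for the average is exactly what prevents the over-brightening of the green-dominated scene described in the comment.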
===Reviving my real HDR algorithm===
:::And if we want to adapt not to the brightness of the single brightest pixel's strongest channel, we use the average over all pixels' channels, or over each pixel's channel maximum:
:::final.rgb=color.rgb/average-0.0196*2*average; 0.1<averageMSP<1;
:::but then some pixels are over-brightened; yet whether you subtract 5 from 255 at maximum average or 1 from 255 at minimum average (when all pixel luminances are 5 times bigger), it makes no difference. '''So if we want to try to simulate human eye adaptation, we must give much more weight to the bright pixels than to the weak-colour pixels. This can be done by computing the average using the square root of each pixel's luminance, with all values between 0 and 1 (dividing by the number of pixel channels only after the sum is calculated). And of course it would be much better to sum only the maximal channel (of RGB) of each pixel under the square root. This gives a bigger average: for example instead of (0.2+0.9)/2=0.55 we get''' <math>(0.2^{1/2}+0.9^{1/2})/2=(0.4472+0.94868)/2=0.6979</math>. '''It can be a root of any order, e.g. raising to 1/3 or 1/4, so as to adapt to very weak light only if the weak colours are ''really'' weak (most colour values 0.05-0.2), and not to adapt (or adapt just a little) if even 1/5 of the colours are strong (and 4/5 weak). Another way is to use values from 0 to 255 and calculate the average as the sum of all channels (or of each pixel's maximal channel) in logarithms, like this:''' <math>46.018*(\ln(255)+\ln(3))/2=46*(5.54+1.0986)/2=46*3.3199=153</math> (<math>255/\log_{2.7}(255)=255/\ln(255)=46.018</math>); compare (255+3)/2=129. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 08:38, 26 November 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
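The two brightness-weighted averages proposed above (square-root weighting on 0..1 values, logarithmic weighting on 0..255 values) can be checked with a short Python sketch of the arithmetic:

```python
import math

vals = [0.2, 0.9]  # two pixel luminances in 0..1
plain_avg = sum(vals) / len(vals)                       # 0.55
sqrt_avg = sum(math.sqrt(v) for v in vals) / len(vals)  # ~0.698, weighted toward the bright pixel

# The same idea on 0..255 values, using the natural logarithm:
k = 255.0 / math.log(255.0)                             # ~46.018, chosen so k*ln(255) == 255
log_avg = k * (math.log(255) + math.log(3)) / 2         # ~153, versus the plain (255+3)/2 = 129
```

Both weightings pull the average toward the bright pixels, which is the stated goal of simulating eye adaptation.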
::::''Update to the bold text''. The [[natural logarithm]] function is really expensive and not very practical, but it is roughly equivalent to a fifth root. For example <math>46.018*\ln(128) =223</math> and <math>255\cdot (128/255)^{1/5}=255\cdot 0.87123=222.</math> Another example: <math>46.018\cdot \ln(50) =180</math> and <math>255\cdot (50/255)^{1/5}=255\cdot 0.7219=184.</math> One more example: <math>46.018\cdot \ln(5) =74</math> and <math>255\cdot (5/255)^{1/5}=255\cdot 0.4555=116.</math> Here 74 is not equal to 116; to match exactly, the power must be chosen as approximately 0.31 instead of 1/5=0.2. Then we get <math>255\cdot (5/255)^{0.31}=255\cdot 0.295565=75.</math> Also <math>255\cdot (128/255)^{0.31}=255\cdot 0.807621=206.</math> So it does not replace the natural logarithm exactly, but gives a very similar result.
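The power-curve approximation of the logarithmic curve described above can be verified in Python (function names are mine; the exponent 0.31 is the value suggested in the comment):

```python
import math

K = 255.0 / math.log(255.0)   # ~46.018

def log_map(c):
    """Logarithmic curve 46.018 * ln(c), for c in 1..255."""
    return K * math.log(c)

def pow_map(c, p=0.31):
    """Power-curve approximation 255 * (c/255)**p."""
    return 255.0 * (c / 255.0) ** p
```

Evaluating both at 128 and 5 reproduces the figures in the text: 223 vs 206 at mid-range, and 74 vs 75 at the dark end, so the cheap power curve tracks the logarithm reasonably well, best for dark values with p=0.31.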
:::Why do I say "if the human eye is capable of adaptation"? Because the changing size of the iris may be a [[Rudiment (disambiguation)|rudiment]]: at strong light it is hard to tell the difference between 1 and 5 (out of 0-255, whether 5 appears at weak light or 1 at strong light). But more than this: strong light, especially sunlight, passing into the eye through the lens reflects from the iris and the white of the eye; then, by the laws of physics governing light passing from one medium to another (from the eye lens to air), this reflected light travels to where the lens and the air meet and reflects from that boundary back to the iris (you can check how a laser pointer reflects from the air boundary if you direct it at a window). This reflection from the air boundary inside the eye lens probably causes most, if not all, light blooms, glows and glares, so rather weak colours (say 0 to 20-50 out of 0-255) are washed out by this strong reflection inside the lens. Even on the iris itself, because its surface is not ideally flat, light from a strongly illuminated point spreads to nearby bumpy receptors, and very weak light near a strong light is mixed with the strong light's shining halo and glare. Also, the physical change in iris size does not necessarily give 5-7 times greater sensitivity at maximum iris size than at minimum; it may give only 2 or 1.5 or 1.3 times (which would mean that a monitor's maximum white is 1.3-2 times weaker than white paper illuminated by the sun, and that lamp light at 1-3 metres is not so weak compared with sunlight; but then two such lamps would have to illuminate more strongly than direct sunlight).
So if, say, a weak colour is 2 times stronger at maximum iris size than at minimum, then at maximum iris size a human sees 1-128 (out of 0-255, 0 being black) and at minimum iris size 2-255. But the eye probably does not select only these two ranges, 1-128 or 2-255, but also ranges in between, like 1.5-191, and then it is hard to see the difference, and hard to tell whether darker objects near a strong light (or strong luminance) are hidden because of iris adaptation or because of the blanking effect of the blooms and glows caused by light reflecting from the air boundary inside the lens. Comparing colours at all is a hard task: even on a monitor, separated by black space, with one patch RGB(255:0:0) and the other RGB(191:0:0), if they are not next to each other it is hard to tell which is which. Maybe iris size stops being a rudiment only in the range from average to big, with nothing at all changing from average to small, etc.
:::By the way, I ran every test I could to see whether red, green or blue turns to grey when that basic colour is very, very weak (you need a monitor with a high contrast ratio; some poor CRT monitors can even be better, with a contrast ratio so high that values below 50 are invisible, so you need to calibrate contrast and brightness in the display driver software if you still want to use one). When RGB colours are very, very weak, at first glance it is harder to tell the difference between blue and green, and much easier between red and any other; but no matter how weak they are, it is still possible to name the colour with a 90-99% correct answer, especially for red, and especially if all the weak red, green and blue colours are displayed together. Specular highlights of all 3 colours, and the threshold behaviour of a colour like RGB(1:0.4:0), make it look red rather than orange, so the number of distinguishable colours decreases in the dark; and if an object mixes two RGB channels, only the stronger channel is visible at very weak light while the weaker falls below the threshold of visibility. They are quite weak, so concentration is needed; maybe that is why colours are hard to recognise in the dark. So on a monitor, at night, you either see the very weak colour of a separate red, green or blue channel, or you see nothing at all. So do not claim some nonsense about grey colours at night, as if something in the eye renders everything monochrome. Dark colours just look dark, and that is how it is. If you want a night look in a game, specular highlights must dominate the material, but in most cases this comes naturally, especially on LCD monitors with a small contrast ratio of 300:1, where even 0 shines like 30-50 does on a high-contrast monitor of 1000:1 or more.
Such low-contrast monitors are better suited for daytime use, and of course the LCD backlight still nearly swamps values like 3, 5 or 10, so you either do not see these weak colours, or you see them not as pure red, green or blue but shifted toward their washed-out analogues: RGB(255:200:200) for red, RGB(200:255:200) for green, RGB(200:200:255) for blue. So there is no need to simulate grey for dark illumination in a game, because the LCD backlight and the room light already grey out weak colours quite well by themselves. But I have to admit that monitors with too high a contrast shift the whole colour spectrum slightly toward the 6 basic colours (like my unmodified algorithm): red, green, blue, cyan, yellow, pink, because 128 is no longer two times weaker than 255 but about 2.2 times, and 64 is not 2 times weaker than 128 but about 2.5 times. http://imageshack.us/g/827/rgbcolorsdark2.png/
::If you have a monitor (with a huge contrast ratio like 8^8=16777216:1) where 255 is 8 times stronger than 128, 128 is 8 times stronger than 64, 64 is 8 times stronger than 32, and so on, then by raising gamma to <math>k_g=3</math> you apply the algorithm "<math>final.rgb=(color.rgb)^{1/3}; </math> 0<color.rgb<1" and you get that 255 is 2 times stronger than 128, 128 is 2 times stronger than 64, and 64 is 2 times stronger than 32, because <math>0.8^{1/3}/0.1^{1/3}=0.9283/0.46416=2.</math>
::For monitors with contrast 2^8=256~300:1 there is no point in gamma correction, because 1 (and even 0) already shines quite strongly. So unless monitor manufacturers build their own calibration into the monitor (such that 0 is 1000 times weaker than 1, and 1 is about 300 times weaker than 255), gamma should ideally let you choose the desired contrast ratio (from, say, 50:1 to 100000:1) by changing the coefficient in <math>0.5<k_g<3.5.</math> The good thing about gamma is that it does not raise 0 at all. This is the main advantage of high-contrast monitors over low-contrast ones (which have a strong 0, and a contrast between 1 and 0 of only about 2:1, or at most 10:1): if 0 is very black, weak colours like 3, 5, 10 are better visible when gamma is above 1 (the default is gamma=1). But for some reason, at least on some old CRT monitors, the combined contrast-and-brightness correction "contrast=100-brightness/2.55" raises too-weak colours better and with correct contrast (you must judge whether the contrast between colours is correct by comparing 10 with 20, 255 with 128, or 10 with 5; if in every case the twice-smaller number looks twice as weak, the contrast is correct). When correcting with gamma, for some reason the difference between 255 and 128 disappears while the difference between 5 and 10 is very big and between 10 and 20 very small; but that may be because the CRT (cathode ray tube) screen becomes too negatively charged, which makes a big difference for weak colours and almost none for strong ones. Also, after about 20 minutes a CRT screen becomes charged and weak colours become weaker; on LCD monitors gamma should work correctly.
This combined contrast-and-brightness correction "contrast=100-brightness/2.55" makes the difference between weak colours almost invisible: if before the correction colour 10 was two times stronger than 5, after it colour 10 is only about 1.1-1.3 times stronger than 5; for strong colours almost nothing changes, e.g. if 128 was 2 times stronger than 64, after the correction 128 is 1.9 times stronger than 64. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 19:25, 12 December 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
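The effect of gamma on the ratio between two channel values, discussed above, can be shown with a couple of lines of Python (a sketch of the <math>c^{1/k_g}</math> transfer only; no display calibration is modelled):

```python
def gamma(c, kg):
    """Display transfer function c**(1/kg), for c in 0..1."""
    return c ** (1.0 / kg)

# An 8:1 ratio between two values collapses to 2:1 at kg = 3,
# because (0.8/0.1)**(1/3) == 8**(1/3) == 2:
r3 = gamma(0.8, 3) / gamma(0.1, 3)
# At kg = 2, a 2:1 ratio becomes sqrt(2):1 ~ 1.4142:1:
r2 = gamma(0.2, 2) / gamma(0.1, 2)
```

This is why a gamma of 3 turns the hypothetical 16777216:1 display into an effective 256:1 one: every per-step ratio is taken to the power 1/3.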
::If you have a monitor with contrast ratio 2^8=256:1, where 255 is 2 times stronger than 128, 128 is 2 times stronger than 64, and 64 is 2 times stronger than 32, then by changing gamma from 1 to 2 you get contrast <math>(\sqrt{2})^8=1.4142^8=16:1</math>. Then 255 will be 1.4142 times stronger than 128, 128 will be 1.4142 times stronger than 64, and so on, because <math>\sqrt{0.2}/\sqrt{0.1}=\sqrt{2}=1.4142.</math> So if for HDR <math>k_g</math> changes from 1 to 2, then the weakest colour 1/255=0.003921568 in a very dark scene becomes <math>0.003921568^{1/2}=0.062622429</math>, which is 0.0626*255=15.9687=16. Another example: if <math>k_g=1.5</math>, then <math>0.003921568^{1/1.5}=0.00392^{2/3}=0.0249</math>, which is 0.0249*255=6.3. To keep 0 at 0 we can subtract the mapped value of the weakest colour:
::<math>final.rgb=(color.rgb)^{1/k_g}-(1/255)^{1/k_g}=(color.rgb)^{1/(2-average)}-(1/255)^{1/(2-average)};</math> 0<color.rgb<1; 0<average<1.
::Also we may want that, during weak lighting in the scene, 1/255=0.0039 looks like 16/255=0.0627; then we do not subtract anything:
::<math>final.rgb=(color.rgb)^{1/k_g}=(color.rgb)^{1/(2-average)};</math> 0<color.rgb<1; 0<average<1.
::But if we do not subtract, the contrast ratio 1:16 becomes 16:64=1:4, i.e. very small. And if we subtract, the contrast ratio increases 3 times: what was 1:16 before the algorithm becomes (17-16):(64-16)=1:48 after it. Unfortunately subtraction distorts the normal colour balance, so it is better to use the normal algorithm "<math>final.rgb=color.rgb/average</math>", or a correction that does not distort the natural colour balance:
::<math>final.rgb=((color.rgb)^{1/(2-average)})/average;</math> 0<color.rgb<1; 0<average<1.
::This way the weakest colour is raised 16 times or more while keeping the natural colour balance. For example, if average=16/255=0.062745, then color=1/255=0.00392 is raised to:
::1) <math>final.rgb=(1/255)^{1/(2-16/255)}/(16/255)=0.00392^{1/1.937254902}/0.062745=0.057247641/0.062745=0.912384286;</math> or 232.66=233; so the average needs limits such as 0.5<average<1;
::2) <math>final.rgb=(1/255)/(16/255)=0.00392/0.062745=0.0625;</math> or 15.9375=16.
::Another example, average=128/255=0.5, color=16/255=0.062745 and for first case 0.5<average<1, then:
::1) <math>final.rgb=(16/255)^{1/(2-0.5)}/0.5=0.062745^{1/1.5}/0.5=0.1579/0.5=0.315803203;</math> or 80.5298=81;
::2) <math>final.rgb=(16/255)/0.5=0.062745/0.5=0.12549;</math> or 32.
::2.1) <math>final.rgb=(16/255)/0.2=0.062745/0.2=0.31372549;</math> or 80.
::And if average=128/255=0.5, color=100/255=0.392156862 and for first case 0.5<average<1, then:
::1) <math>final.rgb=(100/255)^{1/(2-0.5)}/0.5=0.392^{1/1.5}/0.5=0.53576/0.5=1.071527222;</math> or 273.239=>255;
::2) <math>final.rgb=(100/255)/0.5=0.392/0.5=0.7843;</math> or 200.
::2.1) <math>final.rgb=(100/255)/0.2=0.392/0.2=1.960784314;</math> or 500=>255.
::And if average=128/255=0.5, color=1/255=0.00392 and for first case 0.5<average<1, then:
::1) <math>final.rgb=(1/255)^{1/(2-0.5)}/0.5=0.00392^{1/1.5}/0.5=0.02487/0.5=0.049735887;</math> or 12.68=13;
::2) <math>final.rgb=(1/255)/0.5=0.00392/0.5=0.007843137;</math> or 2.
::2.1) <math>final.rgb=(1/255)/0.2=0.00392/0.2=0.019607843;</math> or 5. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 21:57, 12 December 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
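The gamma-plus-average correction worked through in the examples above can be sketched in Python (an illustration of formula 1) only, under the assumption stated in the text that the average is limited to the 0.5-1 range; the function name is mine):

```python
def hdr(color, average):
    """final = color**(1/(2-average)) / average, with color and average in 0..1.

    The average is clamped to [0.5, 1], as the examples above suggest.
    """
    average = min(max(average, 0.5), 1.0)
    return color ** (1.0 / (2.0 - average)) / average

# With average = 0.5: colour 16 is lifted to ~81 and colour 1 to ~13,
# matching the worked examples marked 1) above.
a = round(hdr(16 / 255, 0.5) * 255)
b = round(hdr(1 / 255, 0.5) * 255)
```

Note that, as in example 1) with color=100, the result can exceed 1 and must still be clipped to the displayable range by the caller.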
:::Note that the algorithm using gamma, whether or not it is combined with the "final.rgb=color.rgb/average" algorithm, still changes the contrast between, say, 128 and 64 from the normal 2:1 to <math>0.2^{1/1.5}/0.1^{1/1.5}=1.5874:1</math>, or more depending on "average", e.g. <math>0.2^{1/1.25}/0.1^{1/1.25}=1.7411:1</math> instead of the normal 2:1. So this algorithm greys the image whether or not it is combined with "final.rgb=color.rgb/average"; but it greys all colours equally, strong or weak, and the contrast between all colours depends only on "average".
:::The compressed-luminance algorithm "final.rgb=(2*color.rgb)/(1+color.rgb)" greys the image the same way whether it is used before or after the "final.rgb=color.rgb/average" algorithm or alone. But it greys stronger colours more than weaker ones, and after this algorithm the contrast between, say, 128 and 64 is smaller than between 20 and 10. For example [2*0.2/(1+0.2)]/[2*0.1/(1+0.1)]=[0.4/1.2]/[0.2/1.1]=[0.3333]/[0.1818]=1.8333, so the contrast becomes 1.8333:1 compared with the normal 2:1 before the algorithm (here the colours were 0.1*255=25.5=26 and 0.2*255=51). And if the colours are 128/255=0.5 and 64/255=0.25, then [2*0.5/(1+0.5)]/[2*0.25/(1+0.25)]=[1/1.5]/[0.5/1.25]=[0.6667]/[0.4]=1.6667, so the contrast between 128 and 64 becomes 1.6667:1 instead of the normal 2:1. You can imagine how small the contrast becomes between 255 and 128: 1.5:1 after the algorithm, because [2*1/(1+1)]/[2*0.5/(1+0.5)]=[1]/[1/1.5]=[1]/[0.6667]=1.5.
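The shrinking contrast ratios worked out above can be confirmed with a short Python sketch of the compression curve:

```python
def compress(c):
    """Compressed luminance: final = 2c/(1+c), for c in 0..1."""
    return 2.0 * c / (1.0 + c)

# The ratio between each 2:1 pair of inputs shrinks more the brighter they are:
low = compress(0.2) / compress(0.1)    # ~1.833  (colours 51 and 26)
mid = compress(0.5) / compress(0.25)   # ~1.667  (colours 128 and 64)
top = compress(1.0) / compress(0.5)    # 1.5     (colours 255 and 128)
```

This makes the comment's point concrete: the curve compresses highlights harder than shadows, so greying is strongest near the top of the range.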
::But I will tell you a secret: the average is calculated using only 16 texture centres (pixels), or, less likely, every sixteenth pixel on screen (width*height/16). So it is not a very true average, and the slower the adaptation, the better. So it is best of all to use the maximum of all 16 pixels, and the maximum of each pixel's RGB channels, instead of the average; then it works perfectly in all the algorithms. If color=230/255=0.9 and colormax=230/255=0.9, then:
::1) <math>final.rgb=0.9^{1/(2-0.9)}/0.9=0.9^{1/1.1}/0.9=0.90866/0.9=1.009624247;</math> or 257.454=>255; 0.5<colormax<1;
::2) <math>final.rgb=0.9/0.9=1;</math> or 255. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 22:59, 12 December 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->