{{Talk header}}
{{WikiProject banner shell|class=C|collapsed=y|
{{WikiProject Computing|importance=Low}}
{{WikiProject Film |Filmmaking=yes}}
{{WikiProject Television |importance=Low}}
{{WikiProject Video games |class=C |importance=Low}}
{{WikiProject Technology }}
{{WikiProject Computer graphics|importance=Mid}}
}}
{{User:MiszaBot/config
|archiveheader = {{Talk archive}}
|algo = old(365d)
|maxarchivesize = 125K
|minthreadsleft = 5
|minthreadstoarchive = 1
|counter = 1
|archive = Talk:High-dynamic-range rendering/Archive %(counter)d
}}
==Eye iris adaptation (size changing) is a rudiment==
::A weakness of this algorithm is that, for example, it turns RGB(255:255:255) into RGB(121:121:121) while leaving RGB(255:0:0) as RGB(255:0:0). Other examples: RGB(128:0:0) becomes (205:0:0), RGB(128:128:128) becomes RGB(97:97:97), RGB(128:128:0) becomes RGB(129:129:0), RGB(255:255:0) becomes RGB(159:159:0), RGB(64:64:64) becomes RGB(78:78:78), RGB(64:0:0) becomes RGB(168:0:0), and RGB(64:64:0) becomes RGB(104:104:0). The good news is that we can multiply by about 1.5, so a single channel stays the same, and for two channels the effect is very positive: 159*1.5=238.5. So add another step:
:7) 1.5*(sample.r / c)*46.9/255; 1.5*(sample.g / c)*46.9/255; 1.5*(sample.b / c)*46.9/255; if a color channel is >1, it must be clamped to 1, so the maximum is 1 and the minimum 0.
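Step 7 above can be sketched in Python as follows. This is only an illustration: `c` is the normalisation value computed in the earlier steps of this thread, which are not quoted here, so it is left as a parameter. The helper `channel_curve` is the per-channel curve 5.4759*2^ln(v) used later in this discussion; note that 5.4759*2^ln(255) ≈ 255, which is where the constant 46.9 ≈ 2^ln(255) ≈ 46.6 comes from.

```python
import math

def channel_curve(v):
    # Per-channel curve from this discussion: 5.4759 * 2**ln(v).
    # Since 2**ln(v) == v**ln(2), this is a gamma-like curve with
    # exponent ln(2) ~ 0.693; it maps 255 to ~255 and 5 to ~16.7.
    return 5.4759 * 2 ** math.log(v)

def step7(r, g, b, c):
    # Step 7 as written: divide each channel by c, scale by
    # 1.5 * 46.9 / 255, then clamp to the [0, 1] range.
    return tuple(min(max(1.5 * (s / c) * 46.9 / 255, 0.0), 1.0)
                 for s in (r, g, b))
```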
:Here is a "shaders.pak" file, http://www.megaupload.com/?d=2URCLOQY , which needs to be put (replacing the original) in the "C:\Program Files\Electronic Arts\Crytek\Crysis SP Demo\Game" directory, or in "\Crysis\Game" for the full version. Besides the main HDR code, the original Crysis code contains many combinations of HDR code that add HDR effects to the main code, such as gamma, color matrices and light shafts. So I think bloom, glare, light shafts and the main HDR are the only necessary ones, plus maybe the bright-pass filter, which appears in the tutorial demo and is similar to the glare or glow of bright objects. For now this pak has many of the original non-essential HDR lines removed, the main HDR changed to "vSample.xyz =3*(vSample.rgb-fAdaptedLum)+0.5;", and the "SkyHDR.cfx" file corrected with "Color.xyz = pow(2, log(min(Color.xyz, (float3) 16384.0)));", where log means the natural logarithm (ln). This change replaces a division by 2.5 and repairs very dark colors, though dark colors of the blue sky now look a little more gray; but since it is applied over the [main] HDR in the "PostProcess.cfx" file, the gray appears only in dark places and with a dark horizon (early in the morning, for example). If the code I describe for the sky HDR were used with lights, it would make perfect HDR without white and black areas when a small range is selected from a big range. But this HDR (if applied only to added lights)
:A semi-official, or at least faster, way to do a similar thing is <math>(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*2/(1+(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>, but all lights must be between 0 and 1, and preferably each light should not exceed 0.8 (especially not the sun light). For stronger HDR the formula becomes <math>texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*4/(1+3*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>, which amplifies very weak light almost 4 times while leaving strong light intensity almost unchanged. But the official formula multiplies by the texture first, and I suggest not doing that, because dark and moderately dark colors then lose saturation and become more gray. So the texture should be multiplied in after the algorithm, not together with the lights sum, like this: <math>(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*4/(1+3*texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>.
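The two compression curves above can be sketched as plain functions of the summed light (a minimal sketch; the texture multiplication is left out, as the comment suggests applying it afterwards):

```python
def compress(light_sum):
    # Basic curve above: x * 2 / (1 + x).  Very weak light is almost
    # doubled; a light sum of exactly 1.0 stays at 1.0.
    return light_sum * 2 / (1 + light_sum)

def compress_strong(light_sum):
    # Stronger curve: x * 4 / (1 + 3x).  Very weak light is amplified
    # almost 4x; a light sum of 1.0 again maps exactly to 1.0.
    return light_sum * 4 / (1 + 3 * light_sum)
```

Per the comment above, the texture color would then be multiplied in after applying one of these curves, so dark colors keep their saturation.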
:So why, in general, is the <math>texture*(5.4759*2^{\ln(255*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))})/255</math> formula better than <math>texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*2/(1+(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>? The answer is that there is almost no difference. In the first formula weak light loses color, for example from RGB(192:128:64) to RGB(209:158:98); in the second formula it also loses color, but slightly differently, from RGB(192:128:64) to RGB(219:170:102). For weak colors the difference is bigger: the first algorithm converts RGB(20:10:5) to RGB(43.7:27: <math>5.4759\cdot 2^{\ln(5)}</math>)=RGB(43.7:27: <math>5.4759\cdot 2^{1.6094}</math>) =RGB(43.7:27: <math>5.4759\cdot 3.05133</math>)=RGB(43.7:27:16.7)=RGB(44:27:17); the second algorithm converts RGB(20:10:5) to RGB(255*0.145:255*0.07547: <math>255\cdot 2\cdot (5/255)/(1+(5/255))</math>)=RGB(37:19.2: <math>255\cdot 2\cdot 0.0196/(1+0.0196)</math>)=RGB(37:19.2: <math>255\cdot 0.0392/1.0196</math>)=RGB(37:19:255*0.03846)=RGB(37:19:9.8)=RGB(37:19:10). <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 17:03, 27 October 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
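The worked numbers in the comparison above can be checked with a short script (texture factor omitted; per-channel values on a 0..255 scale):

```python
import math

def algo1(v):
    # Brightness part of the first formula: 5.4759 * 2**ln(v).
    return 5.4759 * 2 ** math.log(v)

def algo2(v):
    # Second formula per channel: 255 * 2*(v/255) / (1 + v/255).
    x = v / 255
    return 255 * 2 * x / (1 + x)

rgb = (20, 10, 5)
print([round(algo1(v)) for v in rgb])  # [44, 27, 17]
print([round(algo2(v)) for v in rgb])  # [37, 19, 10]
```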
::According to my experiments, adaptation from lamp light [a lit room] to very, very weak light takes 20-25 seconds, and adaptation between average and strong lights takes about 0.4 second. So adaptation time is long only for very, very weak light; it is really not 20 minutes, and not even 1 minute. Eye adaptation from very weak light to stronger, average or even very strong lighting is also about 0.4 ''s''. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 02:35, 28 October 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot--> It appears that the 20-25 second adaptation to very weak light is caused by the lingering bloom-glow from the strong light; according to my experiments, if only part of the field of view contains a bright light, adaptation of the other part is instant. Thus I come to only one logical explanation: there is an adaptation similar to color adaptation, based not on iris size but on some induction of the previous light. It is obvious that if one part is adapted while another part of the field of view still needs adaptation time, and after turning your head or eyes you can immediately see either way, then this cannot be caused by the iris changing size, if everything around is black. So the eye iris is really a rudiment and can play a role only as a pain-causing factor before adaptation to strong(er) light, for measuring differences in scene luminance. In the best case, iris adaptation matters only for weakly lit objects, if there are errors in my experiments due to very strong radiosity (endless ray tracing), which eliminates the sense of transition from strong light to weak and vice versa, or due to a perhaps wider human dynamic range or some brain color-filtering mystery.
But humans see as if they had a very wide dynamic range, and iris size plays almost no role in human vision; there is only a small chance that the iris plays a role in adaptation to weak colors.
::final.rgb=(color.rgb/average)/1.3333; 0<color.rgb<1, 0.25<average<0.75, 0<final<1;
::changing everything. Using only division, you cannot change a natural color into another. The disadvantage of this algorithm compared with mine (which uses a subtraction of 0.3333) is that it does not adapt to bright light; but if the bright light is strong (the average is big), the image is unchanged, which can even be better. If dark colors dominate, brighter colors turn to white, as in the previous algorithms. At the minimum average=0.25 all colors become 4/1.3333=3 times stronger; at average=0.5 all colors become 2/1.3333=1.5 times stronger; at average 0.75 and above we have a normal image, as without the algorithm. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 11:20, 10 November 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
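The average-based adaptation described above can be sketched as follows (a minimal sketch; the clamping ranges are taken from the constraints stated with the formula):

```python
def adapt(color, average):
    # final = (color / average) / 1.3333, with the scene average clamped
    # to [0.25, 0.75] and each output channel clamped to [0, 1].
    average = min(max(average, 0.25), 0.75)
    return tuple(min(c / average / 1.3333, 1.0) for c in color)
```

At average=0.25 the gain is 4/1.3333 ≈ 3, at average=0.5 it is 1.5, and at average 0.75 and above the image passes through essentially unchanged.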
:There is an even better way than lights compression: luminance compression, like this:
:final.rgb=(color.rgb/average)/1.3333; 0<color.rgb<1, 0.25<average<0.75, 0<final<1;
:and even here it would be very beneficial if the average were calculated by taking the biggest of the 3 RGB channels of each pixel and summing all pixels' strongest channels, without dividing by 3. That way there will be no wrong adaptation to bright grass when only the green channel dominates (a color like RGB(0:200:0) should not be treated as RGB(0:200/3:0)=RGB(0:67:0), which would increase the overall luminance dramatically and make green far stronger than 255 (about 300-400 after adaptation)). <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 08:17, 17 November 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
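The max-channel averaging suggested above can be sketched as follows; the grass example RGB(0:200:0) shows the difference between the two ways of computing the scene average:

```python
def scene_average_max(pixels):
    # Average of each pixel's strongest channel, as suggested above.
    return sum(max(p) for p in pixels) / len(pixels)

def scene_average_mean(pixels):
    # Conventional average over all three channels (divide by 3).
    return sum(sum(p) / 3 for p in pixels) / len(pixels)

grass = [(0, 200, 0)] * 100          # pure green grass on a 0..255 scale
print(scene_average_max(grass))      # 200.0 -> no over-amplification
print(scene_average_mean(grass))     # ~66.7 -> would over-brighten green
```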
===Reviving my real HDR algorithm===
::1) <math>final.rgb=0.9^{1/(2-0.9)}/0.9=0.9^{1/1.1}/0.9=0.90866/0.9=1.009624247;</math> or 257.454=>255; 0.5<colormax<1;
::2) <math>final.rgb=0.9/0.9=1;</math> or 255. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 22:59, 12 December 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
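Case 1 above can be checked numerically (valid for 0.5 < colormax < 1, per the stated range):

```python
def gain(colormax):
    # Case 1 above: colormax**(1 / (2 - colormax)) / colormax.
    # For colormax = 0.9 this gives ~1.009624, i.e. 255 * 1.009624
    # ~ 257.45, which would then be clamped back to 255.
    return colormax ** (1 / (2 - colormax)) / colormax

print(round(gain(0.9), 6))        # ~1.009624
print(round(255 * gain(0.9), 3))  # ~257.454
```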