{{Talk header}}
{{WikiProject banner shell|class=C|collapsed=y|
{{WikiProject Computing|importance=Low}}
{{WikiProject Film |Filmmaking=yes}}
{{WikiProject Television |importance=Low}}
{{WikiProject Video games |class=C |importance=Low}}
{{WikiProject Technology }}
{{WikiProject Computer graphics|importance=Mid}}
}}
{{User:MiszaBot/config
|archiveheader = {{Talk archive}}
|algo = old(365d)
|maxarchivesize = 125K
|minthreadsleft = 5
|minthreadstoarchive = 1
|counter = 1
|archive = Talk:High-dynamic-range rendering/Archive %(counter)d
}}
==eye iris adaptation (size changing) is rudiment==
::A weakness of this algorithm is that, for example, it turns RGB(255:255:255) into RGB(121:121:121) while leaving RGB(255:0:0) as RGB(255:0:0). Likewise, RGB(128:0:0) becomes RGB(205:0:0) while RGB(128:128:128) becomes RGB(97:97:97); RGB(128:128:0) becomes RGB(129:129:0); RGB(255:255:0) becomes RGB(159:159:0); RGB(64:64:64) becomes RGB(78:78:78); RGB(64:0:0) becomes RGB(168:0:0); and RGB(64:64:0) becomes RGB(104:104:0). The good news is that we can multiply by about 1.5, so a single saturated channel stays the same and the two-channel case improves considerably: 159*1.5=238.5. So another step:
:7) 1.5*(sample.r / c)*46.9/255; 1.5*(sample.g / c)*46.9/255; 1.5*(sample.b / c)*46.9/255; if a colour channel exceeds 1, clamp it to 1; the maximum is 1 and the minimum 0.
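Step 7 above can be sketched in Python. This is an illustration, not the thread's actual shader code; `c` stands for the per-pixel normalizer computed in the earlier steps of the thread, which are not shown here, so it is taken as a parameter:

```python
def step7(channel, c):
    """Step 7 of the thread's tone map: scale by 46.9/255, boost by 1.5,
    clamp to [0, 1].

    channel: raw colour channel, 0..255.
    c: the per-pixel normalizer from the thread's earlier steps
       (a hypothetical parameter here).
    """
    v = 1.5 * (channel / c) * 46.9 / 255.0
    return min(max(v, 0.0), 1.0)
```

For example, with `c = 46.9` a full channel of 255 would map to 1.5 before clamping, so the clamp brings it back to 1.0.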
:Here is the "shaders.pak" file, http://www.megaupload.com/?d=2URCLOQY , which needs to be put (replacing the original) in the "C:\Program Files\Electronic Arts\Crytek\Crysis SP Demo\Game" directory, or "\Crysis\Game" for the full version. Besides the main HDR code, the original Crysis code has many HDR-related additions layered on the main code, such as gamma and colour matrices and light shafts. I think bloom, glare, light shafts and the main HDR are the only necessary ones; perhaps also the bright-pass filter, which appears in the tutorial demo and is similar to the glare or glow of bright objects. So for now this pak removes many of the original non-main-HDR lines, the main HDR is changed to "vSample.xyz =3*(vSample.rgb-fAdaptedLum)+0.5;", and the "SkyHDR.cfx" file is corrected with "Color.xyz = pow(2, log(min(Color.xyz, (float3) 16384.0)));", where log means the natural logarithm (ln). This change replaces the division by 2.5 and repairs very dark colours, although dark blue-sky colours now look slightly more gray; since this runs on top of the [main] HDR in the "PostProcess.cfx" file, the gray shows only in dark places and with a dark horizon (early morning, for example). If the code I describe for the sky HDR were used with lights, it would make perfect HDR without white and black areas when a small range is selected from a big range. But this HDR (if applied only to added lights)
:Assume moonlight is RGB(5:5:8) (out of 255), room light [on white paper, 2 metres from the lamp] is RGB(55:50:40), and sunlight on white paper is RGB(230:225:210). After the algorithm, moonlight becomes RGB(<math>2^{\ln(5)} : 2^{\ln(5)}: 2^{\ln(8)}</math>)=RGB(3.05:3.05:4.23), which multiplied by 5.47 gives RGB(3.05:3.05:4.23)*5.47=(17:17:23). (Moonlight [unless you play videogames at night] is an exception: night lighting should be simulated by changing the ambient lighting, because otherwise, if you change <math>\ln()</math> to <math>\log_{10}()</math>, shadows from a flashlight become too bright; or you can pick moonlight stronger than it really is, say RGB(15:15:15), which gives RGB(36:36:36), and I guarantee it will not affect flashlight shadows.) After the algorithm, the room lamp's light on white paper at 2 metres becomes RGB(<math>2^{\ln(55)} : 2^{\ln(50)}: 2^{\ln(40)}</math>)=RGB(16.08:15.05:12.9), which multiplied by 5.4759 gives RGB(16.08:15.05:12.9)*5.4759=(88:82:71) (room light should perhaps be chosen a bit stronger, say RGB(100:100:100), which the algorithm turns into RGB(133:133:133)). Sunlight on white paper without specularity becomes RGB(<math>2^{\ln(230)} : 2^{\ln(225)}: 2^{\ln(210)}</math>)=RGB(43.35:42.7:40.7), and multiplying by 5.475876 gives RGB(237:234:223). For stronger HDR, instead of raising <math>2^{\ln()}</math> we can choose <math>1.5^{\ln()}</math> and decrease the ambient light (the light inside shadows, i.e. how bright an object is in the shadow of sunlight, a flashlight, or a lamp). So this algorithm makes a weak light strong on its own, while a weak light added to a strong one leaves the overall lighting without a noticeable difference. If you plan to use no lights in the videogame other than sunlight, you do not need this algorithm.
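The worked example above can be checked numerically. A small Python sketch of the 5.4759*2^ln(v) per-channel curve (the function name is mine):

```python
import math

def tonemap(v):
    """The thread's per-channel curve: 5.4759 * 2^ln(v), v in 1..255.

    Weak channels are lifted strongly (5 -> ~17), while bright channels
    are nearly unchanged (255 -> ~255), matching the post's examples.
    """
    return 5.4759 * 2.0 ** math.log(v)
```

Running it on the post's moonlight, room-light, and sunlight channels reproduces the quoted values (5 → 17, 55 → 88, 230 → 237).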
Roughly speaking, in this algorithm the sum of all lights, passed through the algorithm, must be multiplied by the diffuse [lighting] term and by the texture colours; but the texture must first be multiplied by the ambient lighting, scaled so that the texture's brightest colours are about 10-50, with the diffuse lighting (0 to 1) applied after that rescaling from 255 down to 10-50; this means that at the end of the algorithm the final colour result must be divided by 10-50. Actually, ambient lighting is just another light without intensity falloff, so it is better to first multiply each light by its diffuse term (N*L), which ranges from 0 to 1 depending on the angle between the surface and the light, and then add all the lights. Ambient lighting usually need not be multiplied by the diffuse term, because the sky shines from all sides. Ambient lighting should be roughly 10 to 100, depending on how strong an HDR effect you want (<math>1.5^{\ln()}</math> or <math>2^{\ln()}</math>; ambient 10-20 if 1.5). Once all lights including ambient are added, we pass the sum through the algorithm: <math>2^{\ln(ambient+diffuse*light1+diffuse*light2+diffuse*light3)}</math>; the result is then multiplied by the texture colours, which range from 0 to 1. And if the texture with lighting needs to be clamped to 0-1 values, divide by 255.
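The ordering described above (sum the diffuse-weighted lights and ambient first, tone-map the sum, multiply by the texture last) can be sketched as follows; the function name, the light representation, and the 0-1 scaling are illustrative assumptions, not the post's actual code:

```python
import math

def shade(texture_rgb, ambient, lit):
    """Combine lights first, tone-map the sum, multiply by the texture last.

    texture_rgb: texture channels in 0..1.
    ambient:     ambient term in 0..1 (the post's 10-100 divided by 255).
    lit:         list of (diffuse, light) pairs, each factor in 0..1.
    """
    s = ambient + sum(d * l for d, l in lit)
    if s <= 0.0:
        return [0.0, 0.0, 0.0]
    # The post's curve on a 0..255 scale, brought back to 0..1.
    mapped = 5.4759 * (2.0 ** math.log(255.0 * s)) / 255.0
    return [min(t * mapped, 1.0) for t in texture_rgb]
```

With full white texture and a light sum of 1, the output is (near) full white; with a light sum of 55/255 the output channel matches the post's 55 → 88 example (88/255 ≈ 0.345).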
:The semi-official, or faster, way to do a similar thing is <math>(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*2/(1+(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>, but all lights must be from 0 to 1, and preferably no light should exceed 0.8 (especially not sunlight). For stronger HDR the formula becomes <math>texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*4/(1+3*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>, which boosts very weak light almost 4 times while leaving strong light intensities almost unchanged. But the official formula multiplies by the texture first, and I suggest not doing that, because dark and mid-dark colours then lose saturation and become more gray. So the texture should be multiplied after the algorithm, not inside the sum of all lights like this: <math>(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*4/(1+3*texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>.
:So why, in general, is the formula <math>texture*(5.4759*2^{\ln(255*(ambient+diffuse*light1+diffuse*light2+diffuse*light3))})/255</math> better than <math>texture*(ambient+diffuse*light1+diffuse*light2+diffuse*light3)*2/(1+(ambient+diffuse*light1+diffuse*light2+diffuse*light3))</math>? The answer is that there is almost no difference. With the first formula a weak light loses colour, e.g. from RGB(192:128:64) to RGB(209:158:98); with the second formula it also loses colour, just slightly differently, e.g. from RGB(192:128:64) to RGB(219:170:102). For weak colours the difference is bigger: the first algorithm converts RGB(20:10:5) to RGB(43.7:27: <math>5.4759\cdot 2^{\ln(5)}</math>)=RGB(43.7:27: <math>5.4759\cdot 2^{1.6094}</math>)=RGB(43.7:27: <math>5.4759\cdot 3.05133</math>)=RGB(43.7:27:16.7)=RGB(44:27:17); the second algorithm converts RGB(20:10:5) to RGB(255*0.145:255*0.07547: <math>255\cdot 2\cdot (5/255)/(1+(5/255))</math>)=RGB(37:19.2: <math>255\cdot 2\cdot 0.0196/(1+0.0196)</math>)=RGB(37:19.2: <math>255\cdot 0.0392/1.0196</math>)=RGB(37:19:255*0.03846)=RGB(37:19:9.8)=RGB(37:19:10). <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 17:03, 27 October 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
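The RGB(20:10:5) comparison above can be reproduced with a short script; both per-channel curves are exactly as defined in the post:

```python
import math

def curve_exp(v):
    """First operator in the post: 5.4759 * 2^ln(v), per channel, v in 1..255."""
    return 5.4759 * 2.0 ** math.log(v)

def curve_rational(v):
    """Second operator in the post: the 2x/(1+x)-style curve,
    per channel, with v scaled from 0..255 into 0..1 and back."""
    x = v / 255.0
    return 255.0 * 2.0 * x / (1.0 + x)
```

On RGB(20:10:5) the first curve gives roughly (44, 27, 17) and the second roughly (37, 19, 10), matching the post's arithmetic.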
::According to my experiments, adaptation from lamp light [a lit room] to very, very weak light takes 20-25 seconds, while adaptation between average and strong lights takes about 0.4 second. So the adaptation time is long only for very, very weak light; it is really not 20 minutes, and not even 1 minute. Eye adaptation from very, very weak light to stronger, average, or even very strong lighting is also 0.4 ''s''. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 02:35, 28 October 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot--> It appears that the 20-25-second adaptation to very weak light is caused by the blinking bloom-glow from the strong light; according to my experiments, if only part of the view has bright light in the eye, adaptation in the other part is instant. So I come to the only logical explanation: there is an adaptation similar to colour adaptation, based not on iris size but on some induction from the previous light. It is obvious that if one part is adapted while another part of the field of view needs adaptation time, and after turning your head or eyes you can tell that you either see or you don't, this really cannot be caused by the iris changing size when everything around is black. So the iris really is a rudiment and can only act as a pain-causing factor before adaptation to stronger light, for gauging differences in scene luminance. In the best case, iris adaptation matters only for adapting to weakly lit objects, if there are errors in my experiments due to very strong radiosity (endless raytracing), which eliminates the sense of transition from strong light to weak and vice versa, or due to a perhaps wider dynamic range of human vision or some mystery of the brain's colour filtering.
In short, humans see as if they have a very wide dynamic range, and iris size plays hardly any role in human vision; there is only a small chance that the iris matters for adaptation to weak colours.
==Look how I kill HDR==
::final.rgb=(color.rgb/average)/1.3333; 0<color.rgb<1, 0.25<average<0.75, 0<final<1;
::changes everything. Using only division, you cannot change a natural colour into another. The disadvantage of this algorithm compared with mine (the one that subtracts 0.3333) is that it does not adapt to bright light; but if the bright light is strong (the average is big), the image is unchanged, which can even be better. And if dark colours dominate, brighter colours turn to white as in the previous algorithms. At the minimum average=0.25, all colours become 4/1.3333=3 times stronger. At average=0.5, all colours become 2/1.3333=1.5 times stronger. At average 0.75 and above, we have a normal image, as if the algorithm were not used. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 11:20, 10 November 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
:There is an even better way than lights compression: luminance compression, like this:
:final.rgb=(color.rgb/average)/1.3333; 0<color.rgb<1, 0.25<average<0.75, 0<final<1;
:and it would be even more beneficial here if the average were calculated by choosing the biggest of the 3 RGB channels of each pixel, summing all pixels' strongest channels without dividing by 3. That way there is no wrong adaptation to bright grass when only green dominates (for a colour like RGB(0:200:0) there is no need to treat it as RGB(0:200/3:0)=RGB(0:67:0) and boost all luminance dramatically, so that green becomes far stronger than 255 (about 300-400 after adaptation)). <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 08:17, 17 November 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
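The division-by-average exposure discussed in this thread can be sketched as follows; the clamping of the average to [0.25, 0.75] follows the constraints stated with the formula, and the function name is mine:

```python
def expose(channel, average):
    """final = (color/average)/1.3333 with average clamped to [0.25, 0.75].

    Gain is 3x at average=0.25, 1.5x at average=0.5, and ~1x at
    average=0.75, as the post states; output is clamped to [0, 1].
    """
    a = min(max(average, 0.25), 0.75)
    return min(channel / a / 1.3333, 1.0)
```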
===Reviving my real HDR algorithm===
:::final.rgb=color.rgb/average-0.0196*2*average; 0.1<averageMSP<1;
:::but then we get some pixels overbrightened; yet whether you subtract 5 from 255 at maximum average, or 1 from 255 at minimum average (when all pixel luminance is 5 times bigger), makes no difference. '''So if we want to try to simulate human eye adaptation, we must pay much more attention to bright pixels than to weak-coloured ones. This can be done by computing the average using the square root of each pixel's luminance, with all numbers from 0 to 1 (and only after the sum is calculated do we divide by the number of pixel channels). And of course it would be much better to sum only the maximal channel (of RGB) of each pixel under the square root. This way we get a bigger average; for example, instead of (0.2+0.9)/2=0.55 we get''' <math>(0.2^{1/2}+0.9^{1/2})/2=(0.4472+0.94868)/2=0.6979</math>. '''It can be a root of any order, e.g. raising to 1/3 or 1/4, so as to adapt to very weak light only if the weak colours are ''really'' weak (say most colour values are 0.05-0.2), and not to adapt (or to adapt just a little) if even only 1/5 of the colours are strong (and 4/5 weak). Another way is to use numbers from 0 to 255 and calculate the average as the sum of all channels (or the maximal channel of each pixel) in logarithm, like this:''' <math>46.018*(\ln(255)+\ln(3))/2=46*(5.54+1.0986)/2=46*3.3199=153</math> (<math>255/\log_{2.7}(255)=255/\ln(255)=46.018</math>); compare the plain average (255+3)/2=129. <small><span class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Versatranitsonlywaytofly|Versatranitsonlywaytofly]] ([[User talk:Versatranitsonlywaytofly|talk]] • [[Special:Contributions/Versatranitsonlywaytofly|contribs]]) 08:38, 26 November 2011 (UTC)</span></small><!-- Template:Unsigned --> <!--Autosigned by SineBot-->
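The root-weighted average proposed in bold above can be sketched like this; the helper name is mine, and the input is assumed to be the per-pixel maximal-channel luminances, already scaled to 0-1:

```python
def root_average(luminances, power=0.5):
    """Average of per-pixel luminances (0..1), root-weighted so that
    bright pixels count for more than weak ones.

    power=0.5 is the square root suggested in the post; power=1.0
    degenerates to the plain arithmetic mean.
    """
    return sum(l ** power for l in luminances) / len(luminances)
```

On the post's example [0.2, 0.9] this gives about 0.698 versus the plain mean of 0.55, so the bright pixel dominates the estimate.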
::::''Update to the bold text''. The [[Natural logarithm]] function is really expensive and not very practical, but it is approximately equivalent to a fifth-order root. For example, <math>46.018*\ln(128) =223</math> and <math>255\cdot (128/255)^{1/5}=255\cdot 0.87123=222.</math> Another example <math>46.018
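The approximation claimed above checks out numerically; a quick sketch:

```python
import math

# The update's claim: 46.018*ln(x) is close to the cheaper
# fifth-root curve 255*(x/255)^(1/5), here checked at x = 128.
ln_curve = 46.018 * math.log(128)          # ~223
root_curve = 255 * (128 / 255) ** (1 / 5)  # ~222
```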
:::Why do I say "if the human eye is capable of adaptation"? Because the changing size of the human iris may be a [[Rudiment (disambiguation)|rudiment]]: in strong light it is hard to tell the difference between 1 and 5 (out of 0-255, if 5 appears in weak light and 1 in strong light). More than that: strong light, especially sunlight, entering the eye through the lens reflects from the iris and from the white of the eye; then, by the laws of physics, light passing from one medium to another (from the eye's lens to air) reflects again where the lens and the air meet, back toward the iris (you can check how a laser pointer reflects from air if you point it at a window). This reflection from air inside the lens probably produces most, if not all, of the light blooms, glows, glares and so on, so rather weak colours (say 0 to 20-50 out of 0-255) are washed out (overtaken) by this strong reflected light. Even from the iris itself, due to its not perfectly flat surface, light from a strongly illuminated point on the iris spills onto nearby bumpy receptors, and very weak light near strong light is mixed with the strong light's shining halo or glare. Also, the physical difference in iris size does not necessarily give 5-7 times greater sensitivity at maximum iris size than at minimum; it may give only 2, 1.5, or 1.3 times (which would mean that a monitor's maximum white is 1.3-2 times weaker than white paper lit by the sun, and that lamp light at 1-3 metres is not so weak compared with sunlight; but then two such lamps would have to illuminate more strongly than direct sunlight).
So if, say, a weak colour is 2 times stronger at maximum iris size than at minimum, then at maximum iris size a human sees 1-128 (out of 0-255, 0 being black) and at minimum iris size 2-255. But the eye probably does not select only those two ranges, 1-128 or 2-255; it also uses ranges in between, like 1.5-191, and it is hard to see the difference, and hard to tell whether darker objects near strong light (or strong luminance) are hidden by iris adaptation or by the blanking effect of the various blooms and glows caused by light reflecting from air inside the lens. Comparing colours at all is a hard task, even when they are shown on a monitor separated by black space: if one is RGB(255:0:0) and the other RGB(191:0:0), and they are not next to each other, it is hard to tell which is which. Maybe iris size stops being a rudiment only in the range from average to big, with nothing changing at all from average to small, etc.
:BTW, I ran all possible tests to see whether red, green or blue turns to gray when the basic colour is very, very weak (you need a monitor with a big contrast ratio; some poor CRT monitors can even be better, with a contrast ratio so large that values below 50 are invisible, so you need to calibrate contrast and brightness in the display driver software if you still want to use one). When RGB colours are very, very weak, at first glance it is harder to tell blue from green and much easier to tell red from any other; but no matter how weak they are, it is still possible to name the colour at any time with a 90-99% correct answer, especially for red, and especially when all the weak red, green and blue colours are displayed together. Specular highlights of all 3 colours, and the threshold of a colour like RGB(1:0.4:0), make it look red rather than orange, so the number of distinguishable colours decreases in the dark; and if an object mixes two RGB channels, only the stronger channel is seen in very weak light, with the weaker one below the visibility threshold. They are quite weak, so concentration is needed; maybe that is why colours are hard to recognise in the dark. So on a monitor at night you either see a very, very weak colour of a separate red, green, or blue channel, or you see nothing at all. So do not claim the nonsense that gray colours appear at night, that something in the eye makes everything monochrome: dark colours just look dark, and that is how it is. If you want night scenes in a game, specular highlights must dominate the material; in most cases this comes naturally, especially on most LCD monitors with a small contrast ratio like 300:1, where even 0 shines the way 30-50 does on a monitor with a big contrast ratio of 1000:1 or more.
So monitors with a small contrast ratio are better suited to daytime use; their LED backlight still almost drowns out a value of 3, 5 or 10, so you either do not see these weak colours, or if you do they are not pure red, green or blue but shifted toward their strong analogues: RGB(255:200:200) for red, RGB(200:255:200) for green, RGB(200:200:255) for blue. So there is no need to simulate gray for dark illumination in a game, because the LCD monitor's LED backlight and the room light already gray out weak colours quite well by themselves. But I have to admit that monitors with too big a contrast ratio push the whole colour spectrum slightly toward the 6 basic colours (like my unmodified algorithm): red, green, blue, cyan, yellow, pink; because on them 128 is no longer two times weaker than 255 but about 2.2 times, and 64 is not 2 times weaker than 128 but about 2.5 times. http://imageshack.us/g/827/rgbcolorsdark2.png/