Projective texture mapping
{{confusing|date=March 2012}}
{{Tone|date=April 2022}}
'''Projective texture mapping''' is a method of [[texture mapping]] that allows a textured image to be projected onto a scene as if by a [[slide projector]]. Projective texture mapping is useful in a variety of lighting techniques and it is the starting point for [[shadow mapping]].
 
 
== Fixed function pipeline approach ==
Historically{{ref|nvsdk_ptm}}, projective texture mapping was implemented with a special form of eye-linear texture coordinate generation{{ref|glEyeLinear}} (''tcGen'' for short). This transform was then multiplied by another matrix representing the projector's properties, stored in the texture coordinate transform matrix{{ref|glTCXform}}. The resulting concatenated matrix was essentially a function of both the projector's properties and the vertex eye positions.
 
The key point of this approach is that eye-linear tcGen is a function of vertex eye coordinates, which result from both the eye properties and the object-space vertex coordinates (more specifically, the object-space vertex position transformed by the model-view matrix).
Because of that, the corresponding texture matrix can be used to "shift" the eye properties so that the concatenated result is the same as using an eye-linear tcGen from a point of view that may differ from the observer's.
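The concatenation described above can be sketched in plain Python. This is an illustrative example, not code from any particular implementation: the matrix layouts, the helper names, and the deliberately simple translation-only camera and projector transforms are all assumptions chosen so the inverse of the eye view is trivial. It shows that applying the concatenated texture matrix to the eye-space position (what eye-linear tcGen produces) gives the same result as projecting the world-space position directly from the projector's point of view.

```python
def mat_mul(a, b):
    # 4x4 row-major matrix product.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    # 4x4 matrix times column vector.
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def translate(x, y, z):
    return [[1, 0, 0, x], [0, 1, 0, y], [0, 0, 1, z], [0, 0, 0, 1]]

# Eye (observer) view matrix and its inverse; pure translations are assumed
# here so the inverse stays trivial for the sketch.
eye_view     = translate(0, 0, -5)
eye_view_inv = translate(0, 0,  5)

# Projector view matrix; the projector's projection is left as identity.
proj_view = translate(-2, 0, -5)
proj_proj = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Scale-bias matrix remapping [-1, 1] clip space to [0, 1] texture space.
bias = [[0.5, 0, 0, 0.5],
        [0, 0.5, 0, 0.5],
        [0, 0, 0.5, 0.5],
        [0, 0, 0,   1  ]]

# Fixed-function style: eye-linear tcGen yields the eye-space position, and
# the texture matrix undoes the eye view, then applies the projector instead.
tex_matrix = mat_mul(bias, mat_mul(proj_proj, mat_mul(proj_view, eye_view_inv)))

world_pos = [1.0, 2.0, 3.0, 1.0]
eye_pos   = mat_vec(eye_view, world_pos)    # what eye-linear tcGen produces
tex_coord = mat_vec(tex_matrix, eye_pos)    # projective texture coordinate

# Same result as projecting the world position directly from the projector:
direct = mat_vec(bias, mat_vec(proj_proj, mat_vec(proj_view, world_pos)))
print(tex_coord, direct)
```

Because the texture matrix ends with the inverse of the eye view, the observer's contribution cancels out and only the projector's transforms remain, which is exactly the "shift" the text describes.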
 
== Programmable pipeline approach ==
The previous algorithm can then be reformulated by simply considering two model-view-projection matrices: one from the eye point of view and the other from the projector point of view.
 
In this case, the projector's model-view-projection matrix is essentially the aforementioned concatenation of the eye-linear tcGen with the intended projector shift function.
By using those two matrices, a few shader instructions suffice to output both the transformed eye-space vertex position and a projective texture coordinate. The latter is obtained simply by applying the projector's model-view-projection matrix: in other words, it is the position the vertex would have if the projector were the observer.
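The two-matrix formulation can be sketched as a vertex-shader-style computation, again in plain Python for illustration. Everything here is an assumption made for the example: the matrix values, the function names, and the folding of the [0, 1] scale-bias into the projector matrix. The fragment stage then performs the projective divide by the coordinate's fourth component, as a `textureProj`-style lookup would.

```python
def mat_vec(m, v):
    # 4x4 row-major matrix times column vector.
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

# Eye model-view-projection matrix; identity is assumed here, so the clip
# position equals the input vertex.
eye_mvp = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

# Projector model-view-projection matrix with the [0, 1] scale-bias already
# folded in (the "projector shift" concatenation); the last row makes the
# output w equal to -z, giving a perspective divide by depth.
projector_mvp = [[0.5, 0, 0, 0.5],
                 [0, 0.5, 0, 0.5],
                 [0, 0, 0.5, 0.5],
                 [0, 0, -1.0, 0]]

def vertex_shader(obj_pos):
    clip_pos  = mat_vec(eye_mvp, obj_pos)        # position for the rasterizer
    tex_coord = mat_vec(projector_mvp, obj_pos)  # projective texture coordinate
    return clip_pos, tex_coord

def fragment_lookup(tex_coord):
    # Projective divide: (s/q, t/q) indexes the projected texture.
    s, t, _, q = tex_coord
    return (s / q, t / q)

clip_pos, tex_coord = vertex_shader([1.0, 2.0, -2.0, 1.0])
st = fragment_lookup(tex_coord)
print(clip_pos, st)
```

Only the matrix differs between the two outputs: the same vertex position is pushed through the eye's matrix for rasterization and through the projector's matrix for texturing, which is the whole of the programmable-pipeline reformulation.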