Displacement mapping

The first commercially available renderer to implement a micropolygon displacement mapping approach through REYES was [[Pixar]]'s [[PhotoRealistic RenderMan]]. Micropolygon renderers commonly tessellate geometry themselves at a granularity suitable for the image being rendered. That is, the modeling application delivers high-level primitives, such as true [[NURBS]] or [[subdivision surfaces]], to the renderer, which then tessellates this geometry into micropolygons at render time using view-based constraints derived from the image being rendered.
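The view-based dicing step described above can be sketched as follows. This is a minimal illustration, not RenderMan's actual API: the function name and the pinhole-camera parameters are assumptions chosen for clarity.

```python
import math

def dicing_rate(edge_length, distance, focal_length, image_width_px, film_width):
    """Estimate how finely to subdivide a patch edge so that each
    resulting micropolygon spans roughly one pixel on screen.

    Assumes a simple pinhole camera; all parameter names here are
    illustrative, not part of any real renderer's interface.
    """
    # Projected length of the edge in pixels at the given camera distance.
    projected_px = (edge_length * focal_length / distance) * (image_width_px / film_width)
    # Dice finely enough that each micropolygon covers about one pixel
    # (a shading rate of 1); never dice below a single segment.
    return max(1, math.ceil(projected_px))
```

Because the rate is derived from the projected size, patches near the camera are diced much more finely than distant ones, which is what lets displacement detail stay at roughly pixel resolution across the image.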
 
Other renderers, which require the modeling application to deliver objects pre-tessellated into arbitrary polygons or even triangles, have defined the term displacement mapping as moving the vertices of these polygons. Often the displacement direction is also limited to the surface normal at the vertex. While conceptually similar, those polygons are usually much larger than micropolygons. The quality achieved with this approach is thus limited by the geometry's tessellation density long before the renderer gets access to it.
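The vertex-displacement variant described above amounts to offsetting each existing vertex along its normal by a sampled height. A minimal sketch, assuming NumPy arrays of per-vertex positions, unit normals, and already-sampled map heights (the function name is hypothetical):

```python
import numpy as np

def displace_vertices(positions, normals, heights):
    """Displace each vertex along its unit surface normal by the height
    sampled from the displacement map at that vertex.

    positions: (N, 3) array of vertex positions
    normals:   (N, 3) array of unit surface normals
    heights:   (N,)   scalar displacement per vertex
    """
    # Only existing vertices move; no new geometry is created, so the
    # result can be no finer than the input tessellation.
    return positions + normals * heights[:, np.newaxis]
```

Note that the output detail is capped by the input mesh density: a displacement map with features smaller than a polygon simply cannot be represented, which is exactly the limitation the paragraph above describes.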
 
This difference between displacement mapping in micropolygon renderers and displacement mapping in non-tessellating (macro)polygon renderers can often lead to confusion in conversations between people whose exposure to each technology or implementation is limited. Even more so since, in recent years, many non-micropolygon renderers have added the ability to do displacement mapping of a quality similar to what a micropolygon renderer delivers naturally. To distinguish this capability from the crude pre-tessellation-based displacement these renderers offered before, the term '''sub-pixel displacement''' was introduced to describe the feature.