
Displacement mapping

From Wikipedia, the free encyclopedia

Displacement mapping is an alternative to bump mapping, normal mapping, and parallax mapping. It uses a heightmap to displace the actual geometric positions of points on the textured surface along the local surface normal, according to the values stored in the texture. This gives textures a strong sense of depth and detail, permitting in particular self-occlusion, self-shadowing and correct silhouettes; on the other hand, it is the most costly technique of this class, owing to the large amount of additional geometry.
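
A minimal sketch of this core operation might look as follows, assuming per-vertex positions, unit normals and texture (UV) coordinates together with a greyscale heightmap; the function and parameter names are illustrative and not taken from any particular API:

import numpy as np

def displace_vertices(positions, normals, uvs, heightmap, scale=1.0):
    """Move each vertex along its unit normal by the heightmap value
    sampled at the vertex's UV coordinate (nearest-texel lookup).
    positions, normals: (N, 3) arrays; uvs: (N, 2); heightmap: (H, W)."""
    h, w = heightmap.shape
    # Map UVs in [0, 1] to integer texel indices.
    tx = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    ty = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    heights = heightmap[ty, tx]                      # one height per vertex
    return positions + normals * (scale * heights)[:, None]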

For years, displacement mapping was a peculiarity of high-end rendering systems such as RenderMan, while real-time application programming interfaces, such as OpenGL and DirectX, lacked this capability. One reason for this absence is that the original implementation of displacement mapping required an adaptive tessellation of the surface in order to obtain micropolygons whose size matched the size of a pixel on the screen.

With graphics hardware supporting Shader Model 3.0, displacement mapping can be interpreted as a kind of "vertex-texture mapping", where the values of the texture map do not alter pixel colors (as is much more common) but instead change the positions of vertices. Unlike bump, normal and parallax mapping, all of which can be said to "fake" the behavior of displacement mapping, this technique produces a genuinely rough surface from a texture. It has to be used in conjunction with adaptive tessellation techniques (which increase the number of rendered polygons according to the current viewing settings), as sketched below, to produce highly detailed meshes.
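
As an illustration of the adaptive part, the tessellation level of an edge can be chosen from its projected size on screen. The following sketch uses a deliberately crude heuristic and assumes a combined view-projection matrix applied to row vectors; all names are illustrative:

import numpy as np

def edge_tessellation_level(p0, p1, view_proj, screen_height, target_px=1.0):
    """Choose how many segments to split an edge into so that each segment
    covers roughly `target_px` pixels after projection (a simple heuristic;
    real implementations use more elaborate, view-dependent metrics)."""
    def to_ndc(p):
        clip = np.append(p, 1.0) @ view_proj   # homogeneous transform (row-vector convention)
        return clip[:2] / clip[3]              # perspective divide -> normalized device coords
    ndc_length = np.linalg.norm(to_ndc(p1) - to_ndc(p0))
    pixels = 0.5 * ndc_length * screen_height  # NDC spans 2 units across the viewport
    return max(1, int(np.ceil(pixels / target_px)))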

Meaning of the term in different contexts

Renderers using the REYES algorithm, or similar approaches based on micropolygons, have supported displacement mapping at arbitrarily high frequencies since they became available almost 20 years ago.

The first commercially available renderer to implement this was Pixar's PhotoRealistic RenderMan. These renderers commonly tessellate geometry themselves, at a granularity suitable for the image being rendered. That is, the modeling application delivers high-level primitives such as true NURBS or subdivision surfaces (rather than pre-tessellated polygon meshes) to the renderer.

Other renderers, which require the modeling application to deliver objects pre-tessellated into arbitrary polygons or even triangles, have defined the term displacement mapping as moving the vertices of these polygons. While conceptually similar, those polygons are usually much larger than micropolygons. The quality achieved with this approach is thus limited by the geometry's tessellation density long before the renderer gets access to it.

This difference between displacement mapping in micropolygon renderers and displacement mapping in non-tessellating ([macro]polygon) renderers can often lead to confusion in conversations between people whose exposure to each technology or implementation is limited.

This is even more the case because, in recent years, many non-micropolygon renderers have added the ability to do displacement at a quality similar to what a micropolygon renderer delivers naturally. To distinguish this feature from the crude, pre-tessellation-based displacement these renderers offered before, the term sub-pixel displacement was introduced.

Sub-pixel displacement commonly refers to taking geometry that has already been tessellated into (macro)polygons, dicing it further into micropolygons (often microtriangles), and then moving these along their normals to achieve the displacement mapping.
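
As a sketch of this idea, a single input triangle can be diced into a grid of micro-vertices by barycentric interpolation, with each micro-vertex then moved along its interpolated normal. The function and parameter names below are illustrative, and a real renderer would also emit connectivity for the resulting microtriangles:

import numpy as np

def dice_and_displace(corners, corner_normals, corner_uvs, heightmap,
                      segments=16, scale=1.0):
    """Dice one triangle into a grid of micro-vertices via barycentric
    interpolation, then push each micro-vertex along its interpolated
    normal by the heightmap value sampled at its interpolated UV."""
    corners = np.asarray(corners, dtype=float)          # (3, 3) corner positions
    corner_normals = np.asarray(corner_normals, dtype=float)
    corner_uvs = np.asarray(corner_uvs, dtype=float)    # (3, 2) corner UVs
    h, w = heightmap.shape
    micro_vertices = []
    for i in range(segments + 1):
        for j in range(segments + 1 - i):
            u, v = i / segments, j / segments
            bary = np.array([1.0 - u - v, u, v])        # barycentric weights
            pos = bary @ corners
            normal = bary @ corner_normals
            normal /= np.linalg.norm(normal)
            uv = bary @ corner_uvs
            tx = min(int(uv[0] * (w - 1)), w - 1)
            ty = min(int(uv[1] * (h - 1)), h - 1)
            micro_vertices.append(pos + normal * heightmap[ty, tx] * scale)
    return np.array(micro_vertices)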

True micropolygon renderers have always been able to do what sub-pixel displacement achieves, but at higher quality and in arbitrary displacement directions.

It is already apparent, though, that as some of the renderers that use sub-pixel displacement move towards supporting higher-level geometry while sticking with the term, the meaning of displacement mapping in 3D computer graphics will become further obscured.
