The second was with a 3D simplex noise function.

There were a number of ways of visualizing the computed data, but I made only a single material. It's actually an environment map used as a part of the ambient component, done with spherical harmonics lighting. If you look at the images above, you'll notice that all faces with the same normal look the same (all my normals are per face; I didn't want to do a second pass for per-vertex normals). That's because the only component is SH, which gives a very nice, though unrealistic, visualization.

The gem of this project, however, is VSAO/SSAO (a combination of View Space Ambient Occlusion and Screen Space Ambient Occlusion). Here's an example of the difference with it applied (mouse-over to change image), and one of the checker ball, which is easier to see.

Now, for the way the AO works, and why it's not simply SSAO. Normal SSAO uniformly samples in a "sphere" around every pixel by sampling from the depth buffer. This version of SSAO uses a position buffer in addition to a normal buffer. Instead of sampling from the depth buffer and attempting to recreate the correct position, we sample from the position buffer and use the real view-space distance between the two points. The process here essentially does the same as normal SSAO.

GPU Gems 2

GPU Gems 2 is now available, right here, online. You can purchase a beautifully printed version of this book, and others in the series, at a 30% discount courtesy of InformIT and Addison-Wesley. The CD content, including demos and content, is available on the web and for download.

Chapter 7. Adaptive Tessellation of Subdivision Surfaces with Displacement Mapping

In this chapter we describe how to perform view-dependent, adaptive tessellation of Catmull-Clark subdivision surfaces with optional displacement mapping. We use the GPU to do the tessellation calculations, which saves graphics bus bandwidth and is many times faster than using the CPU. The first part of this chapter explains how to tessellate subdivision surfaces to polygons for rendering high-quality curved surfaces without visible polygon artifacts. The second part of this chapter describes how to add displacement-mapping support for rendering highly detailed models that can be animated in real time.

This chapter takes a repeated subdivision approach to tessellation, implemented by rendering into 2D textures. The subdivision, flatness test, and final vertex attribute calculations are done using fragment programs (pixel shaders). This method assumes that the vertex data of the subdivision surface control mesh are stored in a texture map. Intermediate results are also rendered to and read from texture maps, and the final tessellation results (position, normal, and so on) are rendered to a vertex array ready to be used by a render-primitives call such as glDrawElements().

Subdivision surfaces are arguably the most popular curved-surface representation used in computer graphics today. Specifically, Catmull-Clark subdivision surfaces are supported in practically every 3D modeling and animation application. Introduced to the big screen by Pixar with A Bug's Life, they have been used in all of Pixar's movies since, including Finding Nemo and The Incredibles. Subdivision surfaces are often combined with displacement mapping to add extra detail for computer-generated characters in live-action movies. The most striking example of a displacement-mapped subdivision surface model is probably the creature Gollum from the recent Lord of the Rings movies.
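The chapter excerpt mentions a flatness test that decides when to stop subdividing, without spelling it out. A common formulation measures the distance from a midpoint sample to the chord between its neighbors and compares it against a tolerance. This is a minimal CPU sketch of that idea in Python; the chapter's actual test is view-dependent and runs in a fragment program, and the function name and tolerance here are illustrative:

```python
import numpy as np

def is_flat(p0, pm, p1, tol=0.01):
    """Chord-distance flatness test (illustrative sketch).

    p0, p1: endpoints of an edge of the patch.
    pm:     the surface sample at the parametric midpoint.
    Returns True if pm lies within `tol` of the chord p0-p1,
    meaning further subdivision of this edge is unnecessary.
    """
    p0, pm, p1 = (np.asarray(p, dtype=float) for p in (p0, pm, p1))
    chord = p1 - p0
    # Project pm onto the chord and measure the perpendicular distance.
    t = np.dot(pm - p0, chord) / np.dot(chord, chord)
    closest = p0 + t * chord
    return bool(np.linalg.norm(pm - closest) <= tol)
```

An adaptive tessellator would apply such a test per edge after each subdivision pass and keep refining only where it fails; a view-dependent version would measure the distance in projected screen space instead.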
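The spherical-harmonics ambient term described earlier explains why every face with the same normal shades identically: the ambient color depends only on the normal direction. A minimal sketch of that evaluation, assuming a 4-coefficient (bands 0 and 1) SH environment; only the basis constants are standard, and the coefficient data and function name are hypothetical:

```python
import numpy as np

# Standard real SH basis constants for the first two bands.
Y00 = 0.282095   # l = 0 (constant)
Y1  = 0.488603   # l = 1 (scaled by y, z, x respectively)

def sh_ambient(normal, coeffs):
    """Evaluate a band-0/1 SH environment for a unit normal.

    coeffs: (4, 3) array of RGB coefficients per basis function
            (hypothetical data, e.g. projected from an environment map).
    Returns the (3,) RGB ambient contribution for that normal.
    """
    x, y, z = normal
    basis = np.array([Y00, Y1 * y, Y1 * z, Y1 * x])
    return basis @ coeffs
```

Because the result is a function of the normal alone, per-face normals give flat-looking facets, exactly as noted above.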
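The VSAO/SSAO variant described above reads true view-space positions from a position buffer instead of reconstructing them from depth. This is a CPU sketch of that idea, not the author's actual shader; the buffer layout, sampling pattern, and falloff are assumptions made for illustration:

```python
import numpy as np

def vsao(pos_buf, norm_buf, x, y, radius=4, max_dist=1.0):
    """Position-buffer ambient occlusion sketch.

    pos_buf:  (h, w, 3) view-space positions per pixel.
    norm_buf: (h, w, 3) unit view-space normals per pixel.
    Neighbors in front of the surface (along the normal) occlude,
    weighted by their real view-space distance -- no depth
    reconstruction involved. Returns an AO factor in [0, 1].
    """
    h, w, _ = pos_buf.shape
    p, n = pos_buf[y, x], norm_buf[y, x]
    occlusion, samples = 0.0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            sx, sy = x + dx, y + dy
            if (dx == 0 and dy == 0) or not (0 <= sx < w and 0 <= sy < h):
                continue
            d = pos_buf[sy, sx] - p          # real view-space offset
            dist = float(np.linalg.norm(d))
            samples += 1
            if dist < 1e-6 or dist > max_dist:
                continue
            # Occlusion grows when the sample sits above the surface,
            # and falls off with true view-space distance.
            occlusion += max(np.dot(d / dist, n), 0.0) * (1.0 - dist / max_dist)
    return 1.0 - occlusion / max(samples, 1)
```

A flat region yields an AO factor of 1.0 (no occlusion), while geometry rising above the surface within `max_dist` darkens it, which matches the behavior of conventional SSAO without the position-reconstruction error.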