Displacement Mapping

Started by
3 comments, last by ET3D 17 years, 9 months ago
Hey guys, I was just wondering, what are the minimum requirements to perform displacement mapping? I have yet to learn how to do it, but before I do, I want to make sure my comp can handle it. I know this is a very expensive technique. My specs are: Intel Celeron 2.6GHz, NVIDIA GeForce FX 5200, 256MB RAM. Yeah, I know that these are very crappy specs, but until I finish my new comp, this is what I have to put up with. Thanks.
That most likely depends on how big the object you want to apply the effect to is.

Since displacement mapping is a per-pixel effect, the number of pixels drawn with the effect would have a high impact on how much time it takes.

I'd recommend you go ahead and try it. There are all kinds of optimizations that can be done later on if it isn't working well enough.

As far as support for the correct Shader Model, and general features required, you shouldn't have a problem getting it to work.

[EDIT] Whoops, might as well ignore my incoherent ramblings :).

[Edited by - sirob on July 8, 2006 11:08:50 PM]
Modern displacement mapping requires a vertex texture fetch. Unfortunately your card does not support this. Shader Model 3.0 cards do (although I'm not honestly sure about ATI's support of this).
There are two main ways to implement vertex-based displacement mapping on the GPU. If you really mean pixel-level displacement mapping approximations then this doesn't apply :)

1) Vertex texture fetch, which is currently only supported on NVidia shader model 3 cards (GeForce6 & 7 series). This works by binding up to 4 textures which can be accessed by the vertex shader, which you can use to displace your verts (or whatever else you want to do with it). You can access it with arbitrary uvs so you can do scrolling, scaling or whatever.
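The idea behind a vertex texture fetch can be sketched on the CPU: each vertex samples a height map at its UV coordinates and is pushed along its normal by that amount. This is just an illustrative sketch (all names here are made up, and on the GPU this logic would live in the vertex shader):

```python
def sample_point(heightmap, u, v):
    """Nearest-neighbour (point) sample of a 2D list of heights,
    matching the point-sampling limitation of vertex texture fetch."""
    h = len(heightmap)
    w = len(heightmap[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return heightmap[y][x]

def displace(vertices, normals, uvs, heightmap, scale=1.0):
    """Displace each vertex along its normal by the sampled height."""
    out = []
    for (px, py, pz), (nx, ny, nz), (u, v) in zip(vertices, normals, uvs):
        d = sample_point(heightmap, u, v) * scale
        out.append((px + nx * d, py + ny * d, pz + nz * d))
    return out
```

Because the UVs are arbitrary inputs, scrolling or scaling the displacement is just a matter of offsetting or multiplying them before the fetch.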

Some limitations exist: only floating-point texture formats are supported, sampling is point sampling only (you can do the bilinear filtering yourself if you want, but four vertex fetches are expensive), there's no anisotropic filtering, and mipmapping is manual - you have to supply the LOD you wish to access the texture at.
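The "do the bilinear yourself" workaround mentioned above amounts to taking four point samples around the lookup position and blending them by the fractional coordinates. A CPU-side sketch (illustrative names, texel centres assumed at integer coordinates):

```python
def sample_bilinear(heightmap, u, v):
    """Manual bilinear filter built from four point fetches, which is
    what you'd have to do in the vertex shader at 4x the fetch cost."""
    h = len(heightmap)
    w = len(heightmap[0])
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend horizontally on the top and bottom rows, then vertically.
    top = heightmap[y0][x0] * (1 - fx) + heightmap[y0][x1] * fx
    bot = heightmap[y1][x0] * (1 - fx) + heightmap[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```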


2) A two-pass approach using Render To Vertex Buffer (R2VB). This is supported all the way back to the Radeon 9800 I believe, though you can render and access at least 4 streams on newer ATI hardware (X1k series) instead of 1 on older cards. The way this approach works is you render to a special render target (the R2VB FourCC code, I believe) which can be accessed by the vertex shader as a separate stream (or several on later hardware). So on the first pass you render your displacements from the pixel shader, reading your displacement map texture and writing into your vertex stream(s), and the second pass actually uses that data on the final geometry.

So some disadvantages are that this is a 2 pass approach and it takes up more memory (potentially). Some advantages: the pixel shader has full filtering, you can access all 16 samplers with arbitrary texture formats when rendering into the buffer (which may offset the extra memory used to store the streams) and you have the full power of the pixel shader unit when rendering into the buffer.
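The two-pass flow described above can be simulated on the CPU to show the data movement. Pass 1 plays the role of the pixel shader writing displacements into a buffer; pass 2 plays the vertex shader reading that buffer back as an extra stream. All names are illustrative; real R2VB binds a special render target rather than a Python list:

```python
def pass1_write_displacements(uvs, heightmap, scale):
    """'Pixel shader' pass: sample the displacement map once per vertex
    and write the results into a buffer (the render target in R2VB)."""
    buf = []
    for u, v in uvs:
        h = len(heightmap)
        w = len(heightmap[0])
        x = min(int(u * w), w - 1)
        y = min(int(v * h), h - 1)
        buf.append(heightmap[y][x] * scale)
    return buf

def pass2_displace(vertices, normals, disp_buffer):
    """'Vertex shader' pass: read the buffer as a separate vertex stream
    and displace each vertex along its normal."""
    return [(px + nx * d, py + ny * d, pz + nz * d)
            for (px, py, pz), (nx, ny, nz), d
            in zip(vertices, normals, disp_buffer)]
```

Splitting the work this way is what buys you full pixel-shader filtering and sampler access in pass 1, at the cost of the extra pass and the buffer memory.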
I'd like to add that when using R2VB (which is ATI only, BTW), it may actually be beneficial to do all the processing in the pixel shader, if you have the latest ATI cards, since they have more pixel processing power than vertex processing power (some of them very significantly so, like the X1900).

But in any case, the answer still is: you have to use different code for ATI and NVIDIA, and neither method will run on your card.

