Compressed terrain -> GPU

I've heard that the amount of data passed to the graphics card makes a big difference to performance. My terrain compresses very well: it's a regular grid, so the x and y coordinates are easily derived, and I store a 16-bit height value and an 8-bit lightmap value, which works out to 4 bytes per vertex. Expanding to my normal FVF takes 24 bytes per vertex, which for a map with millions of points is a big difference! However, on non-shader cards the multiplications and other simple ops needed to expand into FVF format are a big CPU drain.

Can I make my FVF store just, say, the DIFFUSE colour, packed with my custom format, and then use a vertex shader to do all the unpacking on the GPU? The catch is that on non-shader cards I'd have to run that vertex shader in software. Would the shader end up doing more work overall and slow things down? On the CPU I can guarantee each vertex gets expanded only once, but the shader will run every time the vertex is referenced, and index buffers don't eliminate the problem because I may share a vertex across more than one call to DrawPrimitive.

So, any advice? I know it'd be faster on shader-equipped cards, but they're so quick anyway; my low-end spec is something like a TNT or Voodoo3. One other thing: how do I know which vertex shader outputs get picked up by the default (fixed-function) pixel stage on non-shader cards?
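To make the idea concrete, here's roughly the sort of unpacking shader I'm picturing, in vs.1.1 assembly. Everything about the layout is just an assumption for the sketch: a vertex shader can't see the vertex index, so the local grid offset gets passed in a second packed D3DCOLOR register (8 bytes per vertex instead of 4, but still well under 24), the stream would be declared as two D3DVSDT_D3DCOLOR registers, and the constants like gridSpacing and heightScale are made-up values I'd upload with SetVertexShaderConstant.

vs.1.1
; constants (all assumed for illustration):
;   c0-c3 : transposed world*view*projection matrix
;   c4    : patchOriginX, patchOriginZ, 0, 0
;   c5    : gridSpacing, gridSpacing, heightScale, 1.0
;   c6    : 255.0, 65280.0, 0.0, 0.0  (undo the 0..1 scaling D3DCOLOR inputs get)
; inputs (assumed packing):
;   v0 : D3DCOLOR -> x = heightLow/255, y = heightHigh/255, z = lightmap/255, w = spare
;   v1 : D3DCOLOR -> x = localGridX/255, y = localGridZ/255  (offsets inside the patch)

; 16-bit height = lowByte + highByte*256, then scale to world units
mul r0.x, v0.x, c6.x          ; lowByte
mad r0.x, v0.y, c6.y, r0.x    ; + highByte*256
mul r0.y, r0.x, c5.z          ; * heightScale

; world x/z = patch origin + byte offset * grid spacing
mul r1, v1, c6.x              ; offsets back to 0..255
mul r1, r1, c5.x              ; * gridSpacing
add r1, r1, c4                ; + patch origin (worldX in r1.x, worldZ in r1.y)

; assemble the position (w = 1) and transform it
mov r2.x, r1.x
mov r2.y, r0.y
mov r2.z, r1.y
mov r2.w, c5.w
dp4 oPos.x, r2, c0
dp4 oPos.y, r2, c1
dp4 oPos.z, r2, c2
dp4 oPos.w, r2, c3

; lightmap byte goes out as a grey diffuse colour for the fixed-function pixel stage
mov oD0, v0.z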
Read about my game, project #1 NEW (13th August): A new screenshot is up, plus diaries for week #3
John 3:16
