Recommended Posts

I have geomipmapping working fine. Now I'm trying to implement geomorphing to remove the popping, but I'm having trouble coming up with an efficient implementation. I store an index buffer for every LOD in each patch, plus one vertex buffer that contains every static vertex (I'm using 65x65 patches). My current plan for geomorphing is to store the original value of each vertex in question, change its value while it is in the transition range, then restore the original value once it is no longer morphing. My concern is the overhead of copying these vertices before morphing starts and restoring them once it is over. I suppose I could store the original values for all morphing candidates up front so no copying is needed, but that may require too much memory. What do you think? Thanks, -Wil
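For reference, the store-and-restore scheme described above might look something like this on the CPU side (a minimal sketch; the struct names and morph-factor convention are assumptions for illustration, not the poster's actual code):

```cpp
// Hypothetical per-vertex morph state for a 65x65 patch;
// only the height (z) morphs.
struct MorphingVertex {
    float originalZ;  // stored before morphing starts
    float coarseZ;    // height this vertex collapses to at the next LOD
};

// t = 0 -> full detail (original height), t = 1 -> fully
// collapsed to the coarse height; values in between lerp.
float morphHeight(const MorphingVertex& m, float t) {
    if (t <= 0.0f) return m.originalZ;  // not morphing: original value
    if (t >= 1.0f) return m.coarseZ;    // past the range: coarse value
    return m.originalZ + t * (m.coarseZ - m.originalZ);  // lerp
}
```

Storing `originalZ` for every morph candidate up front, as suggested, trades memory for the per-frame copy; at one float per vertex in a 65x65 patch that is roughly 16 KB per patch, which may or may not be acceptable depending on patch count.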

I use a static vertex buffer and do the geomorphing in the vertex shader. There will be a performance hit if you change the values in the vertex buffer every frame.

In each vertex, I store the highest LOD (lowest resolution) at which the vertex is used, and the height of the terrain at the next higher LOD (h1) at this vertex's location; the vertex's own height serves as h0.

I pass in the current LOD+lerp value in a shader constant. For example, 5.7 means to use LOD 5 and we are 70% of the way to LOD 6.

In the shader, I subtract the vertex's LOD value from this constant and use the result (clamped between 0 and 1) to lerp between h0 and h1.

Warning: I don't remember exactly how I implemented it, so some of the details might be wrong.

[edited by - JohnBolton on April 17, 2004 3:37:33 PM]

I'm not sure whether this will help you or not, but there was an article a while back about doing geomorphing in a vertex shader.

Here is a link.

Thank you for the responses. I was going to implement this in software first, then try for hardware, but now I'm thinking I should just do the hardware implementation. Since I've never messed with vertex shaders before, I have a question:

The terrain tutorial in the link above has some code for using vertex shaders, but it's really brief. I know that NeHe has a vertex shader tutorial for OpenGL, but it requires Nvidia's Cg. Will I need to install similar libraries to accomplish this? Is there a good shader book, or some good online tutorials I haven't found yet, that can show me what I need to get the geomorphing in the vertex shader finished? Thanks -Wil

Vertex-shader-wise, it all depends on what you want to target and how you want to write the code.

If you don't mind getting your hands dirty, you can write the code in assembler using the ARB_vertex_program extension, which should be available on practically every card out there with a half-decent OpenGL implementation (which covers pretty much every Nvidia and ATI card in common existence now).
On the plus side, you don't need any libraries, just the extension present to use it.
On the downside, it's assembler, which is pretty much being left behind shader-coding-wise in favour of high-level languages.

If you don't mind installing/linking some libs, and/or you want to write in a high-level shading language for, again, every card out there, then Cg is your friend. It will do its thing via the ARB_vertex_program extension as above, but your code will be in slightly easier-to-read/understand high-level form.
On the plus side, you don't have to learn assembler, just a C-like language.
On the downside, you'll have to link an extra library and do whatever else is required to make Cg go (I've no practical experience with it).

Finally, if you want to say to hell with all pre-DX9-class hardware (so only GFFX and 9500+ cards), you can use the OpenGL Shading Language for your shaders. It is a high-level language like Cg, but GLSL is native to OpenGL, doesn't require extra libs, and just requires the ARB_vertex_shader extension.
On the plus side, it's OpenGL's native shading language, high level, and requires no extra libs.
On the downside, anything pre-GFFX from Nvidia and pre-R3x0 from ATI (before the 9500) won't be able to run your code, which will take a chunk out of your user base.

At the end of the day you have to weigh it up:

GLSL:
+ high-level shading language
+ OpenGL native
+ no extra libs
- won't work on pre-DX9-class hardware
- implementations are still a little bit buggy

Cg:
+ high-level shading language
+ works on probably every card out there (fast software fallback if not in hardware)
- extra lib to link/include

ARB_vertex_program:
+ no extra lib
+ works on probably every card out there (fast software fallback if not in hardware)
- harder to work with than a high-level language

Those are basically the pros (+) and cons (-) of the three methods.

As for books, I can't speak for ARB_vertex_program or Cg, but OpenGL Shading Language (the orange book) is a very good book to get hold of if you want to learn GLSL.

Here is my vertex shader. I'm not a shader expert, so I wouldn't be surprised if there is a much better way.
// Constants
// c0-c3 view-projection matrix
// c4 lod (current LOD + lerp fraction)
// c5 0,0,0,0
// c6 1,1,1,1

// Inputs
dcl_position0 v0 // position data
dcl_texcoord0 v3 // light map uv coords
dcl_texcoord1 v4 // detail map uv coords
dcl_blendweight v2 // max lod at which this vertex is used
dcl_position1 v1 // interpolated z (height at the next LOD)

// Outputs
// oPos vertex coordinates
// oT0 light map uv
// oT1 detail map uv

add r0, c4.xxxx, -v2.xxxx // lod difference (lod - max lod)
max r0, r0, c5 // clamp to 0 from below
min r0, r0, c6 // clamp to 1 from above
mov r2, v0 // copy position
add r1, v1.xxxx, -r2.zzzz // z difference (interpolated z - z)
mad r2.z, r0.z, r1.z, r2.z // lerp: r2.z = v0.z + r0.z * (v1.x - v0.z)
m4x4 oPos, r2, c0 // transform by view-projection matrix
mov oT0, v3 // pass light map uv
mov oT1, v4 // pass detail map uv

Thank you Bolton, Phantom, and Reaptide for all the valuable info. I'll be playing around with it for a couple of days to learn. I think I'm going to try it with assembler, like JohnBolton posted. Even though I have no experience with it, this site seems to lay it out pretty well: http://www.flipcode.com/tutorials/tut_dx8shaders.shtml . Thank you very much. -Wil

Yeah, that site looks to be quite good. However, keep in mind that there are some differences between OpenGL vertex shaders and D3D vertex shaders, instruction/variable-name-wise, and that tutorial deals with D3D shaders. While it should give you the correct grounding in the subject, you'll have to look up the OpenGL syntax to make sure (http://developer.nvidia.com or http://www.ati.com/developer/ are probably good places to start).

Good luck with it.
