stakon

[DirectX9 C++] - Advanced mesh coloring



Good day. I am developing a program where real buildings are drawn and analyzed. I am trying to create the following effect: when a wall or pillar is damaged (by an earthquake, for example), I want to represent this damage by changing it to a reddish color. Within the same mesh that represents a wall, I want its base to be green (no damage) and the color to gradually turn red toward the top. So I have one mesh, but I need to apply a scale of colors across it.

I guess I have to create meshes with more vertices than needed and gradually apply different colors across them, but I find this idea a bit inefficient. Any other ideas? Thanks in advance for any guidelines on the matter.

PS. Some of these meshes are created manually with D3DXCreateMeshFVF and some are created by D3DXCreateCylinder.

How precisely do you need to control the gradient? If there are just two boundary colors and the gradient can be linear, you can simply set one color (green) on the two bottom vertices of the wall and the other color (red) on the two top vertices; this will automatically create a linear gradient between them.

But if you want a more complicated gradient (for example, a red-green gradient is better if it goes through yellow in the middle, so red-yellow-green), you would have to add another row of vertices.
Or, for full control of the gradient behaviour, you could utilize texture coordinates together with vertex colors to store some information for your own use and then use them in your pixel shader to create the gradient.
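
For what it's worth, here is a minimal sketch of that first approach, assuming a fixed-function IDirect3DDevice9* named device (the wall dimensions and colors are made up for illustration): the two bottom vertices carry green, the two top vertices carry red, and the rasterizer interpolates the diffuse color in between.

#include <d3d9.h>

// Vertex layout: position plus a diffuse color the pipeline interpolates.
struct ColoredVertex
{
    float x, y, z;   // position
    DWORD color;     // diffuse vertex color
};
#define COLORED_VERTEX_FVF (D3DFVF_XYZ | D3DFVF_DIFFUSE)

// Draws one wall quad: green at the base, red at the top.
void DrawDamageWall(IDirect3DDevice9* device)
{
    ColoredVertex wall[4] =
    {
        // bottom edge: green (no damage)
        { 0.0f, 0.0f, 0.0f, D3DCOLOR_XRGB(0, 255, 0) },
        { 4.0f, 0.0f, 0.0f, D3DCOLOR_XRGB(0, 255, 0) },
        // top edge: red (full damage)
        { 0.0f, 3.0f, 0.0f, D3DCOLOR_XRGB(255, 0, 0) },
        { 4.0f, 3.0f, 0.0f, D3DCOLOR_XRGB(255, 0, 0) },
    };

    // With lighting off the rasterizer uses the diffuse vertex color directly
    // and interpolates it across each triangle. Culling is disabled only so
    // the sketch is visible from either side.
    device->SetRenderState(D3DRS_LIGHTING, FALSE);
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
    device->SetFVF(COLORED_VERTEX_FVF);
    device->DrawPrimitiveUP(D3DPT_TRIANGLESTRIP, 2, wall, sizeof(ColoredVertex));
}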

Hi Tom,

The control over the gradient should be precise, so that the damage representation is as accurate as possible.
So one solution is to use more rows of vertices and color them appropriately.

This can work with my custom-made meshes, but how can I access the vertices of meshes created automatically by D3DXCreateCylinder?

I am confused by the other solution you propose. Do you mean I should have a set of different textures and apply the most accurate one depending on vertex information?

PS - I am not using shaders.

Quote:

This can work with my custom-made meshes, but how can I access the vertices of meshes created automatically by D3DXCreateCylinder?

You can access the vertex buffer of a mesh by using GetVertexBuffer, or directly with the LockVertexBuffer method of the mesh.
If you create the cylinder with enough rows of vertices (that's one of the parameters of D3DXCreateCylinder), you can lock the vertex buffer and manually change the colors of the vertices.
But don't forget that meshes created with D3DXCreate* don't have color information in their vertex declaration. The way around this is to use CloneMesh (or CloneMeshFVF) and change the vertex declaration (FVF).
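
In case it helps, here is a rough sketch of that cloning approach (the device pointer, cylinder dimensions and the green-to-red mapping are illustrative, not from this thread):

#include <d3dx9.h>

// Matches D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE.
struct ColoredVertex
{
    D3DXVECTOR3 pos;
    D3DXVECTOR3 normal;
    DWORD       color;
};
#define COLORED_FVF (D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE)

// Builds a cylinder, clones it to an FVF with a diffuse component and
// colors each vertex from green (base) to red (top) by its height.
ID3DXMesh* CreateDamageCylinder(IDirect3DDevice9* device)
{
    ID3DXMesh* cylinder = NULL;
    // 20 stacks -> 21 rings of vertices along the axis, enough for a smooth gradient.
    D3DXCreateCylinder(device, 0.5f, 0.5f, 3.0f, 16, 20, &cylinder, NULL);

    ID3DXMesh* colored = NULL;
    cylinder->CloneMeshFVF(D3DXMESH_MANAGED, COLORED_FVF, device, &colored);
    cylinder->Release();

    ColoredVertex* v = NULL;
    if (SUCCEEDED(colored->LockVertexBuffer(0, (void**)&v)))
    {
        DWORD count = colored->GetNumVertices();
        for (DWORD i = 0; i < count; ++i)
        {
            // D3DXCreateCylinder builds the mesh along the z axis, centered at
            // the origin, so z runs from -1.5 to +1.5 for a length of 3.
            float t = (v[i].pos.z + 1.5f) / 3.0f;    // 0 at the base, 1 at the top
            BYTE r  = (BYTE)(255.0f * t);
            BYTE g  = (BYTE)(255.0f * (1.0f - t));
            v[i].color = D3DCOLOR_XRGB(r, g, 0);     // green -> red
        }
        colored->UnlockVertexBuffer();
    }
    return colored;
}

Rendering it then only needs lighting switched off (or D3DRS_COLORVERTEX set up) so the diffuse vertex color is actually used, just as with a hand-built wall.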

Quote:

I am confused by the other solution you propose. Do you mean I should have a set of different textures and apply the most accurate one depending on vertex information?

No no, not textures. You said you are not using shaders, but do you have any experience with them?
What I meant is that in a pixel shader you can do almost whatever you want to generate the output color of a particular pixel (so you are not limited to the vertices).
For example, you can pass the height (elevation) from the vertex shader to the pixel shader and then create a per-pixel relation between the actual height (z position) of the pixel and the resulting color.

Unfortunately, I have no experience with shaders.

So I guess I'll try the first solution.

Some other ideas I have are:

a) Divide the mesh into subsets and draw each subset with a different material and/or texture.

b) For each object, create a texture at run time depending on the situation and apply it.

Do you think the above are worth testing, or will they affect performance too much?
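
Regarding idea a), this is roughly how the subset approach could look for an ID3DXMesh, assuming an existing mesh and device (the three damage bands and the way faces are assigned to them are placeholders for whatever the analysis produces):

#include <d3dx9.h>

// Tags each face with a subset id via the attribute buffer, then draws each
// subset with its own material (green / yellow / red).
void DrawDamageBands(IDirect3DDevice9* device, ID3DXMesh* mesh)
{
    DWORD* attrib = NULL;
    if (SUCCEEDED(mesh->LockAttributeBuffer(0, &attrib)))
    {
        DWORD faceCount = mesh->GetNumFaces();
        for (DWORD f = 0; f < faceCount; ++f)
        {
            // In the real program the band would come from the damage analysis;
            // here faces are just split into three equal groups as a placeholder.
            attrib[f] = (f * 3) / faceCount;   // 0 = undamaged, 1 = moderate, 2 = severe
        }
        mesh->UnlockAttributeBuffer();
    }

    // One material per damage band.
    const float rgb[3][3] = { { 0, 1, 0 }, { 1, 1, 0 }, { 1, 0, 0 } };
    for (DWORD i = 0; i < 3; ++i)
    {
        D3DMATERIAL9 mtrl = {};
        mtrl.Diffuse.r = rgb[i][0];
        mtrl.Diffuse.g = rgb[i][1];
        mtrl.Diffuse.b = rgb[i][2];
        mtrl.Diffuse.a = 1.0f;
        mtrl.Ambient   = mtrl.Diffuse;
        device->SetMaterial(&mtrl);
        mesh->DrawSubset(i);
    }
}

The attribute buffer would normally be filled once (and the mesh optionally sorted with OptimizeInplace) rather than on every frame; it is done inline here only to keep the sketch short.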

If you have some time, I think you should try them and see :)

Option b) (creating a texture) sounds quite good to me, as long as the "damage" doesn't change too often.

How does it work?
Does the application compute something, create the scene based on the result, and then just display it (meaning you keep rendering the same scene generated at one point in time)?
Or do you simulate the damage over time (meaning you would need to change the texture for every building very often)?

The program calculates how much damage will occur if an earthquake shakes the building. So for every object the total damage will be calculated and displayed once.

So now I'm off to find out how to programmatically generate colored textures!
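
In case it is useful to anyone later, this is one possible way to do it with D3DX (the 1x256 size and the green-to-red ramp are only an example, and it assumes the wall's v texture coordinate runs from base to top):

#include <d3dx9.h>

// Creates a 1x256 texture whose rows fade from green (v = 0, base)
// to red (v = 1, top); texture coordinates stretch it across the wall.
IDirect3DTexture9* CreateDamageRampTexture(IDirect3DDevice9* device)
{
    IDirect3DTexture9* tex = NULL;
    if (FAILED(D3DXCreateTexture(device, 1, 256, 1, 0,
                                 D3DFMT_A8R8G8B8, D3DPOOL_MANAGED, &tex)))
        return NULL;

    D3DLOCKED_RECT rect;
    if (SUCCEEDED(tex->LockRect(0, &rect, NULL, 0)))
    {
        BYTE* bits = (BYTE*)rect.pBits;
        for (UINT y = 0; y < 256; ++y)
        {
            float t = y / 255.0f;                        // 0 = base, 1 = top
            DWORD* pixel = (DWORD*)(bits + y * rect.Pitch);
            *pixel = D3DCOLOR_XRGB((BYTE)(255.0f * t),   // red rises with damage
                                   (BYTE)(255.0f * (1.0f - t)),
                                   0);
        }
        tex->UnlockRect(0);
    }
    return tex;
}

The texture only has to be regenerated when the damage result changes, so for a one-off analysis it is created once and then simply set with SetTexture before drawing the building.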

If everything works out fine, I'll try to find the time to learn shaders and HLSL...
