Mapping two textures to one mesh?


Hi All. Firstly, thanks to all the people who post in here; it's helping a lot with my noob efforts. I'm writing an application that creates a mesh as the user plots out a series of points, and that is working very well. Now I'm trying to map several textures onto the one mesh. I can use attribute IDs to put different textures onto different polygons, but here's the problem: texture map coordinates are stored along with the vertex, so what happens when two polygons that use two different textures, with two different mapping coords, share the same vertex?

I could split the mesh into two meshes with a separate vertex for each, but that would mean the normal generation is affected. In the attached pic I've attempted to simplify the problem. One side is a grass texture, the other a rock texture, and each will have its own diffuse, bump and spec maps with its own UV coords. Verts 2 and 5 are shared between the two textures. If I break the mesh into two meshes, duplicating verts 2 and 5, generating normals will result in flat-looking polygons (using the pink normals) instead of the rounded look that one mesh provides.

Am I missing something, or is this just a wall I've run into? I'm currently thinking that perhaps I need to split, then calculate my own normals, but am really hoping to avoid this. Thanks in advance!!

http://img142.imageshack.us/my.php?image=mappingproblemkl2.jpg
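For reference, my vertex layout is along these lines (illustrative only, the field names aren't important):

```cpp
#include <d3dx9.h>

// Illustrative D3D9-style vertex: each vertex carries exactly one UV pair,
// so a vertex shared by a grass face and a rock face can't hold both mappings.
struct Vertex
{
    D3DXVECTOR3 pos;     // position
    D3DXVECTOR3 normal;  // averaged normal (what gives the rounded look)
    float       u, v;    // the single set of texture coordinates
};
// FVF equivalent: D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1
```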

This sort of thing can happen when you're using indexed geometry - there isn't really anything that you can do except split/duplicate the vertices. Even though they might share 90% identical data, the fact that they have even a slight difference means they should really be unique/different vertices in VRAM.

However, there isn't anything saying you can't generate the normals first (so you get the preferred averaged normal) and then split for the texture coordinates.
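Roughly speaking, something like this (plain C++, nothing D3DX-specific, all names made up):

```cpp
#include <utility>
#include <vector>

struct Vertex { float px, py, pz; float nx, ny, nz; float u, v; };

// Illustrative seam split: 'verts' already has normals averaged across the
// whole surface. Each vertex shared by both textures is duplicated; the copy
// keeps the position and averaged normal but takes the second subset's UVs.
// Returns the index of each duplicate so the second subset's index buffer
// can be remapped to point at it.
std::vector<int> splitSeam(std::vector<Vertex>& verts,
                           const std::vector<int>& seamIndices,
                           const std::vector<std::pair<float, float>>& newUVs)
{
    std::vector<int> remap;
    for (size_t i = 0; i < seamIndices.size(); ++i)
    {
        Vertex copy = verts[seamIndices[i]];  // same position + averaged normal
        copy.u = newUVs[i].first;             // second texture's mapping
        copy.v = newUVs[i].second;
        remap.push_back(static_cast<int>(verts.size()));
        verts.push_back(copy);
    }
    return remap;
}
```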

hth
Jack

Thanks for that. I guess that would mean something along the lines of:

1. Create the full mesh (using AttribIds).
2. Calc normals.
3. Split into tri-strips using ConvertMeshSubsetToStrips().
4. Individually assign a UV map to each tri-strip's vertices.

Too easy!! lol
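In D3DX terms I'm picturing something like the sketch below; completely untested, with error handling stripped and all names placeholders:

```cpp
#include <d3dx9.h>
#include <vector>

// Untested sketch of steps 2-3: 'mesh' already has its per-face attribute
// IDs set. Normals are computed on the whole mesh first, so shared vertices
// get the averaged (rounded) normal before any splitting happens.
void computeNormalsAndStrip(ID3DXMesh* mesh)
{
    std::vector<DWORD> adjacency(mesh->GetNumFaces() * 3);
    mesh->GenerateAdjacency(0.0f, &adjacency[0]);
    D3DXComputeNormals(mesh, &adjacency[0]);                    // step 2

    IDirect3DIndexBuffer9* stripIB = NULL;
    DWORD numIndices = 0, numStrips = 0;
    LPD3DXBUFFER stripLengths = NULL;
    D3DXConvertMeshSubsetToStrips(mesh, 0 /* AttribId */, 0,
                                  &stripIB, &numIndices,
                                  &stripLengths, &numStrips);   // step 3

    // Step 4 would then lock the vertex buffer and write this subset's UVs:
    // mesh->LockVertexBuffer(0, &data); ... mesh->UnlockVertexBuffer();
}
```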

Well, the rock/grass textured vertices basically belong to different subsets. Most exporters I've seen will also just duplicate the vertices shared between the subsets.

But going from your proposed application of the textures, there are other (easier and better-looking, imo) ways to combine grass and rock textures on your terrain. A common way is to blend the different terrain layers on top of each other using a mask. This is commonly referred to as texture splatting, and it's quite easy to implement with shaders as well, if you're up for that.
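For what it's worth, on the fixed-function pipeline a single-pass blend between two layers can be set up with two texture stages, something like this (illustrative sketch; it assumes the blend mask is stored in the rock texture's alpha channel, and all names are placeholders):

```cpp
#include <d3d9.h>

// Illustrative single-pass splat: stage 0 outputs the grass texel, stage 1
// lerps towards the rock texel using rock's alpha channel as the mask:
//   result = rock * alpha + grass * (1 - alpha)
void setupSplat(IDirect3DDevice9* device,
                IDirect3DTexture9* grassTex, IDirect3DTexture9* rockTex)
{
    device->SetTexture(0, grassTex);
    device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);

    device->SetTexture(1, rockTex);
    device->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_BLENDTEXTUREALPHA);
    device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);
    device->SetTextureStageState(1, D3DTSS_TEXCOORDINDEX, 0); // reuse UV set 0
}
```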

Good luck :)

If your target spec has vertex shader 3.0 then you can use conditional shader operations. This allows you to take advantage of selective vertex streaming, which can reduce redundant vertex data storage and throughput by decoupling the position data from the texture data.
For example, you could have the geometry in vertex stream 0, the first set of normals and texture coordinates in stream 1, and the next in stream 2. From here, a single call to DIP can pick and choose what it needs from the various streams.
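A declaration along these lines would do it (illustrative and untested; the buffer names are placeholders, and it assumes the index buffer is already set):

```cpp
#include <d3d9.h>

// Illustrative multi-stream layout: positions in stream 0, the grass
// normal/UV set in stream 1, the rock set in stream 2 (usage index 1).
void drawWithStreams(IDirect3DDevice9* device,
                     IDirect3DVertexBuffer9* positionVB,
                     IDirect3DVertexBuffer9* grassVB,
                     IDirect3DVertexBuffer9* rockVB,
                     UINT numVerts, UINT numTris)
{
    const D3DVERTEXELEMENT9 elems[] =
    {
        { 0,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
        { 1,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   0 },
        { 1, 12, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0 },
        { 2,  0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_NORMAL,   1 },
        { 2, 12, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1 },
        D3DDECL_END()
    };
    IDirect3DVertexDeclaration9* decl = NULL;
    device->CreateVertexDeclaration(elems, &decl);
    device->SetVertexDeclaration(decl);

    device->SetStreamSource(0, positionVB, 0, 12);  // float3 position
    device->SetStreamSource(1, grassVB,    0, 20);  // float3 normal + float2 UV
    device->SetStreamSource(2, rockVB,     0, 20);
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, numVerts, 0, numTris);
}
```

The vs_3_0 shader then decides per vertex which normal/UV set to read, which is where the conditional operations come in.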

This can be a bit of a hassle to implement, and may not be worth the effort if you only have two sets of data. However, it is worth remembering the technique. A similar process (called instancing) makes use of the separate streams so that many (hundreds or thousands of) instances of the same mesh may be rendered without duplicating the mesh data in the pipeline: only position, orientation and colour data need be supplied per instance.
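The instancing variant looks like this in D3D9 (again just a sketch; the vertex structures are hypothetical):

```cpp
#include <d3d9.h>

// Sketch of hardware instancing: stream 0 holds the mesh once, stream 1
// holds one small element per instance. 'MeshVertex' and 'InstanceData'
// are hypothetical structures for this example.
struct MeshVertex   { float pos[3]; float normal[3]; float uv[2]; };
struct InstanceData { float world[4][4]; };  // per-instance transform

void drawInstanced(IDirect3DDevice9* device,
                   IDirect3DVertexBuffer9* meshVB,
                   IDirect3DVertexBuffer9* instanceVB,
                   UINT numInstances, UINT numVerts, UINT numTris)
{
    device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA  | numInstances);
    device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1u);
    device->SetStreamSource(0, meshVB,     0, sizeof(MeshVertex));
    device->SetStreamSource(1, instanceVB, 0, sizeof(InstanceData));
    device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, numVerts, 0, numTris);

    device->SetStreamSourceFreq(0, 1);  // reset to non-instanced afterwards
    device->SetStreamSourceFreq(1, 1);
}
```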

Regards
Admiral

Quote:
Original post by remigius
But going from your proposed application of the textures, there are other (easier and better-looking, imo) ways to combine grass and rock textures on your terrain. A common way is to blend the different terrain layers on top of each other using a mask. This is commonly referred to as texture splatting, and it's quite easy to implement with shaders as well, if you're up for that.

Good luck :)


Thanks, that does look great and doesn't seem too hard - I'll try to add support for it. I'm writing an app that'll allow others to map their own textures onto parts of the mesh as they wish, so as long as I add the support for this, it'll be up to them which method they use. The rock/grass is just one example; there'll be other times when they want a clean line between the textures without the blending.

Here's my app so far for anyone that's interested ... http://bobstrackbuilder.thepywells.net

This is my first DirectX app and I only get a couple of hours each day to work on it, so I'm a little in the deep end. T'is fun though!

Quote:
Original post by TheAdmiral
If your target spec has vertex shader 3.0 then you can use conditional shader operations. This allows you to take advantage of selective vertex streaming, which can reduce redundant vertex data storage and throughput by decoupling the position data from the texture data.
For example, you could have the geometry in vertex stream 0, the first set of normals and texture coordinates in stream 1, and the next in stream 2. From here, a single call to DIP can pick and choose what it needs from the various streams.


Gulp ... I've done a little HLSL, enough to be dangerous, but I think this is probably a bit beyond me for now. I understand the concept but applying it would probably prove difficult (for me). Besides, at this point I am aiming for Shader 2.0 support as well. Thanks anyway, the instancing also looks good but I'm not sure how I could use it in this app. Each part of the mesh will be unique. But I will keep it in the back of my mind as it might prove useful when writing other parts of the application.

Oh .. and what's a DIP?

That got me stumped too at first, them dips [smile]. It's just shorthand for DrawIndexedPrimitive.

Just checked out the screenshots on the page you linked to and it looks very nice, good luck with that project.

