
Shader Performances


Hi all,

 

I'd like to ask for advice on something I have in mind.

 

I have a shader that does a triplanar overlay effect (it samples the same texture three times, once per plane, and interpolates the colors).

This shader was created to eliminate the need for a secondary UV set (I wanted to keep that channel free in case I need it in the future).

 

The art pipeline will start soon, and I need to decide whether to ask the artists to create a secondary UV set, or to use the runtime-generated one.

I cannot measure the performance right now. I do know the overlays will be very common (90% of the drawn objects), and I believe the triplanar effect is costly (correct me if I'm wrong).
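For reference, the triplanar overlay described above boils down to something like the following. This is a minimal NumPy sketch of the math, not Unity shader code; `texture` here is a hypothetical stand-in for a texture sampler:

```python
import numpy as np

def triplanar_sample(texture, world_pos, world_normal, scale=1.0):
    """Sample `texture` three times (one projection per axis) and
    blend the results by the world-space normal.
    `texture` is a function uv -> color, standing in for a sampler."""
    # Blend weights: how strongly the surface faces each axis.
    w = np.abs(world_normal)
    w = w / (w[0] + w[1] + w[2])  # normalize so the weights sum to 1

    # Three projections of the world position, one per plane.
    c_x = texture(world_pos[[1, 2]] * scale)  # YZ plane (faces X)
    c_y = texture(world_pos[[0, 2]] * scale)  # XZ plane (faces Y)
    c_z = texture(world_pos[[0, 1]] * scale)  # XY plane (faces Z)

    # Interpolate the three colors -- this is the 3x sampling cost.
    return w[0] * c_x + w[1] * c_y + w[2] * c_z
```

The three `texture(...)` calls are exactly the extra fetches the question is weighing against an artist-authored second UV channel, which would need only one fetch.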

 

Any suggestion would be great.

 

Thanks,

 


Hi,

 

I don't fully understand why you are using this effect, or what you are even trying to apply it to. I don't see a reason for you to make two UV maps for a single object. Maybe I misread the question and you can explain it differently?


What exactly are your primary UVs, and what is the tri-planar UV mapping for (diffuse/normal/spec)?

 

That phrases what I was trying to ask much better.


Hi,

 

Let me try to explain it better.

I have to make a decision that cannot be undone later. I also can't really test the performance impact at the moment, so I'm basing the decision on articles I've read from ATI/NVIDIA that recommend reducing the number of texture samples and pushing more ALU ops instead. I also know this effect will be very common (> 90% of what is drawn).

The artists will create a massive amount of data that cannot be fixed later on (budget problem :( )

 

The problem:

The runtime-generated UVs are used for an overlay effect (an extra texture that is blended via alpha channels).

 

Options:

A: Have the artists create the secondary UV channel in a 3D app, and use it to sample a detail texture.

or

B: Generate the UVs at runtime, and pay the price of sampling the same texture three times (triplanar).

 

Please note that B is not reversible: if I later find that the effect hurts performance, I cannot "bake" the data back into UVs.

 

I wanted to keep the secondary UV channel free, as it gives me more flexibility later on. For example, I could use it for light maps, AO, ...

Depending on what this overlay is, you can do triplanar texturing with one texture fetch if you use the per-triangle normals... you just get texture seams everywhere. That's generally OK for man-made architecture, but bad for organic shapes.
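The single-fetch variant suggested above can be sketched as follows (a minimal NumPy sketch under the same assumptions as before, with `dominant_axis_uv` as a hypothetical helper name):

```python
import numpy as np

def dominant_axis_uv(world_pos, face_normal, scale=1.0):
    """Project onto the one plane the (flat, per-triangle) normal
    faces most, so only a single texture fetch is needed.
    Produces seams wherever the dominant axis flips between
    neighbouring triangles."""
    a = np.argmax(np.abs(face_normal))  # 0 = X, 1 = Y, 2 = Z
    if a == 0:
        uv = world_pos[[1, 2]]  # faces X -> project onto YZ
    elif a == 1:
        uv = world_pos[[0, 2]]  # faces Y -> project onto XZ
    else:
        uv = world_pos[[0, 1]]  # faces Z -> project onto XY
    return uv * scale
```

Because the whole triangle shares one flat normal, every pixel of that triangle picks the same plane, which is why the cost drops to one fetch and why seams appear at triangles whose neighbours pick a different plane.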

You're not limited to primary/secondary UVs - you can have ten sets of UVs if it makes you happy.

You could also try generating new UVs at runtime simply by scaling the primary UV by some number.

Lastly, you can write a tool that modifies all your models to automatically add a new set of UVs for this purpose.


Hi,

 

Thanks for your answer.

 

The current target engine is Unity, so I'm limited to two sets of UVs.

 

Using a math function that transforms the first UV set into the second will not work for all cases (curved surfaces).

 

I cannot modify the models automatically without having the artists take a look, and doing that for a massive amount of data is outside our budget.


I'm pretty sure you can use more than two sets of UVs in Unity, but I recommend option A. It may take longer, but the solution seems far simpler. And if you really do only have two UV sets, you can use the other one for something different.

"I cannot modify the models automatically without having the artists take a look, and doing that for a massive amount of data is outside our budget."

That's what a build pipeline is for: you modify the models when they are imported, without changing the source assets. That's not really any different from calculating the UVs in the shader, when those UVs are computed in a way that doesn't change based on other factors in the game.

You mentioned Unity, so I'll point out that if you are doing mobile games, multiple texture samples get expensive very quickly depending on the target devices. Older Android devices can be especially bad for this, and older mobile devices also struggle if you calculate the texture coordinates in the pixel shader. Blending is also a major bottleneck on mobile. If you are targeting PC, though, these points don't apply, as the hardware is very different.

 

It's pretty important that you take the time to construct performance tests, even if you have to write them by hand.
