Hybrid

Some random material/shader questions



I've been reading all the shader/material threads recently and am currently designing my rendering system based on them. However, I have questions about various things that I haven't found answers to, or things that struck me as unusual in other people's implementations; I just want a second opinion in case I missed the point. ;-)

1. I've read that some implementations of the shader-effect system (e.g. jamessharpe's) have the effect contain the offset table for the geometry vertex data. This struck me as unusual, because doesn't that mean the effect knows about the geometry chunk? If the effect is in charge of the offsets, then the geometry chunk can only be rendered by specific effects, as other effects may have the wrong tables. Wouldn't it be better if each geometry chunk stored its own offset table for its interleaved data, and effects simply queried it for the different stream offsets? Isn't that more flexible?

2. When you render something over multiple shaders for multipass rendering, where do you set the blend modes that need to change between the two shaders? For example, say you had a diffuse bump-mapped surface with specular highlights, and you split this across two shaders: one for the diffuse bump mapping and one for the specular highlight (I know, not a great example). After the bump-mapping shader runs, you need to set the blend mode to 'add' before the specular pass. You don't want to bake that into the specular shader itself, because that would change its behaviour when used on its own. In short: an effect is split in two, and blending/state changes between the two shaders are needed to reproduce it, but those changes belong to neither shader. How do you specify and apply them? Is this even needed? I'm confused about some of this!

3. Do people still use normalisation cubemaps, or is that an out-of-date practice and a waste of a texture?

4. When you render a preliminary Z-pass, do you upload all the geometry data required by the eventual effect (positions, normals, tangents, texcoords, etc.) and then do the Z-pass followed by the full render? Or do you upload only the basic data required for the Z-pass, render it, and then upload the full geometry data to VRAM for the complete render?

Yes, these are unusual questions, but it's the early hours of the morning and I can't 'see' the answers right now. Thanks for any ideas/help. Perhaps others can use this thread for their own random material/shader questions.
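The chunk-owned layout from question 1 can be sketched roughly as follows. All of the names here (GeometryChunk, Semantic, StreamInfo) are hypothetical, not taken from any of the implementations discussed; the point is only that the chunk answers layout queries, so any effect can test whether it can render any chunk:

```cpp
#include <cassert>
#include <cstddef>
#include <map>

// Hypothetical sketch: the geometry chunk owns the layout of its own
// interleaved vertex data, and an effect queries it by semantic.
enum class Semantic { Position, Normal, Tangent, TexCoord0 };

struct StreamInfo {
    std::size_t offset; // byte offset of this element inside a vertex
    std::size_t stride; // byte distance between consecutive vertices
};

class GeometryChunk {
public:
    void addStream(Semantic s, std::size_t offset, std::size_t stride) {
        streams_[s] = StreamInfo{offset, stride};
    }
    bool hasStream(Semantic s) const { return streams_.count(s) != 0; }
    StreamInfo stream(Semantic s) const { return streams_.at(s); }
private:
    std::map<Semantic, StreamInfo> streams_;
};

// An effect only asks for what it needs; any chunk that can answer the
// query can be rendered by any effect, which is the flexibility argued
// for above.
inline bool effectCanRender(const GeometryChunk& gc,
                            const Semantic* required, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        if (!gc.hasStream(required[i])) return false;
    return true;
}
```

With this arrangement the effect never hard-codes offsets, so a chunk with extra streams (say, tangents) remains renderable by effects that ignore them.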

Quote:
Original post by Hybrid
[questions 1–4 above, snipped]



1. There are different ways to implement this. I don't know James's exact approach, but he might have the system set up so that GCs are sorted into effects (per frame, perhaps by visibility), and then each effect is rendered and sent down to the queue/pipeline/batch controller. He might be doing some sort of advanced batching.

2. Shaders do everything except draw, so it should go like this:

initShader()         -> called once on shader change; sets up effect globals, enables texture units, blending, etc.
setupShader(gc/spgc) -> sets up specifics for this GC
fillCache(gc/spgc)   -> caches the GC's data in VRAM
glDrawElements()     -> draws the geometry
exitShader()         -> called again on shader change; disables state

Every state change goes into the shaders, so in your example the additive blend for the specular pass would be enabled in the specular shader's init and disabled in its exit. Not sure if that answers it.
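That lifecycle can be sketched as an interface, with the inter-pass blend state living inside the second shader's init/exit hooks rather than "between" the shaders. The Shader class, the log parameter, and the stubbed state strings are all hypothetical scaffolding (real code would issue GL calls instead):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of the lifecycle listed above; hook names follow that list.
// State changes are recorded as strings instead of real GL calls.
struct Shader {
    virtual ~Shader() {}
    virtual void initShader(std::vector<std::string>& log)  = 0; // on shader change: globals, units, blending
    virtual void setupShader(std::vector<std::string>& log) = 0; // per-GC specifics
    virtual void fillCache(std::vector<std::string>& log)   = 0; // upload GC data to VRAM
    virtual void draw(std::vector<std::string>& log)        = 0; // e.g. glDrawElements
    virtual void exitShader(std::vector<std::string>& log)  = 0; // on shader change: disable state
};

// Question 2's specular pass: the 'add' blend mode is owned by this
// shader's init/exit, so no state lives outside any shader.
struct SpecularPass : Shader {
    void initShader(std::vector<std::string>& log) override { log.push_back("enable additive blend"); }
    void setupShader(std::vector<std::string>& log) override { log.push_back("bind gloss map"); }
    void fillCache(std::vector<std::string>& log) override { log.push_back("cache gc"); }
    void draw(std::vector<std::string>& log) override { log.push_back("draw"); }
    void exitShader(std::vector<std::string>& log) override { log.push_back("disable additive blend"); }
};

// Driver: always runs the hooks in the listed order.
inline std::vector<std::string> renderWith(Shader& s) {
    std::vector<std::string> log;
    s.initShader(log);
    s.setupShader(log);
    s.fillCache(log);
    s.draw(log);
    s.exitShader(log);
    return log;
}
```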

3. Don't use them; on hardware with decent fragment shading you can just normalise in the shader instead of spending a texture unit on it.

4. Send everything down; it will already be cached in the VBOs for the next pass. I'm currently not doing a Z-only pass though, since I'm not fill-rate limited.

By the way, there are different ways of implementing the ideas from Yann's threads. I ended up with a method that's quite different from what was explained there but still keeps the modular shader approach (shaders can be loaded from DLLs). I'm sure everyone who implemented a system did it a different way.

Quote:
[question 1 above, snipped]


I had this problem as well. If you order the geometry streams for only one effect, then you can't change effects at runtime, because another effect can't reorder the vertex streams inside a geometry element. My solution is to maintain a big table (say, table A) for the effect that gives the correct order of the streams; when a particular stream is needed (as in Yann's and James's solutions), this table indexes into another small table (say, table B) that gives the position of that vertex stream within the geometry. With this method, if you want to change the effect at runtime, you only need to change table B.
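davidino79's two-table indirection might look something like this. The container layout and the integer-index representation are my own assumptions, made only to show the lookup path from the effect's stream order through to a position in the geometry:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the two-table scheme: table A is the effect's stream
// order, and each entry indexes table B, which maps a stream to its
// actual position inside the geometry. Swapping tables rebinds an
// effect to a geometry layout without touching the geometry itself.
struct EffectTables {
    std::vector<std::size_t> tableA; // effect's stream order -> index into tableB
    std::vector<std::size_t> tableB; // stream index -> position in the geometry
};

// Resolve where the effect's i-th stream lives in the geometry.
inline std::size_t streamPosition(const EffectTables& t, std::size_t i) {
    return t.tableB[t.tableA[i]];
}
```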

Quote:
Original post by nts
[point 1 above, snipped]

Well, when a geometry chunk is rendered by an effect, it's turned into SPGCs for the shaders. Those SPGCs will then be sorted by shader within each RPASS_* render pass, I think? So you don't sort by effect, but by shader.

But say you did sort by effect: I still don't understand why the effect holds the offset tables. Take two meshes, both rendered with the same effect, where one mesh's data also has tangents and binormals and the other's doesn't. Surely the effect should be able to render both meshes if the streams it requires are present; but if the offset table lives in the effect, one of those meshes will be read with the wrong offsets, right?

I dunno, but I think I'll go with the offset table in the GC; it seems more flexible than leaving it to the effects.

I'll do something like davidino79 has done: the effect stores information about what data it needs and in what order/format, while the GC stores all the offsets and strides needed to get at that data.

The offset table in the effect class really was just me jumping in and getting something that works, and it has the drawbacks you mention. To solve this I've implemented a vertex description system to describe the format of a vertex: simply a class describing each element in the description and its format. Now I store a 'requirements' descriptor in the effect class, i.e. the elements that must be present for this effect to render, and a vertex descriptor alongside each geometry stream. Then I run a quick compatibility test whenever the effect ID changes, to ensure the stream has everything required for rendering. This could also potentially optimise the VRAM uploading.
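A minimal version of that compatibility test might be the following. Representing a descriptor as a set of element names is a deliberate simplification; jamessharpe's actual classes describe formats and offsets too, so treat this as a sketch of the subset check only:

```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <string>

// Sketch: a vertex descriptor as a set of element names, and the
// effect's 'requirements' check as a subset test. std::set iterates
// in sorted order, which is what std::includes requires.
using VertexDescriptor = std::set<std::string>;

inline bool compatible(const VertexDescriptor& required,
                       const VertexDescriptor& provided) {
    return std::includes(provided.begin(), provided.end(),
                         required.begin(), required.end());
}
```

Run once when the effect ID changes, this rejects geometry streams that lack a required element before anything is uploaded or drawn.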

Thanks for replying. Yeah, a vertex description sounds like a safe way of doing things, and comparing the vertex description of a GC against the vertex requirements of the effect would be simple. I guess a similar system could be used for shader-effect linking, with shaders declaring their requirements and effects exposing the data they can provide.

For 'potential' multipass effects (remember: a future shader might only require one pass instead of two, for example) I implemented a simple scripting sequence.

I composed effects out of separate 'attributes', such as the following:


.visual
{
.texture.diffuse(unit0)
.color.diffuse
}


then implemented the potential multipass blending as


.out.color = mul{ .texture.diffuse(unit0), .color.diffuse }


The engine then decides if multiple passes are required and, if so, uses alpha blending to multiply the two separate shaders together. If not, it tells the shader to multiply the two attributes together through a simple 'blending mode' tree.

This provides maximum flexibility, with the engine determining at runtime whether multiple passes are necessary.
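A crude sketch of that runtime decision: a blend-tree node counts the texture units its leaves need, and the engine splits into passes when that exceeds the hardware limit. Everything here (the node layout, the unit-counting rule) is my own simplification of c t o a n's description, not his actual system:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>

// Sketch of a 'blending mode' tree: leaves are attributes (a texture
// costs a unit, a colour costs none), inner nodes combine them.
struct BlendNode {
    enum Kind { TextureAttr, ColorAttr, Mul } kind;
    std::unique_ptr<BlendNode> lhs, rhs;
    explicit BlendNode(Kind k) : kind(k) {}
};

inline std::size_t unitsNeeded(const BlendNode& n) {
    switch (n.kind) {
    case BlendNode::TextureAttr: return 1;
    case BlendNode::ColorAttr:   return 0;
    default: return unitsNeeded(*n.lhs) + unitsNeeded(*n.rhs);
    }
}

// The engine's choice: a single pass if the tree fits the hardware,
// otherwise fall back to multipass with framebuffer blending.
inline std::size_t passesRequired(const BlendNode& n, std::size_t maxUnits) {
    std::size_t u = unitsNeeded(n);
    return u <= maxUnits ? 1 : (u + maxUnits - 1) / maxUnits;
}
```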

Quote:
Original post by c t o a n
[the multipass scripting post above, snipped]


Hmmm... sounds quite cool. I use a 'profile' approach, where each shader can have multiple profiles, each with multiple passes. Depending on the hardware available and its compatibility with the effect in question, a specific profile is chosen at load time. This allows newer hardware to use a single pass where older hardware needs two or three, but it requires writing another profile for the shader. Not too much effort really, as it's all scripted.
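Profile selection like that reduces to picking the first profile the hardware can run. The names here (Profile, the Caps bitmask, the specific capability flags) are made up for the sketch; a real engine would populate the mask from queried extensions or device caps:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch: hardware capabilities as a bitmask, and a shader as an
// ordered list of profiles (best first). Pick the first profile whose
// required caps are all present.
enum Caps : unsigned {
    CAP_PS_2_0   = 1u << 0,
    CAP_PS_1_1   = 1u << 1,
    CAP_CUBEMAPS = 1u << 2,
};

struct Profile {
    unsigned requiredCaps;
    int passes;
};

// Returns the index of the chosen profile, or -1 if none fits.
inline int chooseProfile(const std::vector<Profile>& profiles, unsigned hwCaps) {
    for (std::size_t i = 0; i < profiles.size(); ++i)
        if ((profiles[i].requiredCaps & ~hwCaps) == 0)
            return static_cast<int>(i);
    return -1;
}
```

Ordering profiles best-first makes the load-time choice trivial: newer hardware gets the one-pass profile, older hardware falls through to the multipass fallback.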

Another random question for you guys. Thanks.

1. How do you handle things like shadow mapping and fog in your shaders/rendering? For example, you have a diffuse bump-map shader, but the object is affected by a shadow map. Do you write another version of the shader with shadow mapping added? Or do you write one longer shader with conditionals that can handle shadow mapping, even though that means more parameters sent to the shader? And with fog: do you have to handle it yourself, can it be left to the graphics card to apply after your shaders have run, or do you again need separate fog-enabled versions of each shader? It seems wrong to write N different versions of every shader for every possibility.

Quote:
Original post by Hybrid
[the shadow mapping/fog question above, snipped]



For shadow mapping, yes, I would write a separate shader that has the shadow map applied and one that doesn't, simply because not all hardware supports it (pre-GeForce 3), so I'd like a fallback without passing parameters here and there; besides, not everything in the scene is going to be shadow mapped.

For fog, I would make it global render state. If a surface/material needs fog disabled, it's the shader's job to check whether fog is enabled and disable it; likewise, if fog is globally disabled but the shader needs it, the shader just enables it. I think it will mostly stay global though; shaders aren't going to play with fog much.
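One way to avoid hand-writing every shadow/fog permutation is to key compiled shader variants on a feature bitmask and build each variant lazily, on first use. The cache below is a hypothetical sketch (string concatenation stands in for "compile with these #defines"), not how anyone in this thread actually did it:

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>

// Sketch: shader variants keyed by feature flags, so permutations are
// created on demand instead of written by hand.
enum Feature : unsigned {
    FEAT_SHADOWMAP = 1u << 0,
    FEAT_FOG       = 1u << 1,
};

class VariantCache {
public:
    // Stand-in for "compile the base shader with these defines set".
    const std::string& get(unsigned features) {
        std::map<unsigned, std::string>::iterator it = cache_.find(features);
        if (it == cache_.end()) {
            std::string src = "bumpmap";
            if (features & FEAT_SHADOWMAP) src += "+shadow";
            if (features & FEAT_FOG)       src += "+fog";
            it = cache_.insert(std::make_pair(features, src)).first;
        }
        return it->second;
    }
    std::size_t size() const { return cache_.size(); }
private:
    std::map<unsigned, std::string> cache_;
};
```

Only the combinations the scene actually uses ever get built, which sidesteps the "N versions of every shader" explosion.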
