Hybrid

OpenGL LOD shaders?

This topic is 5414 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

I'm thinking about shaders and writing a basic shader-based renderer in OpenGL, but I have a couple of questions that I don't recall seeing answered in other material/shader threads.

1. Do people swap shaders based on distance? For example, you have an object with an amazing shader with loads of effects (bump mapping, environment mapping, etc.), but this object is in the distance... Would people swap this shader for something more basic over distance, or doesn't it matter (because of the preliminary Z-pass render)? If you do, how do you go about it?

2. Just wondering, but is mip-mapping automatic in shaders if you have specified mip-mapping for the texture? I assume the answer is yes.

Pretty basic questions I know, but thanks for any answers. :)

Share this post


Link to post
Share on other sites
1 - It depends on the engine. If you were doing a tight FPS like Doom 3, the confined spaces and short view distances wouldn't really require shader LODing, IMO. However, if you were doing something with huge distances and views, then I certainly would. The Z-only pass wouldn't help here: for example, if you were standing at one end of a football field and the object you were rendering was at the other end, you would still see it (so it would pass the Z-test), but since it's at a distance, the shader needed for it wouldn't have to be anywhere near as complex.

As for how: my first thought is along the lines of, when setting up to render the frame, you ask each object which shader it is going to use, and it replies with the correct shader it requires for that LOD.
The alternative is that you work out how far the object is from the camera and have the engine assign the correctly LODed shader to the object (which could be as simple as placing the object in the correct render group, so as to keep everything nicely batched together).

2 - AFAIK the answer is indeed yes; the hardware takes care of that for you.

Share this post


Link to post
Share on other sites
Quote:
Original post by Hybrid
1. Do people swap shaders based on distance? For example, you have an object with an amazing shader with loads of effects (bump mapping, environment mapping, etc.), but this object is in the distance... Would people swap this shader for something more basic over distance, or doesn't it matter (because of the preliminary Z-pass render)? If you do, how do you go about it?


This will depend greatly on the game you are making; closed indoor FPS environments wouldn't find much use for this, as the previous poster mentioned. If you have a long view distance with lots of little things far away, then I would do this: switch per-pixel effects for per-vertex ones, etc.

How to go about it, again, depends on your game and engine type. You can code multiple paths into one shader, or code multiple shaders for different LODs (which is what I'm doing). You also need to decide who takes care of the LOD: does the game/scenegraph do it when handing you the geometry to render, or does the engine do it when it receives the geometry?

In my case, the geometry that I pass to the renderer has an identifier for the requested effect (which can be switched at will). So if the geometry is close to the camera, the game will set this effect identifier to "SuperCoolEffectWithEverything", and as it gets further away the game will start switching the effect to something cheaper, until it becomes diffuse texture only, maybe even going down to flat shading.
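The effect-identifier idea above might look something like this sketch; the effect names (apart from "SuperCoolEffectWithEverything", which the post itself uses) and the distance cutoffs are invented for illustration.

```cpp
#include <string>

// The game stamps each piece of geometry with a requested effect name,
// downgrading it as the object recedes from the camera.
// All thresholds here are arbitrary example values.
std::string effectForDistance(float dist) {
    if (dist < 15.0f)  return "SuperCoolEffectWithEverything";
    if (dist < 40.0f)  return "PerVertexLighting";
    if (dist < 120.0f) return "DiffuseTextureOnly";
    return "FlatShaded";
}
```

The renderer then only ever sees an effect name and can map it to a compiled shader however it likes, so the game-side LOD decision stays decoupled from the shader implementation.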

The design of the whole system will depend on your game type.

Share this post


Link to post
Share on other sites
Quote:
Original post by Hybrid

2. Just wondering, but is mip-mapping automatic in shaders if you have specified mip-mapping for the texture? I assume the answer is yes.

Pretty basic questions I know, but thanks for any answers. :)


Mipmapping depends on the type of texture instruction you use. It's 'automatic' if you use the default ones, but you can also specify things manually.

If you are using GLSL, it's somewhat dodgy as to what happens with gradient instructions (e.g. a normal texture load) inside dynamic flow control. IIRC the spec says the result is undefined - so you basically need to not do this:

if (dynamicExpression)  // non-uniform (dynamic) condition
{
    // implicit derivatives for mip selection are undefined here
    color = texture2D(tex, uv);
}

I'm not sure what Cg does, but I'd guess it inlines the expression in this case, since that's the only safe thing to do.
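A common workaround for this (a sketch only; the variable names are invented) is to hoist the texture fetch out of the branch so it executes in uniform control flow, where the implicit derivative computation is well defined:

```glsl
// Safe: the fetch happens unconditionally, so the derivatives used
// for mip-level selection are well defined; the branch then just
// decides whether to use the result.
vec4 fetched = texture2D(tex, uv);
vec4 color = vec4(0.0);
if (dynamicExpression)
{
    color = fetched;
}
```

Later GLSL versions also offer explicit-LOD lookups (e.g. `textureLod`), which avoid implicit derivatives entirely and are therefore safe inside non-uniform branches, at the cost of choosing the mip level yourself.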

Share this post


Link to post
Share on other sites
It depends, but my advice would be to hold off on trying to LOD shaders unless you really get to the point where it looks like your shader instructions are what's slowing down your scene. Even in large, outdoor environments, the benefits of doing this can be questionable.

First of all, if an object you're drawing is far off in the distance, and thus smaller on the screen, then there will be fewer pixels being shaded. That means the speed of your pixel shader won't matter as much, so your biggest savings will be in your vertex shader. However, your objects will still have to be shaped the same, so your more expensive vertex shader operations, like skinning, will still need to happen. Effectively, you could save *some* shader operations, but probably not as many as you might originally think.
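That "fewer pixels" effect can be quantified: a rough estimate of an object's on-screen size is often a better LOD metric than raw distance. A minimal sketch, assuming a bounding-sphere approximation; the function name and all numbers are made up.

```cpp
#include <cmath>

// Rough screen-space diameter, in pixels, of a sphere of radius r at
// distance d from the camera, for a vertical FOV of fovY radians and a
// viewport h pixels tall. Ignores aspect ratio and off-axis distortion.
float projectedDiameterPx(float r, float d, float fovY, float h) {
    if (d <= r) return h;                     // camera inside or touching
    float angular = 2.0f * std::atan(r / d);  // angular diameter in radians
    return angular / fovY * h;
}
```

For a unit-radius object 100 units away with a 60-degree FOV on a 768-pixel-tall screen this comes out around 15 pixels: at that size, even a very expensive pixel shader is shading almost nothing, which is the point the post is making.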

Also, there will be some costs if you LOD your shaders. The added overhead of tracking which shaders to use, and when, will cost a bit of processing power in large scenes. You'll also potentially have to switch shaders more often, which takes driver time. Add to this the fact that it may be tricky to turn off fancy shading without a visible pop when you switch between shaders, and the benefits start to look less promising.
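One standard way to soften the popping problem is hysteresis: use different thresholds for dropping detail than for raising it, so an object hovering near a boundary doesn't flicker between shaders every frame. A hypothetical sketch with invented thresholds:

```cpp
// Three LOD tiers (0 = fanciest). An object must move past `up[i]` to
// drop from tier i to i+1, but back inside `down[i]` to climb from
// i+1 to i - the gap between them is the dead zone.
int lodWithHysteresis(int currentLod, float dist) {
    const float up[]   = { 25.0f, 65.0f };  // exceed these: cheaper shader
    const float down[] = { 15.0f, 55.0f };  // come inside: fancier shader
    int lod = currentLod;
    while (lod < 2 && dist > up[lod])       ++lod;
    while (lod > 0 && dist < down[lod - 1]) --lod;
    return lod;
}
```

An object sitting at distance 20 keeps whatever LOD it already has (0 or 1), so small camera movements near a boundary never cause a visible back-and-forth pop.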

So, again, I wouldn't worry about it much unless you really do become shader bound. If that happens, you might consider a deferred shading scheme before LOD-ing your shaders.

-John

Share this post


Link to post
Share on other sites
It's not a bad idea, but it might not give you much compared to the time it takes to implement well. It depends a little on your shaders, but LODding per-fragment effects down to vertex lighting, for example, should help immensely if you have a lot of triangles on-screen. Generally, though, before you go about doing optimizations like this, you've got to have something worthwhile running :)

Share this post


Link to post
Share on other sites
Indeed. I'm currently interested in implementing the shader-based system that people like Yann L, jamessharpe, etc. have implemented. I'm a confident enough programmer to dive into it head first, but at present I've not done pixel or vertex shaders, so I'm finding it hard to work out what needs to be done where and when.

So at the moment I'm looking into pixel and vertex shaders and the different languages for them, though I'm keen on using the low-level, assembly-like one (ARB assembly?).

Anyway, I think LOD of shaders isn't really going to be a problem, for the two reasons mentioned by other people...

1) If you render a Z-pass first, then typically you could probably render the entire screen with your most expensive shader and be okay - don't quote me on that, remember I've not done shaders before! :-)

2) If the object is in the distance, then it will only cover a few pixels anyway, so changing the shader is probably not worth it for those few pixels.

Share this post


Link to post
Share on other sites