allsorts46

OpenGL XNA Effect class - appropriate usage


Hello. I've just started experimenting with XNA and been through a few tutorials, and I've got some models and terrain rendering nicely. However, most of the samples are pretty specific and not really designed to be reusable, so I'm starting to build a more engine-like framework. I can't seem to find much information about what an 'effect' is actually doing, in terms of the graphics hardware itself. I've never done any DirectX or OpenGL development before, but I recall reading that changing the 'state' of the card too much was a bad thing to do (although that was years ago, so maybe it no longer applies).

I've created an octree-based scene manager, and my drawing is done by calling RootNode.Draw() from my overridden Draw() method in my game class. At the moment, I get my effect, select the current technique, call Effect.Begin(), iterate over the contained passes, call Pass.Begin(), and call the octree drawing code from inside there. Obviously this means that every object in my scene gets drawn with the same effect and same technique. This was fine when I was just drawing a couple of pretty textured boxes, but now I want to draw things with different techniques.

Should I be worried about calling Effect.Begin()/End(), switching techniques, and changing vertex formats too much? Is it feasible for every object to pick a technique and vertex format and call Begin()/End() for itself, or is that wasteful, and should I be trying to group things in a particular way? I was planning to create a Material class which would have its own Effect and technique to be rendered with, but if switching is expensive then that isn't going to work well. Thanks for any advice!
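To make it concrete, my current Draw() looks roughly like this (simplified, and the technique name is just a placeholder):

// Simplified version of my current Draw(): one effect and one technique for the
// whole scene, with the octree traversal happening inside the single pass.
protected override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);

    effect.CurrentTechnique = effect.Techniques["Textured"]; // placeholder name
    effect.Begin();
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Begin();
        RootNode.Draw(); // octree traversal issues the actual draw calls
        pass.End();
    }
    effect.End();

    base.Draw(gameTime);
}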

The important thing to know about Effects in XNA (and consequently Effects in D3D9) is that they're simply a framework for managing GraphicsDevice API calls. In other words...an Effect doesn't do anything you couldn't do on your own through methods of GraphicsDevice. Instead it handles calling those methods for you, and provides a convenient way of associating rendering state with different techniques and passes.

These are the kinds of rendering state you can associate with an Effect (or a Technique or Pass inside an Effect); a rough illustration of the underlying calls follows the list:

-Render states, such as z-buffering, stencil-buffering, alpha-blending, blending mode, etc. These are set through calls to GraphicsDevice.RenderState.
-Sampler states, such as magnification filter, minification filter, and texture addressing mode. These are set through calls to GraphicsDevice.SamplerStates.
-Shader constants. These are set using GraphicsDevice.SetVertexShaderConstant and GraphicsDevice.SetPixelShaderConstant, depending on which shaders the constant is used in.
-Textures. These are set using GraphicsDevice.Textures.
-Pixel shader and vertex shader. These are set using GraphicsDevice.PixelShader and GraphicsDevice.VertexShader.
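For example, applying a single pass might boil down to something like the following under the hood. This is just a rough illustration with made-up values, textures, and shader objects, not the actual code the Effect framework runs:

// Roughly the kind of GraphicsDevice calls an Effect makes for you when a pass
// is applied. The specific values, textures, and shaders here are placeholders.
device.RenderState.AlphaBlendEnable = true;                 // render state
device.RenderState.SourceBlend = Blend.SourceAlpha;
device.RenderState.DestinationBlend = Blend.InverseSourceAlpha;

device.SamplerStates[0].MagFilter = TextureFilter.Linear;   // sampler state
device.SamplerStates[0].AddressU = TextureAddressMode.Wrap;

device.SetVertexShaderConstant(0, worldViewProjection);     // shader constants
device.SetPixelShaderConstant(0, ambientColor);

device.Textures[0] = diffuseTexture;                        // textures

device.VertexShader = myVertexShader;                       // shaders
device.PixelShader = myPixelShader;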

Okay, so like you mentioned, that's a lot of potential state changing that can occur when you change your Effect, or even the technique inside an Effect. The bad news is that lots of state changes will increase your CPU processing load, so in general you'll want to minimize them.

The good news is that using Effects helps you keep track of different states, which can help you sort your render queue in order to minimize state changes. For example, if you only have one pass, then you know that staying on the same technique in the same Effect won't cause state changes. So you might sort your renderables by technique. Or maybe that causes too much vertex buffer and vertex declaration switching, and you sort by those first to minimize that, and then sort by technique afterward. Unfortunately there's no magic "best way" to sort things...what performs best will depend on many factors. Ultimately you'll want to set things up so that you can choose how things are sorted, so you can profile.
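As a minimal sketch of the sorting idea (Renderable here is a made-up type holding whatever your scene produces, not anything from XNA):

// Sort by Effect, then by technique, so consecutive items share as much state
// as possible. 'Renderable' and its Draw delegate are hypothetical.
class Renderable
{
    public Effect Effect;
    public string TechniqueName;
    public Action Draw;   // issues the actual vertex/index buffer draw calls
}

static void DrawSorted(List<Renderable> queue)
{
    queue.Sort((a, b) =>
    {
        // GetHashCode is only used as a grouping key for identical Effect references.
        int byEffect = a.Effect.GetHashCode().CompareTo(b.Effect.GetHashCode());
        return byEffect != 0 ? byEffect
             : string.CompareOrdinal(a.TechniqueName, b.TechniqueName);
    });

    foreach (Renderable item in queue)
    {
        item.Effect.CurrentTechnique = item.Effect.Techniques[item.TechniqueName];
        item.Effect.Begin();
        foreach (EffectPass pass in item.Effect.CurrentTechnique.Passes)
        {
            pass.Begin();
            item.Draw();
            pass.End();
        }
        item.Effect.End();
    }
}

In a real renderer you'd probably fold the vertex declaration and vertex buffer into the sort key as well, but the idea is the same.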

Thanks for the information, it helps me understand better what an effect actually is. At what point are these changes actually happening, though? Should I assume that every time I call Effect.Begin() several states will be changed? And how expensive are these changes, relatively speaking? If I'm rendering a scene with perhaps 5,000 entities, each of which calls its own Effect.Begin()/End() inside its own drawing code, are those 5,000 changes going to affect me, or is it negligible until we reach hundreds of thousands or millions?

Looking at other existing engines and frameworks, it seems most boast a 'material framework' where meshes are assigned materials which control their render states. This was the approach I was about to take, but I assume I must have missed something and they aren't just switching everything for every mesh drawn.

Although, it seems the XNA-provided Model class works this way - each ModelMeshPart can have its own Effect, and most tutorials teach that the way to draw a model is to iterate over all the effects in a model and set their parameters, before calling Model.Draw().
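For reference, the pattern most tutorials show is roughly this (world/view/projection being whatever matrices the game already has):

// Typical model-drawing pattern from the XNA samples: set parameters on every
// effect the model carries, then draw each mesh.
Matrix[] boneTransforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(boneTransforms);

foreach (ModelMesh mesh in model.Meshes)
{
    foreach (BasicEffect effect in mesh.Effects)
    {
        effect.EnableDefaultLighting();
        effect.World = boneTransforms[mesh.ParentBone.Index] * world;
        effect.View = view;
        effect.Projection = projection;
    }
    mesh.Draw();   // the state changes and Begin()/End() calls happen in here
}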

[Edit: Just been reading this, which seems to be discussing the same question, but no conclusion was drawn.]

I guess what I really need is to go and buy a good book on modern (shader-based) engine architecture. Samples and tutorials are too specific, and existing engines are too large and complex to follow.

I'm amazed at how poor the MSDN documentation is at the moment. The documentation of Effect.Begin() consists of 'Begins application of the active technique.', and it's not much different for all the related classes.

Quote:
Original post by allsorts46
Thanks for the information, it helps me understand better what an effect actually is. At what point are these changes actually happening, though? Should I assume that every time I call Effect.Begin() several states will be changed? And how expensive are these changes, relatively speaking? If I'm rendering a scene with perhaps 5,000 entities, each of which calls its own Effect.Begin()/End() inside its own drawing code, are those 5,000 changes going to affect me, or is it negligible until we reach hundreds of thousands or millions?


Most of your state changes will occur when you call EffectPass.Begin. That's because most states (textures, constants, shaders) are going to be specific to a given technique or pass. If you want to know for sure, run your program with PIX and capture a frame. Then find your draw call in the list of API calls on the left side, and right before it you'll see the underlying native calls to ID3DXEffect::Begin and ID3DXEffect::BeginPass. If you expand those, you'll see all the actual SetRenderState/SetTexture/SetVertexShader calls being made to apply state.

As for how much the changes will affect you...it completely depends on what changes are being made, the video card, the driver, and the CPU. Some state changes will be more expensive than others (for instance, changing the render target), and some will be more or less expensive on different hardware and drivers. For the most part drivers will check for redundant state changes, but it's not something you can rely on. Unfortunately the only good way to know is to profile...but I can tell you that the performance effects can be quite significant. It's easier to be CPU-bound than you think. :P

Quote:
Original post by allsorts46
Looking at other existing engines and frameworks, it seems most boast a 'material framework' where meshes are assigned materials which control their render states. This was the approach I was about to take, but I assume I must have missed something and they aren't just switching everything for every mesh drawn.

Although, it seems the XNA-provided Model class works this way - each ModelMeshPart can have its own Effect, and most tutorials teach that the way to draw a model is to iterate over all the effects in a model and set their parameters, before calling Model.Draw().


I'm sure they're not switching states all over the place, at least not if it's a good framework. They'll probably sort by material, and then sort the materials so that changes are minimized from one to the next.

As for the Model class...just drawing each MeshPart with a different material is a naive way of doing it. It's fine for learning or for simple stuff, but with lots of different models you'll start to choke quickly. If you use the Model class you may not want to draw everything by just looping through MeshParts...or you may not want to use the Model class at all. After all, it can't possibly be the ideal solution for every problem.


Quote:
Original post by allsorts46
I guess what I really need is to go and buy a good book on modern (shader-based) engine architecture. Samples and tutorials are too specific, and existing engines are too large and complex to follow.


Probably not a bad idea at all. It's a complex subject with lots of ways to tackle different problems.

Quote:
Original post by allsorts46
I'm amazed at how poor the MSDN documentation is at the moment. The documentation of Effect.Begin() consists of 'Begins application of the active technique.', and it's not much different for all the related classes.


You might want to check out the documentation for the actual native D3D9 methods being called, for instance ID3DXEffect::Begin. You might find some useful info or insight. The rest you'll pick up as you go. [smile]

I suppose then, instead of my scene nodes actually drawing, they should just spit out a list of renderable things, which I can then sort and draw afterwards. I guess this is also standard procedure, but then I've never done any 3D rendering before! Well, I have, but that was a software renderer - never worked with a 3D API before.

Just to get this straight: the state changes are expensive because they use CPU (and presumably some bandwidth to the graphics hardware) in the API and graphics driver, not because they stress the GPU itself?

Thanks for the tip regarding the documentation, the D3D documentation is much more useful! I came across Shawn Hargreaves' blog just before I posted here, finding that very interesting too, though I hadn't read as far as the post you linked to.

Quote:
Original post by allsorts46
I suppose then, instead of my scene nodes actually drawing, they should just spit out a list of renderable things, which I can then sort and draw afterwards. I guess this is also standard procedure, but then I've never done any 3D rendering before! Well, I have, but that was a software renderer - never worked with a 3D API before.


Indeed, that's probably a good way to go. In fact you might even want to have the renderer store the renderables in its own data structure, and then have your scene objects notify the renderer when a corresponding renderable needs to move or change some other property. This is common in more modern multi-threaded engines, where the logic thread passes messages to the rendering thread telling it how to manipulate the renderable objects.
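As a very rough sketch of that idea (all of these names are made up, nothing here is an XNA or existing-engine API):

// Hypothetical renderer that owns its renderables; scene nodes only keep a
// handle and notify the renderer when something changes.
class RenderableInstance
{
    public Effect Effect;
    public string TechniqueName;
    public Matrix World;
    public Action Draw;
}

class Renderer
{
    private readonly List<RenderableInstance> renderables = new List<RenderableInstance>();

    public RenderableInstance Register(Effect effect, string technique, Action draw)
    {
        RenderableInstance r = new RenderableInstance();
        r.Effect = effect;
        r.TechniqueName = technique;
        r.World = Matrix.Identity;
        r.Draw = draw;
        renderables.Add(r);
        return r;   // the scene node keeps this handle
    }

    // Scene nodes call this instead of drawing themselves.
    public void SetWorldTransform(RenderableInstance r, Matrix world)
    {
        r.World = world;
    }

    public void DrawAll()
    {
        // Sort 'renderables' by effect/technique/vertex format here,
        // then draw them the same way as before.
    }
}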

Quote:
Original post by allsorts46
Just to get this straight: the state changes are expensive because they use CPU (and presumably some bandwidth to the graphics hardware) in the API and graphics driver, not because they stress the GPU itself?


Yup. And not just state changes, but any API call. The reason is that in D3D9, the DirectX runtime operates in user mode and stores up the commands that you give it via API calls. Then, once a threshold has been reached, the runtime sends the commands over to the kernel-mode driver (the driver supplied by your GPU manufacturer), causing a switch from user mode to kernel mode. This is typically an expensive operation and will eat up many CPU cycles. So ultimately you'll want to render your scene with as few API calls as possible, to minimize user-mode/kernel-mode transitions. This is why batching and instancing are so important: they reduce the number of DrawPrimitive calls you need to make.
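As a trivial illustration of the batching idea (Quad here is a made-up helper type holding four pre-built VertexPositionTexture corners):

// Build one vertex array for everything that shares a texture/effect and issue
// a single draw call instead of one call per quad.
VertexPositionTexture[] batch = new VertexPositionTexture[quads.Count * 6];
int v = 0;
foreach (Quad q in quads)
{
    batch[v++] = q.TopLeft;  batch[v++] = q.TopRight;    batch[v++] = q.BottomRight;
    batch[v++] = q.TopLeft;  batch[v++] = q.BottomRight; batch[v++] = q.BottomLeft;
}

device.VertexDeclaration = new VertexDeclaration(device, VertexPositionTexture.VertexElements);

effect.Begin();
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Begin();
    // One call (and one batch of commands handed to the driver) instead of quads.Count calls.
    device.DrawUserPrimitives(PrimitiveType.TriangleList, batch, 0, quads.Count * 2);
    pass.End();
}
effect.End();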
