Post processing - managing resources

My post-processing framework is coming along nicely, but I've run into a bit of a design decision...

Suppose for my game I need 3 types of PP effect; depth of field, blur and bloom. I'd like my engine to be configurable enough so that if I decide to add, say, a black and white effect, I can just quickly add it to my effect stack.

My question lies in and around resources. I need to first render the normal scene to a screen-sized texture, so I'll need that as a resource. This isn't really an issue because every effect will use it as its initial source and write to it as its final destination (either for the copy to the backbuffer or as the input to the next effect).

But for an effect such as bloom, I need to use two smaller-sized textures (for the vertical and horizontal blurs), so I will need to create and manage them. This is all fine if I'm hard-coding it, but what happens if I create those two textures at, say, 512x512 and another effect also needs similar-sized textures (a simple blur effect will almost certainly use them)? I don't want to be creating loads of textures of the same size when I can just re-use them.

So I thought of having some kind of texture pool for the PP framework. As effects are registered with the engine, I create the required textures (based on size and attributes - bit depth etc). So when bloom is registered, I create two 512x512 textures. Then if another effect is registered that uses 512x512 textures with the same attributes, I can just use one of the ones I've already created. It would probably be a std::map using a composite key (like '512x512-A8R8G8B8') which holds a std::vector of identical textures. Then as I come to render each step, I just look up what textures are required, pull them out of the map and pass them in to the shaders. To avoid the map having to match on strings, I'd actually probably use a predetermined numeric value for the keys.
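Roughly what I have in mind, sketched out below. Texture, createRenderTarget() and the format constant are just placeholders for whatever the engine actually provides; the key packs width, height and format into one integer so the map never compares strings.

#include <cstdint>
#include <map>
#include <vector>

struct Texture; // opaque handle to whatever the renderer uses for a render target
Texture* createRenderTarget(int width, int height, uint32_t format); // assumed engine call

// Pack width, height and format into a single numeric key so the map
// never has to match on strings like "512x512-A8R8G8B8".
static uint64_t makeKey(int width, int height, uint32_t format)
{
    return (uint64_t(width) << 48) | (uint64_t(height) << 32) | uint64_t(format);
}

class TexturePool
{
public:
    // Called as each effect is registered: make sure at least 'count'
    // textures with this description exist in the pool.
    void reserve(int width, int height, uint32_t format, size_t count)
    {
        std::vector<Texture*>& bucket = m_pool[makeKey(width, height, format)];
        while (bucket.size() < count)
            bucket.push_back(createRenderTarget(width, height, format));
    }

    // Called when setting up a render step: fetch the i-th texture with
    // this description (the same textures are shared between effects).
    Texture* get(int width, int height, uint32_t format, size_t index)
    {
        std::vector<Texture*>& bucket = m_pool[makeKey(width, height, format)];
        return index < bucket.size() ? bucket[index] : nullptr;
    }

private:
    std::map<uint64_t, std::vector<Texture*>> m_pool;
};

So when bloom is registered I'd call reserve(512, 512, FMT_A8R8G8B8, 2), and a blur effect asking for the same description afterwards would simply find the textures already sitting in the pool.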

Does this sound like a feasible approach? Or am I over- (or even under-) complicating things?
It's a great way of doing it, and it's done this way quite often to manage graphics resources.

Generally, people have such a resource manager for managing not only the post-processing textures, but every texture they use in the entire graphics pipeline.
Thanks very much for the response, I'll carry on with my design as it is then.
Sounds good, I've considered a resource management model similar to that before too.
I might not do this though:


Then as I come to render each step, I just look up what textures are required, pull them out of the map and pass them in to the shaders. To avoid the map having to match on strings, I'd actually probably use a predetermined numeric value for the keys.

Instead, I would aim to resolve the lookups once at load time (it's fine to match on strings at that point) and then directly cache/bind a reference to the texture with the effect, as this avoids doing lookups at runtime.
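For example, something roughly like this, reusing the pool sketch from the first post (BloomEffect, FMT_A8R8G8B8 and the drawPass calls are just illustrative names, not real API):

static const uint32_t FMT_A8R8G8B8 = 0; // placeholder for the engine's real format id

struct BloomEffect
{
    Texture* halfSizeTargetA = nullptr; // resolved once at load time
    Texture* halfSizeTargetB = nullptr;

    void onLoad(TexturePool& pool)
    {
        // Matching on a string or descriptor here is fine - this only runs once.
        halfSizeTargetA = pool.get(512, 512, FMT_A8R8G8B8, 0);
        halfSizeTargetB = pool.get(512, 512, FMT_A8R8G8B8, 1);
    }

    void render()
    {
        // Per frame there are no lookups - just bind the cached targets and draw:
        // drawPass(sceneTexture, halfSizeTargetA, horizontalBlurShader);
        // drawPass(halfSizeTargetA, halfSizeTargetB, verticalBlurShader);
        // ...then combine the blurred result with the scene.
    }
};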
AFAIK, this "render-target pool" design is quite common.
Seeing as the pool is usually quite small, you can even just use a linear list (a vector) of resources and do a naive linear search for a match.
By small, I mean it may contain as few as two textures. As long as an effect isn't trying to render a texture to itself, that is likely big enough, so you should be able to bounce between two textures or so to satisfy all of your post-processing requirements.
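Something like this, where PostEffect::apply() is a made-up name for however your framework binds the source texture and renders a full-screen pass into the destination (reusing the Texture handle from the sketches above):

#include <vector>

struct PostEffect
{
    // Reads 'src', renders a full-screen quad into 'dst'.
    virtual void apply(Texture* src, Texture* dst) = 0;
    virtual ~PostEffect() {}
};

void runPostProcessing(Texture* sceneTexture,
                       Texture* ping, Texture* pong,
                       const std::vector<PostEffect*>& effects)
{
    Texture* src = sceneTexture; // the first pass reads the rendered scene
    Texture* dst = ping;

    for (PostEffect* effect : effects)
    {
        effect->apply(src, dst);           // never reads and writes the same texture
        src = dst;                         // this pass's output feeds the next pass
        dst = (dst == ping) ? pong : ping; // swap roles for the next pass
    }

    // 'src' now holds the final image; copy or draw it to the backbuffer.
}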

