

Next generation renderer design.


8 replies to this topic

#1 Simon_Roth   Members   -  Reputation: 149


Posted 24 November 2011 - 11:22 AM

So if you read my last thread, I'm looking into renderer design for an upcoming project. Having decided to roll my own, I'm now looking for good resources for designing such a system. This thread is going to be something of a ramble!

Since I am doing this for an AAA-quality game it has to work, but I am also doing it for my doctorate, so I have to defend and reference all of my choices, even ones made on gut instinct. :D Any book suggestions would be great. I already use the GPU Gems series and the Game Programming Gems series, along with Game Engine Architecture, and Game Engine Gems 2 is in the post. So far, while there are tons of great titbits of knowledge to be found, a nice rounded next-gen architecture hasn't been documented anywhere to my knowledge.

I'm no stranger to building renderers, but often I find my designs fall short at implementation. Here are the areas I need to consider:

Interface:

The core renderer would sit on a separate thread and run continuously. Render calls would be sent via an interface on the game thread and communicated via a buffer: perhaps a lock-less ring buffer of some sort, or a set of buffers set to ping-pong.
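For the ping-pong variant, here is a minimal sketch (all names hypothetical, not an actual design from this thread) of two command buffers whose roles swap at a frame boundary, so the game thread writes one frame while the render thread consumes the previous one:

```cpp
#include <cstdint>
#include <vector>

// Two command buffers whose roles swap each frame: the game thread fills
// the write buffer while the render thread drains the one filled last frame.
struct DrawCommand { uint32_t meshId, materialId, stageId; };

class PingPongQueue {
public:
    // Game thread: append a command to the current write buffer.
    void push(const DrawCommand& cmd) { buffers_[write_].push_back(cmd); }

    // Frame boundary: exchange roles and clear the new write buffer.
    // A real renderer would do this under synchronisation, with both
    // threads quiesced at the swap point.
    void swap() { write_ ^= 1; buffers_[write_].clear(); }

    // Render thread: read the buffer that was filled last frame.
    const std::vector<DrawCommand>& readBuffer() const { return buffers_[write_ ^ 1]; }

private:
    std::vector<DrawCommand> buffers_[2];
    int write_ = 0;
};
```

The sketch leaves out the cross-thread synchronisation entirely; the point is only the double-buffered handover that keeps the two threads from contending over one buffer.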

Calls would be made by providing (pointers or handles to) a mesh object, a material (shader + textures), a culling volume, a transform and an ID of the targeted rendering stage. How would you organise your shader uniforms? In terms of organisation it makes sense to have them inside the material bundle; however, that means different objects will need different materials, increasing the number of binds I will need to do. If I keep them separate then models using the same material can skip its shader and texture bind stage, but that leaves the question of how to neatly deal with uniform variables.
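To illustrate the trade-off, a small sketch (hypothetical types, not the thread's actual design): if per-object uniforms live outside the material, a submission list sorted by material pays only one shader/texture bind per distinct material:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Material state (shader + textures) is shared; per-object uniforms are
// kept separate so objects sharing a material also share its binds.
struct Material { uint32_t shaderId; uint32_t textureId; };
struct ObjectUniforms { float world[16]; };  // e.g. the object's transform
struct DrawItem { uint32_t materialId; ObjectUniforms uniforms; };

// How many shader/texture binds does a submission list cost once sorted by
// material? Only a material change triggers a rebind; per-object uniforms
// are uploaded every draw regardless.
std::size_t countBinds(std::vector<DrawItem> items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.materialId < b.materialId; });
    std::size_t binds = 0;
    uint32_t last = UINT32_MAX;  // sentinel: assumes UINT32_MAX is never a valid material id
    for (const DrawItem& it : items) {
        if (it.materialId != last) { ++binds; last = it.materialId; }
    }
    return binds;
}
```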

We also want shaders and textures to be able to be swapped in at realtime. No company I've ever worked at has managed to make that happen as fluidly as the artists would like. What design considerations would you make to make it easier? My first thought is a material manager which would centralise the change-over by tracking relationships between assets, etc., for a small overhead in artists' builds.

Render passes:

The implementation of the renderer in the game will be deferred, with a later forward stage for transparent objects. We will have perhaps more than 20 passes in some frames, so I need to build a compositor that artists can access and fully control.

I'm thinking I will make the render passes system reasonably generic, with no "fullscreen effect manager" or other classes to differentiate between pass types. The passes will probably be broken down into 3D and 2D/compositing. I will probably create a node-based system that is scripted via XML with a series of inputs and outputs.
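As an illustration only, a node-based pass description of this kind might look something like the following; the element and attribute names are invented for the sketch, not an existing schema:

```xml
<!-- Hypothetical compositor script: a deferred chain expressed as passes
     wired together by named render targets (inputs/outputs). -->
<compositor>
  <pass name="gbuffer" type="3d" stage="opaque">
    <output target="albedo_rt"/>
    <output target="normal_rt"/>
    <output target="depth_rt"/>
  </pass>
  <pass name="lighting" type="2d" shader="deferred_light">
    <input target="albedo_rt"/>
    <input target="normal_rt"/>
    <input target="depth_rt"/>
    <output target="lit_rt"/>
  </pass>
  <pass name="tonemap" type="2d" shader="tonemap">
    <input target="lit_rt"/>
    <output target="backbuffer"/>
  </pass>
</compositor>
```

Because the graph is just data, artists can reorder passes or splice in new ones without touching engine code.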

Culling:

Since I'm not sure yet whether I'm going to get pre-culled data from the game engine, I'm not sure how much culling I'd implement. Most likely it will just be a frustum-sphere/AABB test.
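The frustum-sphere test itself is small; a sketch with hypothetical plane/sphere types, assuming plane normals point into the frustum:

```cpp
// Frustum-vs-sphere culling sketch. Assumed convention: each plane stores a
// unit normal (nx, ny, nz) pointing into the frustum and an offset d, so a
// point p inside the frustum satisfies dot(n, p) + d >= 0 for every plane.
struct Plane  { float nx, ny, nz, d; };
struct Sphere { float x, y, z, r; };

// Returns false only when the sphere lies entirely behind some plane; true
// means "inside or intersecting" (conservative, which is what culling needs).
bool sphereInFrustum(const Plane* planes, int planeCount, const Sphere& s) {
    for (int i = 0; i < planeCount; ++i) {
        float dist = planes[i].nx * s.x + planes[i].ny * s.y + planes[i].nz * s.z + planes[i].d;
        if (dist < -s.r) return false;
    }
    return true;
}
```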

I'll add to this as I think of more issues for the design.

-Si

Edit: Clarified my tired ramblings a bit.


#2 Promit   Moderators   -  Reputation: 7344


Posted 24 November 2011 - 11:42 AM

Rather than give specific advice, I'll direct you to the presentations on Frostbite 2 by DICE. There's this presentation on overall design for multicore scalability for example. There's a presentation about the lighting pipeline in BF3. Real-time radiosity. Terrain rendering. DirectX 11 rendering. Probably more, too, but those are the ones I found quickly.

#3 thedodgeruk   Members   -  Reputation: 124


Posted 24 November 2011 - 06:44 PM

I'm not sure what you're asking here. I'm partway through building my engine for my final-year degree, and what you're asking about is the whole system, by the sound of it.

You say you have done several renderers before; go back and find out where they fell short of the idea you had. You need to make it loosely coupled and easy to change.

"We also want shaders and textures to be able to be swapped in at realtime": this is very easy to achieve with very little coding. It all depends on how you design your entire code to allow for the flexibility.

#4 Hodgman   Moderators   -  Reputation: 31177


Posted 24 November 2011 - 08:54 PM

> We also want shaders and textures to be able to be swapped in at realtime. No company I've ever worked at has managed to make that happen as fluidly as the artists would like. What design considerations would you make to make it easier? My first thought is a material manager which would centralise the change-over.

IMO, this is a must-have feature for any modern game engine.
As long as there is a level of indirection between your game and your actual resources, this should be really simple to implement.
e.g. if the game has pointers to "resources", which themselves point to the underlying swappable resources: struct Texture { InternalTexture* asset; };
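Expanding that one-liner into a sketch (the manager and its methods are hypothetical, not from the post): the game keeps stable Texture pointers while a hot reload only repoints the inner asset pointer:

```cpp
#include <deque>

// The game holds stable Texture* handles; swapping the underlying
// InternalTexture is invisible to every holder of the handle.
struct InternalTexture { int gpuHandle; };
struct Texture { InternalTexture* asset; };

class TextureManager {
public:
    // Hand out a handle to the game. std::deque keeps element addresses
    // stable across push_back, so the returned pointer stays valid.
    Texture* acquire(InternalTexture* initial) {
        slots_.push_back(Texture{initial});
        return &slots_.back();
    }

    // Hot reload: every holder of the Texture* now sees the new asset,
    // with no need to chase down and patch individual references.
    void swap(Texture* t, InternalTexture* reloaded) { t->asset = reloaded; }

private:
    std::deque<Texture> slots_;
};
```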

> The core renderer would sit on a separate thread and run continuously. Render calls would be sent via an interface on the game thread and communicated via a buffer: perhaps a lock-less ring buffer of some sort, or a set of buffers set to ping-pong.

This is a huge red flag for me -- no low-level module should dictate your threading policy. If I want to run it in the same thread as my game logic, or in its own thread, or spread over 8 different threads, I should be able to.
Fair enough if some parts of it have to be run by one thread only (e.g. the part that actually submits commands to the GPU), but if those parts are documented as such, then I can choose to build a renderer-in-its-own-thread-with-a-message-queue model on top of it if I want to (which I don't, because one-thread-per-system designs don't scale and waste resources).

> Calls would be made by providing (pointers or handles to) a mesh object, a material (shader + textures), a culling volume, a transform and an ID of the targeted rendering stage. How would you organise your shader uniforms? In terms of organisation it makes sense to have them inside the material bundle; however, that means different objects will need different materials, increasing the number of binds I will need to do. If I keep them separate then models using the same material can skip its shader and texture bind stage, but that leaves the question of how to neatly deal with uniform variables.

There's nothing special about transforms that should elevate them to a core concept -- they're a shader uniform just like material parameters. Same goes for materials, they're just a collection of required state-changes and shader uniforms. Moreover, transforms and material settings are not the only uniform parameters -- also included are skeleton poses, lights, environmental settings, per-game-object states, etc... So this all seems way too high-level and specialized for a core rendering library.

I would have the lower level submission API take in bundles of state + draw-calls. State can include the binding of cbuffers (which are groups of uniform parameters) and textures. You can then build higher level concepts, like common transform groups, or materials, on top of these basic primitives.
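A sketch of what such a submission bundle might look like (all names hypothetical, sketched from the description above); note that at this level the per-object transform and the material parameters are both just cbuffer bindings:

```cpp
#include <cstdint>
#include <vector>

// A low-level submission item: a bundle of state bindings plus one draw
// call. Nothing here is a "transform" or a "material" -- both arrive as
// ordinary cbuffer bindings, e.g. slot 0 for the per-object transform
// and slot 1 for material parameters.
struct CBufferBinding { int slot; uint32_t cbufferId; };
struct TextureBinding { int slot; uint32_t textureId; };
struct DrawCall { uint32_t vertexBufferId; uint32_t firstVertex; uint32_t vertexCount; };

struct SubmissionItem {
    uint32_t shaderId = 0;
    std::vector<CBufferBinding> cbuffers;
    std::vector<TextureBinding> textures;
    DrawCall draw = {};
};
```

Higher-level concepts (materials, transform groups) then become nothing more than code that fills in these bundles.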

> The renderer will be deferred, with a later forward stage for transparent objects.

This again is a red flag for me. The rendering library shouldn't dictate what kind of pipeline I'm allowed to implement with it -- the pipeline needs to be able to change from game to game, and be easily changed over the life of a single game's development.
The core rendering library should be agnostic as to what kind of pipeline you build on top of it.

> I'm thinking I will make the render passes system reasonably generic, with no "fullscreen effect manager" or other classes to differentiate between pass types. The passes will probably be broken down into 3D and 2D/compositing. I will probably create a node-based system that is scripted via XML with a series of inputs and outputs.

If you build this kind of system, then you should also use it to implement your deferred/forward rendering logic, to address the above point.

> Since I'm not sure yet whether I'm going to get pre-culled data from the game engine, I'm not sure how much culling I'd implement. Most likely it will just be a frustum-sphere/AABB test.

Yeah I would expect the engine to have a scene management module (separate from the renderer), which can determine the visible set of objects that need to be rendered.

#5 Frenetic Pony   Members   -  Reputation: 1350


Posted 24 November 2011 - 11:38 PM

Here's a neat resource: http://bitsquid.blogspot.com/

Also, as platform agnostic as possible is of course great, but that's fairly obvious.

#6 Simon_Roth   Members   -  Reputation: 149


Posted 25 November 2011 - 08:09 AM

[Quoting Hodgman's reply above in full.]


Cool thanks. Good points. I'm surprised anyone could untangle my post. I was very tired when I wrote it. hehe.

On the deferred shading point, I meant that the implementation I'll be creating in the game would be deferred, since I'll be using the engine for that. I absolutely agree that I don't want to dictate a pipeline.

As for the threading policy, I agree, although is dictating it that bad for an engine targeted at specific platforms? Since I will need a low-level generic interface for the graphics hardware API, I guess that would allow the renderer to use that interface from any thread anyway, so as long as I design it right I guess I'm OK there.

Is the message-queue-plus-thread idea that bad in terms of an implementation? I've found it reduces complexity whilst giving reasonable performance. I've seen a lot worse in modern games... Halo: Reach copies the entire game state over to the rendering thread every frame!

Excellent point on the transforms. I guess I've always written stuff for people learning to code (or been learning myself), so I have presented them with an obvious interface. Building it up from the lower-level API sounds like a better idea than the top-down fluffy approach I was planning on taking.

Thanks for your thoughts

#7 Simon_Roth   Members   -  Reputation: 149


Posted 25 November 2011 - 08:18 AM

[Quoting thedodgeruk's reply above.]


Yeah I'm a bit all over the place at the moment so I'm kinda using the forum to get my thoughts down, whilst letting people pick them apart. :D

In everything I've worked on, we always put features ahead of implementation... everything I've made looks great and runs fast, but the code was pure sin. Agreed on the loose coupling; I think I may try to use lots of interfaces to create "firewalls" between systems. There's a price to pay in performance, but hopefully it will be offset by the gains from having a well-engineered renderer.
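The "firewall" idea in a minimal sketch (hypothetical interface, not the poster's actual code): callers depend only on an abstract class, so the concrete renderer behind it can be swapped, e.g. for a null implementation in tests:

```cpp
#include <cstdint>

// Game code depends only on this abstract class; the concrete renderer
// behind it can be replaced without touching any caller.
class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual void submitMesh(uint32_t meshId, uint32_t materialId) = 0;
    virtual int submittedCount() const = 0;
};

// A trivial implementation, e.g. for headless tests or a dedicated
// server build, that counts submissions instead of drawing.
class NullRenderer : public IRenderer {
public:
    void submitMesh(uint32_t, uint32_t) override { ++count_; }
    int submittedCount() const override { return count_; }
private:
    int count_ = 0;
};
```

The virtual-call overhead the post worries about is paid once per submission, which is usually cheap next to the draw itself.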

#8 Kyall   Members   -  Reputation: 287


Posted 27 November 2011 - 06:13 AM

I've been working on my own engine for a while, and recently I figured out a new direction I want to take with it, so I'll write that here; I'm not really offering any advice or tips, though.

I have a scene graph that is a bit of a mess. In one way it's an octree, with occlusion volumes as parent nodes of objects that exist within the spatial divisions of the tree. In another way it's a tree that is logically ordered so that child nodes commence their update for game logic after their parents have updated; this takes care of stuff like cloth simulations sticking to a parent node. And in another way it's a method of setting the scope of effects, such as lighting, shadows & physics forces, etc. It's basically just a logical tree with a root that has child objects, processed breadth-first, which doubles as an octree for culling and occlusion purposes. The advantages of this scene graph are:

- By limiting the scope of lights to a subtree of the tree where the light actually has effect, I can use just about as many lights as I want without worrying about speed concerns or hardware constraints (in the case of hardware lighting).
- Same with shadows: for shadow mapping I have a scope of objects I need to test against the shadow-casting frustum and the camera frustum, so I can quite easily determine the elements that will actually be rendered in the shadow map and the elements that must be rendered using the shadow map. Obviously a shadow caster/receiver setup will be its own little graph in the scene graph. But it's also its own little rendering section, so meh.
- By ordering the scene graph in this fashion I can exploit a tree-based update structure for gameplay programming. For example I can have a water simulation node; under that water simulation, a physics simulation node that calculates a boat's buoyancy on that water as well as its physical velocities and drag etc.; under that, a boat node that inherits its transform from it; and underneath that, a particle simulation that simulates water splashing off the bow. I find this structure can be exploited for game programming to a pretty high degree. It's not as simple as throwing everything in one object, but it's cleaner and allows for code re-use without modification.
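The parent-before-child ordering in the last point can be sketched as a plain breadth-first traversal (hypothetical node type, not the poster's code); a queue guarantees every node's parent has already been visited by the time the node itself updates:

```cpp
#include <queue>
#include <vector>

// A parent (e.g. the water simulation) is always processed before its
// descendants (physics, boat, particles), because a breadth-first queue
// visits levels of the tree in order.
struct Node {
    int id;
    std::vector<Node*> children;
};

void updateBreadthFirst(Node* root, std::vector<int>& visitOrder) {
    std::queue<Node*> pending;
    pending.push(root);
    while (!pending.empty()) {
        Node* n = pending.front();
        pending.pop();
        visitOrder.push_back(n->id);            // stand-in for the node's game-logic update
        for (Node* child : n->children) pending.push(child);
    }
}
```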

I could go on about the rendering and that but then this'll be TLDR.

Basically I get tree-based culling, update ordering, and rendering/physics scoping out of this, and I like that. The engine still has to iterate over everything at the top level, but for more specific stuff the number of elements that needs to be iterated over is much smaller than with your average code solution for shadows or whatever, and I like that.

The other side of what I want to implement is global effects. Since they don't act in any local scope, they don't have to be added to the scene graph, and I'll probably have a separate bit of code to handle them, so that stuff like post-process effects can be added as a global effect.
I say Code! You say Build! Code! Build! Code! Build! Can I get a woop-woop? Woop! Woop!

#9 anthony2011   Members   -  Reputation: 100


Posted 28 November 2011 - 03:40 AM

Hi Simon

> Since I am doing this for an AAA-quality game it has to work, but I am also doing it for my doctorate



I hope you won’t mind my 2 cents worth on the doctorate aspect of your work.

I completed a research MSc in 3D visibility and a PhD in CAD / 3D graphics in 2005 and 2009 respectively at two different universities in Manchester, UK. Also, I’ve written a few software renderers in my time – latest capable of rendering Quake III maps with mip-mapping, texturing, light-mapping, collision etc. I mention all this just so you know my experience / perspective – not to toot my own horn!



Take these “words of wisdom” with a pinch of salt as I’m speaking from my own personal experience and they may not apply in your situation. Also, it’s all very much dependent on your supervisor, institution and location. All PhDs are not “equal”. Also, I’d be interested to know how far along you are with the research and whether or not your doctorate is tied in with funding/work from/for a company.



OK, here goes. ...



Without doubt, to successfully defend your doctorate at the viva, the examiners will need evidence of novelty - a substantial and original contribution to knowledge in your area of research. If you've already identified this in your work - then just ignore my post! I appreciate that this may be the case, and that your renderer is basically the groundwork / platform to support your actual research contribution. If, however, the renderer is the research contribution then you've got your work cut out. Much of what goes into the renderer will not constitute PhD-level research. I appreciate that researchers have gone down this route successfully before - but it's a herculean task. Each of my supervisors for each of my degrees warned/advised against "architectures" and "frameworks" when it comes to the actual research contribution, as they may indicate a lack of focus in the work. I'm sure you know that the PhD is a specialization and not a generalization, and if you're working on a renderer solely for the research contribution (and not to underpin it) then there is a danger of losing focus.



Well, just my 2 cents worth. Not intended to patronize or offend in anyway. That can be the trouble with forum postings, as posts can appear condescending. I hope it goes well for you and that you have fun with it!



Best,



Anthony.



