
Simon_Roth

Member Since 16 Aug 2008
Offline Last Active May 08 2013 10:46 AM

Posts I've Made

In Topic: Next generation renderer design.

25 November 2011 - 08:18 AM

I'm not sure what you're asking here. I'm part way through building my engine for my final-year degree, and what you're asking about is the whole system, by the sound of it.

You say you have done several renderers before; go back and find out where they fell short of the ideas you had.
You need to make it loosely coupled and easy to change.


Yeah, I'm a bit all over the place at the moment, so I'm kinda using the forum to get my thoughts down whilst letting people pick them apart. :D

In every project I've worked on, we always put features ahead of implementation... everything I've made looks great and runs fast, but the code was pure sin. Agreed on the loose coupling; I think I may try to use lots of interfaces to create "firewalls" between systems. There's a price to pay in performance, but hopefully it will be offset by the gains from having a well-engineered renderer.

In Topic: Next generation renderer design.

25 November 2011 - 08:09 AM

We also want shaders and textures to be swappable at runtime. No company I've ever worked at has managed to make that happen as fluidly as the artists would like. What design considerations would you make to ease this? My first thought is a material manager which would centralise the change-over.

IMO, this is a must-have feature for any modern game engine.
As long as there is a level of indirection between your game and your actual resources, this should be really simple to implement.
e.g. if the game has pointers to "resources", which themselves point to the underlying swappable resources: struct Texture { InternalTexture* asset; };
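A minimal sketch of that indirection, expanding the one-liner above (TextureManager, InternalTexture and loadFromDisk are illustrative names, not from any particular engine). The game only ever holds Texture handles; a reload swaps the InternalTexture behind them:

#include <string>
#include <unordered_map>

struct InternalTexture { /* GPU handle, dimensions, etc. */ };

struct Texture { InternalTexture* asset; };

class TextureManager {
public:
    // Returns a stable handle; the pointee may change on reload.
    Texture* acquire(const std::string& path) {
        Entry& e = entries_[path];
        if (!e.internal) e.internal = loadFromDisk(path);
        e.handle.asset = e.internal;
        return &e.handle;
    }

    // Hot-swap: reload the underlying asset and repoint the handle.
    // Every Texture* already handed out sees the new data next frame.
    void reload(const std::string& path) {
        Entry& e = entries_[path];
        InternalTexture* fresh = loadFromDisk(path);
        delete e.internal;
        e.internal = fresh;
        e.handle.asset = fresh;
    }

private:
    struct Entry { InternalTexture* internal = nullptr; Texture handle{}; };

    InternalTexture* loadFromDisk(const std::string& /*path*/) {
        return new InternalTexture{}; // placeholder for the real loader
    }

    std::unordered_map<std::string, Entry> entries_;
};

(std::unordered_map keeps references to its elements stable across rehashes, which is what makes handing out &e.handle safe here.)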

The core renderer would sit on a separate thread and run continuously. Render calls would be sent via an interface on the game thread and communicated via a buffer: perhaps a lock-less ring buffer of some sort, or a set of buffers set to ping-pong.

This is a huge red flag for me -- no low-level module should dictate your threading policy. If I want to run it in the same thread as my game logic, or in its own thread, or spread over 8 different threads, I should be able to.
Fair enough if some parts of it have to be run by one thread only (e.g. the part that actually submits commands to the GPU), but if those parts are documented as such, then I can choose to build a renderer-in-its-own-thread-with-a-message-queue model on top of it if I want to (which I don't, because one-thread-per-system designs don't scale and waste resources).
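For contrast, a hypothetical sketch of the thread-agnostic shape being argued for here (all names made up for illustration): command lists can be recorded from any thread, and only submission is pinned, by documented contract, to the thread that owns the GPU context.

#include <vector>

struct DrawCommand { /* mesh, state, uniforms... */ };

// No internal locking: by contract each CommandList is recorded by one
// thread at a time, but different lists can be recorded concurrently
// on different threads.
class CommandList {
public:
    void draw(const DrawCommand& cmd) { commands_.push_back(cmd); }
    const std::vector<DrawCommand>& commands() const { return commands_; }
private:
    std::vector<DrawCommand> commands_;
};

class Device {
public:
    // Must be called from the thread that created the GPU context.
    // Callers are free to build a renderer-thread-plus-queue model
    // (or a job system) on top; the library doesn't care.
    void submit(const CommandList& list) {
        for (const DrawCommand& cmd : list.commands()) {
            (void)cmd; // translate to GL/D3D calls here
        }
    }
};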

Calls would be made by passing (pointers or handles to) a mesh object, a material (shader + textures), a culling volume, a transform, and an ID of the targeted rendering stage. How would you organise your shader uniforms? In terms of organisation it makes sense to have them inside the material bundle; however, that means different objects will need different materials, increasing the number of binds I will need to do. If I keep them separate, then models using the same material can skip the shader and texture bind stage, but that leaves the question of how to neatly deal with uniform variables.

There's nothing special about transforms that should elevate them to a core concept -- they're a shader uniform just like material parameters. Same goes for materials, they're just a collection of required state-changes and shader uniforms. Moreover, transforms and material settings are not the only uniform parameters -- also included are skeleton poses, lights, environmental settings, per-game-object states, etc... So this all seems way too high-level and specialized for a core rendering library.

I would have the lower level submission API take in bundles of state + draw-calls. State can include the binding of cbuffers (which are groups of uniform parameters) and textures. You can then build higher level concepts, like common transform groups, or materials, on top of these basic primitives.
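A rough sketch of what such a bundle might look like (all types hypothetical, a sketch rather than a definitive design). Note that the transform is nowhere special: it's just another cbuffer binding, like skeleton poses or material parameters.

#include <cstdint>

struct CBufferBinding { uint32_t slot; uint32_t bufferId; };
struct TextureBinding { uint32_t slot; uint32_t textureId; };

// A reusable group of state changes: shader, cbuffers (groups of
// uniform parameters), textures, and fixed-function state ids.
struct StateGroup {
    uint32_t shaderId;
    CBufferBinding cbuffers[4];  uint32_t numCBuffers;
    TextureBinding textures[8];  uint32_t numTextures;
    // ...blend/depth/raster state ids, etc.
};

// The low-level submission primitive: state + draw call. Higher-level
// concepts (materials, transform groups) are built on top of this.
struct DrawItem {
    const StateGroup* states;  // shared between items, enabling sorting
    uint32_t meshId;
    uint32_t firstIndex, indexCount;
};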

The renderer will be deferred, with a later forward stage for transparent objects.

This again is a red flag for me. The rendering library shouldn't dictate what kind of pipeline I'm allowed to implement with it -- the pipeline needs to be able to change from game to game, and be easily changed over the life of a single game's development.
The core rendering library should be agnostic as to what kind of pipeline you build on top of it.

I'm thinking I will make the render pass system reasonably generic, with no "fullscreen effect manager" or other classes to differentiate between pass types. The passes will probably be broken down into 3D and 2D/compositing. I will probably create a node-based system that is scripted via XML, with a series of inputs and outputs.

If you build this kind of system, then you should also use it to implement your deferred/forward rendering logic, to address the above point.
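As an illustration (not a prescription), the node/pass idea might reduce to something like the following, where deferred-vs-forward becomes pure data loaded from the XML rather than code baked into the core:

#include <string>
#include <vector>

// One node in the pass graph: named inputs and outputs refer to
// render targets, so the graph can be wired up entirely from data.
struct PassDesc {
    std::string name;                 // e.g. "gbuffer", "lighting"
    std::vector<std::string> inputs;  // render targets consumed
    std::vector<std::string> outputs; // render targets produced
    std::string shader;               // for compositing/fullscreen passes
};

// A deferred pipeline expressed as data, per the point above:
//   gbuffer:  inputs {}                        outputs {albedo, normal, depth}
//   lighting: inputs {albedo, normal, depth}   outputs {hdr}
//   forward:  inputs {hdr, depth}              outputs {hdr}   (transparents)
//   tonemap:  inputs {hdr}                     outputs {backbuffer}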

Since I don't yet know whether I'll get pre-culled data from the game engine, I'm not sure whether I'd implement culling myself. Most likely it will just be a frustum-sphere/AABB test.

Yeah I would expect the engine to have a scene management module (separate from the renderer), which can determine the visible set of objects that need to be rendered.
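For reference, the frustum-sphere test mentioned above is only a few lines. This sketch assumes the six planes are stored with normalised, inward-facing normals, so the dot product gives a signed distance:

struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };   // dot(n, p) + d = signed distance

inline float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Conservative test: may report "visible" for spheres just outside a
// frustum corner, but never culls a sphere that is actually visible.
bool sphereInFrustum(const Plane planes[6], const Vec3& centre, float radius) {
    for (int i = 0; i < 6; ++i) {
        if (dot(planes[i].n, centre) + planes[i].d < -radius)
            return false;  // fully behind one plane: culled
    }
    return true;
}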


Cool, thanks. Good points. I'm surprised anyone could untangle my post; I was very tired when I wrote it. Hehe.

On the deferred shading point, I meant the implementation I'll be creating in the game would be deferred, since I'll be using the engine for that. I absolutely agree that I don't want to dictate a pipeline.

As for the threading policy, I agree, although is dictating it really that bad for an engine targeted at specific platforms? Since I will need a low-level generic interface to the graphics hardware API, I guess that would allow the renderer to use that interface from any thread anyway, so as long as I design it right I should be OK there.

Is the message queue + thread idea that bad as an implementation? I've found it reduces complexity whilst giving reasonable performance. I've seen a lot worse in modern games... Halo Reach copies the entire game state over to the rendering thread every frame!
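For what it's worth, a rough sketch of the ping-pong variant mentioned earlier (illustrative only; it assumes swap() is only called at a frame boundary once the game thread has finished recording, which is the hard part of the synchronisation policy):

#include <condition_variable>
#include <mutex>
#include <utility>
#include <vector>

struct RenderCommand { /* draw-call data */ };

class CommandQueue {
public:
    // Game thread: record into the current write buffer.
    void push(const RenderCommand& cmd) { buffers_[write_].push_back(cmd); }

    // Frame boundary: hand the finished buffer to the render thread
    // and start recording into the other one.
    void swap() {
        std::lock_guard<std::mutex> lock(mutex_);
        std::swap(write_, read_);
        buffers_[write_].clear();
        ready_ = true;
        cv_.notify_one();
    }

    // Render thread: block until a frame's worth of commands is ready.
    const std::vector<RenderCommand>& acquire() {
        std::unique_lock<std::mutex> lock(mutex_);
        cv_.wait(lock, [this] { return ready_; });
        ready_ = false;
        return buffers_[read_];
    }

private:
    std::vector<RenderCommand> buffers_[2];
    int write_ = 0;
    int read_ = 1;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool ready_ = false;
};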

Excellent point on the transforms. I guess I've always written stuff for people learning to code (or been learning myself), so have presented them with an obvious interface. Building it from the ground up on the lower-level API sounds like a better idea than the top-down fluffy approach I was planning to take.

Thanks for your thoughts.

In Topic: 2d World Generation

24 November 2011 - 10:05 AM

You will certainly want a good noise algorithm even for a tile-based world. An RNG on its own will only give you a mess, whereas noise octaves combined into turbulence will give good, coherent features.
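For example, a minimal octave-summing sketch; it assumes noise2(x, y) is some smooth noise function (Perlin, simplex, value noise) returning roughly [-1, 1] -- the name is a placeholder, not a real library call:

float noise2(float x, float y);  // supplied by your noise library

// Standard fractal sum (fBm): each octave adds finer detail at lower
// amplitude. Taking fabsf() of each octave gives the classic
// "turbulence" variant.
float turbulence(float x, float y, int octaves) {
    float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f, norm = 0.0f;
    for (int i = 0; i < octaves; ++i) {
        sum  += amplitude * noise2(x * frequency, y * frequency);
        norm += amplitude;
        amplitude *= 0.5f;   // each octave contributes half as much...
        frequency *= 2.0f;   // ...at twice the detail
    }
    return sum / norm;       // keep the result in roughly [-1, 1]
}

// Tile selection is then e.g.:  height > 0.2f ? GRASS : WATER;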

In Topic: Large terrain shadows

09 November 2011 - 05:52 PM

Thank you for your tips! I will experiment more with CSM / static shadow maps.

Cheers!



How many splits are you using in your CSM? What resolution are your maps, and where is the bottleneck that's slowing things down?

-Si

In Topic: Graphics engines and middleware.

09 November 2011 - 04:15 PM

I would also be interested in knowing about any lightweight open-source graphics engines. I'm not so interested in full game engines, as there are lots of those and I'd rather handle integration myself. Some that I've found include:

Horde3D
Visualization Library (Ignore 'visualisation' in the name - it actually appears quite low level)
Linderdaum (more restrictive license)


Cool, those are useful, thanks.

I've found out today that Torchlight is using Ogre, and Hand Circus used it on their indie titles. Whilst they are hardly huge developers, it's useful to see it in production and working well.

Here's what I've jotted down this evening on the pros and cons of Ogre, as seen at a quick glance.

Ogre:


Positive features

Multiplatform abstraction. The code base would be platform agnostic, allowing both DirectX and OpenGL builds. This will be of specific help for more difficult platforms such as the iPad.

The code base is mature.

The source is free and documented.

The licensing is an MIT license, which is flexible and does not require source code redistribution.

There are some third-party tools that integrate with the engine to allow asset loading etc. These could replace <redacted engine> formats and remove the need for in-engine conversion. These tools are unofficial, however, so may not be entirely useful.

Negative features

By specification, features must be implemented on both DirectX and OpenGL; this leads to situations where specific technologies fail to overlap and gaps are left in the API.

The multiplatform abstraction is large and consists of two major classes. These are almost impenetrable, so they would leave us in a difficult position were we to need lower-level functionality or hacks for the iPad. The abstraction also creates a lot of redundant function calls, eating CPU cycles by changing state unnecessarily.

The animation system is robust; however, it does not include physics-based animation. This would lead to integration issues once a third-party solution is found, potentially requiring a lot of work.

The scene management would be largely obsolete, as the game and <redacted engine> will provide that, and that functionality is already mature in the in-house development pipeline.

As a personal observation on the project and its maintainers: few of the developers involved work on rendering technology in the games industry. This means the engine may make assumptions about its use that are incorrect for real-time development, e.g. favouring proper object abstraction over performance.

If we modify the engine, we may have to spend time committing our code back to the main project, or risk creating a fork that is incompatible with trunk patches and updates.

Although many small independent projects have been released using the engine, few AAA-quality projects have employed it. Torchlight is one such game; its graphical style does not use dynamic lighting and is heavily dependent on high-quality art assets. Indeed, early developer posts indicated that a fixed-function pipeline was used.

