Ambient occlusion simulation for AAA projects

Started by
87 comments, last by _the_phantom_ 12 years, 8 months ago
Don't mean to derail the thread... which seems quite silly to me really... the algorithm is used in a game, but you can't tell which, but you want to make it free, but want it to be featured in a game first? And you won't show anything to actually prove it? No performance figures, no quality comparisons. I'm not even sure what you would gain by having it used in some big game if you intend to make it public anyway; BF3 is not going to be called "BF3: AO by dummynull". If you want recognition, wouldn't it simply be best to compile useful information on what you have and talk at SIGGRAPH or something like that? If it really is as good as you claim and useful, then developers will take notice and it will be used.


I do admit they have done some amazing work with the shaders. It is most impressive, but everything else is just average (some things even below average)...
Engine design today is just awful (not just Crytek, but every major player, including Carmack and the rest). Engines are very limited, and most of the development goes into graphics effects, which in my opinion shouldn't even be part of the engine.

Consider this graphics engine design, compare it with how things are generally done, and then draw your own conclusion about whether what Crytek is doing is advanced or not:
  • A graphics engine without limitations - a fully generic graphics engine (it doesn't try to understand the data, just how to send it to the graphics card) - can be written in less than 2000 lines of code, and there you go: you have the holy grail of engine design, and no one will ever be able to write a better (DX11) graphics engine. You can send it voxel data, triangle meshes, sparse voxel octrees, plus anything else that will be invented, and you never have to change a single line of code, because the engine doesn't care about the data - it just cares about the rules for calculating the appropriate rendering technique for the supplied geometry data, material data and render mode (lighting, depth, color, z-prepass...) - all of this, plus the shaders, can be put in game data (not the engine). NO ONE DOES THINGS LIKE THIS - everyone is hardcoding effects/passes into their engines, which is too much work and more and more limited. I hate it when they do a tech demo of some new graphics engine and then spend 15 minutes showing graphical effects. The moment they do that, I know their graphics engine is crap. Instead, show me how flexible it is - will it be able to run something that comes up 3 years from now without me having to "heavily modify it" (I hate when they say that) or buy a new version of the engine?
  • Camera Space Shadow Mapping was developed as a single-line-of-code implementation for shadows that will never fail you and will work in every case (even big point lights). Basically it is just one function that generates 3 shader parameters. If at some point we decide to use sparse voxel octrees or something else - no problem - in our shaders we just ignore those 3 parameters. In other engines, multiple shadow algorithms are implemented, and they take up most of the graphics engine code that will eventually be obsolete. The moment you implement some other shadow algorithm (cascaded, cube, volumes...) you've made an assumption about how things will be rendered, and that means your graphics engine is not generic any more - it is not all-powerful, and it will soon be full of old, useless code...
Don't get me started on physics engine limitations, network limitations, spatial limitations, logical limitations... It will never be able to run a game with landable dynamic planets, big spaceships and some decent multiplayer (I expect nothing less than a peer-to-peer MMO in a completely dynamic world with out-of-scale objects). Basically it is designed for small things - small static levels where you can only shoot at things, and that is all you will ever be able to do...
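A minimal sketch of the rule-driven dispatch described in the first bullet - a table mapping (geometry type, material class, render mode) to a technique name, loaded from game data rather than hardcoded - might look like this. All type and function names here are invented for illustration, not taken from any actual engine:

```cpp
#include <cassert>
#include <map>
#include <string>
#include <tuple>

// Hypothetical sketch: the engine holds no hardcoded passes, only a rule
// table (which would be loaded from game data) mapping geometry/material/mode
// to a rendering technique name. New geometry types need new rules, not code.
struct RuleTable {
    // key: (geometry type, material class, render mode) -> technique name
    std::map<std::tuple<std::string, std::string, std::string>, std::string> rules;

    void AddRule(const std::string& geom, const std::string& mat,
                 const std::string& mode, const std::string& technique) {
        rules[{geom, mat, mode}] = technique;
    }

    // Returns the technique to use, or an empty string if no rule matches.
    std::string Resolve(const std::string& geom, const std::string& mat,
                        const std::string& mode) const {
        auto it = rules.find(std::make_tuple(geom, mat, mode));
        return it == rules.end() ? std::string{} : it->second;
    }
};
```

The point of the sketch is that supporting a new data type (say, voxels) means adding rules and shaders to the game data, with no change to the dispatch code itself.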


There's this thing called research (theory, lots of papers) and then there's reality (practice, lots of games)... everything you mention is very good in theory, but sounds to me like crap in practice. You CANNOT create a top-quality and top-performing engine without optimizing for specific purposes, OR you end up not making an engine at all... that is, you just invent another API layer, which is what DirectX is. Like with GUIs, one solution rarely fits all problems; so far GUIs can be either retained- or immediate-mode, and neither can be considered best... and neither can be unified with or implemented by the other without sacrificing something, to some degree.

The visual quality and performance of Crysis is a testament to the work of skilled developers and engineers.
Could it be improved?... certainly could.
Would it be improved by making the entire 3D engine generic?... no.


EDIT: If your solution to a problem is to describe the entire problem, then you haven't solved the problem. That is, if your solution to making a 3D engine is to have it support every single game type and feature equally well... then the only thing you have done is add another layer of abstraction, with some degree of performance degradation.

Doing a lot of web development, I often hear people cry and call murder over rounding corners with images ("it's semantic garbage!") and then proceed to convince everyone that people with browsers that don't support rounded borders get what they deserve. Which is ridiculous... sacrificing quality, the end result and often performance for some "very nice in theory" argument about how things must be made... rather than just solving the issue: "make rounded corners with images".






Let's stick to graphics for a moment.
I can't really think of a case where the performance of my engine would be more than 5% slower than anything that has been hardcoded into the Crytek engine.
I mean, if we are using the same shader (only mine is in game data and theirs is hardcoded into the engine), performance will be the same (presuming that we have the same quality of state management, and I have all the traversal methods that Crytek has - which I have, and more). Post-process effects will run at the same speed in my engine and in theirs, and regular rendering also; only I won't have to bother with engine bugs, because my code is closed - the engine is an EXE file that never changes, and the game is in game data. 100 engineers can try different things in parallel without affecting each other - the same can't be said for working with other engines.
Give me an example of something that you think wouldn't render as fast as in the Crytek engine, and I'll try to explain how to properly configure the engine to do so.
I've studied every angle (type of rendering, post-processing) and simply can't find something that my engine can't efficiently render.


P.S. I'm sorry - I'm derailing the thread (but everything has been said, so I guess it is not that big of a deal).



Well, from what I can tell, you haven't really made a 3D engine (comparable to Crysis); you've made an abstraction layer on top of your graphics API. What I mean is, it sounds to me like you are abstracting shaders, vertex buffers and such; if so, you have done little more than add some make-up to the graphics API... you have not really made an engine. Please correct me if I'm mistaken.

Crysis is a bunch of textures, shaders and triangles... but the actual engine supports a voxel/heightmap-based terrain, occlusion culling, animation, realtime destruction, procedural vegetation, HDR, multiple lights, realtime radiance transfer, approximated global illumination, etc, etc. This is what I would consider the 3D engine: supporting these things, efficiently and with a simple API. And I'm confident that they employ a lot of optimizations specifically tailored to their overall environment and game priorities. That is, an engine is no good if it says "you can do all of those things, but you need to implement them yourself"; add to that that sometimes you may want max performance, sometimes you may want steady performance, etc.

To be clear, I don't consider something to be an engine if you need to build an engine on top of it. That is, Ogre3D is an API in my opinion (which sounds like what you are talking about); logic normally shouldn't interface directly with Ogre3D, it should go through your own engine that sits on top of Ogre3D. I've tried Ogre3D, and for a lot of things it is certainly great, but for a lot of things it's just too much and too convoluted, and for some things it's simply not enough, requiring you to extend dozens of carefully designed classes to implement something that should be simple. And for other things, the interaction with it becomes unnatural and cumbersome. Ogre3D is so generic that nothing is actually easy to do with it; it always feels like building a spaceship... unlike a 3D FPS engine that may be super easy to work with, but may make it harder or impossible to go outside what is provided. It's a trade-off.


Again, I'm derailing the thread really, but I find this interesting... shout and I'll stop.


I think Syranide hit the point: having a layer that can potentially do everything everyone else can do is not the challenge. Even with a lot of people and such a layer, it takes a long time to create all the interactions and content and pipelines and tools... that's what makes the engine. Look for example at the Unreal Engine: it has a lot of middleware, middleware that any other developer could license as well, but integrating everything into one consistent and working piece of software is the real challenge.

Maybe you can project it onto another topic; let's take sport. You, as a human, have the potential to do all kinds of sports. You could call Arnold Schwarzenegger, or Maradona, or Schumacher "below average", because you could point out some weakness they have, but 1. that doesn't make your skills/strength etc. superior, just because you know which workout 'could' potentially lead to better results, and 2. yes, nobody could show you anything that you couldn't potentially achieve with your body, but does that put you on the same level as those people who won the highest championships or released the best-looking game for years?

And that makes it immature on your side; it shows that you clearly lack experience, and usually that's the most desired thing companies are looking for. Making tech and knowing theories are just the fundamentals; knowing the real world is harder, and if you really had an offer from Crytek, you should have accepted it. 1. If you really have the skills, you could have moved the technology forward; you could present it and make a lot of developers use it, like nowadays everybody uses some kind of SSAO. 2. You'd for sure learn something as well - maybe the other side, not just the theory. Maybe you'd even understand WHY they were not interested in your technology but wanted to give you a chance; it's surely for a reason.

There are a lot of people who would probably work for free to have such a chance, and throwing that away because you, without experience, judge the quality of their work to be inferior to yours, doesn't put you in a bright spotlight, sorry.








Correct, I've misguided you. The graphics engine is just a small, dumb module of the engine (other modules are: physics engine, terminal engine, scripting engine, input engine, sound engine, network engine, etc).
All these sub-engines are controlled by the Master Engine, which has complete control over the powerful data structure. The Master Engine is actually the only one that understands the data structure.
Every sub-engine has access to the scripting engine (which runs scripts in parallel), so the terminal engine (used for the main menu, HUD, virtual in-game 3D computers, road signs, ...) can sometimes do something to the graphics engine (set some special render variable)...
Also, the scripting engine has access to all other engines, so if I want to draw some windowed application within my virtual 3D computer, I would do something like this:

To my 3D object that represents the 3D computer terminal, I would assign a script called "TerminalScript", which goes like this:


void TerminalScript()
{
    SetTerminal(20, 20, "WindowsXPLook");
    StartWindow(0, 0, 10, 10, "My Window");
    DrawButton("bla");
    // ...
    if (ButtonPressed("bla"))
    {
    }
    EndTerminal();
}


So basically, my engine has only one object type, which can be anything: a planet, a character, a 3D computer terminal, a camera, a big spaceship, ...
Everything is generic...
If I want a chat window, I won't hard-code it like everyone else - I will update the script of the 3D object that represents the camera (which only sets the camera matrix of the graphics engine) to draw a small window over the HUD that will be used for chatting (same goes for the main menu).

This way I can make a chat window in my "space ship" so that I can communicate with another space ship that has a similar script on one of its terminals.

If I want VOIP, I won't hard-code it. I will take a "rock", put it on a programming platform (all in-game) and write a send/receive network script for it that copies from the wave-in buffer into the send buffer and from the receive buffer to the wave-out buffer...

No hard-coding - a completely generic engine design (not just graphics). 1000 developers can create new object types simultaneously, all in-game...
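A rough sketch of this "one generic object type" idea - an object is just an ID plus generic data slots and attached scripts, so a planet, a camera and a 3D terminal differ only in data, not in C++ type - could look like the following. Everything here (names, structure) is hypothetical, not the poster's actual implementation:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch of a single generic object type: behavior comes from
// attached scripts operating on named data slots, never from hardcoded types.
struct GameObject {
    int id = 0;
    std::map<std::string, std::string> data;               // generic properties
    std::vector<std::function<void(GameObject&)>> scripts; // attached behavior

    void RunScripts() {
        for (auto& script : scripts) script(*this);
    }
};
```

With this shape, the "VOIP rock" example above is just an object whose script copies audio buffers; nothing in the engine knows what a "rock" or "VOIP" is.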
----


OK - this is way off-topic - but you get my point...

I've attached a 3D terminal picture that I started building - notice the tile editor in one of the windows. That is the beginning of a terminal that will be used to completely build custom space ships (interior and exterior)...

http://i.imgur.com/4rrhW.jpg



Ah, that explains a lot. While I think it's a nice idea - and as a developer it definitely feels close to heart, "wanting to do everything" - my personal experiences and "maturing" have led me down a different path entirely. I long held firm to the idea that "more = better" and that code should almost be treated as something holy, perfected over and over. This is all debatable, and different situations require different approaches, of course.


http://en.wikipedia....platform_effect

"The inner-platform effect is the tendency of software architects to create a system so customizable as to become a replica, and often a poor replica, of the software development platform they are using. This is generally inefficient and such systems are often considered to be examples of an anti-pattern."


Is something I've become acutely aware of. Looking back I've realized that, rather than solving the problem once, it was easy to fall into the pattern of first creating a "class" that was really flexible and could do a lot more than I actually needed (but my brain said, "you might!"), and then creating another, tailored "class" that would expose a much simpler interface to the first. And in the end, the first class was never perfect, it was never fully implemented, there were flaws and it made certain optimizations very hard... which made me never want to reuse it as-is in another project anyway; I would copy-paste it and refactor it, simply because that made more sense.


To me it seems like you've made something so generic that it doesn't really provide anything beyond what a handful of 3D API utility classes could mostly solve (or you have indeed introduced lots of limitations). Instead you have now artificially limited what is practically possible, though not theoretically (and developers are bound to your framework)... it's hard to put into words, but consider instancing - something that is hugely important in SOME games and often easy to achieve - how do you facilitate that for all situations across completely generic "sub-engines" without making a mess? Another common side effect is that "hard stuff becomes easier" and "simple stuff becomes slower". The generic nature means you can rarely assume anything about anything; hence, a lot of optimizations go out the window... performance is about cheating and shortcuts.

From my limited experience, these kinds of APIs often provide beautifully simple example code, but degrade horribly in practice when all the special cases and quirks start to appear (and you have to start worrying about performance)... you start to slowly circumvent the API in order to provide functionality and optimizations the API is unable to provide. And suddenly it has all degraded into a worse spaghetti-hell than the one you were trying to avoid.


What I'm saying is, it seems to me that either you've made something so insanely generic and non-invasive that it could only possibly be a bunch of utility classes (which might not be a bad thing), or you have introduced an unknown number of limitations and pitfalls, and this is pretty much "the law". You cannot solve all problems with one solution; it is quite impossible. The larger you make the problem, the more convoluted a solution you get... if you solve all problems with one solution, then you are back where you started (the inner-platform effect).

To avoid misunderstanding, I'm not hating on your engine, it's probably a very very neat thing you've made.
I just find that it can be very insightful to discuss various topics at times, and there's no point tip-toeing around opinions.


I must say, I'm really enjoying this thread, and I'm very sorry it is a hijacked one :D.

I understand what you are saying, and I must admit that a few years ago I was thinking the same things, but now I would like to explain why my opinion has changed about some things (especially about graphics).

So let me start:
Over the last 10 years things have changed dramatically in the graphics department. Today all the heavy stuff happens in shaders, so it really doesn't matter whether a shader came with the game or the engine. If the shader is well written, it will perform equally well in both cases (especially post-process effects). On the CPU side you have a lot of repetitive stuff when dealing with graphics. For example, when you change render targets (switching to some other rendering mode), first you must remove the old ones, bind them to their appropriate shader textures (to be used later), set the new render targets, and possibly modify the view frustum. This is one of the boring operations that happens a lot, so I lay it down in the configuration like this (simplified deferred rendering):


- render target N
- used as shader texture: _tNormalMap
- dim = 1920x1200
- format = RGBA

- render target C
- used as shader texture: _tColorMap
- dim = 1920x1200

- render target D
- used as shader texture: _tDepthMap
- dim = 2048x2048
- format = F32

- render target L
- used as shader texture: _tLightMap
- dim = 1920x1200

...

- render mode ClearBuild
- render target 0 = C
- render target 1 = N
- render quad = Y
- assigned rendering technique = ClearBuild

- render mode Build
- render target 0 = C
- render target 1 = N
- render quad = N
- [list of rules for determining the appropriate rendering technique for different combinations of geometry and materials]

- render mode ClearDepth -- used for shadow mapping to clear the shadow map texture
- render target 0 = D
- render quad = Y

- render mode LightDepth
- render target 0 = D
- [list of rules for determining the appropriate rendering technique for different combinations of geometry and materials]

- render mode Light
- render target 0 = L
- render quad = Y -- or some special geometry for certain light types

- render mode FinalPass -- this will do _tColorMap * _tLightMap and send it to the screen
- render target 0 = NULL
- render quad = Y
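A configuration like the one above could be parsed into plain data structures that the engine consumes at load time; here is a hypothetical sketch (struct and field names are invented for illustration, not the poster's actual format):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical in-memory form of the declarative render-mode configuration:
// data the engine loads from game files instead of hardcoded passes.
struct RenderTargetDesc {
    std::string name;          // e.g. "N"
    std::string shaderTexture; // e.g. "_tNormalMap"
    int width = 0;
    int height = 0;
    std::string format;        // e.g. "RGBA", "F32"
};

struct RenderModeDesc {
    std::string name;                 // e.g. "Build"
    std::vector<std::string> targets; // render targets bound for this mode
    bool renderQuad = false;          // does this mode draw a full-screen quad?
    std::string technique;            // fixed technique, or "" if rule-driven
};
```

The engine would then interpret these descriptions generically: binding targets, drawing the quad if requested, and otherwise consulting the rule list, with no pass-specific code.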


So how would a simple deferred rendering script look?


void RenderingScript()
{
    StartRendering();            // Does nothing in DX11
    SetRenderMode("ClearBuild"); // Draws a quad that resets the scene color and normal maps
    SetRenderMode("Build");      // Only prepares rendering to use the "Build" rules for geometry (doesn't render any quad)
    RenderFrontToBack("SOLID");  // Main Engine function that goes through every visible object and calls rendering of its
                                 // meshes. The Main Engine doesn't know what a mesh is (it only knows its ID, and sends that
                                 // to the graphics engine + some info: position matrix, rendering distance, ...)

    // At this point we have the scene color map and the scene normal map.
    // Now we need to render shadows and lighting:
    RenderShadowAndLighting("ClearDepth", "LightDepth", NULL, "Light"); // Third parameter is NULL because it is
                                                                        // only used for forward renderers
    // This function goes through every queued light - for those that cast shadows it creates a shadow map using the
    // ClearDepth and LightDepth techniques, and accumulates light using the Light technique into render target L.
    // Of course, I didn't have to use this function - I could have done everything manually, but why would I? (I have CSSM :)

    SetRenderMode("FinalPass");  // Composes the final scene: sceneColor * sceneLight

    SetRenderMode("TransparentRendering");
    RenderBackToFront("TRANSPARENT");
    EndRendering();              // Does Present in DX11
}


Here I've implemented a simple deferred rendering system, and I didn't have to do any of the boring stuff (set shader variables, render states, render targets, ...).
This is all done in the background.
There is not much code.
There are no errors.
Everything is simple.
Surprisingly, the engine doesn't have any overhead. The same would be done in CryEngine, but with a lot of hardcoding.

OK, I haven't explained how the rules for determining the correct rendering technique work, but it is an extremely optimized rule-processing system.
State management happens in the background.
Instancing won't work in the first frame, because the engine still doesn't know what to expect - but in the next frame it has a lot of information, and if the rules are configured correctly it will know how to pick the appropriate technique.
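That first-frame warm-up amounts to keeping per-mesh draw counts from the previous frame and instancing meshes that repeated often enough. A hypothetical sketch of such a heuristic (names and threshold are invented, not from the poster's engine):

```cpp
#include <cassert>
#include <map>

// Hypothetical sketch: count how often each mesh was drawn last frame; if a
// mesh repeated enough, an instanced technique can be chosen this frame.
class InstancingHeuristic {
public:
    void RecordDraw(int meshId) { current_[meshId]++; }

    // Called once per frame: last frame's counts become this frame's prediction.
    void EndFrame() {
        previous_ = current_;
        current_.clear();
    }

    bool ShouldInstance(int meshId, int threshold = 4) const {
        auto it = previous_.find(meshId);
        return it != previous_.end() && it->second >= threshold;
    }

private:
    std::map<int, int> previous_; // draw counts from the last completed frame
    std::map<int, int> current_;  // draw counts being gathered this frame
};
```

This is why the first frame can't instance: `previous_` is empty until one full frame of history exists.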

Also, this all seems too constricting, but it isn't. In 99% of cases it will work just fine. In the remaining cases we can override everything with scripting. We have scripting at the level of mesh, geometry, material and even vertex definition (indexed vertex blending can have a script that will set the bones for the current object + mesh offset matrices).
Scripting is our friend - especially if you have an extremely powerful and fast scripting engine like we do.

Huh, that is just a small explanation of how graphics can be configured to be completely part of the game, and not the engine (except the configuration file - and even that can go through script, but then it would be too messy to find, because you would have to dig through game packages)...


To conclude - if I prove to you that, graphically speaking, I can generically achieve identical rendering (in speed and look) to CryEngine (down to a single render state and render target change), then you must admit that this engine is much better than CryEngine, because it can do everything that CryEngine can, plus everything that any other engine will ever be able to do - and I never have to change a single line of code in my engine.

Hooh - that's it for graphics.

I could go on about scripting (I'm greatly disappointed in how others are working), and physics (every physics engine today has one major limitation that just bugs me), and networking (I have a p2p MMO in a completely dynamic world with out-of-scale dynamic objects, and let's face it - that is at least 3 generations above what everyone else has).

Damn, I like to talk about my engine so much :D

... and that is why I couldn't force myself to work at Crytek - I'm opening my own game development studio in a few months. Their engine can't do 5% of what my engine can. I couldn't work on such inferior technology - no way...
"a mere 4.5K euros a month"

You turned down €54k a year? €54k a year is a pretty good salary. And for games development??? Blimey. Again, most people here would kill for that sort of offer - it's the aspirational goal of their lives.


"working on technology that is more inferior than my home projects"

There's a lot more to games than the graphics engine. Look at this year's surprise hit. His characters have corners...
"So let me start:"

I can't see where the magic technology is. I can see that you've implemented a "hard and soft layers" architecture involving scripting to connect component parts together. Could you be more specific about what it is I'm missing?

There's a lot more to games than the graphics engine. Look at this year's surprise hit. His characters have corners...


Exactly, but Crytek doesn't know this. They think graphics is everything...

This topic is closed to new replies.
