Posted by CodeDemon
on 15 November 2011 - 10:49 PM
You can apply machine learning to anything that requires optimization of a set of parameters or functions controlling the transformation of data. Generally, it's used in cases where searching the entire space of solutions has non-trivial complexity, and you simply want to find a good enough solution that isn't obvious.
I can think of at least one area within real-time computer graphics that might bear fruit. Occlusion culling often uses general-purpose hard-coded heuristics to determine "good" objects or surfaces for use as occluders, and may rely on input from content designers or programmers to flag which objects should be used for occlusion or to generate acceptable bounding volumes or primitives to use when rendering into the occlusion buffer. Using machine learning, you could have the system do all of the hard work for you: determining good heuristics custom-tailored for a specific scene, level, or sub-region/sector within a level; generating better occlusion geometry for static (occluder planes) or dynamic (optimal LOD for an occluder mesh) datasets; choosing how long to wait before reconsidering an object as an occluder when implementing temporal coherence; etc. There's probably an endless number of things you could optimize for.
A starting point would probably involve simulating the contribution of various objects as occluders across the different regions of a level as a preprocessing step, generating feature vectors that capture the camera position & orientation, a measure of each object's contribution to the occlusion buffer, and perhaps some other information, then performing clustering analysis on the feature vectors and optimizing for the least over-draw and the lowest cost of performing the occlusion-culling pass. Note that your feature vectors will be very large, as they have to account for every possible occluder in a given level data set, so they will be n-dimensional where n is probably in the thousands or tens of thousands. You could probably implement a lot of this to run on a GPU using DirectCompute or OpenCL.
Alternatively, you could record a number of play-throughs, use the play-back for your simulation and analysis phase, and combine the results. Think of it as profile-guided optimization for occlusion culling.
I'm not sure how much of this kind of stuff is already done in COTS occlusion systems like Umbra, but I seem to recall the original paper on it used simple heuristics and statistical geometric methods to generate occluder sets and the like.
It's true that templates offer a form of compile-time polymorphism, but ultimately in the context of a single run-time call site (your update loop), you still need to map compile-time bindings into run-time bindings. So no, you aren't going to find an easy way to eliminate the virtual call there while maintaining your existing design.
So you're left with changing your design. If you want to get rid of virtuals in your GO/scene graph system, divide it in two: a flat game object/component sub-system, which may still use virtuals for updating, and your hierarchical scene graph, where you pull the node-update and render calls out of the node classes into a coarser-grained node manager/scene world class and optimize everything in a data-oriented fashion. Your game objects and components keep a reference to the scene nodes they're interested in.
You can even eliminate the virtual calls from the game object sub-system, just by making your update code more general and/or by hard-coding update calls to the different class types in your main update loop. This is sometimes done to maximize performance, at the cost of increased maintenance.
EDIT: Just want to clarify that what you essentially want to do is reduce the virtual update calls (or calls into your scripting engine) to just those scene nodes that actually need game logic. If you have an independent physics sub-system and you want to give a scene node some physics, you just need to associate a rigid body object within the physics sub-system with that scene node, for example. The physics sub-system can be written in such a way as to reduce or eliminate virtual calls as well. No need to use a general-purpose polymorphic game object here if you design things right.
do you know of any books that deal with this specifically? I have been wary of buying books in the past for the reasons in your post, but if there is one that deals with parallelism better I'd love to read it.
I'd be very interested to know about such a book too, if you know of one.
Many people like to argue otherwise, but game engine design that truly utilizes parallelism is in its infancy, if not non-existent.
Not sure I'd be picking up a book on it this early - you'd be putting a lot of trust in an author. Rather, study concepts and toss around your own design ideas. Unlike traditional game engine design, you might not be reinventing the wheel.
I'm not aware of any such book, but Chris has it right, it's too early to tell exactly how things will pan out. It's new ground.
The type of parallelism we see in today's game engines is the result of a mostly top-down approach architecturally. In other words, it's been pretty much bolted on. It tries to logically partition work within different sub-systems into separate threads. This is where you get your typical two-thread architecture where game logic and rendering are broken into separate threads, or your three-thread architecture where physics is performed within its own thread distinct from the game-logic and rendering. Many engines also do their I/O asynchronously, loading and processing resources in the background. Newer engines targeting DirectX 11 or a future version of OpenGL may take advantage of deferred rendering contexts, farming out some of the work in building the command buffers / display lists, alleviating the load on the main rendering thread. Engines optimized for the PS3/Cell are obviously a bit different due to the SPEs.
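The two-thread game logic + render split described above can be sketched with C++11 threads. This is a deliberately toy version under invented names (`FrameMailbox`, `runTwoThreadedFrames`): logic publishes the latest frame state into a mutex-guarded mailbox and the render thread consumes it, so the renderer may skip intermediate frames if logic runs ahead, which is how latest-state-wins hand-off typically behaves.

```cpp
// Toy two-thread model: game logic produces frame states, the render
// thread consumes them through a mutex/condvar-guarded mailbox.
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

struct FrameState { int frameIndex; };

class FrameMailbox {
public:
    FrameMailbox() : hasFrame_(false), done_(false) {}

    void publish(const FrameState& f) {   // called by the logic thread
        std::lock_guard<std::mutex> lock(m_);
        latest_ = f;                      // overwrite: latest state wins
        hasFrame_ = true;
        cv_.notify_one();
    }
    bool consume(FrameState& out) {       // called by the render thread
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return hasFrame_ || done_; });
        if (!hasFrame_) return false;     // done_ and nothing left
        out = latest_;
        hasFrame_ = false;
        return true;
    }
    void finish() {                       // logic thread signals shutdown
        std::lock_guard<std::mutex> lock(m_);
        done_ = true;
        cv_.notify_one();
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    FrameState latest_;
    bool hasFrame_, done_;
};

// Push `frames` simulated frames through the mailbox and return the
// frame indices the render thread actually saw.
std::vector<int> runTwoThreadedFrames(int frames) {
    FrameMailbox box;
    std::vector<int> rendered;
    std::thread render([&] {
        FrameState f;
        while (box.consume(f)) rendered.push_back(f.frameIndex);
    });
    for (int i = 0; i < frames; ++i) {
        FrameState f = { i };
        box.publish(f);   // a real engine would snapshot render state here
    }
    box.finish();
    render.join();
    return rendered;
}
```

The point of the sketch is the shape of the hand-off, not the mailbox itself; a real engine would double-buffer a full render-state snapshot rather than a single int.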
This is all perfectly fine and good for current hardware, but in the end, none of these models scale all that well beyond it. Try to take such an engine and reuse it on a new hardware platform with a couple more cores, and you'll find you won't be able to do much more with it without serious modification. Yeah, sure, maybe you can do a little bit more, but you're no longer able to maximize the hardware. You can't design scalable systems around logical partitions of specific modules and sub-systems.
The same goes for trying to design it around a single type of concurrency pattern or concurrent-language idiom. There is no one-size-fits-all solution, there is no silver bullet. You can't constrain yourself to a single recipe. This is why I think developers saying things like "software transactional memory is the great panacea for everyone's scalability woes" are quite misinformed. Do you compose your data structures using a single type of algorithm? Do you try to use an associative hash-table with open-addressing to solve all of your container needs? Of course not, that's madness! So in order to build scalable systems, you need to understand all of the tools you have at your disposal, their benefits and their trade-offs, and apply them correctly.
Thus you need to dive into the deep end and build everything from the bottom-up with scalability as one of your prime goals. You need to look at ways of maximizing read-sharing, minimizing write-sharing, and coming up with good mechanisms for distributing/balancing work among threads, using your entire repertoire of concurrency tools and recipes.
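As one tiny, hypothetical illustration of "maximize read-sharing, minimize write-sharing": give each worker thread a disjoint output slot while letting every worker read the shared input freely, so no locks are needed at all. The function name and the trivial sum workload are invented for the sketch.

```cpp
// Distribute work among threads: all workers read the shared input
// (read-sharing is free), each worker writes only its own partial
// result (no write-sharing, hence no locks).
#include <thread>
#include <vector>

long long parallelSum(const std::vector<int>& values, int workers) {
    // One slot per worker; in a real engine you'd pad these to separate
    // cache lines to avoid false sharing between the slots.
    std::vector<long long> partial(workers, 0);
    std::vector<std::thread> pool;
    size_t chunk = (values.size() + workers - 1) / workers;
    for (int w = 0; w < workers; ++w) {
        pool.push_back(std::thread([&, w] {
            size_t begin = w * chunk;
            size_t end = begin + chunk < values.size() ? begin + chunk
                                                       : values.size();
            for (size_t i = begin; i < end; ++i)
                partial[w] += values[i]; // this thread's slot only
        }));
    }
    long long total = 0;
    for (int w = 0; w < workers; ++w) {
        pool[w].join();
        total += partial[w]; // cheap single-threaded reduction at the end
    }
    return total;
}
```

Static chunking like this is the simplest distribution scheme; work-stealing or a job queue would balance uneven workloads better, which is exactly the kind of trade-off you have to weigh per sub-system.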
But I digress. From the ToC of the book, I don't think it even covers the two-thread game logic + render model, and if it does, I'd be surprised if it did more than just gloss over it.
I've looked through the table of contents. It looks like it covers a variety of practical implementations for different subsets of what goes into building a game engine. But it focuses more on where the industry has been than on where things are going, and it looks like it tries to keep things pretty simple using off-the-shelf open-source libraries like SDL, Ogre, Bullet, etc. Not to say that this is bad, especially for someone who is relatively new to the subject. I just don't think it's indicative of where the industry is heading on the PC and in the next console generation, where you really need to think about thread scalability and parallelism. If you're looking at building a next-generation engine that will be able to maximize current and future hardware, this book probably won't be much help.