GDC2008 day 1

The first day of GDC 2008 is done, and I've finally found a few minutes to post an update here. This is not a full article, as those will be posted the week after GDC.

This was one of two tutorial days. I spent most of the day in the "Advanced Visual Effects with Direct3D" tutorial. I've attended the same-named session at past GDCs, but there is rarely much duplication, thanks to the rapidly evolving landscape of graphics hardware. Today was much the same. There was a D3D10+ bent, and it is clear that these speakers see D3D10 and its successors becoming the API of choice for 3D rendering in games on the Windows platform. Prototypes of algorithms that might land in D3D11, and eventually in hardware, are already in development and testing.

The first section of the tutorial focused on D3D10 performance. As with all 3D APIs, there are myriad places where bottlenecks can hide, and with version 10 that list expands to include, roughly: input assembly, vertex shader, geometry shader, stream output, and pixel shader. The core, high-level message seemed to be that a naive port of your engine from D3D9 will not yield maximum D3D10 performance, and that you should use the tools provided by AMD (GPUPerfStudio) and nVidia (NVPerfHUD) to find and eliminate the bottlenecks.

The next two sections discussed hardware tessellation of meshes, with some cool demos. Various GPU vendors have implemented vendor-specific hardware tessellation techniques in the past, and these never became very popular. My feeling is that it has a fighting chance now. There certainly are compelling reasons, including a reduction in memory bandwidth between the CPU and GPU, and the ability to do more complex animation/physics/etc. on the coarse pre-tessellation mesh and then generate detail on the GPU during tessellation. A key message was that the geometry shader is not the best mechanism for implementing tessellation, both because the stream output path is not especially fast and because of the inherent limits on the amount of data a GS can emit.

Up next, a section on multi-GPU (MGPU) development. The section discussed various approaches to distributing the rendering load across multiple GPUs, then focused on pitfalls to watch out for when coding for MGPU. For example, peer-to-peer copies of buffers from one GPU to the other (which may, depending on the implementation, be routed through the CPU) can eliminate the speedup gained from MGPU if done improperly.

The next section discussed DX10.1 and the future of DX. I won't enumerate the new features of DX10.1 here, but plan to briefly discuss them in a post-GDC article. I will say that one of the demos included AMD-implemented (presumably GPU-based) physics. No details were given about the physics implementation.

Alex Vlachos of Valve discussed various post-processing effects as implemented for The Orange Box. This was the only talk that focused on D3D9 rather than D3D10. The talk covered dealing with differences in sRGB gamma correction between the PC and Xbox 360, their approach to tone mapping for HDR rendering (and why), and their approach to implementing motion blur.

Next was a rapidly presented overview of several advanced soft-shadow-mapping techniques, including the relatively new Percentage Closer Soft Shadowing (PCSS) and Exponential Shadow Maps (ESM), and a technique for backprojecting texels.

The final section discussed the future of Direct3D, including current feature requests and some possible directions. There were lots of requests for better integration with art tools for things like subdivision surfaces, and interest in higher-quality texture compression with support for floating-point textures. Some experiments are currently underway to discover potential improvements in image-processing performance. The discussion ended with the benefits and pitfalls of making GPUs more general purpose versus keeping them heavily focused on graphics and its inherent massive parallelism.

Apart from that, I spent some time visiting with gamedev.net staffers and contributors, a great bunch. Went to dinner at a Thai restaurant with a handful of gamedev'ers, where the conversation ranged over possible improvements to gamedev, whether or not game design is easy, alternative languages (Haskell, Smalltalk, Python), and indie vs. mainstream games. Good stuff!
Recommended Comments

Was there much comment on the fact that Vista uptake is slow, and therefore DX10 uptake will also be slow? Or is the PC market not really what they were talking about?

No real comment at all on uptake rates. Just, it seemed, strong encouragement of PC/Windows game developers to move to Vista/DX10.
