
Modernizing my GL based renderer.


bluntman    255

So I came back to my main project after a couple of years away, and GL has changed quite a lot! I have familiarized myself with the new 4.4 specs for the API and GLSL, and now I'm left with questions about best practices.

 

My old engine used interleaved vertex arrays to provide vertex data, which still appears to be possible via glVertexAttribPointer's stride parameter, but it occurred to me that it might be better from an engine-design standpoint to use separate buffers. These buffers could be dynamically bound to vertex shader inputs based on interrogation of the shader program, i.e. use consistent naming of vertex shader inputs (e.g. 'vertex', 'normal', 'uv', etc.) and map these to separate attribute buffers in my geometry objects. This would essentially make my vertex formats data-driven by the shader, and would allow easy detection of mismatches between vertex data and shader requirements. Good idea, or overkill (or just a total misapprehension)?
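Roughly what I have in mind, as a sketch only: the GL loader is assumed to be initialized already, and lookupBuffer is a stand-in for however my geometry objects map an attribute name to its VBO.

#include <functional>
#include <string>

// Assumes a GL 4.x loader (GLEW, gl3w, ...) has already been initialized.

// Map a shader attribute type to its float component count.
static GLint componentCount(GLenum type)
{
    switch (type)
    {
        case GL_FLOAT:      return 1;
        case GL_FLOAT_VEC2: return 2;
        case GL_FLOAT_VEC3: return 3;
        case GL_FLOAT_VEC4: return 4;
        default:            return 4; // extend for int/double attribute types as needed
    }
}

// lookupBuffer maps an attribute name ("vertex", "normal", "uv", ...) to the
// VBO holding that attribute's tightly packed data, or 0 if the mesh lacks it.
void bindAttributesFromProgram(GLuint program,
                               const std::function<GLuint(const std::string&)>& lookupBuffer)
{
    GLint attribCount = 0;
    glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &attribCount);

    for (GLint i = 0; i < attribCount; ++i)
    {
        char name[128];
        GLint arraySize = 0;
        GLenum type = 0;
        glGetActiveAttrib(program, i, sizeof(name), nullptr, &arraySize, &type, name);

        GLint location = glGetAttribLocation(program, name);
        GLuint vbo = lookupBuffer(name);
        if (location < 0 || vbo == 0)
            continue; // mismatch between shader inputs and mesh data: report it here

        // One tightly packed buffer per attribute, so stride and offset are both 0.
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glVertexAttribPointer(location, componentCount(type), GL_FLOAT, GL_FALSE, 0, nullptr);
        glEnableVertexAttribArray(location);
    }
}

The same introspection pass is where I'd report missing shader inputs or unused mesh buffers, which is the mismatch detection I mentioned.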

 

The second question I have is regarding buffer-backed uniform blocks. I want to use the unsized-array feature (the last member of a block can be an unsized array, with the size defined by your API call) for light specifications, material specifications matched against material IDs (my renderer uses deferred lighting), and cascaded shadow frustum matrices. Is this an appropriate use, or is there a more canonical method?
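For concreteness, here is the shape of what I'm picturing for the light list. As far as I can tell the runtime-sized last member is specifically a shader storage block feature (GL 4.3+) rather than something a plain uniform block allows, so that's what this sketch uses; the Light layout is just an example.

#include <vector>

// GLSL side (in the lighting shader): a storage block whose last member is
// unsized, so the light count is simply whatever the bound buffer holds.
//
//   struct Light { vec4 positionRadius; vec4 color; };
//   layout(std430, binding = 0) buffer LightBuffer
//   {
//       Light lights[];          // lights.length() gives the count
//   };

// Matching CPU-side layout: two vec4-sized members, so std430 packing lines up.
struct Light
{
    float positionRadius[4];   // xyz = position, w = radius
    float color[4];
};

GLuint uploadLights(const std::vector<Light>& lights)
{
    GLuint ssbo = 0;
    glGenBuffers(1, &ssbo);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER,
                 GLsizeiptr(lights.size() * sizeof(Light)),
                 lights.data(), GL_DYNAMIC_DRAW);

    // Attach to binding point 0, matching "binding = 0" in the block above.
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
    return ssbo;
}

The material table and the cascade matrices would get the same treatment, each on its own binding point.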

 

My head is buzzing with new ideas and I haven't even got to the tessellation stages yet (something about awesome water)!

bluntman    255

Okay, I will rephrase: where can I find the intended usage patterns for the various new features? The spec explains what the features are and their syntax, but not (usually) their rationale or the specific problems they were intended to solve...

Promit    13246

Regardless of which API or version you use, there will always be a pending collision between engine design, driver design, and underlying hardware design. I can tell you that at one point in time, the NVIDIA driver didn't really support separate vertex streams (in most or many cases?) and would simply interleave your attributes itself before handing them to the hardware. Modern advice is to use one buffer for each block of vertex attributes that are updated together at the same frequency.
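As a rough illustration of that grouping with the newer separate attrib-format API (GL 4.3+): staticVBO, dynamicVBO, and the attribute locations are placeholders, and a VAO is assumed to be bound.

// Binding slot 0: static data written once (interleaved position + normal).
glBindVertexBuffer(0, staticVBO, 0, sizeof(float) * 6);
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);                 // position
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, sizeof(float) * 3); // normal
glVertexAttribBinding(0, 0);
glVertexAttribBinding(1, 0);

// Binding slot 1: data rewritten every frame (e.g. streamed UVs), so only
// this buffer ever needs to be orphaned/re-uploaded.
glBindVertexBuffer(1, dynamicVBO, 0, sizeof(float) * 2);
glVertexAttribFormat(2, 2, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(2, 1);

glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glEnableVertexAttribArray(2);

Splitting along that line means the per-frame upload touches only the small dynamic buffer, while the format and binding state stay fixed in the VAO.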

 

The trouble is, there are hundreds of similar little kernels of advice that are hardware-specific, architecture-specific, and slowly slide and mutate over time. Keeping track of it all is maddening. And to top it off, best practices for GL are not always optimal. I've chosen to stick to the recommendations in the spec and wiki for the most part, and to experiment with specific things where I think special tricks might help. There's no good answer, though. NV and AMD will sit down with major game companies and help them fine-tune, because they're the only ones who really know.

Edited by Promit
