
Vincent_M

Member Since 16 Jan 2007

Posts I've Made

In Topic: OpenGL 3.0+ And VAOs

07 September 2014 - 02:57 PM


 
Vincent_M, on 31 Aug 2014 - 3:21 PM, said:
This also brings up another question I was wondering: do VAOs provide more efficiency, or are they there for convenience for programmers?
VAOs are purely a software feature (as far as I've seen), that is, the GPU doesn't have any knowledge of them. They're supposed to cut down on time spent validating vertex attributes and switching buffers, but YMMV. Here's a good write-up on when benefits can and can't be seen: http://www.openglsuperbible.com/2013/12/09/vertex-array-performance/

I did see that post, and it looks like there are efficiency benefits to VAOs, but if they're purely a software feature, then they seem unnecessary outside of being required by the core profile. My own state manager was a wrapper around switching FBOs, shader programs, VBOs, textures, glEnable/glDisable, and enabling/disabling vertex attribute arrays. The vertex array portion worked like this: whenever I swapped shaders, and my GraphicsContext class recognized the new shader as different from the one currently in use, it would enable/disable only the vertex arrays that differed from the last bound shader, since GraphicsContext keeps its own client-side set of bools tracking which attribute arrays are currently active.

 

For example, say my currently-bound shader only requires one vertex attribute array, so only array 0 would be enabled. Later in the frame I need to activate my lit-and-textured shader, which takes three attribute arrays; the context would enable arrays 1 and 2 only, since 0 was already enabled. Then, when the next frame goes back to the single-attribute shader, it swaps and disables attribute arrays 1 and 2. This is transparent to the user drawing something, because all they have to do is call GraphicsContext::UseProgram(Shader *shader) and pass in the shader object they require. Now, I'm not sure how efficient the driver's implementation is, but if your objects are grouped by shader, then by state, etc, you really aren't calling glEnableVertexAttribArray()/glDisableVertexAttribArray() very often! glVertexAttribPointer() still gets called per actual shader swap, but there are ways of further optimizing that using the massive VBO mentioned above, also referenced in Graham Sellers' post.

 


Look at the AZDO presentation (google) to see which order you should render things in, then figure out which features make sense for you and go from there.
Short of using any synchronizing functions (such as glGet*) that stall the entire pipeline, you're going to be fine. AZDO requires GL 4.4 btw. I think.

Ironically, I haven't needed any glGet* functions outside of glGetString(GL_VERSION) at startup to log the implementation string. In their video about porting the Source engine from DirectX to OpenGL, the Valve guys mentioned that they use glGet* for nearly every state query they need, because they believe all state systems deviate, at least slightly. I can see how that's true in some cases of OpenGL state, but for things such as glEnable/glDisable, writing a wrapper for setting/getting has always worked for me. Of course, my engine only assumes single-context rendering...

 

But yeah: GraphicsContext::SetGLState(unsigned int state, bool enable) -> pass in any state enum, and internally it checks whether that state's value is already in an STL vector. If enabling and the state isn't in the vector, it calls glEnable and adds it to the vector of states. If disabling and the state is in the vector, it removes it from the vector and calls glDisable. The method even returns a bool indicating whether the state actually changed. Same idea with GraphicsContext::UseProgram(Shader *shader) and GraphicsContext::SetActiveTexture(int target, Texture *texture); I have one for FBOs, etc.

 

This cut down quite a bit on gl* calls in general on mobile devices using OpenGL ES 2.0, and I'd assume it helps even more in desktop environments with instancing.


In Topic: What does a material contain

02 September 2014 - 11:30 AM

I've been looking at materials and shaders as two sides of the same coin lately. The shader is the algorithm that processes your graphics, while the material is the configuration that sets up that shader's uniforms. Of course, you'll also see additional input coming into the shader in the form of what I call scene uniforms, such as your model-view-projection (or pvm) matrices, lighting data, camera data, etc, and those should be kept separate from your material system. It wouldn't be a bad idea to provide status flags in your shader class that inject source code declaring those uniforms when lighting, depth, and/or alpha are enabled, and so on, but that's outside the scope of your question lol.

 

In this concept, I'm assuming a 1-to-1 relationship between shaders and materials, and that your materials should always match up to your shader. The way I'd ensure a correct match-up is to let the material hold a pointer to your loaded shader, plus a dictionary (in C++ we call them STL maps) keyed on the uniform's handle, with whatever generic data you like as the value. Whenever that shader reference is set, the dictionary is cleared and re-populated with whatever uniforms were declared in your shader's material block. Your shader could cache all of its uniform locations ahead of time so that you don't have to run a series of glGetUniformLocation() calls every time you assign a material to a loaded shader. ;) On top of that, since remembering arbitrary shader handles is impractical, your Material class could provide a method to get handles by the names referenced in the shader:

 

GLuint Material::GetProperty(string name);

 

That would return the handle based on its string name, and you could cache the handle to avoid further string look-ups. Setting material data would be similar to how HappyCoder's doing it, except I'd provide a more generic route that lets you pass not just basic datatypes but whole, complex structures built from those basic datatypes, as you can in GLSL, like so:

 

Material::SetProperty(int handle, IMaterialProperty *property); // set property by handle (fast, inconvenient)

Material::SetProperty(string name, IMaterialProperty *property); // set property by string look-up (slow, convenient)

 

Assume that IMaterialProperty is a (probably empty) abstract class (C++) or interface (C#). It could even just be a void pointer (C++) or object (C#). Anyway, my current setup is just like HappyCoder's, with an upload method for each vector, matrix, and color type, since I'm still coming from OpenGL ES 2.0/OpenGL 2.1. Once I wrap my head around OpenGL 4.x better, I'll have a more efficient system in place. Hopefully I'll be able to upload whole blocks of uniforms, such as skeletal data, to the shader with a single gl* API call instead of three UploadVector4() calls per bone that I then have to reassemble into mat4s in the vertex shader.

 

Now, that doesn't mean your mesh has to be limited to just one material. You could provide multiple materials for multiple render passes. For example, you'd have a "regular" material for all your blur passes, but then maybe a "glow" material with a low-res alpha map and color for the glow pass.


In Topic: COLLADA vs FBX... Are They Worthwhile for Generic Model Formats?

02 September 2014 - 10:57 AM

I'm excited to try it out. In the meantime, I set up the FBX SDK and began exporting models from Blender (FBX 7.4). Surprisingly, the SDK, with its constant API changes, still works. Btw, if I were to write a community-driven, open-source engine utilizing the FBX SDK, would I be able to distribute my engine? I understand that I wouldn't be able to distribute Autodesk's own SDK along with it, but if I were to distribute the binaries built on my end, would I have any issues there?

 

Btw Eric, I checked out your C4 Engine back in 2011, and it's impressive! It inspires me to become better at what I do.


In Topic: Sharing OpenGL Buffers Across QGLWidgets

01 September 2014 - 09:41 PM

I was looking through my posts and realized I hadn't responded back to you. I read your post at the gym a few weeks ago when you posted it, tried it, and it worked. You solved my issue, and it's fixed a few things. Sorry for leaving you hanging on the results. Thank you!


In Topic: Would you Still Play Nintendo 64-Quality Games?

01 September 2014 - 07:54 PM

What do you mean by "N64-quality"? I wouldn't play old console games simply because I don't like the blocky pixelated look, but I certainly would play lots of those games if they were rendering at a decent resolution and no other changes beyond the minimum to make the art work.

 

For instance MarioKart on the GameCube is awesome, and not because of the graphics.

That's actually what I was referring to. We played Zelda on our Nintendo 64, which was hooked up to our HDTV, and its output resolution was so low that it looked really blocky on our TV. Now, when playing the game through an emulator, where you can actually render the scene to a higher-resolution framebuffer, it looks really nice --even without higher-res textures. In fact, people sometimes produce higher-res textures for those games, as I've seen with Banjo-Kazooie. Anyway, speaking of hi-res textures, they can make a huge difference as well: http://siphil.blogspot.com/2011/03/nintendo-64-in-high-resolution.html

