

Member Since 16 Jan 2007
Last Active Oct 17 2014 10:17 AM

Posts I've Made

In Topic: DLL-Based Plugins

16 October 2014 - 07:22 PM

Yeah, they'd be native DLLs. As far as allocating memory goes, I won't be freeing it across module boundaries. When the Editor application loads the DLL, it'll have a pretty hands-off interaction with the plugin. All plugins will have Create() and Destroy() functions that'll be called immediately after loading the DLL and right before unloading it. It's up to the plugin's author to allocate data in Create() and release it in Destroy(); the Editor application won't ever do any of that directly.


The Editor application will only make function calls to the DLL, and won't access any data directly. As far as exception handling goes, here's a scenario I've thought of for updating the game DLL:

void Editor::Update(float elapsedTime)
{
    try {
        if(game != nullptr)
            game->Update(elapsedTime);
    } catch(const std::exception &e) {
        // keep the Editor alive if the game DLL throws
    }
}


Assuming my "game" pointer is pointing to the Game subclass in my DLL and is valid, it should run, hopefully! I put the try-catch block there in case there's an exception, because I don't want the Editor to crash if the game DLL throws. Would this be considered exception handling across binaries? Also, my engine's code might be a separate DLL as well.


If this is a pretty hairy way to approach a plugin system, are there any good alternatives? I've thought about embedding Lua in my Editor for plugins, but I'd want C++ for testing the game.

In Topic: 1 GPU or 2 GPUs?

14 October 2014 - 11:22 PM

Sorry for the late post. I'm getting into 3D modeling, and I'd like to do some offline work as well as real-time modeling for games. I've found that my current graphics card (GT 610) trudges along in even basic games right now. One thing I'd like to do is work on animated shorts in Blender, and being able to render faster, without it taking hours, would be nice since I'd have one of NVIDIA's most powerful consumer-level GPUs running Blender's CUDA-enabled renderer.


I also thought that dual GPUs would be a headache. I just wanted to get others' opinions before I went for it. I might start off with just a single 8GB stick of memory since that'll be upgradable to 32GB in the next few years.

In Topic: OpenGL 3.0+ And VAOs

07 September 2014 - 02:57 PM

Vincent_M, on 31 Aug 2014 - 3:21 PM, said:
This also brings up another question I was wondering: do VAOs provide more efficiency, or are they there for convenience for programmers?
VAOs are purely a software feature (as far as I've seen), that is, the GPU doesn't have any knowledge of them. They're supposed to cut down on time spent validating the vertex attributes, switching buffers, but YMMV. Here's a good write up on when benefits can be seen or not seen: http://www.openglsuperbible.com/2013/12/09/vertex-array-performance/

I did see that post, and it looks like there are efficiency benefits to VAOs, but if it's purely software, then I find it kind of unnecessary outside of a VAO being required in the core profile. My own state manager was a wrapper for whenever I switched FBOs, shader programs, VBOs, textures, glEnable/glDisable, and enabled/disabled vertex arrays. The vertex array portion worked like this: whenever I swapped my shader, and my GraphicsContext class recognized it as a different shader than the one currently in use, it'd enable/disable only the vertex arrays that differed from the last bound shader, since GraphicsContext keeps its own client-side set of bools tracking which attribute arrays are currently active.


For example, let's say my currently-bound shader only requires one vertex attribute array, so only array 0 would be enabled. Later in the frame, I need to switch to my lit-and-textured shader that takes three attribute arrays; it'd enable arrays 1 and 2 only, since 0 was already enabled. Then, when the next frame is drawn and I need to go back to the single-attribute shader, it'll swap and disable attribute arrays 1 and 2. This is simple for the user drawing something because all they have to do is call GraphicsContext::UseProgram(Shader *shader) and pass in the shader object they require. Now, I'm not sure how efficient the driver's software implementation is, but if your objects are grouped by shader, then by state, etc., you're really not calling glEnableVertexAttribArray()/glDisableVertexAttribArray() very often. glVertexAttribPointer() still gets called per actual shader swap, but there are ways of further optimizing that using the massive VBO approach mentioned above, also referenced in Graham Sellers' post.


Look at the AZDO presentation (google) to see which order you should render things in, then figure out which features make sense for you and go from there.
Short of using any synchronizing functions (such as glGet*) that stalls the entire pipeline, you're going to be fine. AZDO requires GL 4.4 btw. I think.

Ironically, I haven't needed any glGet* functions outside of glGetString(GL_VERSION) at startup to print the implementation string for logging purposes. The guys over at Valve mentioned in their video about porting the Source engine from DirectX to OpenGL that they use glGet* for nearly every state query they need, as they believe all state systems deviate, at least slightly. I can see how this is true in some cases of OpenGL state, but for things such as glEnable/glDisable, writing a wrapper for setting/getting has always worked for me. Of course, my engine only assumes single-context rendering...


But yeah, GraphicsContext::SetGLState(unsigned int state, bool enable) -> pass in anything, and internally it'll check whether that state's value is already in an STL vector. If enabling and the state isn't in the vector, it calls glEnable and adds it; if disabling and the state is in the vector, it removes it and calls glDisable. The method even returns a bool indicating whether it actually changed state. Same with GraphicsContext::UseProgram(Shader *shader) and GraphicsContext::SetActiveTexture(int target, Texture *texture), and I have one for FBOs, etc.


This cut down quite a bit on gl* calls in general on mobile devices using OpenGL ES 2.0, and I'd assume it'll only do more good in desktop environments with instancing.

In Topic: What does a material contain

02 September 2014 - 11:30 AM

I've been looking at materials and shaders as two sides of the same coin lately. The shader is the algorithm by which your graphics are processed, while the material is the configuration that sets up your shader's uniforms. Of course, you'll also see additional input coming into the shader in the form of what I call scene uniforms, such as your model-view-projection matrices, lighting data, camera data, etc., and that should be kept separate from your material system. It wouldn't be a bad idea to provide status flags in your shader class that'll inject source code into your shader providing the names of those uniforms when lighting, depth, and/or alpha are enabled, and so on, but that's outside the scope of your question lol.


In this concept, I'm assuming a 1-to-1 relationship between shaders and materials, so your materials should always match up to your shader. The way I'd ensure a correct match-up is to have the material hold a pointer to your loaded shader and a dictionary (in C++, an STL map) that holds the uniform's handle as the key and whatever generic data you like as the value. Whenever that shader reference is set, the dictionary is cleared and re-populated with whatever uniforms were declared in your material block in the shader. Your shader could cache all its uniform locations ahead of time so that you don't have to run a series of glGetUniformLocation() calls every time you assign a material to a loaded shader. ;) On top of that, since remembering arbitrary shader handles is impractical, your Material class could provide methods to get handles by the name referenced in the shader:


GLuint Material::GetProperty(string name);


That would return the handle to you based on string name where you could cache the uniform to get around string look-ups. Setting material data would be similar to how HappyCoder's doing it, except I'd provide a more generic route which would allow you to provide not just basic datatypes, but whole, complex structures built from those basic datatypes as you can do in GLSL like so:


Material::SetProperty(int handle, IMaterialProperty *property); // set property by handle (fast, inconvenient)

Material::SetProperty(string name, IMaterialProperty *property); // set property by string look-up (slow, convenient)


Assume that IMaterialProperty is probably an empty abstract class (C++) or interface (C#). It could even just be a void pointer (C++) or object (C#). Anyway, my current setup is just like HappyCoder's, where I have an upload method for each type of vector, matrix, and color format, as I'm still coming from OpenGL ES 2.0/OpenGL 2.1. Once I wrap my head around OpenGL 4.x better, I'll have a more efficient system in place. Hopefully I'll be able to upload whole blocks of uniforms, such as skeletal data, to the shader with a single gl* API call instead of three UploadVector4() calls per bone that then have to be converted to mat4s in the vertex shader.


Now, that doesn't mean your mesh has to be limited to just one material. You could provide multiple materials for multiple render passes. For example, you'd have a "regular" material for your normal passes, but then maybe a glow material, with a low-res alpha map and color, for the glow pass.

In Topic: COLLADA vs FBX... Are They Worthwhile for Generic Model Formats?

02 September 2014 - 10:57 AM

I'm excited to try it out. In the meantime, I set up the FBX SDK and began exporting models from Blender (FBX 7.4). Surprisingly, the SDK, with its constant API changes, still works. By the way, if I were to write a community-driven, open-source engine utilizing the FBX SDK, would I be able to distribute my engine? I understand that I wouldn't be able to distribute Autodesk's own SDK along with it, but if I were to distribute the binaries built on my end, would I have any issues there?


Btw Eric, I checked out your C4 Engine back in 2011, and it's impressive! It inspires me to become better at what I do.