About e.s.

  1. What is normally done is to persist stored data in a transformable text format. The most common choice is XML with XSL. The beauty of XML is that you can serialize game state objects to and from XML.

  Rule Number 1: your game's save file schema/layout definition WILL change between different versions of the game. Why? Because you will want it to, to add cool stuff, and because you will learn a lot. So:

  1. Pick a save file format and optimize it for your game.
  2. As you are making your game, make save files.
  3. Add new features that you want to persist in saved files.
  4. Change your game's save file schema.
  5. Write a translation tool that upgrades a saved file from version .98a to .98b before you even release the game.

  This way, you already know that you can patch your game with a save file fix (there are a ton of exploits around getting through save file hashes, etc.), and you may want to ship such a fix in a version 1 patch anyway.

  XML/XSL is my favorite choice. You can digitally sign saved files and use hashes to reduce the risk of people hacking saved files, though no scheme is truly unhackable.
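  As a concrete illustration of the upgrade-tool idea (every name here is hypothetical, not from any particular engine), a schema version baked into the serialized save lets a loader migrate old files before reading them:

  ```cpp
  #include <cassert>
  #include <sstream>
  #include <string>

  // Hypothetical save-game state; field names are illustrative only.
  struct SaveGame {
      int schemaVersion = 1;
      int playerHp      = 100;
      int gold          = 0;   // field that did not exist in schema version 1
  };

  // Serialize to a small XML fragment, embedding the schema version so a
  // loader can decide whether the file needs upgrading before it is parsed.
  std::string ToXml(const SaveGame& s) {
      std::ostringstream out;
      out << "<save version=\"" << s.schemaVersion << "\">"
          << "<hp>" << s.playerHp << "</hp>"
          << "<gold>" << s.gold << "</gold>"
          << "</save>";
      return out.str();
  }

  // One upgrade step: a version-1 save has no gold field, so supply a
  // default and bump the version. Chains of such steps form the
  // ".98a to .98b" translation tool described above.
  SaveGame UpgradeV1ToV2(SaveGame s) {
      if (s.schemaVersion == 1) {
          s.gold = 0;            // sensible default for the new field
          s.schemaVersion = 2;
      }
      return s;
  }
  ```

  The point of the sketch is only the shape: version in the file, one small pure function per schema migration.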
  2. Most of the loss of performance is /not/ the draw calls, but a game engine's use of them.

  From an architectural point of view, the code hints that you may not be applying standard "Separation of Concerns" patterns (ensuring that methods/classes don't do more than one thing, but instead are each responsible for one specialized purpose). In other words, at this stage I believe everyone would agree that getting your engine to do what you want is more important than the performance.

  For example, you are using "renderfunc" and "rendermesh" as function names. Okay, I know this sounds a little out of left field, and nitpicky, but bear with me: prefer names like RenderScene -> RenderEachLayout -> RenderBufferSet -> RenderBufferPartitions, or whatever the rendering steps are in the context of your game engine's OWN pipeline.

  In a game engine, there is /massive/ importance on the efficient management of the associations between modeling entities, specifically:

  1. Scenes to Layouts
  2. Layouts to Buffers
  3. Buffers to Buffer Partitions, Shaders, etc.
  4. Model to: (Mesh, Behaviors, Textures, Shaders, etc.)
  5. Many Textures to Many Models and Meshes
  6. Many Meshes to Many Models
  7. and the list goes on and on.

  The GPU operations are /fast/. Managing memory, garbage collection, and in-memory collections that run to hundreds of megabytes is one of the biggest game performance hits, and it is where a lot of your optimization will need to occur. As horrible as it sounds, use placeholder shaders, textures, etc. at first (greyscale gradients and the like) to ensure your game engine's framework is working right. In other words: pick a feature or set of features that you want, program toward them, find out whether those features actually run slowly, and then performance-tune them.

  Stage 1 features:

  1. Rendering multiple instances of the same model definition (sidewalk squares).
  2. Rendering multiple instances of the same model, but with different shaders (stars, the same car in different colors, etc.).
  3. Rendering multiple types of models (tree, sidewalk square, etc.).
  4. Pre-defined animations for a subset of models (rotations, movement/translation in a circle pattern, movement along a Bezier, whatever).
  5. User-driven movement.
  6. etc.

  There are a /lot/ of design patterns that can be used to organize the complexity of the code so that the application will scale appropriately with the features you want. You may think that implementing facade, multiton, or factory patterns is a /lot/ of overhead for a small program, but in actuality it is not, since compilers optimize and inline what needs to be. Still, the point I am trying to make is that the use of design patterns, and the avoidance of anti-patterns, is what will always determine your game engine's performance story.

  Then there are fancy shmancy GPU render operations... See, the funny thing about fancy shmancy GPU operations is that /only if/ your architecture is sound in the first place can you apply them on a priority basis (say, ensuring that a fancy shmancy model never gets rendered more than once per frame, or maybe only once every two frames; there is a LOT of overrendering in games).

  Architectural techniques and optimizations are probably where you want to start when facing performance problems, and regardless, you will be able to do more with your engine anyway.
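  To make the naming nitpick concrete, here is a toy sketch (all types and names invented for illustration, not from the poster's engine) of a renderer decomposed into single-purpose steps that mirror the Scene -> Layout -> Buffer breakdown above:

  ```cpp
  #include <cassert>
  #include <vector>

  // Illustrative entity hierarchy: each level owns the next one down.
  struct BufferPartition { int drawCalls = 0; };
  struct BufferSet       { std::vector<BufferPartition> partitions; };
  struct Layout          { std::vector<BufferSet> bufferSets; };
  struct Scene           { std::vector<Layout> layouts; };

  // Each function does exactly one job and delegates downward, so a
  // profiler (or a counter, as here) can attribute cost to one level.
  int RenderBufferPartition(BufferPartition& p) {
      p.drawCalls++;          // stand-in for the actual GPU submission
      return 1;
  }

  int RenderBufferSet(BufferSet& b) {
      int n = 0;
      for (auto& p : b.partitions) n += RenderBufferPartition(p);
      return n;
  }

  int RenderLayout(Layout& l) {
      int n = 0;
      for (auto& b : l.bufferSets) n += RenderBufferSet(b);
      return n;
  }

  int RenderScene(Scene& s) {
      int n = 0;
      for (auto& l : s.layouts) n += RenderLayout(l);
      return n;
  }
  ```

  The payoff of this shape is exactly the one argued above: when something is slow, you can tell which association (scene-to-layout, layout-to-buffer, ...) is responsible instead of staring at one monolithic renderfunc.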
  3. What do you mean by stage? I was thinking that a vertex shader /was/ a stage. I am trying to bind 3 vertex shaders via the input layout (which apparently won't work), or at least set 3 shaders on the DeviceContext using VSSetShader... Can there only ever be one shader "pipeline" per DeviceContext?
  4. Thanks, I am just trying to find the most efficient way to have different pixel shaders used by the same type of mesh, but colored differently. For example, I would like square and triangle models in the vertex buffer, and for the pixel shaders to act differently based on instance data... Is there a good way to accomplish this?
  5. Thanks! What is the best way of appending to ModelRenderer::StaticSubResourceData.pSysMem = model->GetVertices(); ? Is it better to build a single array of ALL vertices and make this assignment, or better to use streams? Knowing that a vertex buffer can only be associated with one stride size is very helpful. Thanks again!
  6. If I have two vertex arrays, one for a square and one for a triangle, what is the best way to bind them to the vertex buffer? Is this possible with objects that have different numbers of vertices? Is this possible with objects that have different buffer element descriptions (one with float4 position data, and another with float3 plus texcoords, etc.)? I have about 100 static model templates that I would like to bind to the vertex buffer, and a few thousand instances associated with those models in an instance buffer. I can DrawInstanced/Indexed and point at the appropriate offset in the vertex buffer for reading (I think), so I am just trying to figure out the best way to get the vertices into the vertex buffer in the first place. Thanks! Currently, I am binding one shape (a triangle) to the subresource like this:
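  (The snippet from this post did not survive.) For the "single array of ALL vertices" approach asked about above, a hedged sketch of the CPU-side packing step might look like the following; `Vertex` and `PackModels` are invented names, and the packed array is what would end up in D3D11_SUBRESOURCE_DATA::pSysMem when the buffer is created:

  ```cpp
  #include <cassert>
  #include <cstddef>
  #include <vector>

  // Hypothetical vertex format shared by every model in this one buffer
  // (a single buffer slot has a single stride, as noted in the replies).
  struct Vertex { float x, y, z; };

  // Pack several models' vertices into one contiguous array and record
  // each model's base-vertex offset. A later DrawInstanced call can use
  // that offset as its StartVertexLocation to select the model template.
  std::vector<Vertex> PackModels(const std::vector<std::vector<Vertex>>& models,
                                 std::vector<std::size_t>& baseVertex) {
      std::vector<Vertex> packed;
      baseVertex.clear();
      for (const auto& m : models) {
          baseVertex.push_back(packed.size());   // where this model starts
          packed.insert(packed.end(), m.begin(), m.end());
      }
      return packed;
  }
  ```

  Models with genuinely different element descriptions (float4 position vs. float3 plus texcoords) could not share this array, since the stride differs; they would go in a separate buffer with their own input layout.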
  7. Just a few questions for clarification, if you don't mind (I also cleaned up my initial question): Are there any issues with passing in a null pointer? If I use multiple draw calls for models that require different shaders, do I just release the buffer, bind all the new vertex instance data, and draw? Or do I have to re-bind/re-set all of my shaders as well, since the input layouts will be different? DrawInstanced lets you effectively point at a different model template in one buffer, and at instance info for that model in another, to render the model plus instance data. Is it possible to have the vertex shader define multiple output types so that a different pixel shader could be called? Thanks for your help!
  8. Here is some code that I used to bind multiple buffers at once... Granted, your situation is a bit different with the constant buffer, but the syntax should be close. TBH, I prefer to push one buffer at a time (I set the constant buffer in one activity, then set the static vertex buffer data, then set the instance buffer data every frame or so). HTH
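  The code block from this post was lost in extraction. A minimal sketch of what binding two buffers in one call looks like in D3D11 (the struct layouts and variable names here are placeholders, not the poster's originals) would be:

  ```cpp
  #include <d3d11.h>

  // Placeholder element types; the real layouts are defined by your
  // D3D11_INPUT_ELEMENT_DESC array, with per-vertex data in slot 0 and
  // per-instance data in slot 1.
  struct Vertex       { float pos[3]; float uv[2]; };
  struct InstanceData { float world[16]; };

  // Bind a per-vertex buffer to slot 0 and a per-instance buffer to
  // slot 1 with a single IASetVertexBuffers call; each slot carries its
  // own stride and offset.
  void BindVertexAndInstanceBuffers(ID3D11DeviceContext* context,
                                    ID3D11Buffer* vertexBuffer,
                                    ID3D11Buffer* instanceBuffer)
  {
      ID3D11Buffer* buffers[2] = { vertexBuffer, instanceBuffer };
      UINT strides[2] = { sizeof(Vertex), sizeof(InstanceData) };
      UINT offsets[2] = { 0, 0 };
      context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
  }
  ```

  Constant buffers go through a separate call (VSSetConstantBuffers and friends), which is the difference the post alludes to.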
  9. Do the other shaders have to have the exact same byte size? How do you load the other shaders into the DeviceContext and associate them with specific models in the VertexBuffer?
  10. Hello! I have tried to phrase my question in multiple ways, mostly because I don't think I understand the complexity of this particular topic, so thanks for your patience! Yet another rephrase:

  1. Can I write vertex data to the VertexBuffer for two different objects (square, triangle) with either the same or different input layouts?
  2. Can they both use different vertex shaders, pixel shaders, and textures?
  3. Could I write a sort of shader selector in HLSL that reads a parameter from the input element and dynamically chooses a different pixel shader or texture?

  Related C++ Code: When I create an input layout, can I do this without specifying an actual vertex shader, or somehow specify more than one? When I set the VertexShader and PixelShader, how do I associate them with a particular model in my VertexBuffer? Is it possible to set more than one of each?
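  On the input-layout part of the question: D3D11's CreateInputLayout does require shader bytecode, but only to validate the input signature; any vertex shader whose input signature matches the element descriptions will do, and the resulting layout can then be reused with every such shader. A hedged sketch (element choices and names are illustrative):

  ```cpp
  #include <d3d11.h>

  // Create one input layout from the element descriptions, validated
  // against the bytecode of ANY vertex shader with a matching input
  // signature. The layout is not tied to that one shader object; it can
  // be set once and used while swapping compatible shaders via
  // VSSetShader between draw calls.
  HRESULT CreateSharedInputLayout(ID3D11Device* device,
                                  const void* vsBytecode,
                                  SIZE_T vsBytecodeSize,
                                  ID3D11InputLayout** layoutOut)
  {
      const D3D11_INPUT_ELEMENT_DESC elements[] = {
          { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
            D3D11_INPUT_PER_VERTEX_DATA, 0 },
          { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0,
            D3D11_APPEND_ALIGNED_ELEMENT,
            D3D11_INPUT_PER_VERTEX_DATA, 0 },
      };
      return device->CreateInputLayout(elements, 2,
                                       vsBytecode, vsBytecodeSize,
                                       layoutOut);
  }
  ```

  Associating a shader with a particular model is then done per draw call, not per buffer: set the shaders, issue the draw for that model's vertex range, set different shaders, draw the next range.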