
Jemme

Member
  • Content count

    31
  • Joined

  • Last visited

Community Reputation

155 Neutral

About Jemme

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Programming


  1. Do you mean like this: Reflection? Lumberyard had a talk about reflection, I think; I'd have to try to find it again.
  2. The storage method for component data is pretty much a free choice; XML, JSON and Lua are all fine, tbh. I do have a custom BinaryIO class in the low-level engine to read and write binary, though I haven't set up endian flipping or anything. A previous project of mine had an asset pipeline tool that dynamically displayed the "import" templates when you selected assets of different types, so you could change their settings, like compile as DXT1 etc. It used XML for that. Serializing and deserializing the component data in binary is a possible route, but for now I will probably do it in Lua or JSON so I can quickly edit things until I get an editor and automate it. Like Kylotan said, I need to just get on with it and refactor and improve things as I go. That's probably the main thing a "junior" needs to learn; otherwise you get so caught up in the thousands of different possibilities and limitations of every possible implementation that you never start.
  3. I'm building everything from scratch: all the low-level code, DX11, GL, etc. Many years ago I used XNA and SFML for specific projects, like building an RPG, but now I'm doing everything myself and I want it to be reusable. I'm going the route of having a game logic system which initialises all the logic systems, using Lua for behaviour scripts and JSON for component initialisation on entities.
  4. Are you storing all your data on the RenderDevice? For example, let's say I have a Mesh which needs a vertex buffer and an index buffer. Are you just storing them as handles, like a VertexBufferHandle inside the Mesh, but creating the actual buffers on the device, with something like this function in Mesh:

         void Init(RenderDevice* device, char* data)
         {
             // Load data into some internal representation like MeshData
             VertexBufferDesc desc; // agnostic desc, NOT GL or DX
             // fill in desc using MeshData
             device->CreateBuffer(desc, &vertexBufferHandle);
         }

     Then when you submit your DrawItem, you're just passing in all the handles for the vertex, index and constant buffers for the RenderDevice to fetch and set from its pools? You would think the fetch via a handle would be slower than just a pointer chase, but could the cache behaviour be better?
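A minimal sketch of the handle-based approach described in the post above. All names here (RenderDevice, VertexBufferHandle, Mesh::Init) are hypothetical illustrations, not a known engine's API; the device owns the pooled storage and hands back an opaque index.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Opaque handle: an index into a pool owned by the device,
// so the Mesh never holds raw GL/DX resource pointers.
using VertexBufferHandle = std::uint32_t;

struct VertexBufferDesc {      // API-agnostic description
    const void* data = nullptr;
    std::size_t sizeBytes = 0;
};

class RenderDevice {
public:
    // Creates a buffer in the device's pool and returns its index.
    VertexBufferHandle CreateBuffer(const VertexBufferDesc& desc) {
        const auto* bytes = static_cast<const std::uint8_t*>(desc.data);
        buffers_.emplace_back(bytes, bytes + desc.sizeBytes);
        return static_cast<VertexBufferHandle>(buffers_.size() - 1);
    }
    // Fetch by handle at submit time: one indirection, data stays pooled.
    const std::vector<std::uint8_t>& Fetch(VertexBufferHandle h) const {
        return buffers_[h];
    }
private:
    std::vector<std::vector<std::uint8_t>> buffers_;
};

struct Mesh {
    VertexBufferHandle vertexBuffer = 0;
    void Init(RenderDevice& device, const void* data, std::size_t size) {
        VertexBufferDesc desc{data, size};
        vertexBuffer = device.CreateBuffer(desc);
    }
};
```

One design note: because the handle is just an index, the device is free to reallocate or compact its pool without invalidating anything the Mesh stores, which is the usual argument for handles over raw pointers despite the extra lookup.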
  5. There isn't really one good way to implement the game logic / components that interact with the underlying engine. These are the common approaches, which I have used in different projects, with their pros and cons.

     OOP inheritance
       Pros: makes sense to most programmers, easy to explain; works for simple systems.
       Cons: deadly diamond; monolithic class hierarchies; multiple inheritance; deep, wide hierarchies; the bubble-up effect.

     Entity Component
       Pros: easy to implement and explain; works for most systems; allows many entities of different types.
       Cons: not cache friendly (pointer chasing); non-sequential component updates; virtual function / vtable cache misses; spaghetti pointer chasing in the update loop.

     Entity Component System
       Pros: easy to implement and explain; works for most systems; allows entities of different types; data-oriented design (contiguous storage); no vtables; IDs allow reallocation and avoid dangling pointers.
       Cons: the claimed cache benefits fall away when you have to chase a pointer to a GameObject to get a Transform for a MeshRenderer or physics system to act upon. Using IDs creates dependencies between systems, e.g. RenderSystem.GetComponent(entity.ID), so now a script or other system suddenly has to know about every other system it needs data from, as opposed to just having a pointer to the data. There is still spaghetti pointer chasing in the update loop.

     Those are the common approaches you see being talked about. Another method is to use a message system, which makes sense; however, should you really have a message for every possible thing that can happen, with every system having to know about every possible event and just ignoring most of them? You could store a position with your MeshRenderer: if a message is received to update the position you can just update it, so it stays cached with the MeshRenderer, and the physics system can cache its own version. Yes, it's duplication of data, but it stops the pointer chasing. So basically every system/method I have encountered improves something at the cost of making something else less efficient.
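To make the ECS trade-off discussed above concrete, here is a minimal sketch (all names hypothetical) of the usual dense-array-plus-ID-map layout: updates walk contiguous memory, but any other system must go through the ID lookup instead of holding a direct pointer.

```cpp
#include <cstddef>
#include <cstdint>
#include <unordered_map>
#include <vector>

using EntityID = std::uint32_t;

struct Transform { float x = 0, y = 0, z = 0; };

class TransformSystem {
public:
    Transform& Add(EntityID e) {
        indexOf_[e] = transforms_.size();
        transforms_.push_back({});
        return transforms_.back();
    }
    // The lookup the post complains about: other systems must call
    // GetComponent(id) rather than keeping a pointer to the data.
    Transform* GetComponent(EntityID e) {
        auto it = indexOf_.find(e);
        return it == indexOf_.end() ? nullptr : &transforms_[it->second];
    }
    // The cache-friendly part: the update walks one contiguous array.
    void Update(float dx) {
        for (auto& t : transforms_) t.x += dx;
    }
private:
    std::vector<Transform> transforms_;                  // dense storage
    std::unordered_map<EntityID, std::size_t> indexOf_;  // sparse ID map
};
```

Note that because `Add` can reallocate the vector, references returned earlier may be invalidated; that is exactly why IDs (re-resolved each time) are safer than cached pointers here.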
  6. I guess it's more about where game logic should live. In XNA/SFML-style games, the main Game class becomes bloated with a bunch of logic systems in most examples. Because the framework is custom, you're initialising all the engine systems and then the logic systems in the same place. It feels like the engine should be a self-contained system, but both the game logic and the engine need to be hooked into a main loop; the logic needs to know about the entities and so does the engine, which needs to sync up the engine-specific components like MeshInstance. The World or Scene stores lists of entities which have IDs and handles to the engine-specific components, but the World/Scene shouldn't really have the logic, because technically the Scene/World is part of the engine, in the sense that a small engine should provide the ability to create its entities and sub-system components independent of whatever game it is. For example, a lot of games will use meshes, so the Scene should let you make an entity with a transform and a mesh to render; whatever logic you want to apply shouldn't factor into the World/Scene at all. So this leads to the option of having a separate GameLogic system that stores all the game-specific systems within it, like the menu system, enemy spawner etc., which all hook back into the world manager using the entity IDs. Other engines have a BehaviourComponent that they attach and update on each entity, like Unity's MonoBehaviour, but logic in components can sometimes be seen negatively. There is industry wisdom, though, that could weigh the pros and cons of where logic is placed. I'm going to try the GameLogic method and see how that goes. Thanks
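The "separate GameLogic system" idea above could be sketched like this (all names hypothetical): game-specific systems live behind one interface, registered into a container the engine ticks without knowing anything about menus or enemy spawners.

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

// Interface every game-specific system implements.
class ILogicSystem {
public:
    virtual ~ILogicSystem() = default;
    virtual void Update(float dt) = 0;
};

// Container the engine owns; it ticks systems without knowing their types.
class GameLogic {
public:
    void Register(std::unique_ptr<ILogicSystem> system) {
        systems_.push_back(std::move(system));
    }
    void Update(float dt) {
        for (auto& s : systems_) s->Update(dt);
    }
    std::size_t Count() const { return systems_.size(); }
private:
    std::vector<std::unique_ptr<ILogicSystem>> systems_;
};

// Example game-specific system; the engine never sees this type directly.
class EnemySpawner : public ILogicSystem {
public:
    int spawned = 0;
    void Update(float) override { ++spawned; }
};
```

In a real build the systems would also receive the world manager (or entity IDs) through their constructors, keeping the coupling one-directional: game logic knows the engine, never the reverse.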
  7. Hello, I have built games in the past in Unity, XNA, the Android SDK etc., but it's always been quite specific. When building a small game engine, or more accurately a sub-set of code that is reusable across projects, the tendency is to structure the project towards composition of objects that are more "generic", rather than towards inheritance-based models. For the engine-specific sub-systems, we have components such as a MeshInstance/Renderer that contains a handle to a Mesh in a MeshLibrary and a handle to a Material from the material library, which in turn holds PSO data etc. We can also have components such as SoundSource, which are managed by an AudioSystem and contain the data specifying what sound to play, at what volume, etc. These systems can be set up and shut down from within the Engine/Game class, such as:

         void Engine::Run()
         {
             if (Initialize())
             {
                 // Main loop: call the systems
                 // Dispatch events
             }
             ShutDown();
         }

         bool Engine::Initialize()
         {
             // Init all sub-systems
             FileSystem.Init();
             RenderSystem.Init();
             PhysicsSystem.Init();
             AudioSystem.Init();
             WorldManager.Init(); // creates entities
             return true;
         }

         void Engine::ShutDown()
         {
             // Shut down all sub-systems in reverse order
             WorldManager.ShutDown();
             AudioSystem.ShutDown();
             PhysicsSystem.ShutDown();
             RenderSystem.ShutDown();
             FileSystem.ShutDown();
         }

     But where does the logic go? In smaller games, like XNA or SFML projects, you generally create some managers. Let's take the example of Darkest Dungeon. They have various managers like:

         Darkest Dungeon Manager
         Campaign Selection Manager
         Estate Manager
         Party Formation Manager
         Town Manager
         etc.

     All the logic and systems to run the game live within the managers, and objects contain scripts of data that are fed through the systems. In a small reusable code base, how would you separate the logic from the engine? Should all the managers, like in the smaller games, just be shoved into the Engine class, even though that goes against a reusable data-driven framework? Should they just go in a GameLogic system that is initialized in the Engine's Initialize function? And how do people tend to connect data scripts to the various other engine systems without causing too much game-specific coupling? For example, you can use an event system, but firing an event such as DAMAGED_BY_ZOMBIE and having the internal engine respond to it seems to break the separation between the low and high levels of the engine. It would be great to hear some opinions from the community on this subject, as it is quite a vital and potentially problem-prone aspect of engine/game development. Thanks.
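One common answer to the DAMAGED_BY_ZOMBIE coupling worry raised above is a generic event bus: the engine dispatches opaque event names and payloads, so game-specific names exist only on the game-logic side. A minimal sketch, with all names hypothetical and the payload simplified to an int:

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

// Generic bus: the engine never needs a DAMAGED_BY_ZOMBIE enum entry;
// it only routes (name, payload) pairs to whoever subscribed.
class EventBus {
public:
    using Handler = std::function<void(int payload)>;

    void Subscribe(const std::string& name, Handler h) {
        handlers_[name].push_back(std::move(h));
    }
    void Fire(const std::string& name, int payload) {
        auto it = handlers_.find(name);
        if (it == handlers_.end()) return;  // no subscribers: ignored
        for (auto& h : it->second) h(payload);
    }
private:
    std::unordered_map<std::string, std::vector<Handler>> handlers_;
};
```

A production version would hash the names at load time and use a variant or byte blob for the payload, but the separation argument is the same: systems that don't subscribe never even see the event.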
  8. Ah okay, that makes more sense. I use handles, so I guess the PSO can be stored and referenced that way. As for the states, does that mean you're making common states, like Microsoft's Common States, so you can reference them in the DrawItem like: drawItem.blend = BLEND_ALPHA; That would work and keep the size down, stop leakage and save time on state swapping. Thanks for the clarification so far :)
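The common-states idea above could be sketched as a fixed table of pre-built state blocks indexed by a small enum, so a DrawItem carries one byte instead of a full blend description (names and the trimmed-down BlendDesc are hypothetical):

```cpp
#include <array>
#include <cstdint>

enum BlendPreset : std::uint8_t {
    BLEND_OPAQUE = 0,
    BLEND_ALPHA,
    BLEND_ADDITIVE,
    BLEND_COUNT
};

struct BlendDesc {
    bool enable;
    // a real desc would also carry src/dst factors, blend ops, etc.
};

// Built once at startup, mirroring DirectXTK-style CommonStates.
constexpr std::array<BlendDesc, BLEND_COUNT> kBlendTable = {{
    {false},  // BLEND_OPAQUE
    {true},   // BLEND_ALPHA
    {true},   // BLEND_ADDITIVE
}};

struct DrawItem {
    BlendPreset blend = BLEND_OPAQUE;  // one byte; no state leakage
};
```

The renderer resolves `kBlendTable[item.blend]` at submit time and can skip the bind entirely when the preset matches the previously bound one.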
  9. Hello. State-based render architectures have many problems, such as leakage of states and naive setting of states on every draw call, so a lot of different sources recommend a stateless rendering architecture, which makes sense for DX12 as it uses a single object bind, the PSO. Take a look at the following: Designing a Modern GPU Interface; stateless-layered-multi-threaded-rendering-part-2-stateless-api-design; Firaxis Lore System: CIV V. Is this not causing the same problem, though? You are passing all the state commands within a DrawCommand object for them to be set during the draw call. Yes, you are hiding the state machine by not exposing the functions directly, but you are just deferring the state changes to the command queue. You can sort by key using this method: Real Time Collision: Draw Call Key. But that means each DrawCommand is passing in the entire PSO structure (as in, the states you want) with each command and storing it, just for you to sort by the key and elect the first object in each group to bind its PSO for the rest to use. It seems like a lot of wasted memory to pass in all the PSOs just to use one, although it does prevent any slowdown from swapping the PSO for every single object. How are you handling state changes? Am I missing some critical piece of information about stateless rendering? (Note: I am aiming for the stateless method for DX12, I just want some opinions on it :)) Thanks.
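The sort-key idea referenced above ("Real Time Collision: Draw Call Key") usually packs layer, PSO/shader id and depth into one integer, so sorting clusters draws by PSO and only the first item of each cluster binds state. A minimal sketch, with a hypothetical bit layout:

```cpp
#include <cstdint>

// 64-bit draw key, most significant fields sort first:
// [63:48] layer  [47:32] PSO id  [31:0] depth
struct DrawKey {
    static std::uint64_t Make(std::uint8_t layer, std::uint16_t psoId,
                              std::uint32_t depth) {
        return (std::uint64_t(layer) << 48) |
               (std::uint64_t(psoId) << 32) |
               std::uint64_t(depth);
    }
    // Decode the PSO id so the backend can detect "same PSO as last draw".
    static std::uint16_t Pso(std::uint64_t key) {
        return std::uint16_t((key >> 32) & 0xFFFF);
    }
};
```

With this layout each DrawCommand only carries the key plus a handle; the full PSO lives once in a library indexed by `psoId`, which addresses the memory worry in the post: you sort small integers, not whole PSO structs.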
  10. Yeah sure, any further ideas are welcome. Unity works in a similar way, with their "shader" defining properties and some in-between code which compiles into variants like GLSL and HLSL: https://unity3d.com/learn/tutorials/topics/graphics/gentle-introduction-shaders I like how they describe properties in a way that makes it seem like they don't know how it works: "The properties of your shader are somehow equivalent to the public fields in a C# script". Their system seems to be dynamic, because you can write a new shader with a variety of parameters like you suggested and it can be loaded by the inspector. That bit is easy; if anyone needs information on it, check Mike McShaffry's Game Coding Complete, page 790. I did it in WPF and it probably works similarly with other UI libraries. It does seem like Unity uses per-parameter setting: https://docs.unity3d.com/ScriptReference/Material.SetInt.html Which isn't great; it also means the Material class must store a list of every possible type (a list of ints, floats, vec4s etc.) so that it can handle any of the parameter types it has to set. I'm definitely going towards the hard-coded material or the block allocation method. Unless they keep the MaterialFile as a resource and just use the Material class as a pass-through to the shader, but still, it's not as good as submitting a block.
  11. Yeah, I am fully aware of the inefficiencies of the per-parameter update. I want to use constant buffers and uniform buffers, which is where this question originated; I have always used constant buffers in DX11 and recently used uniform structs in OpenGL, just never for a full-blown system. For example, all my entities in the entity component system just used a default material, but it's now time to overhaul that into something actually dynamic. Unity's style of system is the use case I'd like, so nearly all materials will use a "permutation" of a standard shader and just pass their individual settings in, like you said, avoiding the need for "material types" like wood etc. So from here there are two possibilities: dynamic chunk allocation with no additional material types derived from the base, or deriving from the base material with the hard-coded structs like you suggested, a bit similar to Ogre 2.0's HLMS system. Thanks everyone, this has given me a lot to consider in how the system will progress :)
  12. Interesting idea: so allocate a block of bytes and set the byte data based on the values required. I never thought about it that way. So if, for example, it states Matrix4D, I could calculate that it requires 64 bytes and know, based on the other data, its offset into the byte chunk. I already have a BinaryIO class that handles byte data, so I could just take stuff from that for the calculations. The structs can simply be labelled UserData, a bit like PhysX does for keeping a reference to your entities, and materials will usually have only one properties structure, so you can upload it to a single explicitly labelled slot that comes after the PerRender and PerObject data.
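The byte-block idea above could be sketched like this (names hypothetical): walk the declared parameter types once to compute offsets, then write values into one chunk that can be uploaded as a single constant buffer.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

enum class ParamType { Float, Vec4, Matrix4D };

// Byte size of each parameter type, e.g. Matrix4D -> 64 bytes.
inline std::size_t SizeOf(ParamType t) {
    switch (t) {
        case ParamType::Float:    return 4;
        case ParamType::Vec4:     return 16;
        case ParamType::Matrix4D: return 64;
    }
    return 0;
}

struct ParamBlock {
    std::vector<std::size_t> offsets;  // per-parameter offset into bytes
    std::vector<std::uint8_t> bytes;   // the single uploadable chunk

    explicit ParamBlock(const std::vector<ParamType>& layout) {
        std::size_t off = 0;
        for (ParamType t : layout) {
            offsets.push_back(off);
            off += SizeOf(t);
        }
        bytes.resize(off);
    }
    // Write one parameter's value at its precomputed offset.
    void Set(std::size_t index, const void* value, std::size_t size) {
        std::memcpy(bytes.data() + offsets[index], value, size);
    }
};
```

Note a real D3D11/GL version would also round offsets up to the 16-byte alignment constant buffers expect; the offsets here are tightly packed for clarity.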
  13. Okay, that makes sense for the shader production side. However, what is the method to actually pass the data to the correct parameter slots in the generated shader? Older APIs like XNA let you set parameters based on the name, and I'm pretty sure GL does as well, but DX11+ uses constant buffers and GL can use uniform structs. With all the different permutations, how can the C++ side store the data and put it in a struct to pass without it being hard-coded? Or is there a way to set each parameter individually, although that seems a bit less efficient than passing them all in one go. Thanks for the help so far :D
  14. Hello, I'm wondering if anyone has examples of how they handle material and shader systems, particularly the association of uniform/constant buffers with the correct slots. Currently I define a material file in XML (it could be JSON or anything else, just XML for now):

         <MaterialFile>
           <Material name="BasicPBS" type="0">
             <!-- States -->
             <samplers>
               <sampler_0>PointWrap</sampler_0>
             </samplers>
             <rasterblock>CullCounterClockwise</rasterblock>
             <blendblock>Opaque</blendblock>
             <!-- Properties -->
             <Properties>
               <color r="1" g="1" b="1" a="1"></color><!-- tint color -->
               <diffuse>
                 <texture>SnookerBall.png</texture>
                 <sampler>Sampler_0</sampler>
               </diffuse>
               <specular>
                 <value r="1" g="1" b="1" a="1"></value>
               </specular>
               <metalness>
                 <value>0</value>
               </metalness>
               <roughness>
                 <value>0.07</value>
               </roughness>
             </Properties>
           </Material>
         </MaterialFile>

     The sampler, raster and blend blocks are all common reusable blocks that a material in C++ can point to based on the ones named in the XML; it is similar to XNA and DXTK: https://github.com/Microsoft/DirectXTK/wiki/CommonStates Common data such as the camera's view, projection and viewProjection matrices can be stored in a single PerRender constant buffer, and the mesh's data (world, worldProjection etc.) can be stored in a PerObject constant buffer that is set by a world/render manager each time it draws an object. The material will also obviously store a reference to the shader that has been loaded by the shader manager, reused if it's already loaded, etc. That leaves the material properties: how can they be handled in a dynamic way? Or will you have to explicitly create individual material classes that use Material as a base and override using virtuals (yuck) to set the individual property types? For example, you can have materials that have a texture for diffuse but just values for roughness and metallic, or you can have one with all maps etc., which means there have to be permutations. How can that be handled in a clean way? Thanks
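The PerRender/PerObject split described in the post could be mirrored on the C++ side like this (a sketch under assumptions: the struct names, slot assignments and the BasicPBS property layout are hypothetical, chosen to match the XML above):

```cpp
#include <cstddef>

struct Matrix4D { float m[16]; };

struct PerRender {           // e.g. slot b0: uploaded once per frame
    Matrix4D view;
    Matrix4D projection;
    Matrix4D viewProjection;
};

struct PerObject {           // e.g. slot b1: uploaded per draw
    Matrix4D world;
    Matrix4D worldViewProjection;
};

struct BasicPBSProperties {  // e.g. slot b2: mirrors the <Properties> block
    float color[4];          // tint color
    float specular[4];
    float metalness;
    float roughness;
    float pad[2];            // padding to satisfy 16-byte cbuffer alignment
};

// Constant buffers are sized in 16-byte multiples in D3D11.
static_assert(sizeof(BasicPBSProperties) % 16 == 0,
              "cbuffer structs must be 16-byte aligned");
```

Only the b2 struct varies per material permutation; the engine owns b0/b1 unconditionally, which is what keeps the material system decoupled from the frame loop.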
  15. Jemme

    C# + AS3 + C++?

    Most still use XML for editors; check out Game Coding Complete, it has a pretty good editor section about using XML for objects, dynamic UI and communication with the engine. It's a great kick-start to experiment with: https://www.amazon.com/dp/1133776574/ref=olp_product_details/146-9356820-0862526?_encoding=UTF8&me= Unfortunately I haven't used Boost. Most would say the STL is not good enough, or Boost isn't, and to roll your own, but chances are the people who wrote the STL knew what they were doing. I haven't bothered replacing the STL yet, but I do use slot maps and similar structures for hashing instead of their map. To clarify the degree comment: I meant a pure mathematics degree rather than an applied field. For example, my degree was computer animation (awful); they covered the bare minimum without showing how anything works. Lengyel's book focuses a lot on proofs and accurate math notation, which can be both good and confusing; if you want to quickly learn and apply it, seeking out many sources might help. I prefer drawing maths on a whiteboard or visualising things to understand it, although I would not recommend trying to visualise quaternions xD. Lengyel's engine series book is a lot cleaner and more of a best-hits collection, which is a great springboard. Doing the framework just lets you separate talking to the computer from implementing the structures that manage that communication. For example, the engine doesn't care how stuff gets drawn, whether DX11 or even a ray tracer; it just wants to say "draw this thing". And the framework doesn't care how the engine structures a scene or scene graph; it just gets given data and outputs a result. Also, the editor is a separate program, but it uses the engine as an editor DLL version, so in a way it goes: Framework > Engine > Engine DLL editor layer > Editor. Specific knowledge can be gained from books; Game Coding Complete has a big section on scene graphs, and a scene manager just stores scenes and loads them, etc.
The thing is, a lot of these companies built their engines a long time ago and learned the ropes, and then they just iterate, adding and removing over time. Fallout 4 is still Gamebryo at its core, and their GECK is still probably based on Todd's original core, although in terms of usability engineering it's not a smooth tool to use... You won't get to the same level as they have, because it takes a long time and a lot of people, so aim for something a bit more realistic; for example, make a framework to render this scene: https://developer.nvidia.com/orca/amazon-lumberyard-bistro Have some particle effects, some butterflies etc. Don't aim to have, say, a full animation engine that lets you build and tweak animations; just make an animation engine that plays them back and blends them, and that's it. My advice would be: if you really want a job in a specific engine field, focus on that. Building a full engine isn't worth it without a very specific goal, like a portfolio piece, or as part of some special commercial project like simulation of power plants.