
About this blog

A mad man's journey to his own game engine.

Entries in this blog

 

[C++] ArrayView-class

So I've been quite busy refactoring the rendering and other systems of the engine for the last few months; once I'm done I'll write some more entries on the graphical render-system. Today I've been wondering whether there isn't a cleaner solution for passing an arbitrary array to a function. I generally tend to fully qualify the container in such a case, but that's not always possible (and quite frankly not that flexible). What I'm talking about is a case like this:

DrawItem* Compile(const StateGroup* pGroups, size_t numGroups)
{
    for(size_t i = 0; i < numGroups; ++i)
    {
        const StateGroup& group = pGroups[i];
        // ...
    }
}
Having to pass the pointer and size separately to the function seems unsafe, C-esque and redundant to me. Not being able to use range-based for-loops is a minor annoyance. But there is no other native way if I want to potentially support any source of array (raw fixed-size array, std::vector, std::array, dynamic allocation, ...).
So I came up with what I call an ArrayView class. What it does and how it works is actually quite simple, which makes me wonder why I haven't seen anybody else come up with something like it:

DrawItem* Compile(ArrayView<const StateGroup> groups)
{
    for(const auto& group : groups)
    {
    }
}
Yeah, that's the gist of it. I'm using a templated class to pass a view of an array with a known size. The cool thing is, now it doesn't matter where this array comes from:

#include <array>
#include <iostream>
#include <vector>

#include "ArrayView.h" // the attached header

void printArray(ArrayView<const int> view)
{
    for(const int value : view)
    {
        std::cout << value << " ";
    }
    std::cout << std::endl;
}

int main()
{
    std::vector<int> vector = { 1, 2, 3, 4, 5 };
    std::array<int, 5> array = { 1, 2, 3, 4, 5 };
    int raw[] = { 1, 2, 3, 4, 5 };
    int* pDynamic = new int[5]{ 1, 2, 3, 4, 5 };

    const auto vectorView = makeArrayView(vector);
    const auto arrayView = makeArrayView(array);
    const auto rawView = makeArrayView(raw);
    const auto iteratorView = makeArrayView(vector.begin() + 2, vector.begin() + 4);
    const auto pointerView = makeArrayView(raw + 1, raw + 3);
    const auto dynamicView = makeArrayView(pDynamic, 5);

    const ArrayView<const int>* views[] =
    {
        &vectorView,
        &arrayView,
        &rawView,
        &iteratorView,
        &pointerView,
        &dynamicView,
    };

    for(const auto pView : views)
    {
        printArray(*pView);
    }

    delete[] pDynamic; // don't leak the dynamic array
}
You can construct it from std::vector, std::array, compile-time arrays, even random-access iterators (probably a tad unsafe), and a raw range of pointers (even more unsafe, but meh).
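For reference, the core of the idea is tiny - here is a minimal sketch (not the attached header itself, which has more overloads plus the asserts): essentially a non-owning pointer + size pair, much like what later became std::span.

#include <cstddef>
#include <vector>

// Minimal sketch: a non-owning view over contiguous memory.
template<typename Type>
class ArrayView
{
public:
    ArrayView(Type* pData, size_t size) : m_pData(pData), m_size(size) { }

    Type* begin(void) const { return m_pData; }
    Type* end(void) const { return m_pData + m_size; }
    Type& operator[](size_t index) const { return m_pData[index]; }
    size_t size(void) const { return m_size; }

private:
    Type* m_pData;
    size_t m_size;
};

// A few of the factory overloads; the real header covers more cases
// (std::array, iterators, mutable views, ...).
template<typename Type>
ArrayView<const Type> makeArrayView(const std::vector<Type>& vector)
{
    return ArrayView<const Type>(vector.data(), vector.size());
}

template<typename Type, size_t Size>
ArrayView<const Type> makeArrayView(const Type (&raw)[Size])
{
    return ArrayView<const Type>(raw, Size);
}

template<typename Type>
ArrayView<const Type> makeArrayView(const Type* pBegin, const Type* pEnd)
{
    return ArrayView<const Type>(pBegin, pEnd - pBegin);
}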
________________________________________________________

So... any reason nobody has come up with something like this yet? Is the problem I'm trying to solve so uncommon/unimportant that nobody cared? Or are people still wary of using templates?
For me, I think the ~hour I spent writing this was well worth it, as it will probably kill my conditioning to just put "vector" everywhere, even when I could just pass in a regular array. :D For those of you interested in using this or trying it out, I've attached the header file, alongside a Visual Studio natvis-file. Requires C++11 (VS2015 or higher), and you'll have to replace the AE_ASSERTs with your own assert-macro, if you use one. This is the first iteration just after writing it, so there's probably still lots of stuff missing, but I consider it usable :)

Juliean


 

Designing a high-level render-pipeline Part 2: Views & passes

Last entry: https://www.gamedev.net/blog/1930/entry-2262313-designing-a-high-level-render-pipeline-part-1-the-previous-state/

Thoughts about how to change things:

So despite not working on 3d-rendering for a long time, over time I've not only been able to see what's wrong with the old rendering system (or the lack of the same), but also already had a rough plan in the back of my mind on what to change. I just needed to think it through and put it into words, along with a rough overview of the new system. This is what I came up with:

Requirements for the new system:

At first I thought about what I wanted from the new system. Here are some points I already mentioned, as well as some new ones:

- Being able to set up multiple scene-views easily. I just want to be able to say "There is a scene-view widget, you have your own view of the scene, go render it and show it to me".
- Not having to deal with resource management. What I mean by this is not having to worry about creating render-targets/zbuffers myself; instead, the system should allocate them all by itself. Kind of implied by the point above, but the point I'm making is that I really do not want to be bothered by this in any way - especially regarding stuff like ping-ponging rendertargets for effects that read from and write to the same texture at the same time.
- Less setup-overhead. The whole thing with the datafiles was fine at first, but at least after I integrated the group-system, it became a nightmare. I want to be able to create my render passes, tell them approximately where they belong, and be done with it.
- Easier debugging. Sometimes the deferred render-execution caused a lot of issues when debugging, since everything was executed from the same place and thus it was hard to say where things originally came from. I like the idea of not having a "Draw()" call that actually draws, but at least once I've drawn my whole gui, the rendering process should start right after that is done.

View:

So the system I initially came up with to fulfill all those requirements looks like this: as a new top-level system, there is a view. A view has a specific size, and an output-rendertarget of that size. It internally can have a zbuffer, and multiple other rendertargets as needed.
A view internally owns a unique render-queue.
A view also has a camera and a corresponding cbuffer, though those vary between 2d/3d. A view also has a pointer to the scene that it should show.
A view can be resized, which will cause all its internal rendertargets and zbuffers to resize as well (did I mention resizing of the game render-part was a nightmare with the old system as well?)
A view can be told to Render(blend-factor), which will clear all the rendertargets (that was awkward in the old system as well), trigger the customizable rendering-process (we'll get to that in a minute), and then sort and execute its render-queue. There is a lot of code associated with all that, but just to give you an idea of how you would implement the 2d/3d-specific stuff, here is the actual 2d-view implementation:

void RenderView2D::OnRenderView(double alpha)
{
    const auto cameras = GetCameras();
    const auto& vScreenSize = cameras.first->GetSize();
    const float vScreen[4] = { (float)vScreenSize.x, (float)vScreenSize.y, 1.0f / vScreenSize.x, 1.0f / vScreenSize.y };

    // update camera
    const auto rCamera = cameras.first->GetCameraRect();
    const auto rOldCamera = cameras.second->GetCameraRect();
    math::Rectf rInterpolatedScreen;
    if(rCamera != rOldCamera)
        rInterpolatedScreen = rCamera * alpha + rOldCamera * (1.0 - alpha);
    else
        rInterpolatedScreen = rCamera;

    // set 2d camera constants
    auto updateBlock = gfx::getCbufferUpdateBlock(*this);
    updateBlock.SetConstant(0, (const float*)&rInterpolatedScreen, 1);
    updateBlock.SetConstant(1, (const float*)&vScreen, 1);
}

void RenderView2D::OnUpdate(double dt)
{
    // acquire camera from entity world
    ActiveCameraQuery query;
    if(GetWorld()->GetMessages().Query(query) && query.entity.IsValid())
    {
        auto pCamera = query.entity->GetComponent<CameraComponent>();
        AE_ASSERT(pCamera);
        SetCamera(&pCamera->camera);
    }
    else
    {
        static const gfx::Camera2D camera(math::Vector2i(640, 480), 1.0f, math::Vector2f(0.0f, 0.0f));
        SetCamera(&camera);
    }

    BaseRenderView::OnUpdate(dt);
}
Aaand... that's pretty much it, as far as the logic you have to concern yourself with. A 3d-implementation would look quite similar to this.
Note that views can also be used in different ways - for example, there is a separate view for rendering the UI, which doesn't use a camera like this.

Passes:

So now we have our view/camera-setup, but what do we actually show? That's where another object comes into play, the RenderPass, or simply Pass. A pass is always owned by exactly one view, and a view is composed of multiple passes.
A pass can specify which rendertargets it wants to write to and read from, as well as which zbuffer to use.
A pass has its own cbuffer, which it can fill with pass-specific data.
A pass can render one or many primitives, depending on the implementation (there are helper-classes like FullscreenPass, which will only render an effect over the whole rendertarget; other passes render all the entities, ...).
A pass has its own stage, which allows it to insert primitives into the view's renderqueue. So the pass acts as a composite of the view, determining what is rendered and how it is shown. As an example, here is a pass that applies a smooth fade-out via postprocessing:

void ScreenFadeRenderPass::Update(double dt)
{
    const float fadeValue = m_pScreenSystem->GetFadeValue();
    SetShaderConstant(0, &fadeValue, 1);
}

render::FullScreenPassDeclaration ae::base2d::ScreenFadeRenderPass::GenerateFullscreenDeclaration(void) const
{
    return
    {
        L"ScreenFade",
        L"Base2D\\Effects\\ScreenFade",
        { L"scene" },
        { L"sceneProcessed", L"", gfx::TextureFormats::X32, false },
        1,
        false
    };
}
That's pretty much it! As mentioned, this uses the FullscreenEffect-wrapper, so there is little it has to do. So a view will have instances of different passes, and when rendered, it will first tell all those passes to render, before executing its queue.
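To make that order concrete, here is a rough sketch of the control flow I just described - the method and member names here are illustrative assumptions, not the engine's actual code:

void BaseRenderView::Render(double blendFactor)
{
    ClearRenderTargets();              // clear all internal rendertargets/zbuffers
    OnRenderView(blendFactor);         // the customizable part, e.g. RenderView2D above
    for(auto pPass : m_vPasses)
        pPass->Render();               // each pass inserts primitives via its stage
    m_queue.Sort();                    // order by stage first, then by sort-key
    m_queue.Execute();                 // finally issue the actual draw calls
}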
The more interesting part is the declaration. Every pass needs to give some information as to what it's doing:
- Its unique name
- An effect that it is using (this is fullscreen-pass-specific, see the use of FullScreenPassDeclaration, instead of RenderPassDeclaration)
- An array of render-targets it wants to read from
- An array of (optional) rendertargets it wants to create
- An array of render-targets it wants to write to (in case the pass creates some targets, this is used)
- The size of the cbuffer
- Whether or not it is using a zbuffer

So as you can see, unfortunately I didn't solve the problem of having to create/reference rendertargets manually yet. Also, I still have to do the ping-ponging myself (the pass I showed wants to read from "scene" and technically write back to it, so it has to create another render-target to do so).
Just to focus on that for a second before the recap: that is a big deal. Not only does it mean that each pass has to know whether the pass before it actually wrote to another texture, it also makes removing certain passes, even just for testing, nearly impossible.

Recap & outlook:

The view/pass-system already was a huge step up. Instead of having render-code inside entity systems (I know that's pretty cringe-worthy, I just really didn't have a better idea at the time), it now resides inside passes, which can dynamically be composed to form a render-view. As I mentioned though, there are a few problems left:

- Creating render-targets and actually assigning them to passes has to be done manually.
- Ping-ponging rendertargets is still a huge problem
- Determining the order of those passes hasn't improved much. I started by adding different layers (world, postprocess, ...) that systems could assign themselves to, but other than that it was "first come, first served" in terms of registration order.

So what did I do to combat this? I can't even tell you how I came up with it anymore, but I had the idea of implementing a visual language akin to the visual scripting I already implemented for this engine. I'll show this in a separate article, though - prepare for some pictures, after all this text :)

Next article: https://www.gamedev.net/blog/1930/entry-2262315-designing-a-high-level-render-pipeline-part-3-a-visual-interface/

Juliean


 

Designing a high-level render-pipeline Part 1: The Previous state

So I've stopped writing journal entries on my project for a while. Didn't really feel like it anymore, after I took a hiatus of multiple months. Since then I've been working on multiple things, mostly cleaning up old, messy parts of the engine. One of those things, which turned out pretty cool in my mind, was the redesign of the rendering pipeline, which I wanted to share with you (can't promise I will keep up writing articles, but I'll try my best as long as it fulfills me). For now I'll split this into 3 articles that I'm going to write at once, with another 1 or 2 coming in the near future.

What I mean by high-level renderer:

So just so you know exactly what I'm talking about: by high-level renderer, or rendering pipeline, I mean the system that allows for rendering the world, post-processing effects etc... - the thing which pulls it all together. Say I have entities/models that I want to render, as well as some terrain, then some effects like HDR, SSAO, etc... - all those in the correct order form a rendering pipeline, and the high-level renderer I just designed and am going to present allows writing all these things in isolation, and pulls them together so that you can see the fully rendered scene.

What I had to begin with:

The renderer originally was one of the first things I wrote, all the way back ~3 years ago. As you can imagine, it was one of the messiest parts of the engine, as I knew very little compared to now. To be perfectly honest, I didn't even have any high-level rendering framework/pipeline up until now. What I had was a low-level abstraction based on some of the information found in the classic Frostbite rendering-architecture post from these forums: https://www.gamedev.net/topic/604899-frostbite-rendering-architecture-question/ as well as some other forms of abstraction for primitives like textures, meshes, ... But on top of that, implementing specific rendering effects was a total free-for-all. There were some things that I kept, but others were just... well, you'll see in a minute.

The good:

Of the things I still kept to this day (in some form), there was a system of render queues and stages. A queue takes render-primitives with a sort-key, and executes them after the whole queue has been filled. Primitives are not directly assigned to the queue, but instead go through a stage first - the stage knows which render targets to render the primitives to, and acts as part of the sort-key (so that whole stages are processed in order).
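For anyone unfamiliar with the sort-key trick: it usually boils down to bit-packing, with the stage in the most significant bits so whole stages stay in order. A minimal illustration - the fields and bit widths here are made up, not the engine's actual layout:

#include <cstdint>

// stage dominates the ordering; material and depth only break ties within a stage
uint64_t makeSortKey(uint16_t stage, uint16_t material, uint32_t depth)
{
    return (uint64_t(stage) << 48) | (uint64_t(material) << 32) | uint64_t(depth);
}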
That's a system I quite liked. It allows for deferred rendering of primitives, which is not only good for the sorting based on e.g. depth and material, but also makes it easier to write rendering code, since we do not have to think about the order in which primitives are drawn ourselves.

The bad:

What wasn't that cool, at least after some time of toying around with it, was how those queues and stages were set up. I used data-files for it, which is fine, but since I was using a plugin-based architecture, things quickly became messy. I ended up needing to implement a "group" system on top of it all, so that passes could be inserted at the correct point. This would usually mean I'd have one file for declaring the groups in the main-render plugin (since I wanted to support 3d and 2d):
And plugins would then add stages and queues to those groups, like so:

// water plugin
Not only is this annoying to write by hand, and confusing as f*ck, but it is also error-prone and hard to debug. The things that were most annoying:
- Render targets had to be declared manually as assets, and referenced by name (later by asset UID, at which point it became even more confusing and nearly impossible to write by hand).
- There is no easy way (no, just none) to allow multiple views with this. Just think of, at minimum, having the four views showing different sides of the scene like you have in modeling-tools, something you can also expect to be able to use in a game engine - it just cannot be done. This is due to many reasons: the entire rendering process is part of the same thing (including the editor-UI that finally outputs the rendered scene); render-targets being declared and used via assets; and many, many more - and you haven't even seen the code using all that.
- Generating render-effects purely in code was horrendously difficult. Not going into too much detail, but especially trying to write some render-code that is instantiated multiple times was a horror, mainly due to the reference-by-name-thingy.

The ugly:

But the really bad stuff started when you tried to actually implement a rendering pass. Oh boy. Aside from what I just mentioned above, there were no other systems in place. So I relied on some really fugly, dirty stuff - like abusing entity-systems for implementing things like an SSAO-render pass (mainly to be able to pass messages). I'd usually grab the stages that were defined in the rendering data-file, and then just feed them the rendering-primitives in a hijacked "update" function. At first, handling things like camera constants was also a nightmare. I ended up just having a separate cbuffer for each stage, where I put the camera data manually in all the systems that needed it. Later I moved to having a global cbuffer for that, and had the camera system put the constants in, but now the cbuffer was global to the renderer, which wasn't that nice either. This was also another reason why I would never have been able to do stuff like multiple scene views, or the like. That was actually the point where I started to think about reworking this system - I had been working on allowing multiple views for the sake of previews etc... for some time, when I pretty much just had to rewrite the renderer in the process. I'm still very much pleased with what I came up with in the end, which solved all the problems mentioned above, so I'll show you that in the next article, which is right there: https://www.gamedev.net/blog/1930/entry-2262314-designing-a-high-level-render-pipeline-part-2-views-passes/

Juliean


 

Designing a high-level render-pipeline Part 3: A visual interface

Last entry: https://www.gamedev.net/blog/1930/entry-2262314-designing-a-high-level-render-pipeline-part-2-views-passes/

To start off, let me tell you that I was basically finished with the 2d-rendering after what I presented in the last article was done. So just for some amusement, I'll have to tell you right away that everything I did after that was pretty pointless and cost me an additional 3-4 weeks that any person who primarily wanted to work on their game could have spent on that instead. Oh, well.

In medias res:

So after I tortured you with nothing but text for the last two entries, let's start with some imagery that showcases what my latest effort was all about: visual editing of a render-pipeline. What you can see is how a render-view is implemented using this visual scripting, if you will. Let me explain how the whole thing works. Let's start at the left side. The green "Output" node is the entry-point of the view. The red output-pin does two things: a) determine which pass to render next and b) pass a render-target to it. So the output-node obviously passes the output-render-target. The blue output-pin passes a zbuffer, in this case the view's default zbuffer.
Also take note of the black image under the pin - this is a preview of the output-render-target at the time this pin is executed. Right now it's black since this is the beginning, but you will soon see what every stage adds to it. The next node, with the grey bar entitled "Sequence", is a control-node. It's purely for cosmetic purposes: it allows positioning other nodes vertically instead of horizontally and, as you may have guessed, executes its output-pins in descending order.
You should have noticed the little images under each output-node. The first pass-node with the purple bar, called "WorldPassOpaque", is an actual pass, implemented much like you have seen in the last article. It will write to the render-target passed in through its input-pin, and use the zbuffer connected to it as well. This pass renders all opaque entities (I'll get to why it does exactly that in a second). The next grey node, called "TilemapSelector", is another primitive control node, usually called a "Selector". (You can assign custom names to your nodes for cosmetic and practical purposes.) So what does a selector do? It chooses which adjacent pass is going to be executed, based on some boolean. To be precise, this selector is a SelectorSequence, a Selector that also has a "done" tag.
As you can see by the lack of an orange top-border and no preview-image, TilemapPass has been selected, and TilemapMode7Pass has been ignored. One more important part, if you noticed the white input pin - this is for when a node wants to read from a render-target. It accepts any regular output-pin as an entry. The rest works exactly like that, until after the last pass the view is done. Now with that out of the way, let's talk about some details.

The nodes:

About the nodes themselves. There are pass-nodes and control-nodes so far; in the near future I also want to support view-nodes, so you can e.g. put in a separate world-view, which you can use to also render reflections for your game view. But what can the nodes do? Oh, a lot of things, actually. For example, they have attributes. Like I said, they can have a custom name, but also user-defined/pass-specific attributes, as you can see here: for a screen-tone pass, you can set the screentone while editing the pipeline! On the code side, right now the pass still has to write those to the cbuffer manually, but they get written to a member-variable of the pass directly, so it's no big deal.
It is similar with all the other nodes - a selector has a bool for which path to take, the WorldPasses are just one pass with a "transparent?" boolean, and so on. Aside from that, nodes are only used in the setup-stage - they output resources and passes, and are used to evaluate the actual path to take... but after that, they are thrown away and not used during runtime. I initially had a different design, but decided to have it this way, as it guarantees minimal overhead, as well as a cleaner design IMHO.

The preview:

Actually, one of the things that really pushed me into building this visual system was the possibility of a per-pass preview, as seen with the little thumbnails under each output pin. In other engines, you usually have a dropdown-list of some sort that lets you view different render-outputs right in the view. With this system, you can inspect the output right where it originally came from, which is especially helpful if you are currently working on the pipeline: you see instantly, live, what each change does to the output. One consequence of this, though, is that certain things don't work how they used to. See how there is an opaque and a separate transparent world pass? This is because for this preview to work, every pass has to render its primitives at once (it's not implemented that way without the preview attached, but it still has to function like this), meaning you cannot just have a "WorldPass" that outputs opaque and transparent sprites, and then have a tilemap-pass that renders the map geometry (since all transparent sprites would have already been rendered by then). Obviously you can either put the tilemap first (which will cause overdraw), or, as I've done here, split it into two separate passes. Also, this is how the full working area actually looks, with the preview. I've chosen a different setup for the attributes, to show you what the other passes actually do: you can select and preview an entire scene on the top left, aside from the little thumbs; you can also tune the nodes' attributes just for the preview, without changing the nodes' settings.

Some design-insights:

Regarding control-flow, I wanted a rather minimalistic approach. Meaning, pretty much the only thing you can do is choose whether to go up or down on a Selector-node, depending on an external variable. I might add a "switch" statement of some sort at one point, but I really didn't want to put too much logic or scripting into this. It's just placing nodes, connecting them, and being happy about it. One implication of this is that when a certain attribute changes, e.g. the selector's boolean, the whole view has to be re-generated. I consider this OK, since it's a lot easier than figuring out which part of the pipeline to throw away, etc... and it shouldn't be much of an issue, since usually this would only be used to branch seldomly-changing parts of the pipeline (like in my example, where it chooses between Mode7 and regular tilemap rendering, which changes only when I enter or leave the worldmap). On the other hand, I wanted to give full control to the programmer. For example, they could always just write a pass that internally switches a shader without triggering a full pipeline-flush, if they so desired. Also, in the future I might apply optimizations that could spare me destroying the whole thing, e.g. if it only switches between two passes with identical input/output signatures.
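Put differently, the graph is "compiled" once into a flat list of passes, which is all that survives into the runtime. A rough sketch of that setup-stage - every name here is an assumption for illustration, not the actual implementation:

std::vector<RenderPass*> CompileView(const OutputNode& root)
{
    std::vector<RenderPass*> vPasses;
    const Node* pNode = root.Next();                     // follow the red execution pins
    while(pNode)
    {
        if(auto pPassNode = pNode->AsPassNode())
            vPasses.push_back(pPassNode->CreatePass());  // resolves targets & zbuffer here
        pNode = pNode->Next();                           // a selector returns one of its branches
    }
    return vPasses;                                      // the node objects are discarded afterwards
}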
One more important thing to note is that all render-targets, zbuffers etc... are handled automatically this time around. I just set whatever format the view wants to output, and that's it. Ping-ponging now works automatically, too! See the write/read-input pin differentiation for that.

Conclusion:

I'm actually not fully done; there are a few points left, like:

- How to encapsulate multiple passes (e.g. the tilemap vs. mode7 thing is actually something that comes from the Tilemap plugin and should not bother the user of the plugin)
- How to set the attributes at runtime
- Some potential improvements for the future, thinking about 3d rendering
- ...

I'll handle those in a separate article at another time; I've written enough for today, I think :D

_______________________________________________

So before I finish, I'd like to ask you, the reader, what you think of a system like this. Apparently I haven't seen anything like this so far, at least not with a visual interface. From my point of view, this is an excellent application for visual scripting, since there is little logic and more data-flow between systems, in this case passes. It should allow for easy understanding and extending of the pipeline, both by core-developers, as well as potentially third-party users (think of trying to extend the render-pipeline of something like UE4 - which isn't really easy ATM). Do you agree? Do you disagree? What are your thoughts in general - do you see any potential for improvement, or do you maybe dislike the idea as a whole? I'd be glad if you let me know!

Juliean


 

Visual scripting: Control structures

Last entry:
https://www.gamedev.net/blog/1930/entry-2261155-visual-scripting-basics/

Last time, I introduced you to the basics of my engine's visual scripting system. The next important question is: how do you create a control flow? I've already talked about the basic flow from node to node, but what about branches, loops, etc...? There are some special nodes for that, which I'm going to show now:

Basic nodes:

Let's start with the very basic nodes, which make up functionality that is present in any imperative programming language.

Condition:



The condition node is the equivalent of a regular "if". It evaluates, based on its attribute, whether to execute the true or the false branch.

ForLoop:



ForLoop does exactly what the name says: a loop which executes in the range of first to last index. The "body" graph is called for each iteration, while "index" contains the current index. "Finished" is called after the last iteration.

WhileLoop:



The next straight-forward node. Loops until "condition" equals false. This node essentially only works with an attribute connection present, otherwise you get an endless loop. This loop also has separate body/finished outputs. There is currently no break available - it's kind of hard to do as a node. Unreal's way is to have a "break" input, but I don't support multiple input pins yet, so maybe this is to come in the future (probably when I really need it).

Those are essentially the basic control flow nodes. Some future additions might be a DoWhile (was kind of weird to implement before; with recent changes I could easily do it, but we'll see if it is needed) and probably a loop-break, while some other things like "continue" and "goto" are not really needed and/or too complicated to implement. Now, before we look at some more advanced control structures, we need to get an essential concept of the scripting language out of the way:

Wildcards:

A wildcard is kind of the equivalent of C++ templates. It's used as a placeholder for any type, which means any attribute input can be connected to it, which will resolve the wildcard to a certain type. Wildcards have a special set of rules:

- A wildcard must be either declared array or non-array.
- A wildcard essentially operates on the four primary data types (POD, Object, Struct, Enum), and while it is possible to specialize a node for e.g. one specific object type, that's not really what they are for.
- A node can only have one wildcard type, meaning that the first connected attribute determines the type of all wildcard inputs, as well as outputs. Without this limitation, wildcards would be both very difficult to implement and confusing to use.

You are going to see what a wildcard looks like in some of the control structures below. You will see that wildcards are a quintessential concept, really needed in a visual language like this, as otherwise you'd be struggling to implement many functions over and over and over again for different types.
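To ground the C++-template analogy from above: a wildcard node behaves roughly like a function template, where the first connected input fixes the type for every wildcard pin at once. As a loose comparison (not engine code):

#include <cstddef>
#include <vector>

// one "wildcard" type parameter serves all inputs and outputs at once,
// just like the first connected attribute does on a wildcard node
template<typename Type>
Type getElement(const std::vector<Type>& array, std::size_t index)
{
    return array[index];
}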

Advanced nodes:

Sequence:



The first node which goes against normal programming habits is the Sequence node. Sequence does just that: execute its output pins one after another (this also includes waiting for timed nodes, etc...). This node is not really needed, as the same functionality can be achieved by directly connecting the last node of the "first" path to the first node of "second", but it is vital for keeping scripts ordered, since it allows you to decouple sequences of actions. This is primarily important because it makes extension of the script easier (which is an inherent problem with most visual scripting languages). It's kind of hard to explain if you haven't worked with a visual language before, so I'll try to give you a practical example:



Looking at the script below, it controls two parts of a cutscene: a character appearing with an animation while music and screen fade, plus the player turning to the character and engaging in a conversation with him. If I didn't use Sequence here, I would have to route the control flow from the far-right event of the upper branch (which you can't even see now) to the "TurnDelayed"-node, which makes it very hard to keep an overview of the script, and also makes extension of scripts hard (if I ever need to insert a node before TurnDelayed or after the upper-right node, I'm screwed with having to draw a connection between nodes that aren't even on the same screen). I really noticed the difference between newer scripts vs. the cutscenes I designed right after I implemented the language (might post a screen of those here just for the shits and giggles). But enough of this simple node, let's continue.

ForEachLoop:



I already told you about the ForLoop. Now, you could just use a ForLoop and direct array access with the index to simulate the ForEachLoop, but it's just much easier to have this one node if you want to iterate over the whole array at once. Here you can also see a wildcard: depending on what type of array is put in, the node will return the exact same type of single element. The index is also returned, in case it is needed for deletion etc.

Parallel:



Now for the really neat stuff. Since the focus of my language lies on timed actions, you just cannot always execute nodes one after another. Imagine you wanted to fade the screen as well as the music. Both nodes take a certain amount of time, meaning in a regular call graph you can only fade the screen AND THEN the music, or vice versa. But what if you want to fade them both at once? In comes this handy node, which really does what it promises: it executes all connected code branches in parallel. Yeah, it's just simulated parallelism without multiple threads, but its behaviour is kind of the same (minus the need to care about shared state between threads). Some truly awesome things can be done with it, so it's not just a product of having to deal with timed nodes, but more like an easy tool for simulated parallelism.
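Conceptually, each branch is just a resumable call graph that the node advances every tick until all of them have finished - roughly like this (a simplified sketch with assumed names, not the actual implementation):

void ParallelNode::Update(double dt)
{
    bool allDone = true;
    for(auto& branch : m_vBranches)
    {
        if(!branch.IsFinished())
        {
            branch.Advance(dt);    // runs until the branch hits a timed/blocking node
            allDone = false;
        }
    }
    if(allDone)
        CallOutput(0);             // resume the main flow once every branch is done
}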

Block:



Another tool specially made for parallelism and timing is the "Block" node. It blocks until the condition is fulfilled, meaning that once this node is called, the rest of the callgraph will be put on halt until said condition is met. Not only can this be used for synchronisation, but also e.g. in combination with a while-loop to execute regular code independent of the tick trigger.

Conditional:



Pretty much just the equivalent of "Condition ? X : Y" in C++, but since calculations aren't part of the regular call flow, it's used much more often. For example, in const-methods you can't call any non-const methods or call structures like "Condition", so for returning values based on a condition from those, Conditional is your go-to. One optimization I made for this node specifically is the ability to only evaluate calculation nodes based on access, i.e. if the conditional is "true", only the "true" variable will be accessed, potentially saving a lot of computation power (and also being more correct in the case of nullptr-checks with conditionals; a problem which Unreal has due to always evaluating both paths).
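The lazy evaluation can be pictured like this - a simplified sketch, where Value and the pin accessors are stand-ins I made up, not the real types:

// only the selected input is pulled on, so the untaken branch is never
// computed - and e.g. a nullptr on that side is never dereferenced
Value ConditionalNode::Evaluate(void) const
{
    const bool condition = GetInput(0).Evaluate().AsBool();
    return condition ? GetInput(1).Evaluate()     // "true" input
                     : GetInput(2).Evaluate();    // "false" input stays untouched
}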

SkipFrame:

A very minimal node with only one in-/output, which I didn't make a screenshot for. Its main purpose is to avoid infinite loops by telling the current call graph to wait for the next frame, kind of like Unity's coroutines. It can also be used to wait for values that are not present until the next tick; e.g. entities used to not be fully initialized until the next frame, which meant that if you wanted to create an entity and use some of its script's functions, you'd have to SkipFrame.

________________________________________________________

So that's it for this time. I've given you an overview of some of the basic and advanced tools the scripting language has, which allow it to do pretty much everything other languages can, with some features specifically targeted at timed/parallel code. Next time, I'll be talking about the details behind this timing, and show some practical examples of it being used. Thanks for reading!

Juliean


 

Visual scripting basics

Last entry:
https://www.gamedev.net/blog/1930/entry-2261095-assets-creation-data-processing/

The last few weeks, my work has primarily been focused on my visual scripting system, so I decided to write a startup article to present the whole system to you.

The premise:

I assume everyone knows about scripting and its purposes. Whether you are building your own small-scale game (engine), or use a large engine like Unreal/Unity, you will eventually come across scripting, for writing an entire game with cutscenes etc... in pure C++/any other compiled language is no fun.

For those of you who haven't heard about visual scripting yet: it's a step further in terms of scripting. Instead of writing text-files in the scripting language of your choice, you place nodes in a graphical user interface and connect them to form a script. There are many advantages (in my regard), and also some disadvantages over regular scripting, but I'll get to that later.

The backstory:

I used AngelScript as my engine's scripting language for some time, so props to Andreas Jonsson at this point. While there weren't any immediate problems with the language, I felt it didn't fulfill my requirements once I started porting my 2D action-Rpg to the engine. The main reason was: cutscenes. For those of you who don't know, the Rpg-Maker that I was using back then had this thing simply labeled "events". It was a sort of early visual scripting, though very limited. You could place an "event" on the map, and then place (predefined) commands in order to form a cutscene. Though poor in terms of customization, it had some neat features: like the ability to place a "wait" command, which would delay execution of the "script" for a certain amount of time, without halting the rest of the application. Customization would happen via "call script" nodes, where you could place RGSS (ruby) script code... which sucked ass on many levels, mainly because the size of the text areas of those nodes was so limited.



So my goal was: I want similar functionality (wait without hanging the rest of the application, ...), but improved upon. So I searched for visual scripting, and came across Unreal's Blueprint system... and immediately loved it. So I decided to make my system a combination of the best of both worlds, and this is what I've ended up with now.

The script system's basics:

(DISCLAIMER: My system highly resembles Unreal's Blueprints in terms of looks. I took great inspiration from their system, I'm not ashamed to admit it. The graphics have been designed by hand though, and the implementation is completely independent of theirs.)

For regular coders and scripters, there are some details that might be interesting/new. In text, you just write line by line, and without control-structures, the lines are executed one after another. In my visual scripting system, there are two call flows: execution and calculation.



Execution flow:

Execution of nodes flows from left to right, following the white connections you can see in the image. Each node can only have one input (currently), but can have one or more outputs, which are called as warranted by the node (e.g. Condition calls true or false based on its attribute input). All regular nodes have to be connected to this flow, otherwise they won't be executed.

Calculation flow:

Nodes can also have attribute input, colored for the specific type. There are the following builtin types:

Bool (red)
Int (Cyan)
Float (Green)
String (Orange)

Then there are application supplied types:

Objects (Blue)
Structs (Pink)
Enums (Yellow)

You may notice that those are identical to the types from my typesystem I presented in an earlier article. Yep, this is where that type system originated from.

Regarding the flow of those attributes, the calculation/evaluation order in this case is from right to left. Once a node requests its attribute value, all appended nodes are evaluated to the end, and the values returned until they arrive at their destination. Obviously, there has to be a special type of node that supports this call, which I refer to as a "const node" ("pure" in Unreal's context).
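As a mental model, this right-to-left evaluation is a simple recursive pull - a simplified sketch, where Value and the member names are stand-ins, not the actual types:

// a const node pulls on its own inputs first, recursing right-to-left,
// then computes its side-effect-free result from them
Value ConstNode::Evaluate(void) const
{
    std::vector<Value> vInputs;
    for(const auto pInput : m_vInputNodes)
        vInputs.push_back(pInput->Evaluate());
    return Calculate(vInputs);    // node-specific calculation, no side effects
}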

__________________________________

Following this, you can see how the script executes. Now, there are a few more basic concepts needed to get a running script, though:

Trigger:



Where's the entry point of such a script, you may ask? Those are the so-called triggers. A trigger node is called from the application at a certain point in time, e.g. when the script is first executed, or on each tick. Due to the possibility of a node with a timing (like wait) being in the execution graph, a trigger cannot be assumed to have finished its full execution when it returns to the application, therefore return values are not possible. Also, a trigger can only be called once before its connected graph has finished, so calling it again before that will dismiss the call.
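That last rule boils down to a simple re-entry guard - sketched here with assumed names, not the engine's actual code:

// a second call while the connected graph is still running is dismissed
bool Trigger::Call(void)
{
    if(m_isRunning)
        return false;        // dismissed
    m_isRunning = true;      // reset by the runtime once the graph finishes
    StartGraph();            // may span many frames due to timed nodes
    return true;
}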

Script class/instance:

Aside from the script's execution itself, each script is in and of itself a "class", which can have exactly one parent. Before we talk about inheritance though, we have to address all the class-based resources that a visual script has:

Variables:




A script class can have member variables, just like a regular class. Those have a certain type, can be either single or array, can be const, can even be static, can be public, protected or private, and can be set up for automatic serialization (more about all of those in a different article). A variable has a default value, and can further be accessed in the call graph via getters and setters (except for const ones).

Methods:



A script class can also have member functions. Those have most hallmarks of traditional C++ members: they can be regular or const (see the point about the execution/calculation graphs), can have attribute input/output, can be static, can be public, protected or private, as well as (pure) virtual. A method has an in- and an out-node, and the graph has to be connected between them. Attributes are supplied via the in-node, and return-values are taken as connected to the out-node. Trigger nodes cannot be placed in a method, and methods only have access to fitting class properties (i.e. no non-const methods in a const method, no instance-variables in a static method).

Custom triggers:



Aside from methods, it's also possible to give a script class a custom trigger. Those have the same requirements as regular triggers: they can only be placed in the main execution graph, their call (via an activation node in this case) will not block, and any additional call while still running will be disregarded. What's the point of those over methods? Them being non-blocking is the main case. Say I have a virtual "OnUpdate"-method, which checks for the player to jump, as well as some other actions. I can now model the jump as a straight-forward execution of timed nodes - the only problem is, if I do this straight in this or any other method, it will block the "OnUpdate"-call until the jumping has finished. If this is what I want, good, but if I want to execute other code in the OnUpdate-method while jumping takes place, I have to make an "ExecuteJump"-trigger, and put the jumping code there. Kind of specific and technical; I hope everyone understands this. If not, I'm going to go into more detail about the timed nodes in another article.

Inheritance:

Now about inheritance. Only single inheritance is allowed. Other than that, it's just like C++: you can override any virtual method, and have to override any pure ones. You can call all public/protected parent methods and triggers, as well as access all those variables. One additional requirement is that if an application trigger is placed in the parent, its child cannot also place it. So if a parent wants to execute code on startup, and allow any child to potentially also do that, it has to provide a virtual method (not so sure if this is so good anymore though...).

___________________________________________

So, that's about it for the basics of a script-class. In the entity-component system, there is a "Script" component, which allows instantiation of such a script for an entity, and lets the script execute based on its triggers.

That's it for this time. Obviously this topic goes much deeper just on the design side of things (not even talking about the coding), since aesthetics and usability are a huge factor for every visual scripting system. So I'll add a few more things in future articles, especially some details about my own implementation (regarding timed events). Thanks for reading, until next time.

Juliean


 

Assets: Creation & data processing

Last entry:

https://www.gamedev.net/blog/1930/entry-2261068-advanced-asset-handling/

So after I've talked about the theory behind the new asset system, I'll now go on to show some of the implementation details. I'll start with the more interesting stuff, namely the creation of the actual assets.

The asset class:

To understand what's going on, we first need to look at how the actual assets are set up. First major point: assets and their respective data are separate entities, so they are set up using composition rather than inheritance. For this, there are two asset base classes:

// The generic asset base class
class BaseAsset
{
public:
    // ....
    const std::wstring& GetName(void) const;
    const std::wstring& GetFile(void) const;
    const std::wstring& GetAssetFile(void) const;
    const std::string& GetFileData(void) const;
    unsigned int GetType(void) const;
    const core::Variable& GetAttribute(const std::wstring& stName) const;
    const VariableMap& GetAttributes(void) const;
    bool HasAttribute(const std::wstring& stName) const;
    // ....
};
It stores the generic asset information like name, file, attributes etc... Then we have a derived class simply called Asset. This class is templated, because... I really like the CRTP, and it makes our lives much easier (as it usually does):

template<typename Type>
class Asset final : public BaseAsset
{
public:
    typedef sys::Pointer<Type> DataPointer;

    Type* GetData(void) { return m_data; }

private:
    DataPointer m_data;
};

Type equals the actual asset type. So an asset of a specific type is defined e.g. as asset::Asset<gfx::ITexture>. Whenever we need to process assets generically, we can use BaseAsset, which can be safely cast down to the Asset type using a specific templated Cast method. Well, it's almost safe - we assume in this case that there is no other class derived from BaseAsset. It's a price I'm willing to pay for not having to use another virtual method, and it can easily be made safer anytime.
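The Cast itself is then little more than a checked static_cast; a sketch under exactly that assumption (getAssetTypeId is a hypothetical type-id helper, not the engine's real one):

template<typename Type>
Asset<Type>& Cast(BaseAsset& asset)
{
    // safe only because nothing but Asset<Type> derives from BaseAsset
    AE_ASSERT(asset.GetType() == getAssetTypeId<Type>());
    return static_cast<Asset<Type>&>(asset);
}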

Regarding why composition over inheritance: it's not just because "well, everyone says it's better", but because there are some implicit benefits to this:

- Reloading of assets is much easier. We can change the internal data of the asset by, well, simply changing its data. No need to recreate it and fix all external references, or provide a "Reload"-method, or some such shenanigans; just straight-forward replacement of the interior data.
- Separation of asset declaration data and the actual implementation data. This is mostly for memory-layout concerns. If we had the actual asset derive from BaseAsset (say gfx::ITexture), then all the data members of the BaseAsset class would be part of the memory of the ITexture class, and therefore would have to be loaded whenever we want to use any data member of the ITexture class. Of course this depends on whether the parent members are in front of or behind the actual class's members, but you get the idea - whenever we only want to work with the asset implementation, we only work with the implementation, and nothing else. To counter the cost of the additional dereference, we would normally use a special memory allocator which places the content of the data pointer right behind the asset class, which gives us the best of both worlds.
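To illustrate the reload point in code - a sketch only, with SetData being an assumed helper on the asset rather than the engine's real API:

// swapping the data pointer leaves every external reference to the
// asset object itself intact; no fix-up pass needed
template<typename Type>
void ReloadAsset(Asset<Type>& asset, const AssetDataIO<Type>& io,
                 const std::string& stFileData, const VariableMap& mVariables)
{
    asset.SetData(io.LoadAssetData(stFileData, mVariables)); // hypothetical setter
}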

The IO-class:

Okay, enough reasoning about why it's done this way. Let's look at how this data pointer is loaded. For this, a specific base class has to be derived from:

class BaseAssetDataIO
{
public:
    virtual ~BaseAssetDataIO(void) = default;

    virtual BaseAsset* CreateAsset(const std::wstring& stName, const std::wstring& stFile, const std::wstring& stAssetFile, const std::string& stFileData, VariableMap&& mVariables, bool loadData) const = 0;
    virtual BaseAsset* CreateAssetEmpty(const std::wstring& stName) const = 0;
    virtual void SetAssetData(BaseAsset& asset, const std::wstring& stFile, const std::wstring& stAssetFile, const std::string& stFileData, VariableMap&& mVariables) const = 0;
    virtual void LoadAssetData(BaseAsset& asset) const = 0;
    virtual bool SaveAsset(const BaseAsset& asset, std::string& stDataOut) const = 0;
    virtual void HandleImport(BaseAsset& asset) const;
    virtual void HandleFinishLoad(void);
    virtual void OnUnloadAsset(const BaseAsset& asset) const = 0;
};
This one has a lot of virtual methods, but most of them are only needed for certain special cases. For example, the OnUnloadAsset-method is only there if the asset data is registered somewhere and requires deregistration, like a script class would. Also, this class is actually not what a user would use themselves - there is yet another templated class in between:

template<typename Type>
class AssetDataIO : public BaseAssetDataIO
{
public:
    virtual sys::Pointer<Type> LoadAssetData(const std::string& stFileData, const VariableMap& mVariables) const = 0;
    virtual bool SaveAssetData(const Type& data, std::string& stData) const { return false; }
    virtual sys::Pointer<Type> OnCreateDefault(const std::wstring& stName) const { return sys::Pointer<Type>(); }
    virtual void OnUnloadAssetData(const Type& data) const { }
};
It overrides most of the base class's methods (and declares them final), but offers equivalents to them, just that now we are not dealing with the abstract BaseAsset class, but with the concrete type. This also allows for greater generalization - CreateAsset, SetAssetData and LoadAssetData of BaseAssetDataIO all use the "AssetDataIO::LoadAssetData" method to generate their data member.

Okay, I lied, there is yet another class before the user actually generates their IO class. Well, it isn't actually required, but let me explain...

See the "mVariables" parameter in the LoadAssetData-method? This is a map of variant instances, which represent the parameters required for data processing, like texture size, etc... Of course, one could now do this:sys::Pointer TextureAssetIO::LoadAssetData(const std::string& stFileData, const VariableMap& mVariables) const{ const auto vSize = mVariables[L"Size"].GetValue(); const auto loadFlags = mVariables["LoadFlags"].GetValue(); const auto format = mVariables[L"Format"].GetValue(); // load data here}
but I can tell from experience that this starts to suck ass quite fast, especially since you'd have to write a declaration for this too:

asset::LoaderEntryVector TextureAssetLoader::GenerateDeclaration(void)
{
    return
    {
        { L"Size", core::generateTypeId<math::Vector2>(), false }, // bool parameter is "isArray"
        { L"LoadFlags", core::generateTypeId<LoadFlags>(), false },
        { L"Format", core::generateTypeId<TextureFormats>(), false }
    };
}
So I finally wanted to be able to 1) generate the asset attribute declaration from a method and 2) auto-expand the parameter map. I'm not going to show you the actual code for this, because it's an ungodly, horrible template mindfuck (I might make a coding-horror entry one day...), but it lets you declare your asset loader like this:

#pragma once
#include "ITextureLoader.h"
#include "ITexture.h"
#include "..\Asset\AssetIO.h"
#include "..\Math\Vector.h"

namespace acl
{
    namespace gfx
    {
        class Screen;

        class TextureAssetLoader : public asset::AssetDataBindingIO<ITexture>
        {
        public:
            TextureAssetLoader(const Screen& screen, const ITextureLoader& loader);

            sys::Pointer<ITexture> OnLoad(std::string stData, math::Vector2 vSize, LoadFlags loadFlags, TextureFormats format);

            static asset::LoaderEntryVector GenerateDeclaration(void);

        private:
            const ITextureLoader* m_pLoader;
            const Screen* m_pScreen;
        };
    }
}

// TextureAssetLoader.cpp
#include "TextureAssetLoader.h"
#include "Screen.h"

namespace acl
{
    namespace gfx
    {
        TextureAssetLoader::TextureAssetLoader(const Screen& screen, const ITextureLoader& loader) :
            m_pLoader(&loader),
            m_pScreen(&screen)
        {
        }

        sys::Pointer<ITexture> TextureAssetLoader::OnLoad(std::string stData, math::Vector2 vSize, LoadFlags loadFlags, TextureFormats format)
        {
            if(stData.empty())
            {
                if(vSize.x == 0 || vSize.y == 0)
                    vSize = m_pScreen->GetSize();

                return m_pLoader->Create(vSize, format, loadFlags);
            }
            else
                return m_pLoader->Load(stData, loadFlags, format);
        }

        asset::LoaderEntryVector TextureAssetLoader::GenerateDeclaration(void)
        {
            return
            {
                { L"Size" },
                { L"LoadFlags", LoadFlags::NONE },
                { L"Format", TextureFormats::UNKNOWN }
            };
        }
    }
}
Note that you still have to write a declaration, since I don't have a way of reflecting parameter names and default values. But it's already way easier than it was before, and once I go about implementing a parser/compiler for my custom reflection data, even writing this declaration will be passé. The load function then takes the expanded parameters, as well as the loaded file data, and can use this information to create the asset.

And, believe it or not, that's actually all you have to do to add a new asset type. Obviously, the implementation inside ITextureLoader is a bit more complicated (standard texture-loading procedure, you know), but one tiny registration-function call and you have added an asset which is fully integrated into the asset pipeline with (almost) all features. Of course, you also have to implement the saving routine as well as the loading of a default resource (in case the actual asset is missing at some point). Also, you obviously have to implement an editor interface for this type of asset, which I'm going to show maybe another time. But compared to what I had to do before to add a new asset type, minus all the new functionality every asset now gets, this is just so much easier.

So yeah, that's it for this time. Next time, I'm going to talk a bit more about the behind-the-scenes implementation of the asset system, but I hope you've got a good idea of how easy it is to deal with specific asset types now. Thanks for reading, and until next time!

Juliean


 

Advanced asset handling

Last entry:

https://www.gamedev.net/blog/1930/entry-2260953-asl-geometry-tessellation-shader/

Despite being extremely busy the last weeks, I still found some time to put into developing the engine. This time around, I decided to tackle one of the earliest features/problems I originally put in: the asset management.

The beginnings: K.I.S.S.

A short overview of how the system originated. I recall that this was one of the very first things I built. Which makes sense - you can do virtually nothing without having some sort of texture/mesh/... loaded. In keeping with the KISS-principle, and the desire not to overengineer, I put in a simple manual system, where for each type of resource a few classes had to be declared:

- A cache class, which stores all loaded assets and allows access by a key (which was usually a string name, since I mainly referenced assets manually back then). There is a templated class called core::Resources for this, which acts like a std::map with a few extra features. It is possible to declare a core::ResourceBlock, which records all loaded assets between a Begin()/End()-call (used for separating asset handling for plugin/game/scene assets) - see the sketch after this list. But the instance of this cache itself had to be instantiated manually, which usually happened in a Package class of the module (Gfx, Entity, Audio, ...), and be passed to the module's context as a member, so others can access it.

- A loader class. This had no common interface at all, but it usually got a reference to the cache, as well as any other resources/assets it might need access to (material needs textures, etc...). It then has a load-method which takes a file-name parameter. Assets were then listed in an XML file, like this:

ArkWalk ArkRun ArkJump ArkRunJump ArkShadow

This is due to the fact that, first of all, I didn't want to reference plain files directly as assets (since I've had very bad experiences with this in the Rpg-Maker), and also I usually had extra parameters to store which didn't fit in e.g. a plain png texture file. I had no toolchain back then, so I just manually put all assets in the list, which wasn't a big deal for the first few, small projects. Again, this loader was manually instantiated, placed in the Context-struct, and called when needed.
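To make the cache/block idea concrete, here is a bare-bones sketch of such a class - this is not the actual core::Resources code, just the gist of it:

#include <map>
#include <utility>
#include <vector>

template<typename Key, typename Value>
class Resources
{
public:
    void Begin(void) { m_vBlock.clear(); m_recording = true; }

    // returns everything loaded since Begin(), e.g. all scene assets
    std::vector<Key> End(void)
    {
        m_recording = false;
        return std::move(m_vBlock);
    }

    void Add(const Key& key, Value value)
    {
        m_mResources[key] = std::move(value);
        if(m_recording)
            m_vBlock.push_back(key);    // record for block-wise unloading
    }

    const Value* Get(const Key& key) const
    {
        const auto itr = m_mResources.find(key);
        return itr != m_mResources.end() ? &itr->second : nullptr;
    }

private:
    std::map<Key, Value> m_mResources;
    std::vector<Key> m_vBlock;
    bool m_recording = false;
};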

For a time, it was good...

... but after a while, I started to run into more and more trouble. Well, not exactly trouble, but as my editor/toolchain grew, and I ended up adding more and more types of assets, as well as working on a project which involves thousands of separate assets like textures, sounds, scripts..., it showed that the old system wasn't suitable anymore. So I decided to take a step forward, and adopt a dynamic, generalized asset handling like e.g. in Unity or Unreal 4. There are some very good, objective reasons for this:
There are some very good, objective reasons for this:

- Less (duplicate) code produced. To be fair, the loader/cache-thingy isn't all that big of a deal. However, in editor mode, I had to declare so many things for every single asset, which couldn't even be generalized without having all assets share some common code in the first place. I had to declare a separate asset window, with a tree (which involves different callback-interface implementations), a pick-controller for selecting an asset in the rest of the editor, etc. etc. Having a general asset management allows me to only write the code for each asset that is really needed (parsing the asset data, the editing/viewing tools, which icons to use, ...). Creating new assets, and deleting/renaming them, would all be unified.

- Automatic import of new assets. Although I already managed to implement "Import X" functions for most assets, it still needed to happen in the editor, which made adding multiple assets especially tedious, as well as adding some to the plugins (since I have no toolchain for those). With the new system, there will be no list in which assets are stored. Instead, on startup (in editor mode), the asset folder will be scanned for all files, which will then be available in the project. I could have done that in the old system, but I would have had to write it for each asset type separately.

- Hot reload of assets. A small, but very important feature. I don't know how many times I've had to restart the engine just because I wanted to replace a texture with another. Again, this could have been done in the old system, but only by writing it separately for each asset type.

- Partial saving of assets. Becoming more and more important as the project grows: in the old system, I had to save each type of asset all together (since you can't just partially update an XML-file on disk). Even worse, I had no "dirty"-flag mechanics,
meaning pretty much that the whole project had to be saved every time I pressed Ctrl+S. After a while, this does show, and I don't want to wait 5 seconds every time I save. With the new system, I can mark assets as dirty, which are then saved separately from everything
else, as sketched below.
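
A minimal sketch of that save loop, with hypothetical names (the real system obviously tracks more than a single flag):

    #include <vector>

    struct Asset
    {
        bool isDirty = false;
        // ... actual asset data
    };

    void saveAssetFile(const Asset& asset); // writes a single .aasset file, defined elsewhere

    void saveDirtyAssets(std::vector<Asset>& assets)
    {
        for(auto& asset : assets)
        {
            if(!asset.isDirty)
                continue; // untouched assets are skipped entirely

            saveAssetFile(asset);
            asset.isDirty = false; // clean again until the next edit
        }
    }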

- Binary packing. Another thing I didn't think about back then was how to actually produce a distributable game package. I used to just give out the working project, with all the XMLs and the plain image textures. For both space consumption
and loading speed, I want to offer packaging in an engine-specific binary format. Doing this with the old system was borderline impossible, but with the new asset system, data loading and asset generation are now separated, allowing me to get my asset data from
wherever I want, without the loading routine caring (except for whether or not it's binary, of course).

- Storing of user attributes. Actually, if I'm not mistaken, I believe this is a feature that most other engines don't have, but one I felt I really needed. When making my 2D Rpg, I always had the problem of "where do I store information about the character textures?".
For example, a character texture might have 1-4 directions (usually only 1 or 4), 1-X frames, and it will be displayed with a certain animation speed. Where do I store this information? It's not part of the base texture format, since I'm not making a
fully fledged 2d Rpg Engine. In the Rpg-Maker, this used to be part of the file name (f*ck me, if I ever changed the number of frames in a texture... not even talking about the fact that this has no visual editor support), so that's not an option. I first thought
about having an external database, where you could link textures to this sort of information. But well, with the new asset format, it's quite simple. I allow so-called UserAttributes to be stored in the asset declaration. Those can be added by any plugin,
with an arbitrary name and type, which can then later be accessed by the plugin code, and of course edited in the editor. Due to the natural loading order, only textures that are part of the project will have those attributes. A very good solution to the problem in my regard,
and it spared me a few headaches.

- Reference counting/return of default values. A simple issue with my older system was that I didn't have any reference counting for my assets. This usually wasn't a problem, but it also didn't allow for any discarding of assets while e.g. a level is running,
so for that sake, introducing references via the generalized asset system for future unloading of unused assets to save memory is certainly a good thing. Also, I ran into trouble with loading order/missing assets, which my new system now allows me to completely sidestep.
This is due to the asset system now being able to generate an "empty" asset with default values (checkerboard texture, empty script...) in case an asset is accessed while not being loaded. This means less hassle checking for nullptrs when cross-accessing assets,
as well as no more losing of stored references when somewhere an asset isn't loaded/isn't loaded in the right order. A sketch of the idea follows below.
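
A sketch of the fallback idea, building on the hypothetical cache sketched earlier; the fallback instance stands in for the checkerboard texture, empty script, and so on:

    #include <string>

    // Assumes the ResourceCache sketch from above; Asset is any asset type.
    template<typename Asset>
    Asset& getAsset(ResourceCache<Asset>& cache, Asset& fallback, const std::wstring& name)
    {
        if(auto pAsset = cache.Find(name))
            return *pAsset; // asset is already loaded

        // not loaded (yet) or missing entirely: hand out the shared default,
        // so callers never have to check for nullptrs or care about load order
        return fallback;
    }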

- Removed the need for manual cross-access of assets and parsing of attribute types. Again, more of a code-duplication/complexity problem than anything else. Usually, assets need to access other assets. In the old system, this meant
I had to pass the cache-class to the loader, read out the key in the correct type, access it, usually check for nullptr-values & emit a warning if so, and so on. For attributes (texture size, mesh type), I had to access the text-value from either an XML-node
or a string, and parse it to the correct type, making range-checks for enums all the time, etc. Now that I already have a global type system, both of those issues can easily be pulled out of the specific asset format. Every generic asset now has an "attribute"
header, where all this information is. The name/type of each attribute is declared when registering the asset type, and the loading/parsing is done in the generic asset layer, so that the actual loader just gets whatever it needs via function parameters.
Way, way easier, and it removed a lot of direct coupling between different asset caches/loaders.

Phew, I had much more to say about this than I originally planned to. But I also think it's important to talk about this. I've heard many discussions concluding that generalizing asset management isn't worth it, and that you should just KISS. Well,
KISS is certainly the way to go for 1) starting a project and 2) small/medium-scale (throwaway/one-time) projects. For anything of larger scale, as you can see, there are many benefits to having a generic asset layer. As mentioned before,
you can certainly do all the features mentioned above with separate hand-written asset Cache/Loader-classes, but the amount of near-identical duplicated code you'd have to write is ungodly. I didn't count exactly how many LoC I saved
by porting the system, but I think it was something along the lines of -10k (out of overall 180k for the engine right now), whereas the generic asset library has around 3k LoC. Now I know this isn't the best of metrics, but you get the idea. Saving 7k LoC
while providing a ton of new features is certainly a win.

The new asset file format:

Okay, I think this post is long enough already, but I just want to give a brief glimpse at how the new format looks. For those of you already familiar with Unity/Unreal, it shouldn't surprise you that a file with the extension "aasset" is generated
for every imported asset. This is how it looks:
    <Asset>
        <Type>TextureAsset</Type>
        <Name>ArkHit</Name>
        <File>ArkHit.png</File>
        <Attributes>
            <!-- attribute entries (flattened in the original post: 0 0 None Unknown 0.034) -->
        </Attributes>
        <UserAttributes />
    </Asset>

For editor-mode assets, I decided to go with plain XML, for version control's sake (we actually had real trouble with Unreal's binary assets in our recent university projects). So let's go over the format quickly:

- The Type-Tag stores the type identification string, as registered in the type system. This is used to look up which type of asset has to be generated.
- The Name-Tag stores the name by which assets can be referenced. This is pretty much obsolete; I just kept it because it made porting a bit easier, since I've still got to refine the whole access/reference part to support duplicate file names
in different plugins/folders, etc.
- The File-Tag references the data-file. As you will see in the implementation, the new asset format separates asset and data generation. In this case, it references a texture file, but for anything that is already engine-specific (prefabs, scripts...),
there is an alternative Data-Tag, which stores the asset data inline (so that I don't have to generate a separate file for anything that will exclusively be edited by the engine). Having a File-Tag, on the other hand, is useful for images.
I also didn't like Unreal's approach of storing all data completely inside the asset and not checking in the source file at all, since this way I often had to ask our artists to give me the texture separately so I could make a small edit and reimport it.
- The Attributes-Tag stores the base attributes of the asset, which are declared at asset registration. Due to this declaration, the parser knows exactly which type it has to translate those values to. A complicated template-mindfuck will then expand
the parameter list, so that the loader gets those as parameters of the type it specified, but you'll see more of that in the future.
- The UserAttributes-Tag works similarly to the normal Attributes, but as I mentioned, it is application/plugin-supplied.

And that's it. I currently don't have a binary format, but implementing one shouldn't be a problem.

So far, thanks for reading. Next time, I'll go into more detail about the actual implementation of the asset loading/handling.

Juliean


 

ASL: Geometry & Tessellation-Shader

Last entry:
https://www.gamedev.net/blog/1930/entry-2260794-acclimate-shading-language/

After a long time of not being productive again, I've finally come around to fully implementing the current-gen API features in my engine's shading language, now called "ASL" (acclimate shading language).

One thing I noticed when learning geometry, domain and hull shaders in DX11 was that they have quite a bulky syntax. IDK, it's straightforward and not all that complicated to understand, but there is quite a lot to remember about the exact syntactical structure of those things. So in my own implementation, I tried to keep the requirements for coding any of those shaders as simple as possible.

For the following shaders, I assume you already have at least a basic understanding of the concepts.

Geometry-Shader:

So let's have a look at what a geometry-shader looks like, shall we?

    geometryShader
    {
        input Stage
        {
            matrix mViewProj;
        }

        primitives
        {
            in: point[1];
            out: triangle[4];
        }

        main
        {
            float3 vRight = float3(0.0f, 0.0f, 1.0f);
            float3 vUp = float3(1.0f, 0.0f, 0.0f);

            vRight *= 0.5f;
            vUp *= 0.5f;

            float3 vVerts[4];
            vVerts[0] = in[0].vPosition.xyz - vRight - vUp; // Get bottom left vertex
            vVerts[1] = in[0].vPosition.xyz + vRight - vUp; // Get bottom right vertex
            vVerts[2] = in[0].vPosition.xyz - vRight + vUp; // Get top left vertex
            vVerts[3] = in[0].vPosition.xyz + vRight + vUp; // Get top right vertex

            for(int i = 0; i < 4; i++)
            {
                out.vPosition = mul(float4(vVerts[i], 1.0f), mViewProj);
                Append();
            }
        }
    }
This is a simple geometry-shader, expanding a point to a floor-aligned quad. The input block, as known from before, offers cbuffer-variables supplied from the application side.

The "primitives" block is new and geometry-shader only, and is there for definition of what primitives the geometry-shader gets, and what it should output.

Inside the "main" block, you can access vertex shader output via "in[X]" (uses the vertexShader-blocks "out" definition not seen here), and write to "out", calling "Append()" after each vertex is written. "Restart()" can be called to emit the next (seperate) primitive.

The geometryShader can either output its own vertex structure to the pixel shader by specifying an "out" block, or it will use the previous shader's output structure.

So in conclusion, using geometry-shaders is quite easy with ASL. Note that this also parses to OpenGL4 (unfortunately I haven't implemented OpenGL-support for the upcoming shader-types yet, so I can't say for sure they will work there too).

Tessellation-Shader (Hull & Domain):

Tessellation was one of the key features of current-gen hardware. These shaders especially always deterred me from getting started, since there is really a whole lot to keep in mind for both the Hull- & Domain-Shader, at least in DX11. You have a constant hull function, the regular hull shader function with inputs of multiple vertices as well as the constant data, and PointID/PatchID and UV-coordinates for the Domain-Shader... let's just say, there are even more little things to keep in mind syntactically than for the geometry-shader.

Hull-Shader:

But let's have a look at how you write them with ASL:

    hullShader
    {
        settings
        {
            domain: QUAD;
            partitioning: FRAC_EVEN;
            outputtopology: TRI_CW;
            outputcontrolpoints: 4;
            inputcontrolpoints: 4;
        }

        out
        {
            float3 vPos;
        }

        constOut
        {
        }

        constFunc
        {
            constOut.Edges[0] = 1.0f;
            constOut.Edges[1] = 1.0f;
            constOut.Edges[2] = 1.0f;
            constOut.Edges[3] = 1.0f;

            constOut.Inside[0] = 1.0f;
            constOut.Inside[1] = 1.0f;
        }

        main
        {
            out.vPos = in[PointID].vPos.xyz;
        }
    }
The "settings" block obviously is hull-shader specific, as you define the tessellation-paramters.

"constOut" and "constFunc" are the constFunction, which is a completely seperate function in DX11. There, you calculate the tessellation factors, and any other data that is shared between the entire patch. Note that the "Inside/Edges" are defined for you automatically based on what domain you specify.

In the main-function, you have a few things defined for you. Obviously "in" is the output of the vertex shader, as an array with as many elements as you specified in "inputcontrolpoints". PointID is the ID of the current vertex, and there is also a PatchID for the ID of the entire patch.

Domain-Shader:

And now for the domain-shader:

    domainShader
    {
        input Instance
        {
            matrix mWorld;
        }

        input Stage
        {
            matrix mViewProj;
        }

        out
        {
            float4 vPos;
        }

        functions
        {
            float3 Bilerp(float3 v0, float3 v1, float3 v2, float3 v3, float2 i)
            {
                float3 bottom = lerp(v0, v3, i.x);
                float3 top = lerp(v1, v2, i.x);
                float3 result = lerp(bottom, top, i.y);
                return result;
            }
        }

        main
        {
            float3 vPos = Bilerp(in[0].vPos, in[1].vPos, in[2].vPos, in[3].vPos, UV);
            out.vPos = mul(float4(vPos, 1.0f), mul(mWorld, mViewProj));
        }
    }
There is really little special about it here. You have an "in" block with as many controlpoints as you specified in "outputcontrolpoints", using the hull-shader's "out" for the vertex elements. "UV" is defined for you, depending on the domain - for a quad it's a float2, for a triangle it would be a float3. In the "functions"-block, I've specified a Bilerp-function, which could probably be pulled into the standard repertoire of the shading language, as it is needed extremely often.

And that's it! You can do all your displacement-mapping etc. here, since the domain-shader also has access to a "textures"-block, and so on...

Conclusion:

So for those of you familiar with the Domain/Hull-Shader, I think it should be clear that my shading language takes quite a bit of work off your shoulders. From what I've seen while building the parser for those, you normally have to specify lots of duplicated data: e.g. while "outputcontrolpoints" exists in DX11 as well, you still have to manually write it as the array-size in the hull/domain-shader input signature. You also always have to figure out the correct combination of Inside/Edge-tessellation-factors yourself, as well as what UV you want as input in the domain-shader.

So that's it for now. Next time, I'll probably talk about some more specific block-types that are needed for developing shaders, like "constants" and "extensions", and probably also more complicated stuff like templating shaders (which I still have to partially implement). Thanks for reading, and until next time, I've attached a somewhat complete shader example of the tessellation terrain shader I'm currently working on, based on the Terrain Tessellation example from NVIDIA's DX11 SDK.

PS:

For those of you that regularly work with plain HLSL/GLSL-shaders, I want to ask a question: given what you have read so far, would you rather write your shaders in a language like mine (given that there is full documentation, etc.), or still in a plain shader language, and why? I would be very interested to hear some different opinions.

Juliean


 

Acclimate shading language

Last entry:
https://www.gamedev.net/blog/1930/entry-2260721-a-custom-variant-class/

Now for something completely different. Since I'm currently working on the graphics module, I thought I'd talk about one of the coolest features of the engine in that regard: the custom shading language.

Why a custom language?

I started the renderer as DX9, and later added DX11-support. At some point, I also added GL4-support for a school project. The render backend and interface are built so that you can write graphics algorithms independent of the API actually used. The only problem was that I still had to write each shader in 3 different variations. I noticed that all those different shaders really just differed in syntax, but most of the actual code was nearly identical. So my idea was to write a kind of meta-language, from which I can generate shaders for all the different APIs, while only writing each shader once.

The syntax:

So syntactically, what I came up with is a concept of blocks & lines that make up the base file format. Take a look at a vertex-shader:

    // the definition of the vertex shader
    vertexShader
    {
        // defines the vertex-attributes, handed over by the input assembler
        in
        {
            float4 vPos;
        }

        // defines the output of this shader
        out
        {
            float4 vPos;
        }

        // defines a block of shader constants, "Instance" is an engine-specific macro for a certain cbuffer ID
        input Instance
        {
            matrix mWVP;
        }

        // the main execution point of the vertex shader
        main
        {
            out.vPos = mul(in.vPos, mWVP);
        }
    }
So as you can see, the definition of the shader is really slim. Compare it to the equivalent shader in DX11:

    struct VS_INPUT
    {
        float4 vPos : SV_POSITION;
    };

    struct VS_OUTPUT
    {
        float4 vPos : SV_POSITION;
    };

    cbuffer Instance : register(b0)
    {
        matrix mWVP;
    }

    VS_OUTPUT mainVS(VS_INPUT input)
    {
        VS_OUTPUT output = (VS_OUTPUT)0;
        output.vPos = mul(input.vPos, mWVP);
        return output;
    }
There is a lot more syntactical "garbage" you have to take care of in plain HLSL5. My language automatically takes care of that, allowing for faster creation of shaders. I think most of what is in the vertex-shader I posted should be pretty self-explanatory, alongside the additional comments. Let's look at a pixel-shader next. The concept is pretty much the same:

    pixelShader
    {
        // the pixel shader only has an "out"-block, as its input is implicitly generated from the vertex shader's "out"
        out
        {
            float4 vColor;
        }

        // this block is new - used textures are declared here
        textures
        {
            2D Texture;
        }

        main
        {
            out.vColor = Sample(Texture, float2(0.5f, 0.5f));
        }
    }
Again, pretty lightweight. Maybe I'm biased towards my own work, but I'd consider writing shaders in such a manner way easier and faster than writing "plain" HLSL/GLSL-shaders. Of course, you first have to learn the syntax.

Writing actual shader code:

As for writing actual shader code (what is in the "main" methods), it's mostly pretty straightforward. I've based the language syntax on HLSL5, and all the parser really has to do is replace some function names that are unique to the language (e.g. Sample()), as well as replace all used types (float, matrix) for GLSL-targets, as sketched below. So if you are familiar with writing HLSL-code, then coding in the acclimate shading language should be extremely straightforward.
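
To illustrate, a minimal sketch of such a per-target replacement table - my guess at the approach, the real parser is certainly more involved:

    #include <string>
    #include <unordered_map>

    // Maps HLSL-style source tokens to their GLSL spelling; unknown tokens
    // (identifiers, literals, ...) pass through unchanged.
    std::string translateToken(const std::string& token, bool targetGLSL)
    {
        static const std::unordered_map<std::string, std::string> glslTypes = {
            { "float2", "vec2" },
            { "float3", "vec3" },
            { "float4", "vec4" },
            { "matrix", "mat4" },
        };

        if(!targetGLSL)
            return token; // HLSL-based targets keep the original spelling

        const auto it = glslTypes.find(token);
        return it != glslTypes.end() ? it->second : token;
    }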

Moar blocks pls:

In order to support the most basic shader features, there are a few blocks that are shared between the different types of shaders, while other blocks are shader-specific. Some of the shared ones you've already seen, but there are a few more:

- out: The output of the current shader stage; acts either as input for the next stage or writes to the framebuffer (in case of a pixel-shader).
- textures: The main declaration point for textures of all kinds. This eventually needs to be reworked a bit, once I want to support regular buffers, since they share texture registers, at least in DX11.
- input: The equivalent of DX11-cbuffers and GL4-uniform-buffers. The actual buffer qualifiers (Instance, Stage, ...) are given by the using framework, since it's easier to remember a name than a number, and also easier to change. That's the one thing really hard to emulate right in DX9; everything else is quite easily translated to the different shading languages.

- functions: This block is new. Since you can't freely write code all over the place, this block is needed to be able to declare functions.

    functions
    {
        float calculateSomething(float4 vPos)
        {
            return 0.5f;
        }

        float calculateSomethingElse(float2 vTexcoords)
        {
            return vTexcoords.x / vTexcoords.y;
        }
    }
Wrapping it up:

So those are the basics of the language. I consider it pretty good even objectively, and I really don't want to write shaders in plain HLSL/GLSL anymore. That's not even all, though - there are a few more features, which I'm going to talk about next time. For example, there are geometry-shaders, but also direct support for permutations, and a plugin-system for shaders. Thanks for reading, and see you next time!

PS: As a practical example, I've attached an SSAO shader written in this language. Enjoy!

Juliean


 

A custom variant class

Last entry:

https://www.gamedev.net/blog/1930/entry-2260689-generic-execution-routine-for-type-based-code/

So this time, as promised, I'm going to describe how the heart of the type-system, the variant class, is designed and used. It uses both the TypeId and the generic execution routine, plus some additional template magic. This class is called "Variable".

The data:

Looking just at the private data this class stores, this is what we have:
    class ACCLIMATE_API Variable
    {
    private:
        void* m_pData;
        TypeId m_type;
        bool m_isArray;
    };
That means there is a 16-byte overhead associated with every variant instance, plus whatever is needed by the object/enum/struct wrapper. This seems like a lot, but appears to be negligible in practice - benchmarks show that using this variable class instead of a pure float for generic calculations takes only roughly twice the time. I'm going to show these benchmarks in more detail at another time, though.

Construction:

So one thing that we want to be able to do is create an instance of the variable from any given type, like this:

    const float value = 20.0f;
    const core::Variable variable(value);
For that reason, the variable-class has two templated constructors (something that I really haven't seen used that often):

    template<typename Type>
    Variable::Variable(Type& value) :
        m_type(generateTypeId(value)),
        m_isArray(isArray<Type>::value)
    {
        m_pData = initData<Type, isArray<Type>::value>::Call(value);
    }

    template<typename Type>
    Variable::Variable(const Type& value) :
        m_type(generateTypeId(value)),
        m_isArray(isArray<Type>::value)
    {
        m_pData = initData<const Type, isArray<Type>::value>::Call(value);
    }
At first I had many explicit ctor overloads, but using the type-routines presented before, it is easy to generate the type-id dynamically, and to determine whether the type is an array-type statically via templates. So we only need one const& and one non-const& ctor. The trick with templated constructors is
that the template can only be resolved implicitly via an argument, which in our case is exactly what we want.
The initData-template takes care of generating the correct memory allocation operation. It basically resolves to only the allocation itself at compile time, which is pretty neat; a sketch of the idea follows below.
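
For illustration, a simplified sketch of what an initData-style helper can boil down to for the plain, non-array case (the real template also handles arrays and the object/enum/struct wrappers):

    // Primary template: the non-array case simply copy-constructs the value
    // on the heap; the bool parameter would select an array specialization (omitted here).
    template<typename Type, bool IsArray>
    struct initData
    {
        static void* Call(const Type& value)
        {
            return new Type(value); // the whole template collapses to this one allocation
        }
    };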
On the other hand, we have a constructor that takes only the TypeId:

    Variable::Variable(TypeId type, bool isArray) :
        m_type(type),
        m_isArray(isArray)
    {
        if(m_type != invalidTypeId)
            m_pData = CallByType<initData>(m_type.GetId());
        else
            m_pData = nullptr;
    }
This constructor now uses the dynamic execution routine, to perform the very same dynamic allocation.

Destruction:

Since we have a dynamic allocation in the constructor, we somehow need to clean that up in the destructor. But in the destructor, we don't have the type-information available in form of a templated type. Again, we can make use of the dynamic execution:

    Variable::~Variable(void)
    {
        DeleteData();
    }

    struct deleteData
    {
        template<typename Type>
        static void Call(void* pData)
        {
            delete (Type*)pData;
        }
    };

    void Variable::DeleteData(void)
    {
        if(m_pData)
        {
            CallByType<deleteData>(m_pData);
            m_pData = nullptr;
        }
    }
In this case, the deleter-struct is really simple, since the data pointer to be deleted is passed in directly.

Access to the data:

The last part to discuss is how to access the data of this class. At some point, the value that a Variable is storing has to be accessed directly. For that, there is a SetValue/GetValue function pair:

    template<typename Type>
    void Variable::SetValue(typename returnSemantics<Type>::value value)
    {
        ACL_ASSERT(m_type.GetType() == getVariableType<Type>::type);
        ACL_ASSERT(m_isArray == isArray<Type>::value);
        setObjectData<Type, isArray<Type>::value>::SetData(m_pData, value, m_type.GetId());
    }

    template<typename Type>
    typename returnSemantics<Type>::value Variable::GetValue(void) const
    {
        ACL_ASSERT(m_type.GetType() == getVariableType<Type>::type);
        ACL_ASSERT(m_isArray == isArray<Type>::value);
        return getObjectData<Type, isArray<Type>::value>::GetData(m_pData, m_type.GetId());
    }
The returnSemantics is used again to guarantee correct semantics (value, reference, pointer) for whichever type is being used. Two assertions check whether the variable's type matches the requested type (those two methods are not there for altering the internal type). And then we have two more magic templates, which
basically resolve to a simple static_cast at compile time. Now we can make use of the class like this:

    void function(core::Variable& variable)
    {
        int value = variable.GetValue<int>(); // return value templates are a thing, you just always have to specify the template by hand
        value += 25;
        value *= 2;
        variable.SetValue<int>(value);
    }

    int someValue = 50;
    core::Variable variable(someValue);
    function(variable);
    sys::log->Out(variable.GetValue<int>());
This internally resolves to the following code (without the type safety checks):

    void function(void* pVariable)
    {
        int value = *(int*)pVariable;
        value += 25;
        value *= 2;
        *(int*)pVariable = value;
    }

    int someValue = 50;
    void* pVariable = new int(someValue);
    function(pVariable);
    sys::log->Out(*(int*)pVariable);
    delete (int*)pVariable;
There is really nothing more to it, so the only additional cost of using the variable-class lies in the extra data overhead, and the memory allocation/fragmented memory access.

Eliminating the costs:

While both of these costs appear to be negligible in practice according to my benchmarks, there are still a few options for making them even more insignificant. This is just some stuff I've come up with, but haven't applied yet:

- Store the value of POD-types inside the pointer. Obviously, beginning with string and ending with structs, the more complex types require some additional information which does not fit into the 4 bytes of the void pointer (I'm going to talk about those in more detail in another entry).
But for int, float and bool, we could do some seriously evil memory hacking and just misuse the void-pointer to store the value directly, instead of allocating a separate memory area (see the sketch below). There are a few concerns here, mostly regarding the size of the pointer/type itself, but I think those could
be very easily hidden inside the class. I really do think it would be worth it in the end, since it saves us 4 bytes of total class size, an allocation/deallocation, and a pointer dereference.
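
A minimal sketch of the packing trick (my own illustration, not engine code); memcpy does the type-punning, so we don't rely on undefined casts:

    #include <cstring>

    static_assert(sizeof(float) <= sizeof(void*), "value must fit into the pointer");

    // store a float directly in the pointer field instead of allocating it
    void* packFloat(float value)
    {
        void* packed = nullptr;
        std::memcpy(&packed, &value, sizeof(value));
        return packed;
    }

    float unpackFloat(void* packed)
    {
        float value;
        std::memcpy(&value, &packed, sizeof(value));
        return value;
    }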

- Pack the type-information to reduce the class size. Obviously, the TypeId has not been written with compactness in mind. We require a total of 12 bytes to store all the type information: 4 each for the static type, the dynamic type, and the isArray-information. The static type only needs a very limited number of bits.
As for the dynamic type, I don't regard 4 billion possible types as necessary for my needs either. So I could just compact the type-id from 12 into 4 bytes, having the array-flag take up 1 bit, the static type the next 3, and the remaining 28 bits (we don't need unsigned) be reserved for the dynamic type, still leaving
room for 268 million different object/enum/struct types (see the layout sketch below). I would still have to check whether the gain of only having to access half the memory really outweighs the cost of having to bitshift each individual piece of type information out of the packed structure, but I'm pretty sure it does, especially since the new class size (8 bytes)
would still be nicely aligned.
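
The packed layout could look roughly like this (a sketch with hypothetical names; the exact bitfield layout is compiler-specific, which is fine as long as the id never leaves the process):

    #include <cstdint>

    struct PackedTypeId
    {
        std::uint32_t isArray     : 1;  // array flag
        std::uint32_t staticType  : 3;  // VariableType (8 values fit into 3 bits)
        std::uint32_t dynamicType : 28; // object/enum/struct id, ~268 million values
    };

    static_assert(sizeof(PackedTypeId) == 4, "packed id should be 4 bytes");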

Practical usage:

So at the end of this article, some quick examples of what you can really use this class for. Until now, I've only shown code where the variable class replaces a direct pass of an int to a function, which is not really practical. So here is what you can do:

- Settings: I use the variable-class in the settings-system. This allows me to store all known types as settings - from simple PODs up to structs (for the screen-size), enums (for graphical settings), and objects (the windowskin-texture for a 2d rpg). This all happens without manual parsing, so you can just say:

    void BaseEngine::OnSettingsChanged(const game::SettingModule& settings)
    {
        auto pScreen = settings.GetSetting(L"screen");
        auto vScreenSize = pScreen->GetValue<math::Vector2>();
        //...
    }
- (Visual) scripting: I'm also using this class for my visual-scripting system; in fact, this is the use case for which I came up with it. It is used to represent parameters/return values of functions, etc. I'm using it for the editor/GUI as well as for the runtime, which again is quite fast enough for now (though
I could still do some very serious optimization on that). Just to show how you can use it, it's really very easy again:

    void WrapperForSomeScriptFunction::Execute(double dt, Stack& stack)
    {
        auto speed = stack.GetAttribute<float>(0);
        auto vDirection = stack.GetAttribute<math::Vector2>(1);
        auto* pEntity = stack.GetAttribute<ecs::Entity>(2);

        auto pPosition = pEntity->GetComponent<Position>();
        const auto vTargetPosition = pPosition->vPosition + vDirection * speed;

        stack.ReturnValue(0, vTargetPosition);
    }
- Serialization: The Variable-class obviously fits very well for making the loading of typed values uniform. There is e.g. a VariableLoader::FromNode-method which takes an XML-node (which has the type-information and the value stored) and returns a variable-instance. There is obviously a VariableSaver::ToNode-equivalent.
Those are both used not only for serialization of the scripting, but also for the entity-system (which was the original use-case I presented a while ago), and for game-state-serialization.

Wrapping it up:

So that's it for this time. I hope I managed to get across how useful such a variant-class is, especially since I've tailored mine exactly to the needs of the system. Next time, I'll cover how objects/enums/structs are handled in more detail. Thanks for reading!

Juliean


 

Generic execution routine for type-based code

Last entry:
https://www.gamedev.net/blog/1930/entry-2260667-a-simple-and-fast-dynamic-type-system/

Last time, I introduced my runtime type system. So this time around, I'll talk about how one can actually run code based on it.
Remember that there is a static type, which divides into multiple PODs, Objects, etc.? This allows us to check a type-id and execute code based on what it is:

    if(typeId.GetType() == core::VariableType::INT)
    {
        // type can be statically casted to int
    }
    else if(typeId.GetType() == core::VariableType::OBJECT)
    {
        // type is stored in the "Object"-wrapper. While we can't access the actual type just based on that,
        // this wrapper-class has a few methods to help us with it.
    }
Now in practice, sometimes one might have to query for a specific concrete type. Usually, you do that using the specific TypeId generated by core::generateTypeId<Type>().
If you require only the static type, most of the time there is a block of code that performs a different routine for each specific type. This looks like so:

    switch(type)
    {
    case VariableType::BOOL:
        break;
    case VariableType::FLOAT:
        break;
    case VariableType::INT:
        break;
    case VariableType::STRING:
        break;
    case VariableType::OBJECT:
        break;
    case VariableType::ENUM:
        break;
    case VariableType::STRUCT:
        break;
    default:
        ACL_ASSERT(false);
    }
Normally you would also have to check whether the type is an array, and write a separate switch block for that. Yikes! Now, the switch-statement itself is actually okay; in the current design there is really no good way around
it for what we want to do. But I still want to encapsulate it, so that a user of the type-system doesn't have to write an actual switch-statement to execute type-based code. Here comes template magic again.
We make a callByTypeSingle()-function that takes a functor, and an unlimited number of parameters (C++11 variadic templates, yay):

    template<typename Functor, typename... Args>
    auto callByTypeSingle(VariableType type, Args&&... args) -> decltype(Functor::template Call<int>(args...))
    {
        switch(type)
        {
        case VariableType::BOOL:
            return Functor::template Call<bool>(args...);
        case VariableType::FLOAT:
            return Functor::template Call<float>(args...);
        case VariableType::INT:
            return Functor::template Call<int>(args...);
        case VariableType::STRING:
            return Functor::template Call<std::wstring>(args...);
        case VariableType::OBJECT:
            return Functor::template Call<Object>(args...);
        case VariableType::ENUM:
            return Functor::template Call<Enum>(args...);
        case VariableType::STRUCT:
            return Functor::template Call<Struct>(args...);
        default:
            ACL_ASSERT(false);
        }
    }
So this hides the ugly switch statement by calling "Call" on the functor, with the type passed as template argument to the Call-method. Note that I use the decltype-mechanic to determine the return type of Functor::Call().
Sounds complicated at first, but it's really not hard to use. For example, this is how you perform a type-to-string conversion:

    struct toString
    {
        template<typename Type>
        static std::wstring Call(const Variable& variable)
        {
            return conv::ToString(variable.GetValue<Type>());
        }

        template<>
        static std::wstring Call<std::wstring>(const Variable& variable)
        {
            return variable.GetValue<std::wstring>();
        }

        template<>
        static std::wstring Call<Object>(const Variable& variable)
        {
            auto& object = variable.GetValue<Object>();
            return objectToString(object);
        }

        template<>
        static std::wstring Call<Enum>(const Variable& variable)
        {
            auto& en = variable.GetValue<Enum>();
            return enumToString(en);
        }

        template<>
        static std::wstring Call<Struct>(const Variable& variable)
        {
            auto& s = variable.GetValue<Struct>();
            return structToString(s);
        }
    };

    std::wstring valueToString(const Variable& variable)
    {
        ACL_ASSERT(!variable.IsArray());
        return variable.CallByTypeSingle<toString>(variable);
    }
The toString-functor has multiple overloads of a static templated Call-method. This is due to the fact that objects, enums, and structs have custom toString-methods, while all POD-types can be converted using the conv::ToString-function.
It is also possible to optimize certain types, like directly returning the value of the variable in case it is already a string, instead of having to pass it to the conv::ToString-method.

So there you go. This is how you develop functionality for the type-system. The syntax is a little verbose, but you can always hide it behind a helper-function like I did here. As you have seen in the code example above,
I've used a class called "Variable", which acts as a variant. This is what makes the type-system usable. I'll explain it next time. Thanks for reading!

Juliean


 

A simple and fast dynamic type system

Last article:

https://www.gamedev.net/blog/1930/entry-2260651-ecs-iii-queries-and-horrible-component-serialization/

The type system:

As I briefly talked about last time, the need to handle things like serializing components automatically brought up the need for a custom dynamic type system. What are the requirements of this type system?

- Must allow writing generic code for handling any type (i.e. a GUI value box that takes any type and allows modification of it)
- Should be as close to the speed of pure type access as possible (so no virtual calls, no RTTI)
- Type info must be serializable (stable across compilers, so again no RTTI)
- Inheritance information is not required (the intended use doesn't account for base/derived classes)
- No class should have to be modified to support the type-system (so no deriving from base classes, etc.)

So C++ RTTI is out of the question anyway. The way I ended up going is a type-id with a static and a dynamic type. What's the difference? This is the static type:

    enum class VariableType
    {
        UNKNOWN = -1,
        BOOL,
        FLOAT,
        INT,
        STRING,
        OBJECT,
        ENUM,
        STRUCT
    };

    // query the VariableType of any c++-type
    getVariableType<bool>();         // BOOL
    getVariableType<std::wstring>(); // STRING
    getVariableType<Texture>();      // OBJECT
    getVariableType<BlendMode>();    // ENUM
Those are the "primitive" types known to the type system. As you can see, template magic plays a huge part in it. As to why I chose exactly those values:

- The first four are all POD. The separation between the POD-types is there because it makes it easier to deal with bool, float, int and string, which are all just stored by value in the type-system. As you'll see, the type-system relies heavily on a mechanism that calls a templated functor, which receives the target c++-type based on this enum. So having all POD-values separate makes it very easy to write a generic routine, as you'll see later.

There is also a second component to the Type-ID, called "ImToLazyToThinkOfAGoodName", or just "ObjectType" (because that's what it was originally used for). It's just an unsigned int, and its use is explained below.

- Pretty much all non-POD classes are dealt with as "OBJECT". Objects cannot be created by the type system; they are instead referenced by a key from a pool. Objects support optional weak-reference counting (SFINAE ftw). Objects have to be registered with their C++-type and a name. They then get a dynamic ID (CRTP strikes again):

    ObjectRegistry::RegisterObject<Texture>("Texture", textureAccessor);
    ObjectRegistry::RegisterObject<Cbuffer>("Cbuffer", nullptr);

    isObject<Texture>();      // true
    isObject<int>();          // false
    getObjectType<Texture>(); // = 0
    getObjectType<Cbuffer>(); // = 1
    getObjectType<int>();     // error: is not an object
Internally there is an "Object" wrapper-class, which uses said CRTP (as I already employed it in my Entity/Component system) to generate a linear type-id; a sketch of the mechanism follows below. Note that the RegisterObject() function takes an "accessor" as second parameter, which handles the access of an object by its key. nullptr is passed for objects that have no pool or are not accessible by a key, like the runtime-state object of a script.
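
Stripped down to its core, that linear id generation is just a static counter behind a templated function; a sketch of the mechanism (not the actual Object wrapper):

    #include <cstdint>

    inline std::uint32_t nextObjectTypeId()
    {
        static std::uint32_t counter = 0;
        return counter++; // handed out once per distinct type, in registration order
    }

    template<typename Type>
    std::uint32_t getObjectTypeId()
    {
        // the static local is initialized exactly once per instantiated Type
        static const std::uint32_t id = nextObjectTypeId();
        return id;
    }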

- Then there is ENUM. This is pretty straightforward: it's an enum (duh...). It also uses the type-id from before, just that registration and access differ a little bit:

    core::EnumRegistry::RegisterEnum<BlendMode>(L"BlendMode");
    core::EnumRegistry::RegisterEnumValue(BlendMode::NORMAL, L"Normal");
    core::EnumRegistry::RegisterEnumValue(BlendMode::ALPHA, L"Alpha");
    core::EnumRegistry::RegisterEnumValue(BlendMode::ADD, L"Add");
    core::EnumRegistry::RegisterEnumValue(BlendMode::SUB, L"Sub");

    isEnum<BlendMode>();      // true
    isEnum<int>();            // false
    getEnumType<BlendMode>(); // 0
    getEnumType<int>();       // compile error: Is not an enum
Enums use a similar wrapper class for the ID (and for storing the enum itself).

- Struct is a new kind of type I introduced a few weeks ago. One thing that was missing previously was aggregation of types. For example, if I have a Vector3, previously all components would have to be used separately (which sucks). Objects are out of the question, because those are allocated on the stack and have a few other requirements that do not apply to aggregate types like vectors. The solution was to introduce a separate concept:

    core::StructRegistry::RegisterStruct<math::Vector2<float>>("Vector2f");
    core::StructRegistry::RegisterStruct<math::Vector2<int>>("Vector2i");

    core::isStruct<math::Vector2<float>>();    // true
    core::isStruct<float>();                   // false
    core::getStructId<math::Vector2<float>>(); // 0
    core::getStructId<math::Vector2<int>>();   // 1
    core::getStructId<float>();                // compile error: Is no struct
A very similar concept to the other two. But here is the tricky part: how do we differentiate between a struct and an object - at compile time, mind you? The (temporary) solution was to require a struct to have a very specific static method, which also describes the struct's known members:

    const core::StructDeclaration& Vector2f::GetStructDeclaration(void)
    {
        core::StructDeclaration::EntryVector vEntries = {
            { L"X", core::generateTypeId<float>(), offsetof(Vector2f, x) },
            { L"Y", core::generateTypeId<float>(), offsetof(Vector2f, y) }
        };
        static const core::StructDeclaration declaration(vEntries);
        return declaration;
    }
I don't like it very much, because it requires modification of the class. However, I really don't see a good alternative here, and one could just write a wrapper-class for the type system, compositing this method and the real class. As a final note, I'd just like to put out there that structs are fully recursive - a struct can contain any other type, including another struct. It just cannot contain one of itself, for well-known reasons.

Usage:

So those are all the types that are currently used; maybe there will be more in the future. I've already shown in the source-code how you can query the type of an object, enum, struct, etc., but is there also a way to use the type-system without having to explicitly know/tell what a type is? Sure:

    class ACCLIMATE_API TypeId // this is returned by the function below
    {
    public:
        TypeId(VariableType type, unsigned int id);

        VariableType GetType(void) const;
        unsigned int GetId(void) const;

        bool operator==(const TypeId& id) const;
        bool operator!=(const TypeId& id) const;
    };

    core::generateTypeId<float>();              // FLOAT, -1
    core::generateTypeId<Texture>();            // OBJECT, 0
    core::generateTypeId<BlendMode>();          // ENUM, 0
    core::generateTypeId<math::Vector2<int>>(); // STRUCT, 1
Now we have both the static and the dynamic type for any type. And as I've written "fast" in the headline: there you go. It's just a comparison of two integers to know whether something has a certain type. Not all of those templates resolve at compile time, but even those that don't only access a static integer variable for the dynamic type-id (and perform an increment on another one the first time it is accessed). I haven't made any benchmarks yet, but I believe this should beat RTTI any day.

How we implement code that actually uses this, I'll cover next time. The article is long enough, so to finish, I'll just show you how to use this information in "end-user" code:

Entities, Components, dynamic types, oh my!

To get back to the original problem: we want to be able to serialize component attributes without the user having to write XML (or any such code, for that matter). Having the type-system available, this is very easy to do. We just make a "GenerateDeclaration"-method, similar to what a struct requires:

    const ecs::ComponentDeclaration& Collision::GenerateDeclaration(void)
    {
        const ecs::ComponentDeclaration::AttributeVector vAttributes = {
            { L"Size", math::Vector2(32, 32), offsetof(Collision, vSize) },
        };
        static const ecs::ComponentDeclaration declaration(L"Collision", vAttributes);
        return declaration;
    }
There is a lot of complex stuff going on in the background, which I'll partially enlighten you on later. But having this declaration available, one can now serialize the component in a somewhat safe manner (the offsetof is not very nice; I'm working on a better solution). Also note that after the name of the attribute "Size", there is a "math::Vector2(32, 32)". Normally you would write "core::generateTypeId<math::Vector2>()", but the former variant also allows you to set a default value for this attribute, which is then used e.g. in the editor when a new component is created.

Wrapping it up:

Boy, that got way longer than I expected. But it's a pretty complex system just design-wise, so I think that's only fair. Next time, I'll explain more about the implementation of the system, i.e. how one can use this type-id to execute code that depends on the static type, and possibly how we can use the type-id to store values in a variant-like manner. Thanks for reading!

Juliean


 

ECS III: Queries, and (horrible) component serialization

Last article:

https://www.gamedev.net/blog/1930/entry-2260620-ecs-ii-messaging/

Queries:

Last time I mentioned that, as a special case of the messaging in my ECS, I implemented queries. The query syntax is actually quite similar to messaging:

    // 1. declare the query class
    class CollisionQuery : public ecs::Query<CollisionQuery>
    {
    public:
        CollisionQuery(const math::Rect& rect, const ecs::Entity& entity);

        const math::Rect rect;
        const ecs::Entity* pEntity;

        ecs::EntityHandle entityCollided;
        bool bCollision;
    };

    // 2. register a system for it
    void CollisionSystem::Init(ecs::MessageManager& messages)
    {
        // register this system to respond to the query
        messages.RegisterQuery<CollisionQuery>(*this);
    }

    // 3. this is how you use it:
    CollisionQuery query(rect, *pEntity);
    if(m_pMessages->Query(query))
    {
        // we got a response to the query
    }

    // 4. the system receives & handles the query:
    bool CollisionSystem::HandleQuery(ecs::BaseQuery& query) const
    {
        if(auto pCollision = query.Convert<CollisionQuery>())
        {
            const math::Rect collRect = math::Rect(pCollision->rect.x, pCollision->rect.y,
                                                   pCollision->rect.width - 1, pCollision->rect.height - 1);
            auto vEntity = m_pEntities->EntitiesWithComponents<Position, Collision>();
            for(auto pEntity : vEntity)
            {
                if(pEntity == pCollision->pEntity)
                    continue;

                auto pPosition = pEntity->GetComponent<Position>();
                auto pCollisionComp = pEntity->GetComponent<Collision>();
                const math::Rect rect = calculateRect(pEntity, pPosition->vPos, pCollisionComp->vSize.x, pCollisionComp->vSize.y);
                if(collRect.Inside(rect))
                {
                    pCollision->entityCollided = pEntity;
                    pCollision->bCollision = true;
                    return true;
                }
            }
            return true;
        }
        return false;
    }
So what are the main differences between the two systems?

- Messages can be received by any number of systems, while only one system can respond to a query
- Messages can alter the state of the system, while queries are handled in a "const" method
- Messages cannot be altered, but queries obviously can take output-arguments

Basically messages are for delivering information to the system, and queries are for extracting it.

Component serialization:

Now, with all the things I presented, the ECS is nearly functional. The only thing left was loading of components, and editor interaction. The design I am going to present was good enough for the beginning, but as things progressed, it really became awful to work with.

The intermediate file-format used in my engine is XML. So the first thing that came to mind for loading components was to have an interface that can be registered to a loader, and is called whenever a component is to be loaded:

    // 1. the interface
    class IComponentLoader
    {
    public:
        virtual ~IComponentLoader(void) = default;
        virtual void Load(Entity& entity, const xml::Node& node) const = 0;
    };

    // 2. and a basic implementation
    void PositionLoader::Load(ecs::Entity& entity, const xml::Node& node) const
    {
        const auto x = node.FirstNode(L"X")->ToFloat();
        const auto y = node.FirstNode(L"Y")->ToFloat();
        entity.AttachComponent<Position>(x, y);
    }

    // 3. which can be registered
    // this loader is executed whenever a "Position"-component-node is encountered
    ecs::ComponentLoader::RegisterLoader<PositionLoader>(L"Position");
The loader would then simply pick the correct interface from a map and call Load on it. The same thing happens for saving, as you can imagine. At the beginning of the project, this was only half bad. There were only a handful of components, and writing the load/save interface did not take that long. But as there began to be more and more components, this became really tedious. Not only did I have to implement those interfaces, but a ton more (mainly for editor interaction). So adding a component could easily touch 10+ files and take about half an hour. That's still not too bad in the grand scheme of things, but there certainly had to be a better solution.

And thus I built my own custom RTTI-system. It's only half as insane as it sounds... next article, things should start to get more interesting, when I tell you more about this type-system I came up with.

Juliean


 

ECS II: Messaging

Last article:

https://www.gamedev.net/blog/1930/entry-2260476-ecs-i-entities-components-and-systems-basic-design/

It's been a while since I had time for programming - the new WoW addon kept me busy for some time. But now I'm back and ready for action. This article's topic is messaging in my engine's entity-component system.

Why messaging:

Remember from the last article that in my ECS I have a bunch of systems that perform actions for components. Those systems pretty much live in their own SystemManager and used to be encapsulated from any direct access. Recently I got lazy and just exposed a direct pointer to any system added, since I was getting tired of handling some complex editor-interactions by adding 100 types of messages. But before that, and for game-interactions, messaging is still the most intelligent way. It requires no direct coupling between a system and its notifier: all you do is send a message, and any system that registered for it will get it. I also heavily relied on templates here again. A message is defined as a class:

    struct UpdateCameraMessage : public ecs::Message<UpdateCameraMessage>
    {
        UpdateCameraMessage(Camera& camera);
        Camera* pCamera;
    };
I'm using yet again the CRTP to generate a unique message ID, allowing a simple 3-step procedure for using messages:

    // 1. register the system for the message
    void CameraSystem::RegisterMessages(ecs::MessageManager& messages)
    {
        messages.Register<UpdateCameraMessage>(*this);
    }

    // 2. send the message
    messages.DeliverMessage<UpdateCameraMessage>(camera); // ctor of the message will be called from the passed arguments

    // 3. translate and handle the message
    void CameraSystem::ReceiveMessage(const ecs::BaseMessage& message)
    {
        if(auto pUpdateCamera = message.Convert<UpdateCameraMessage>())
        {
            pUpdateCamera->pCamera; // do stuff with the camera here
        }
    }
Kind of neat, isn't it? I have seen far worse message-delivery mechanisms in my short career.

Stuff for the future:

At the point I developed the system, I was at the edge of my capabilities, but now I'd like to go one step further. I'm not a hundred percent satisfied with the way registration/message handling works, and I think I can use templates to make it even easier. I'm thinking about a design where, instead of manually registering and checking for the type, I inherit from a base-class that does this stuff for me:

    class CameraSystem : public ecs::System<CameraSystem>,
                         public ecs::MessageHandler<UpdateCameraMessage>
    {
    public:
        void ReceiveMessage(const UpdateCameraMessage& message) override;
    };
I'm not too sure about the inner workings here, but I've seen something similar done in EntityX, so I'm sure it's possible. A rough sketch of how it could work is below.
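
Here is how I imagine the inner workings could look - purely an assumption at this point, built on the BaseMessage/Convert() mechanics shown above:

    template<typename MessageType>
    class MessageHandler
    {
    public:
        virtual ~MessageHandler() = default;

        // derived systems only implement this typed overload
        virtual void ReceiveMessage(const MessageType& message) = 0;

    protected:
        // called with the type-erased base message by whatever does the dispatch
        void HandleBaseMessage(const ecs::BaseMessage& message)
        {
            // Convert() returns nullptr if the runtime message-id does not match
            if(auto pMessage = message.Convert<MessageType>())
                ReceiveMessage(*pMessage);
        }
    };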

That's it for this time. You may have noticed that the message passed is "const", so a system cannot export information through it. I've got a second mechanism called Query for that, but I'll show it in the next article. Thanks for reading, and until next time.

Juliean


 

ECS I: Entities, Components and Systems (basic design)

Last entry:

https://www.gamedev.net/blog/1930/entry-2260428-gui-iii-widget-configuration/

Motivation:

It took me a while to get back to writing since I've been busy. But now, as promised, I'm going to give an intro to the entity/component-system I've developed for my engine.

Back in my early projects, I used to have a "GameObject"-base class, with virtual draw and render methods. This always felt kind of clumsy. I had to have dozens of different classes for "Player", "Enemy", "Environment", "Background", let alone different "manager"-classes, since all of those sub-objects had to somehow be loaded, managed, and made to interact with each other in a different way.

Now, I had already heard about ECS back then, but never really wanted to use it. Position as a component?! Dafuq? Why would I ever want to... that's going to be so slow... everything needs a position... you all know the "arguments" one can make in one's head, especially as a novice. But I've learned a thing about premature optimization since then; also, for a scalable game-engine, ECS seemed to be perfect from what I read.

Getting started:

I still didn't feel comfortable enough to design such a vital system for the engine on my own, without some sort of insight into how it can be done (like I used to sometimes check Qt if I felt unsafe about something). So EntityX was suggested to me on the forums. Check it out, it's a really cool entity/component-library:

https://github.com/alecthomas/entityx

Still, my rule is "no external libraries", so I just studied its source code and took a note here and there. So don't wonder if you see some similarities.

The general design:

So let's talk about the actual design of the ECS I came up with. I wanted it to be as easy to use as possible. So there is an entity-class that you can create:

    ecs::Entity& entity = m_entities.CreateEntity();
This entity is basically empty upfront. It has a name (for displaying/serialization purposes), but to give it functionality, one needs components. A component is declared by deriving from a base-class in a very specific way:

    class Position : public ecs::Component<Position>
    {
    public:
        Position(math::Vector2 vPosition);

        math::Vector2 vPosition;
    };
Notice that the class name is a template argument of its base class. This is called the "CRTP"-pattern, and allows a few very neat things. First of all, it allows each component class to have its own unique runtime-id, without using hashes, random numbers,
etc. Except for DLL-issues with templates, it is totally safe. This allows me to access components in the simplest way possible:
    Position* pComponent = entity.GetComponent<Position>();
Since each component class has its own ID, and each component instance stores its type-id, I can do a component lookup and an (almost) totally safe cast, without any (C++) RTTI. Now the CRTP is useful for many other things too,
like implementing a "Clone"-method for all components, without having to do it manually for every component. Like this:

    template<typename Derived>
    class Component : public BaseComponent // we obviously need a non-templated base component class for some polymorphic stuff
    {
        BaseComponent& Clone(void) const override final
        {
            return *new Derived(sys::safe_cast<const Derived&>(*this)); // safe_cast does a dynamic_cast in Debug-builds, but only casts statically in Release
        }
    };
The CRTP is actually a really useful pattern that can be applied in quite a lot of places to reduce duplicated code.

Creating components:

Creating components works in a similar way. Since I'm using C++11, I've applied variadic templates in order to create an in-place function for attaching a component:

    entity.AttachComponent<Position>(math::Vector2(32, 32));

The arguments of this function are forwarded to the component's ctor (see the sketch below). This is a shorthand for

    Position* pPosition = new Position(math::Vector2(32, 32));
    entity.AttachComponent(*pPosition);

which saves a lot of unnecessary typing over time, but also provides a bit more abstraction and room for extending the code.
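
The forwarding shorthand is essentially this (a sketch with a stand-in Entity; the real method obviously does more bookkeeping):

    #include <utility>

    struct BaseComponent { virtual ~BaseComponent() = default; };

    class Entity
    {
    public:
        void AttachComponent(BaseComponent& component); // real overload, takes ownership

        // variadic shorthand: arguments are perfectly forwarded to the component's ctor
        template<typename Component, typename... Args>
        Component& AttachComponent(Args&&... args)
        {
            auto pComponent = new Component(std::forward<Args>(args)...);
            AttachComponent(*pComponent); // hand the instance to the overload above
            return *pComponent;
        }
    };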

Component functionality (system):

Up until this point, you may have noticed that the components have no functionality (except for their base class's), but contain purely data. In order to get functionality, one has to define a system. Systems work on
components, and are defined like this:

    class MoveSystem : public System<MoveSystem>
    {
    public:
        void Update(double dt) override;
    };
Notice again the use of CRTP. This is so I can do this:

    m_systems.AddSystem<MoveSystem>();
    // ...
    m_systems.UpdateSystem<MoveSystem>(dt);
And what happens inside a system's update? Usually, the system goes over all entities with a specific component, and performs some action on them:

    void MoveSystem::Update(double dt)
    {
        auto vEntities = m_entities.EntitiesWithComponents<Position>();
        for(auto pEntity : vEntities)
        {
            auto pPosition = pEntity->GetComponent<Position>();
            pPosition->vPos.x += dt;
        }
    }
The advantage of this design is that you can easily exchange and extend component behaviour without touching any unrelated code. You can even do that at runtime by enabling/disabling specific systems. Cool, huh?

What to improve:

Obviously, there are a lot of features still missing, which I'm going to address in one of the next entries. The worst issue so far, which I haven't really fixed yet, is that each entity stores
its own components in a vector. This is bad for data locality and cache coherency, and I'm going to address it at some point (once it actually starts to matter and/or I want to do some impressive benchmarking).
So far, it's still working fine - 10000 objects with multiple components, all being updated, didn't pose a problem the last time I did some testing on this (of course the PC was half decent, but still). Here are two screenies
showing the results. It's exactly 10000 arrow objects (or was it 20000? damn, I don't even remember), each ~56 vertices, with I believe at least 2 shadow-casters, at 25 FPS with DX9 (the framerate is obviously GPU-capped).

_______________________________________________

So next time, I'm going to talk about how to handle component communication (hint: messaging incoming). Thanks for reading, see you next time!

Juliean


 

Gui III: Widget configuration

Last entry: https://www.gamedev.net/blog/1930/entry-2260413-gui-ii-widget-class-design/

Now that I've talked about how Widgets can be used, it's time to discuss how they can be configured.

Positioning:

Widgets are manually positioned by specifying their position and size:

    Widget(float x, float y, float width, float height);
At first, these used to be just absolute values, but that turned out to make certain things, especially in-game GUIs, harder than they needed to be. So I changed positions to be relative to the parent's values. This, however, was not good
enough for editor-like GUIs, where you want a button with a specific pixel size. So for each of these components, you can specify which mode you want:

    enum class PositionMode
    {
        REL,
        SCREEN,
        REF,
        ABS
    };
REL means relative to the parent widget's size. Widgets are positioned at the upper left corner of their parent regardless, but with REL and 0.5 you can make a widget either half as big as its parent, or position it in the middle of it.
SCREEN means relative to the screen's size.
REF is pretty obsolete by this point. It means relative to a reference size which can be user-configured, and should allow easy rescaling of a huge number of widgets, but it didn't turn out to be all that useful in practice.
ABS is used for absolute values, if you e.g. know that a button should be exactly 21 pixels high. A small sketch of how one axis could be resolved follows below.

Then there is something I called "Cap metrics":

void SetCapMetrics(int minW, int maxW, int minH, int maxH);

These are the min/max values of how big the widget can be. It's for example used to make a window stay above a certain size.

Alignment/Center:

Next, we have alignment:

enum class HorizontalAlign
{
    LEFT,
    CENTER,
    RIGHT
};

enum class VerticalAlign
{
    TOP,
    CENTER,
    BOTTOM
};

void SetAlignement(HorizontalAlign hAlign, VerticalAlign vAlign);

Alignment will move the widget's origin in regard to its parent. You could do the same thing manually by setting x/y accordingly, but this makes it easier to still move the widget even if it should start at the parent's lower right.

You can also set the position center of the widget:

void SetCenter(HorizontalAlign hCenter, VerticalAlign vCenter);

Which does exactly what you would expect. Default is upper left, but if you want to position a widget with its middle point at a certain position, that's the way to go.

Padding/Size relation:

Then we have padding:

void SetPadding(int x, int y, int w, int h);

Padding adds (or subtracts) a fixed amount of space from specific sides of the widget. If you want a widget that fills its parent, minus 8 pixels on each side, that's the way to go.

For making widgets keep a certain size relation, there is also an option:

enum class BorderRelation
{
    NONE,
    HEIGHT_FROM_WIDTH,
    WIDTH_FROM_HEIGHT
};

void SetBorderRelation(BorderRelation relation, float factor);

It's unfortunately a little clumsy to use in practice, and I'm still looking for a way to improve the terminology. But if you set BorderRelation to anything other than NONE with a factor of 1.0, it will result in a square widget. The factor and the mode determine the exact look, so you can make a widget that is exactly twice as high as it is wide, and so on...
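To make this less abstract, here's how a hypothetical panel widget could combine these options (assuming REL mode for all four position/size components; the mode setter itself isn't shown in this entry):

Widget panel(0.25f, 0.25f, 0.5f, 0.5f);                           // a quarter in from the parent's corner, half its size
panel.SetCenter(HorizontalAlign::CENTER, VerticalAlign::CENTER);  // position it by its middle point
panel.SetCapMetrics(320, 1280, 240, 720);                         // never smaller than 320x240, never bigger than 1280x720
panel.SetPadding(8, 8, 8, 8);                                     // keep 8 pixels off each side
panel.SetBorderRelation(BorderRelation::HEIGHT_FROM_WIDTH, 0.5f); // always half as high as it is wide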

Functionality:

A widget can have keyboard-focus. It doesn't make much sense for each widget to capture focus on click, so I used a straightforward mechanism to decide how a widget reacts to a potential focus gain:

enum class FocusState
{
    IGNORE_FOCUS,
    DELEGATE_FOCUS,
    KEEP_FOCUS
};

void SetFocusState(FocusState state);

IGNORE_FOCUS means just that. If a widget is issued a focus gain event, it totally ignores it, resulting in the currently focused widget staying focused.
DELEGATE_FOCUS is the default value and results in the widget delegating the focus gain to its parent. This does not have an immediate result, but depends on the state of the parent.
KEEP_FOCUS will make the widget gain focus, and the currently focused widget lose it.

This flag, in combination with the child/parent system, pretty much allows you to configure the focus handling as you please. Most widgets will simply delegate the focus up to their parent. Imagine a label widget in a textbox. Others will simply ignore it, like the window icon. And others will want to keep focus, like said textbox. You can also lock the focus on a certain widget:

void SetFocusLocked(bool lockFocus);

However, sometimes it is not enough to just configure the widget's focus behaviour. For an icon on a button, you pretty much want it to also ignore all possible events, so that the button is fully responsive. For this, you can disable specific widgets:

void SetEnabled(bool bEnabled);

A disabled widget will still be rendered, but ignores all other events. All children of a disabled widget are also disabled.
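As a quick example, wiring up the widgets mentioned above (the variable names are made up, the setters are the ones from this entry):

textbox.SetFocusState(FocusState::KEEP_FOCUS);    // clicking the textbox grabs the focus
label.SetFocusState(FocusState::DELEGATE_FOCUS);  // clicks on the label fall through to the textbox
icon.SetFocusState(FocusState::IGNORE_FOCUS);     // the window icon never touches the focus
icon.SetEnabled(false);                           // purely decorative: receives no events at all
dialog.SetFocusLocked(true);                      // modal-style: focus can't leave this widget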

That's it for this time. Thanks for reading! The next gui-related entry will go in-depth about rendering with all optimizations. Until that time, have a screenshot of what the GUI is currently capable of:


Before that though, next time I'll start to talk about the entity-component framework that I've been developing.

Juliean

 

Gui II: Widget class design

Last entry: https://www.gamedev.net/blog/1930/entry-2260396-gui-i-learning-about-signalsslots/

So this time around, I'm going to talk about the engine GUI's main class, called "Widget". I took an approach similar to what QT does,
in that every widget has to derive from this class. I'm not going to show the whole thing, because it is just way too huge at this point.
Conceptually though, it looks like this:

Events/Signals:

class Widget
{
public:
    // signals
    Signal SigClose;
    Signal SigClicked;
    Signal SigReleased;
    Signal SigActive;
    Signal SigMouseMove; // projected mouse position
    Signal SigDrag;
    Signal SigResize;
    Signal SigUpdateChildren;

    // events
    virtual void OnClick(void);                             // mouse clicked
    virtual void OnHold(bool bMouseOver);                   // mouse hold
    virtual void OnDrag(Vector2 vDistance);                 // mouse hold & moved
    virtual void OnMove(Vector2 vMousePos);                 // mouse moved
    virtual void OnRelease(bool bMouseOver);                // mouse released
    virtual void OnMoveOn(void);                            // mouse moved on
    virtual void OnMoveOff(void);                           // mouse moved off
    virtual void OnActivate(bool bActive);                  // object focus altered
    virtual void OnClose(void);                             // object closed
    virtual void OnRedraw(void);                            // object needs to be redrawn
    virtual void OnDisable(void);                           // object gets disabled
    virtual void OnEnable(void);                            // object gets enabled
    virtual void OnParentResize(int amountX, int amountY);  // parent object got resized
};
So we have a bunch of signals that let other widgets or user code hook up to certain events. And then we have the events themselves, represented by
virtual functions. Those are called e.g. in the input handling routine, and either perform basic widget routines like calling the correct signal, or
are simply implemented empty just so that other widgets can override them, in order to achieve specific behaviour. For example, take a look at
this button-click event:

void BaseButton::OnClick(MouseType mouse)
{
    if(mouse == MouseType::LEFT)
    {
        // in case button is pushed down
        if(m_bDown)
        {
            if(m_downResponseState) // check if button is not locked
                Widget::OnClick(mouse);
            return;
        }
        m_cTimer.Reset();  // reset repeat state timer
        OnStateChange(2);  // change button appearance
    }
    Widget::OnClick(mouse);
}
Now I know that it is believed to be bad practice to override virtual functions like I did, for the reasons of
1) overriding public virtual functions (separation of interface/implementation) and
2) calling base-class behaviour.

And I agree with this. Keep in mind though, this class was basically the first big thing I designed to be reusable, so there are probably a lot of design smells. It's functional up to this point,
and didn't get too cluttered to be unworkable (though I wouldn't want to know the memory footprint of this class due to all the signals and virtual functions, lol). Anyways, if I was to reimplement
or refactor it at any point, I'd still need to be able to perform the base class's behaviour conditionally somehow. As you can see in the button example, if the button is both down and locked, it should not emit a clicked
signal. This means I can't just do:

class Widget
{
public:
    void OnClick(MouseType mouse)
    {
        SigClicked();
        Clicked(mouse);
    }
private:
    virtual void Clicked(MouseType mouse); // override this in child classes like button
};

I think the closest to what I want would be to use a return value to determine whether or not the base class function should execute:

class Widget
{
public:
    void OnClick(MouseType mouse)
    {
        if(Clicked(mouse))
            SigClicked();
    }
private:
    virtual bool Clicked(MouseType mouse); // override this in child classes like button
};

Though I don't know if this isn't even more hacky. Maybe I'll figure out a better solution at some point.
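For illustration, translating the button example above to that bool-returning variant would look roughly like this (leaving the timer/appearance handling as it was):

bool BaseButton::Clicked(MouseType mouse)
{
    if(mouse == MouseType::LEFT)
    {
        if(m_bDown)                     // button is pushed down:
            return m_downResponseState; // only emit the signal if it isn't locked
        m_cTimer.Reset();               // reset repeat state timer
        OnStateChange(2);               // change button appearance
    }
    return true;                        // let Widget emit SigClicked
}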

Composition:

One last thing for this time around. I'm aware that inheritance isn't always your bread and butter, especially for complex gui routines. Take a scrollbar for example. While I need the class "Scrollbar" to derive from "Widget" in order to receive
events, I don't want to implement it completely from scratch. A scrollbar is basically two buttons with a slider. This is why the widget class has a system for parenting other widgets (see the scrollbar sketch after the list below).

class Widget
{
public:
    virtual void AddChild(Widget& child);
    virtual void RemoveChild(Widget& child);
    virtual void UpdateChildren(void);

    // events
    virtual void OnParentUpdate(int x, int y, float f); // parent object got updated
};
A number of widgets can be added and removed from another widget. This will achieve the following things:

- Child widgets will have their origin in the parents position, so that they are chained together
- Updating widgets happens iteratively through the parent-child chain
- If the parent is set invisible, its children also aren't rendered
- If the parent is disabled (receives no events), its children are also disabled
- Unless set otherwise, parents will clip their children on rendering
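Coming back to the scrollbar: composed from its parts, it could look something like this (the Button/Slider classes and their ctors are simplified assumptions here):

class Scrollbar : public Widget
{
public:
    Scrollbar(void)
    {
        AddChild(m_up);     // the two arrow buttons...
        AddChild(m_down);
        AddChild(m_slider); // ...and the draggable slider between them
    }

private:
    Button m_up;
    Button m_down;
    Slider m_slider;
};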

As to why those functions are virtual: this is to allow widgets to dispatch widget parenting to their own child widgets. For example, a window has a content area by default, where widgets are inserted.

class Window : public Widget
{
public:
    void AddChild(Widget& child) override
    {
        m_area.AddChild(child);
    }
    void RemoveChild(Widget& child) override
    {
        m_area.RemoveChild(child);
    }
private:
    Area m_area;
};

For adding children to the window's topbar, there can still be either a "GetTopbar" or "AddTopChild/RemoveTopChild" function. This just reduces the amount of work and thought, since 99% of the time one would want to add widgets to the window's main area and not the topbar (at least I do).

That's it for now, thanks for reading. Next time, I'll cover how Widgets can be positioned and configured to behave a certain way via properties.

Juliean

 

Gui I: Learning about Signals/Slots

In order to make it easier for the reader, I decided to split the different modules up into multiple posts. This should also make it easier for me to write them whenever I've got a few minutes to spare.

Getting started:

So at the beginning of the engine I wanted to make a 2d level editor. I had already made a simple, horrible level editor for my mario world clone project, and I tried to copy its style of handling GUI interaction. I don't have any actual code from then at hand, but it looked something like this:

class TilemapController
{
public:
    // is called every tick
    void Update(void)
    {
        Vector2 vScrollDrag;
        if(m_pScrollbar->IsDragging(vScrollDrag))
            SetOffset(vScrollDrag);
        else
        {
            Vector2 vDrag;
            if(m_pImage->IsDragging(vDrag))
                DragMap(vDrag);
            else if(m_pImage->Clicked())
                PaintTile();
            else
            {
                Vector2 vPos;
                if(m_pImage->MouseMoved(vPos))
                    HighlightTile(vPos);
            }
        }
    }

    gui::Image* m_pImage;
    gui::Scrollbar* m_pScrollbar;
};

For every widget and every interaction, I would have to put it at exactly the right spot in an update method, and check the interactions with if-conditions. It's extremely horrible once you get more than very basic interaction.

Getting some advice:

Trust me, the real code back then was even more horrible. I just didn't know any better, but for some reason I started a topic on Gamedev.net relating to the GUI. I don't recall the exact topic anymore, but I was pretty quickly advised to drop this rigid structure, and instead it was suggested I use some sort of indirect message passing.

I started to look at how other GUI libraries handle this, and found that I quite liked the QT way of connecting signals/slots together. I didn't like that it required a heck ton of macros though, so I searched for an easy and lightweight signal implementation. I found the "FastDelegate" implementation on CodeProject (check it out here), and a signal implementation based on that.

Cleaning things up:

With this 2-header-file signal implementation, I was able to drastically redesign and clean up my GUI code. First, this is how the signals are declared and used:

class GuiWidget
{
    // this is called by the GUI input-handler
    void OnClick()
    {
        SigClicked();
    }

    Signal0 SigClicked;
    Signal1<Vector2> SigDrag;
};

And then, how I use it:

class TilemapController
{
public:
    TilemapController(void)
    {
        m_pScrollbar->SigDrag.Connect(this, &TilemapController::SetOffset);
        m_pImage->SigDrag.Connect(this, &TilemapController::DragMap);
        m_pImage->SigClicked.Connect(this, &TilemapController::PaintTile);
        m_pImage->SigMouseMove.Connect(this, &TilemapController::HighlightTile);
    }

    gui::Image* m_pImage;
    gui::Scrollbar* m_pScrollbar;
};

Now that's much nicer, right? Now I can just connect signals in the initialization code, and the rest happens "magically" in the background. This also changed my whole outlook on programming. I realized that I don't have to write tons of nested if-statements for everything that is going to happen, and that started my longing for taking more care about code design, and not just blindly typing everything out as I go.

The final touch:

One thing you might have noticed is that each Signal needs a specifier of how many arguments it has, as each argument count is its own separate templated class. This also limits the maximum number of arguments you can have, as in the default library this was only implemented up to 8.
Since I happen to use C++11 and already make use of variadic templates, I modified the Delegate/Signal implementation to use variadic templates, thus getting rid of much duplicated code, and also eliminating the explicit specification of the number of arguments:

Signal<> SigTest;
Signal<float, float> SigTest2;

SigTest();
SigTest2(0.0f, 1.0f);
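For reference, here is a minimal sketch of the variadic idea - NOT the FastDelegate-based implementation I actually use (std::function is slower, but it keeps the example short):

#include <functional>
#include <vector>

template<typename... Args>
class Signal
{
public:
    // connect a member function of some object to this signal
    template<typename Class>
    void Connect(Class* pObject, void (Class::*method)(Args...))
    {
        m_slots.emplace_back([=](Args... args) { (pObject->*method)(args...); });
    }

    // emit: calls every connected slot with the given arguments
    void operator()(Args... args) const
    {
        for(const auto& slot : m_slots)
            slot(args...);
    }

private:
    std::vector<std::function<void(Args...)>> m_slots;
};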
Also, I sometimes used to have classes which just forwarded signals to another signal, which resulted in some tedious boilerplate code, which I also reduced by adding helper functions:

// first, I used to do it like this
class Foo
{
    Foo()
    {
        bar.Signal.Connect(this, &Foo::OnSignal);
    }

    void OnSignal(int i, float x, float y)
    {
        SigForward(i, x, y);
    }

    Signal<int, float, float> SigForward;

private:
    Bar bar;
};

// then, I noticed you could just connect the signal to the next one. The syntax was ugly and redundant, though:
bar.Signal.Connect(&SigForward, &Signal<int, float, float>::operator());

// but it was quite straightforward to put into a helper function:
connectSignals(bar.Signal, SigForward);
I've attached the modified files to this thread.



The original Signals library comes from this repo and is under the MIT licence:

https://github.com/pbhogan/Signals
http://opensource.org/licenses/mit-license.php

Thanks for reading. Next time, I'm going to talk about the actual widget implementation.

Juliean

 

An architectural overview of the engine

In the last article, I talked about how I got started developing my "Acclimate Engine". This time around, I want to give a brief overview of how this engine currently works and looks.

The engine itself is divided into 4 main modules: SDK, editor, player, and plugins.

- SDK: The main codebase of the engine. Most general-purpose and core code goes here, like the renderer, gui, entity/component, visual scripting, etc... Below is a screenshot of my first project made with this SDK, back when I was still creating a dedicated exe for the project:



- Plugins: After my first project, I wanted the engine to become multi-purpose. I had multiple different games in mind, which would all require different functionality. The best example is being able to create 2D as well as 3D games. I didn't want to rely on hackarounds for this, and didn't want to expose functionality that isn't needed for a certain kind of game as part of the SDK, so I came up with plugin support. Everything game-specific now goes into a plugin. For example, there is a camera class in the SDK, but there is no explicit camera handling; this is part of the plugins. Also the editor "game view" is something that must be programmed by a plugin, to support 2d and 3d games without the unity-approach of having to view your 2d scene via a 3d camera in editor mode (wtf...). Stuff like lighting, shadows etc... also all goes into plugins. The SDK has a highly abstracted rendering framework, and the plugins use it to implement the actual rendering. This makes the SDK itself independent of certain technologies, and more expandable and flexible for the future.

- Editor: For making anything but trivial (and limited in content) games, an editor is absolutely necessary. I started creating a fully-functional editor when I got tired of editing config-files by hand, and started to polish it up once I began porting my 10-hours-of-gameplay Terranigma 2 project to the engine. So the editor is basically built on the SDK. I don't like the approach of developing everything editor-centric, so the SDK is independent of the editor, minus some minor features that just have to exist in there because they are needed e.g. for modifying some component value with a UI element. Below is a screenshot of how the editor currently looks.


- Player: The player's main purpose is to be able to create games without an ounce of programming. I don't want to create a new visual studio project for each new project - while I learned that there were shortcuts (like auto-creating and setting up such a project), it really isn't necessary either. The plugins are good enough to customize the games as I please, so there is no point in having control over how the runtime exe is set up. Currently the player is very primitive, just running from whatever plain-text config/xml-files/data a project is made out of. In the near future (read: once I get close to releasing a complete project with it) I will implement specific packed file formats for faster loading times and smaller storage size.

_________________________________________________________________________________________________________________

So that's the basic architecture of the engine. My design philosophy was simple: do most general-purpose work in the SDK, build editor and player on top of that, and make plugins for everything specific, using the SDK and being used by editor and player. For that reason, the SDK is mostly clean and developed very carefully, while I don't mind a few shortcuts and hacks here and there in the editor and/or plugins. The SDK also takes up most of the code - 120k LoC vs. 28k editor and 2k player. For that reason, it is furthermore divided into several submodules, which I'll end the article with by briefly addressing them:

- Audio: Loads and plays audio-files
- Core: A module for putting stuff together - state machine, scene/generic resource management and the plugin-interfaces all go here
- Entity: The entity/component/system-module, with support for prefabs, system messaging etc...
- Event: The visual scripting module, minus all the vital GUI stuff (this is in the editor)
- File: Wrapper and helper-functions for file operations
- Gfx: The main graphics framework with classes for textures, meshes, etc...
- Gui: All basic gui stuff - widgets, rendering, handling...
- Input: This module transforms raw input into user-registered actions, states, and so on.
- Math: Basic math classes like vectors, matrices, and a bunch of helper functions
- Network: A small networking library (I've only done one small-scale network game so far)
- Physics: Well, it's physics. Rigid & soft bodies are supported, and it's designed for 3d (I'm uncertain whether 2d-capability is better built in here or as a different physics module)
- System: Basic stuff like assertion-macros, logging, string-conversion and a ton of helper-functions for the STL, etc...
- XML: My own primitive XML-parser. I told you I don't use 3rd-party libraries, stop shaking your head
_______________________________________________________________________________________________________________

So that's about it for this time. Next up, I'll be discussing the development of those sub-modules in more detail, starting with the GUI (since this was the first thing I actually designed). Thanks for reading, and until next time!

PS: here is one final screenshot of the most recent project we made in a team of 4 with my engine:


Juliean

 

Introduction

Hello,

my name is Juli(e)an, and this is the introduction to my journal about how insane I am to develop my own game engine. Some of you might have seen me before on the forums, mainly (solely) on the technical topics. Or you haven't, but that's OK too. I have been developing this rather large project for the past one and a half years, and finally decided to share my doings, outside of the frequent snippets I post when I search for help in the forums.

So let me give you a little backstory about me. Don't worry, future posts will be more technical, but I've just got to get this out of the way. It's going to be a long, but hopefully not boring monologue. I won't blame you if you skip it or lose interest in the middle; it's pretty much my programming-oriented biography in a nutshell. So off we go:

I've always dreamed of being a game-dev. I literally wanted to develop games since the first video game I ever played, back in the SNES era. Or did I start with Pokemon for the Gameboy? I don't really remember exactly what came first, but I was already making sketches of levels for all my favourite games on paper. It was only a few years later that I got my own home PC (I was about 10 years old), and I got used to that system. It was a few years later again that I finally decided to take things more seriously. I got my first C++ programming book when I was about 14 years old. It was a perfect beginners book; it only featured console programming and the C++ basics. I still lost track when it started with OOP. Man, was this stuff confusing back then. But I was relatively young, so it was more than understandable. I was still proud of what I was able to make (command-line calculators, small command-line games) back then, but I decided to postpone the whole game-developing thingy for a few more years.

So two more years passed by, and one day I had a pretty strong dream (literally) about my favourite game of all time, Terranigma. I had always wanted to make a sequel to this game so badly, and now that idea manifested in my head and didn't want to let go. So I made a few failed attempts at learning C++ in more depth. Then I realised that these attempts were flawed. There were tools already out there that did the job for me - I decided to go with the Rpg-Maker XP, back in 2008. I had a steep learning curve - after playing only with the event-system of the maker for about one and a half years, I started to learn Ruby (their scripting language). That was when I finally understood OOP, at least from the Ruby side. Nevertheless, this experience would help me to properly learn C++ in the near future.

But wait, you might say, didn't I just talk about how I had this strong game idea that I wanted to get done, and that I was already working on in a game engine? Why was I bothering with C++ again? Well, there is more to this. At one point, using other people's engines/scripts always kind of felt like cheating to me. There is no rationale to that, and I can't explain it, but that's the way I felt. Also, I noticed in the process of learning to apply ruby scripting in the maker that I always felt a little more drawn to the technical parts than I did to the game-design itself. Truly, my coding/scripting skills back then were horrible, but it was so fascinating getting complex stuff to work, moreover so being things that the game-dev community (at least the one I followed) deemed "impossible".

Don't get me wrong, I still really enjoyed making that game itself, but I just realized where my future lies. I wanted to learn as much "hardcore" programming as possible. So for my highschool graduation work, I wanted to make a "real" game, in c++ with directx. It was actually kind of foolish and all-in: I had to learn OOP with C++ and graphics development, and use those skills to develop a (technically) full game, alongside writing a paper of about 7000 words on the topic "game development in OOP with C++". I also settled on a Super Mario World clone - the wary reader might remember that I hadn't programmed any full game yet. So it was just an extremely risky task, and my future kind of depended on it - failure would hinder my further plans of studying. So guess what? It just worked out. I threw in so much time, but I got everything done as I planned, and passed with flying colours. I also learned pretty much all that I hoped for in the process, so it was basically a full success.

But then I wanted to expand the game. I only had a very limited amount of enemies, items, etc... And I just noticed that I could not. I had created a working system, but in such a way that it could not be extended without my sanity fleeing in the process. My code was still pretty horrible, even for myself to work with. The project was about 17k LoC, and 2k alone were part of an ungodly god-player class. Mind if I spare you the rest of the story? I pretty much left the project as it was. It was finished per definition, but not in any way that I wanted it. So there went this then; I continued to develop my Terranigma 2 in the Rpg-Maker, but no more C++ for some time.

Lo and behold, after some more time passing, I had another weird inspirational dream that lit another spark I didn't even know existed: you know the game Cursed Mountain? It wasn't all that great, and I hadn't played it back then, but I'd seen a trailer... just one time btw, but this dream, filled with symbols of buddhism and that trailer, just planted another thought into my mind: I've got to do something like that too. So obviously 3D was my next goal. Keep in mind, I was still more interested in making technical, rather than content-related, progress, so using an external tool was out of the question. By the way, I have virtually no artistic skills. I designed some pretty neat levels using existing graphics for my Terranigma-sequel, but I'm still only able to create basic 2d-icons, and no 3d content. So my only way of living my dream was to program my own 3d application.

And there I went again: I started learning 3d-programming, using DX9 again, from scratch. I also wanted to take a little more care about designing my application this time - I didn't want to be stuck after a few months again. Long story short - it failed again. Oh, I learned quite a few things about 3d rendering - I had things like HDR, SSAO, shadows etc... up and running after some time, but the application was such a pain in the *ss to extend after a while that I had to leave it at that again.

You know what annoyed me the most about all these failures? Not that they failed, I guess that's to be expected if you are taking this route, going from 0 to 100 with c++ in only 2 projects. But I really hated that I had to redo EVERYTHING again, for every new project that I started. I hear you saying "you didn't have that many projects...". Exactly. I was always deterred from starting some of the further ideas I had in mind, due to that obstacle of having to redo things I had already done, by either copy&pasting those classes that worked, or just starting from scratch. I'm a lazy person, I don't like to redo work. So what was the logical conclusion? Write my own game engine, of course! Silly you who would suggest using something like Unity - haven't you read my article up to this point? Judging by the length of it, you probably haven't, but nevermind...

Now let me briefly address the next step that lay between then and where I'm at now. I started to make a game engine, which would have been called "Dark Mountain Engine", but didn't find the motivation to continue. Then, in summer 2012, I wanted to take a small step back, and instead make my
own kind-of Rpg-Maker. Then I finally started to get the hang of it: I still remember that thread on gamedev that I made about GUI design - it totally changed my point of view. Learning about signals/slots was the starting point for me to learn how to write "good" (read: workable for me) code.

It all went uphill from there. I had a halfway working 2d editor after some time. But then came my studies at university. We were asked to make a game project in our second term to apply for the game area. So I was like: "Hm, the time has come to make my 3d engine. Now or never, success or I'm going to leave this sh*t forever". So I had less than 4 months to take my barely-functioning 2d editor, and make a fully fledged 3d game engine out of it - alongside a basic 3d towerdefense game with a small level editor. OH BOY. What was I thinking? I had done the same kind of sh*t already as part of my graduation, and now I was doing it again? Just with a much higher relative difficulty and much less time. And guess what... it was the success I had just been waiting for!

I started to really carefully plan my structure ahead - while still getting stuff done, with only small phases of analysis paralysis. I designed a flexible entity-component-system, a complex low-level renderer (using a design discussed in the "frostbite rendering architecture" thread on these forums), and everything I needed for that 3d towerdefense game. So the "Acclimate Engine" was born. What libraries did I use? Silly you, I really think you aren't paying attention. Don't worry, I'm glad you even got this far. You deserve a gold medal for getting through this wall of text (or two, really).

And that is where I'm at. After that project, I continued to develop the engine up to this point. I integrated a directx11 render-backend, plugin-support, an editor, and by now a visual-scripting language. The engine has grown from about 15k LoC to about 150k as of now, and as much as LoC is not a great measure for anything, it still shows a great success for me: being able to handle the ever-growing complexity without it getting unworkable. It's far from it - I plan on continuing to develop the engine for quite some time. It only seems to get better and better with every iteration, unlike my other projects. For example, I'm in the process of porting Terranigma 2 from the Maker to my engine, also finishing the game up content-wise. But that's another story I will tell another time.
__________________________________________________________________________________

Oh boy. Now that got way out of hand. I really don't expect anyone to fully read that. But that's just how I like to write - I'm sort of unable to express my thoughts without thousands of words.

So if you want to see something more visual, here is a link to my portfolio's "other projects" page, where you can see all the projects I mentioned:

http://www.envile-media.at/jwatzinger/?page_id=44

So next up, I'm going to talk about the progress of the Acclimate Engine so far, starting from the beginning, divided into the different modules. So stay tuned!

Juliean
