  • 08/11/17 09:04 PM

    Using DOT to Debug Data Structures

    General and Gameplay Programming

    TheComet

    In this article, I'd like to share a method I used to visually debug tree-like data structures by leveraging the DOT format and Graphviz. This may be useful to you if you ever find yourself working with low-level data structures and no easy way to see what your code is doing.

    The Problem

    During the early stages of development of my inverse kinematics library (which, at the time of writing, is in its alpha stages and not quite ready for general use yet), I was working on an algorithm which takes a scene graph as its input and generates an optimised structure consisting of a tree of "chains", specifically designed for use with an IK solver.

    The transformation is not that easy to explain in words, but here is an illustration of before (a) and after (b) (please excuse my MSPaint skills; I'm a programmer, not an artist):

    [Figure 1: the example scene graph before (a) and after (b) being broken down into chains]

    You can see here that each end effector in the scene graph specifies a "chain length" parameter, which tells the solver how many parent nodes are affected. Since the IK solver works most efficiently on single chains of nodes, it makes sense to break down the scene graph into multiple chains which the solver can then process sequentially. This is illustrated in (b). Notice how chain 1 (red) becomes isolated from the rest of the tree after processing, because its end effector only specified a length of 1. Also notice how in the new structure each chain consists of only a sequence of nodes with no branches.
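    To make the idea concrete, here is a minimal sketch of how a single effector's chain might be collected by walking up `chain_length` parents. The `Node` class and `build_chain` function are hypothetical illustrations, not the library's actual types:

```python
class Node:
    """Hypothetical minimal scene-graph node; the real library stores much more."""
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent

def build_chain(effector, chain_length):
    """Collect the effector and up to chain_length ancestors into one chain,
    ordered from the chain's base to the effector, stopping at the root."""
    chain = [effector]
    node = effector
    for _ in range(chain_length):
        if node.parent is None:
            break
        node = node.parent
        chain.append(node)
    chain.reverse()
    return chain
```

    For chain 1 in the figure, a chain length of 1 yields just two nodes (the effector and its immediate parent), which is why that chain ends up isolated from the rest of the tree.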

    The algorithm had to be able to handle a few weird edge cases, such as:

    • What happens when you place an effector on a node that has multiple children?
    • What happens when there are multiple end effectors in a chain?
    • What happens when an end effector specifies a chain length that doesn't quite join up with the rest of the tree?

    This of course made the algorithm harder to test. I did write a suite of unit tests using Google's testing framework, but I wanted more: the ability to visually inspect what my algorithm was generating, without having to pull in a fancy 3D engine.

    Introducing: DOT and Graphviz

    DOT is a simple graph description language. Graphviz is a set of open source tools for generating graphics from DOT descriptions. For example, the following DOT code:

    graph testgraph {
        a -- b;
        b -- c;
        b -- d;
    }

    Compiled with dot as follows:

    dot -Tpdf testgraph.dot -o testgraph.pdf

    Produces the following graphic:

    [Figure 2: the graph Graphviz produces from the DOT code above]

    DOT is quite a powerful language. It's possible to specify colours, shapes, multiple connections between nodes, and much more! Read up on the format specification for more information.
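    For instance, the kind of styling used in the figures that follow (square chain boundaries, filled effector nodes, coloured edges) only takes a few extra attribute lists. A hand-written illustration, not generated output:

```
graph styled {
    node [shape=circle];
    6 [shape=square];                                     // chain boundary
    9 [shape=square, style=filled, fillcolor=lightblue];  // effector
    1 -- 2;
    2 -- 6 [color=red, penwidth=2];                       // chain connection
    6 -- 9;
}
```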

    In only a few lines of code I was able to iterate the optimised chain tree and serialise it to DOT. This is what the example tree I drew in MSPaint looks like after it is broken down by my algorithm and exported to DOT:

    [Figure 3: the example tree after being broken down by the algorithm and exported to DOT]

    Things to note:

    • The black edges show the connections in the original tree.
    • The red edges show how the chains are connected (just like in the first figure (b)).
    • Effector nodes are coloured blue.
    • Nodes that mark the start or end of a chain are square. Node 6, for example, is square because it has two child chains, while node 2 is not because it sits in the middle of the second chain.
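    A serialiser of that sort really is short. As a rough sketch (assuming a hypothetical `Node` with an `id`, a `children` list, and an `is_effector` flag, not the library's actual types), a depth-first walk that emits one edge per parent/child link is all it takes:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Hypothetical chain-tree node for illustration only."""
    id: int
    children: list = field(default_factory=list)
    is_effector: bool = False

def tree_to_dot(root):
    """Walk the tree depth-first and emit one DOT edge per parent-child link,
    colouring effector nodes blue as in the figure."""
    lines = ["graph chains {"]
    stack = [root]
    while stack:
        node = stack.pop()
        if node.is_effector:
            lines.append(f"    {node.id} [style=filled, fillcolor=lightblue];")
        for child in node.children:
            lines.append(f"    {node.id} -- {child.id};")
            stack.append(child)
    lines.append("}")
    return "\n".join(lines)
```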

    And just like that, I had a powerful way to quickly spot errors in my algorithm. Using Python and the watchdog package, I wrote a simple script that monitored the DOT file for changes and automatically compiled it to PDF. Every time I ran my program, the graphic would update immediately on my second monitor so I could inspect it.
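    My actual script used watchdog for event-driven callbacks; a dependency-free sketch of the same idea (file names here are placeholders) can simply poll the file's modification time and re-run dot:

```python
import os
import subprocess
import time

def needs_rebuild(dot_path, pdf_path):
    """True if the PDF is missing or older than the DOT file."""
    if not os.path.exists(pdf_path):
        return True
    return os.path.getmtime(dot_path) > os.path.getmtime(pdf_path)

def watch(dot_path="testgraph.dot", pdf_path="testgraph.pdf", interval=0.5):
    """Poll the DOT file and recompile it to PDF whenever it changes."""
    while True:
        if os.path.exists(dot_path) and needs_rebuild(dot_path, pdf_path):
            subprocess.run(["dot", "-Tpdf", dot_path, "-o", pdf_path])
        time.sleep(interval)
```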

    Another Example

    In another application I wrote, I implemented an intrusive profiler which dynamically builds a tree from the call graph and stores timing information in each node. I thought it would be cool to also dump this tree to DOT format and see what it looked like. Note that in this case I didn't use the "dot" command-line tool; instead I used "neato" (also part of the Graphviz package). Neato has a different layout algorithm based on physical constraints, which in this case produces much nicer graphs than "dot":

    [Figure 4: the profiler's call graph rendered with neato]

    I find it quite beautiful to look at. What you see here is a visual representation of how much time was spent where in the program. Nodes that are red are "hot" (lots of time was spent in them) and nodes that are blue are "cold" (almost no time was spent in them).
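    The hot/cold colouring itself is easy to generate. As a sketch of the kind of mapping I mean (my profiler's exact gradient may differ), each node gets a fillcolor interpolated between blue and red by its share of total time:

```python
def heat_color(fraction):
    """Map a 0..1 'share of total time' to a DOT fillcolor hex string,
    from blue (#0000ff, cold) through to red (#ff0000, hot)."""
    fraction = min(1.0, max(0.0, fraction))
    r = int(round(255 * fraction))
    b = 255 - r
    return f"#{r:02x}00{b:02x}"
```

    Rendering with neato instead of dot is then just a matter of swapping the command: neato -Tpdf callgraph.dot -o callgraph.pdf.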

    If you zoom in a little bit you can also see that I exported some of the profiler parameters of each function.

    [Figure 5: close-up showing the profiler parameters exported for each function]

    While this does provide a very nice bird's-eye view of where your program needs optimising, I would of course recommend using proper profiling tools to further analyse the slow functions.

    In conclusion

    Due to the simplicity of the DOT language, it's trivial to write an exporter for it. So if you ever find yourself fiddling with low-level data structures, consider dumping them to DOT and visualising them. I found this extremely helpful during development of my IK library.





    User Feedback


    Hey, I've looked into your IK library and it seems awesome. How stable is it — besides being alpha, is it ready to use? And do you anticipate significant API changes in the future?


    Hi!

    It's fairly stable at this point (as in, it doesn't crash, there are no memory leaks, it correctly calculates results in every situation).

    The biggest API change in the near future will be that positions and rotations are specified in local space rather than in global space (which is currently the case). Other than that I don't see the API changing.

    Functionally, the way it was designed introduces some major flaws (none of which you will run into unless you try to solve nested trees with multiple end effectors). I'm still working on getting those fixed.

    Edited by TheComet



