mklingen

  1. Ah, I see, that's the bit of magic I was missing. I had only learned about the attributes related to "serialize" and "do not serialize" and didn't know of the more complex attributes related to how references are serialized/deserialized. Thank you.
  2. Yep, that's exactly what I'm doing right now :) My question is just: what to serialize? The game data itself (there is a lot of it, most of it is redundant, and some of it is referential)? Or a set of "helper" classes which get converted into game data (which is not scalable)?

     Take for example a pseudocode class like this:

         class Thing {
             // References to other things in the program...
             Thing thing1;
             Thing thing2;
         }

         class Things {
             List<Thing> things;
         }

         void SerializeThings(Things things) {
             JSON.Serialize(things);
         }

     How is this supposed to be properly serialized? The internal references inside the class "Thing" are not meant to be data members, but rather *references to other entities in my game*. The JSON serializer will just sit there looping forever on circular references like this, and it's a big mess.

     So what I've done is to create something like this:

         // Give everything an ID and reference it using that instead.
         class ThingFile {
             uint thingID;
             uint referenceThing1;
             uint referenceThing2;
         }

     Then something goes through, deserializes a bunch of "ThingFile" classes, and converts them into "Thing" classes. This is great because it actually *can* be saved. The problem is that serialization and deserialization are no longer "automatic", and along these lines I've already written thousands of lines of boilerplate.
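     For what it's worth, the two-pass ID-indirection scheme described above can be automated generically rather than hand-written per class. Here is a minimal Python sketch (class and field names are invented to mirror the pseudocode): serialize replaces object references with IDs, and deserialize creates every object in a first pass, then patches the references back up in a second pass, so circular references are no problem.

```python
import json

class Thing:
    def __init__(self, thing_id):
        self.thing_id = thing_id
        self.ref1 = None  # direct object references at runtime
        self.ref2 = None

def serialize(things):
    # Replace object references with IDs so the graph becomes a flat list.
    records = [{"id": t.thing_id,
                "ref1": t.ref1.thing_id if t.ref1 else None,
                "ref2": t.ref2.thing_id if t.ref2 else None}
               for t in things]
    return json.dumps(records)

def deserialize(text):
    records = json.loads(text)
    # First pass: create every object.
    by_id = {r["id"]: Thing(r["id"]) for r in records}
    # Second pass: patch the references back up.
    for r in records:
        t = by_id[r["id"]]
        t.ref1 = by_id.get(r["ref1"])
        t.ref2 = by_id.get(r["ref2"])
    return list(by_id.values())
```

     In C# the same idea can be driven by reflection (or by newer serializers with built-in reference handling), so the per-class boilerplate disappears.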
  3. I guess I am asking for too much. I would like the saves to be as complete as possible, as readable as possible, as small as possible, and to require as little boilerplate as possible to interface with. I am using C#, so can you point me to a resource which makes this a "non-issue"? I understand that C# supports native XML serialization, but that doesn't solve my issue of potentially massive game files. It also supports native binary serialization (which I am now leaning towards), but I'm worried about sensitivity to changes between game builds, and about the loss of human readability (though I'm not yet sure why I would want it to be human readable -- perhaps for interfacing with 3rd-party tools).
  4. So I've recently begun the arduous process of making it possible to save and load data about my game's state. My game is a tycoon-like sandbox game with a ton of data which needs to be saved: entities, player state, world state, etc. The approach I've taken right now is just to create a vast enumeration of "helper classes" which are meant to be serialized and deserialized into their associated game state. For instance, I have "entity files", as well as "player metadata files", "world files", etc. These get saved to JSON files. Then, a "save manager" just loops through them all and does the necessary boilerplate work to instantiate everything in the game.

     I've done this to make it very easy to control what data in my game actually gets saved, so I don't end up writing multi-gigabyte files full of unnecessary serialized data -- and also to avoid binary incompatibilities and allow for modding.

     Unfortunately, I've started to run into big problems with scaling up this system. Invariably, I will forget a bit of state that needs to be manually saved and transferred over, and I'm beginning to see the nightmare before me as I add more and more features to my game.

     So, what is the best practice? This is a part of game development I've never understood. Most tutorials I've looked at only consider trivial examples where there are only a few bits of data that need to be saved. Is it better just to dump binaries? Or perhaps I should be directly serializing all of the actual data in my game? (And doing the requisite nightmarish work of dealing with things like circular references, massive files, etc.)
  5. Not sure where this goes, but the solution is kind of graphicsy, so I put it here! As part of my robotics research, I hacked together a simple way of tracking arbitrary, modeled 3D objects with a Kinect or similar depth sensor using only a fixed-function graphics pipeline. Maybe it will be useful for some game programmer somewhere? Imagine that you have a bunch of household objects with known models. You can interact with these objects in front of a Kinect sensor to put feedback into the game.

     Here's a video: on the left we have a point cloud showing a table with a rock and a drill. I move around the rock and drill, and the system tracks them both. On the right, we see an offscreen buffer that is used to render synthetic point clouds. The synthetic point clouds are matched with the sensor cloud to track the objects.

     I make the following assumptions:
     - We have accurate models of the objects we wish to track.
     - We have a good estimate of the initial position of the objects in the Kinect image frame.
     - The objects either lie on tables, or are being held so that they are mostly visible.

     The algorithm works like this:
     1. Initialize the object positions in the Kinect frame using user help, fiducials, or an offline template-matching algorithm.
     2. Each frame, get a Kinect point cloud.
     3. Cull out any points near large planes (use RANSAC to find the planes), which we assume belong to tables.
     4. Render synthetic point clouds for each of the objects. The way we do this is extremely simple: we color each object uniquely, render the entire scene into an offscreen buffer (this is the image on the right in the video), and then sample from the depth buffer to find the z-coordinate of each pixel.
     5. For each point in each object's synthetic point cloud, find its nearest point in the Kinect sensor cloud, such that the point is within a radius D (we set D to 10 cm). This is done using an octree.
     6. Using the corresponding points, run one iteration of ICP to find a correction.
     7. Transform the object by the correction returned by ICP.

     What we get is a system that can (sort of) track multiple 3D objects in (near) real time, and can handle occlusion so long as all of the objects in the scene are known. It could be made more useful if we also modeled the location of the human hands holding the objects. Work also needs to be done to figure out where the objects are to begin with...
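     The correspondence step (nearest sensor point within radius D) can be sketched as follows. This is a hedged Python illustration, not the actual implementation: brute-force search stands in for the octree, and function names are invented.

```python
import math

def find_correspondences(synthetic_pts, sensor_pts, max_dist=0.10):
    """For each synthetic point, find the nearest sensor point within
    max_dist (the radius D, 10 cm in the description above). Points with
    no neighbor within D are dropped, which is what gives robustness to
    occlusion. Brute force stands in for the octree here."""
    pairs = []
    for p in synthetic_pts:
        best, best_d = None, max_dist
        for q in sensor_pts:
            d = math.dist(p, q)
            if d < best_d:
                best, best_d = q, d
        if best is not None:
            pairs.append((p, best))
    return pairs
```

     The resulting pairs are exactly the input one ICP iteration needs to estimate the rigid correction.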
  6. A.I. Path Following

     I have two solutions. Both of them rely on treating the path not as a path per se but as a trajectory. What you want is for your agent to move such that at each point in time, the agent is close to the path, and his velocity is along the path. So you've got to add another dimension to your path: time. You can turn your static path into a function that says "where should I be at time t?" The agent then tries to get to the point where he "should be". This is called a trajectory.

     The easy solution:
     1. Take the entire trajectory as a single curve parameterized by time t between 0 and 1. Create a function f(t) which maps [0, 1] to a 3D point.
     2. The agent starts at t = 0. He seeks the point f(t) until he's within a radius (called the blend radius).
     3. t then advances by a small amount. That's it!

     The better solution:
     1. Start again with a trajectory parameterized by a single number t between 0 and 1.
     2. Find t_x such that the distance between the agent and the trajectory is minimized. (If your trajectory is piecewise linear, this involves projecting onto the nearest line segment.)
     3. Seek the point f(t_x + e), where e is a small number representing the "lookahead time". e can be larger the further away the agent is from the trajectory.

     I also have source code and a graphical example for the second solution: http://www.openprocessing.org/sketch/96006
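     The better solution can be sketched for a piecewise-linear 2D path like so. This is a minimal Python illustration under the assumptions above (a fixed lookahead e, waypoints as tuples); the linked sketch is the author's actual version.

```python
def closest_param(path, pos):
    """Find t in [0, 1] minimizing distance from pos to a piecewise-linear
    path given as a list of (x, y) waypoints (the t_x step)."""
    best_t, best_d2 = 0.0, float("inf")
    n = len(path) - 1
    for i in range(n):
        (ax, ay), (bx, by) = path[i], path[i + 1]
        vx, vy = bx - ax, by - ay
        seg_len2 = vx * vx + vy * vy
        # Project pos onto segment i, clamped to the segment endpoints.
        s = 0.0 if seg_len2 == 0 else max(0.0, min(1.0,
            ((pos[0] - ax) * vx + (pos[1] - ay) * vy) / seg_len2))
        px, py = ax + s * vx, ay + s * vy
        d2 = (pos[0] - px) ** 2 + (pos[1] - py) ** 2
        if d2 < best_d2:
            best_d2, best_t = d2, (i + s) / n
    return best_t

def point_at(path, t):
    """Evaluate the path at parameter t in [0, 1] (the function f(t))."""
    n = len(path) - 1
    t = max(0.0, min(1.0, t))
    i = min(int(t * n), n - 1)
    s = t * n - i
    (ax, ay), (bx, by) = path[i], path[i + 1]
    return (ax + s * (bx - ax), ay + s * (by - ay))

def seek_target(path, agent_pos, lookahead=0.05):
    # Seek f(t_x + e): project onto the path, then look ahead by e.
    return point_at(path, closest_param(path, agent_pos) + lookahead)
```

     Each frame the agent steers toward seek_target(path, agent_pos); making lookahead grow with the distance to the path gives the smoother convergence described above.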
  7. I thought you might be interested in an article I wrote on Gamasutra about motion planning and pathfinding. Topics covered:
     - Theoretical foundations of motion planning and pathfinding -- optimality, completeness, configuration space, etc.
     - The Bug Algorithm
     - Visibility Graphs and Navigation Meshes
     - Lattice Grid Search: A* and variants
     - Flow Fields and other Control Policies
     - Randomized Planning: PRMs and RRTs
     - Trajectory Optimization
  8. Why Behavior Architectures?

    A better analogy would be *writing the entirety of a programming language and a compiler* every time I want to start a new project. By hand. And then debugging the underlying compiler and programming language before I even begin writing applications. And worse, the rest of my software is still all written in machine code.   Then, I've got to write tools for my new language in machine code. I've got to write authoring software. Debuggers. I've got to teach new people on the project my fancy new language. I've got to come up with style conventions. I have to handle corner cases.   Luckily, with high level programming languages, somebody already did that stuff for me decades ago.   Not so for these hand-rolled meta languages.
  9. Why Behavior Architectures?

    I'm not saying that's the only way I wish to do things, but rather that I would like to be able to do it easily within the confines of a behavior architecture. Production rules are a basic building block of computer programs. They should be in any behavior meta language as well. They live in BTs as selectors and sequences. They live in FSMs as state transitions. In my opinion, not only should they be in such an architecture, they should be very easy and intuitive to implement. 
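     To make the equivalence concrete, here is a toy Python sketch (all names invented) of one production rule, "if hungry then eat, else wander", written both as plain code and as the selector/sequence nodes it becomes in a BT:

```python
def eat(agent):
    agent["action"] = "eat"
    return True

def wander(agent):
    agent["action"] = "wander"
    return True

def hungry(agent):
    return agent["hungry"]

def tick_plain(agent):
    # 1. Plain code: the production rule is just an if/else.
    return eat(agent) if hungry(agent) else wander(agent)

def selector(*children):
    # 2. BT building blocks: a selector runs children until one succeeds...
    def tick(agent):
        return any(child(agent) for child in children)
    return tick

def sequence(*children):
    # ...and a sequence runs children until one fails.
    def tick(agent):
        return all(child(agent) for child in children)
    return tick

# The same rule as a tree: Selector(Sequence(hungry?, eat), wander).
tick_tree = selector(sequence(hungry, eat), wander)
```

     Both formulations encode the identical rule; the point is that the architecture should make this no harder to write than the one-liner.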
  10. Why Behavior Architectures?

     I recently went through 3ish years of a large robotics project where the main focus was on doing complex robotic tasks through behavior trees. There were a couple of limitations on the trees:
     - They had to be binary.
     - They had to return either true or false.
     - They could not pass information at runtime between each other as arguments.

     We had the following conditional operators: Sequence (&&), Select (||), Parallel (*), While, For, and Match (which is a sort of switch). Each behavior was a C++ class with a single function called "execute" which returned true or false. This function was then called in a thread. To develop behavior trees, we used a horrible combination of C++ operator overloading, macros, and so on. We could then view the resulting behavior tree in a little window with colorful boxes and arrows. The resulting code ended up looking something like this:

         // A very high-level robotics task. The robot initializes, searches for a
         // rock, and tries to pick it up with either its right or left hand.
         Behavior& PickUpRock() {
             return
                 // These are examples of behavior creation macros which take in an
                 // ordinary C++ function. They get expanded into something along the
                 // lines of Behavior(name, boost::bind(function, arg1, arg2, ...)).
                 BEHAVIOR(InitializeSystem)
                 && BEHAVIOR1(SearchFor, "table")
                 && BEHAVIOR1(SearchFor, "rock")
                 // Choosing the left or right hand involves the usage of a cryptic
                 // switch-statement substitute.
                 && Match(IsOnLeft("rock"), 1,
                          Grasp(LEFT_ARM, "rock")
                          && BEHAVIOR1(GoHome, LEFT_ARM))
                 || Match(IsOnLeft("rock"), 0,
                          Grasp(RIGHT_ARM, "rock")
                          && BEHAVIOR1(GoHome, RIGHT_ARM));
         }

         // This is an example of a sub-behavior which uses arguments.
         // It is just a sequence of less abstract sub-behaviors.
         Behavior& Grasp(Arm arm, string object) {
             return CreateGraspSet(arm)
                 && PlanToGrasp(arm)
                 && GraspPreshape(arm, object)
                 && ServoToTarget(arm)
                 && CloseHand(arm);
         }

         // Many thousands of lines defining all the sub-behaviors follow in
         // multiple files... At the very bottom we finally get to write straight
         // C++ code. It will usually be just a dozen or so lines. If the Behaviors
         // fail to compile, you will get very cryptic error messages from boost
         // and the STL.

     You can see that what's happened is we've just created a less useful, harder-to-write meta-language on top of C++, instead of using C++ directly. Even worse, we were forced to use static singletons everywhere just to transfer data between behaviors. This led to things being extremely hard to debug, since we couldn't easily infer what data was getting changed where by which behavior. Contrast this with simply writing the whole thing in an ad-hoc script, where you can store local data and very clearly see where the data is going to and where it comes from.

     I've yet to find a BT implementation which doesn't involve either extreme verbosity or cryptic meta-language symbols. Take for example this library, which in its main example takes dozens upon dozens of cryptic lines full of "new"s to make what is equivalent to a few function calls and a while loop within if/else brackets -- or perhaps this library, which uses cryptic C# operators to make a meta-language, much like in my example. I'm beginning to suspect that the whole drive behind BTs is to make it easy to create plug-and-play graphical programming in an editor (why you would want to do this if you're not writing middleware for non-programmer designers, I have no idea).

     Is this how BTs are really used in the industry? Is there a better way?
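     For contrast, here is roughly what the same task looks like as plain code, in a Python sketch (the robot API is invented for illustration): sequences become short-circuiting `and`, Match becomes an ordinary conditional, and data flows through parameters and return values instead of static singletons.

```python
def pick_up_rock(robot):
    # Each step returns True/False, just like the BT behaviors.
    if not robot.initialize():
        return False
    if not (robot.search_for("table") and robot.search_for("rock")):
        return False
    # The Match(...) pair collapses into one conditional.
    arm = "left" if robot.is_on_left("rock") else "right"
    return grasp(robot, arm, "rock") and robot.go_home(arm)

def grasp(robot, arm, obj):
    # A BT sequence is just short-circuiting `and`.
    return (robot.create_grasp_set(arm)
            and robot.plan_to_grasp(arm)
            and robot.grasp_preshape(arm, obj)
            and robot.servo_to_target(arm)
            and robot.close_hand(arm))
```

     Whether this plainness outweighs the tooling and visualization benefits of a BT is exactly the question the post is raising.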
  11. Why Behavior Architectures?

     True, but it also provides me the flexibility to change things if I desire. It's very frustrating to spend 20 minutes thinking about how I'm supposed to do a simple switch statement or increment a variable in a Behavior Tree when such a thing could otherwise be done with one line of code. It's also extremely frustrating to spend all of my time writing code that checks which state the agent is in, and formally defining state transitions in a state machine.

     Overall, you guys make good points about things like predictability, balance, and formally verifying systems.

     I guess what I really want is this: a behavior architecture which doesn't require me to write thousands of lines of bloated window dressing just to get started, which allows me to do if, else, foreach, while, and switch statements, and which allows me to pass data between behaviors in a natural, easy way.

     My preferred method of programming is something like this:

         // Just a complicated function.
         function Example(arguments) {
             // I want to be able to evaluate arbitrary arguments.
             // Also, I should be able to block for as long as I want
             // while waiting on a result.
             result = DoSomething(arguments);
             // I want to be able to do branching.
             if (MeetsCondition(result)) {
                 result2 = DoSomething2(result);
                 // I want to be able to short-circuit and return
                 // something if it needs no further processing.
                 if (MeetsCondition(result2)) {
                     return result2;
                 }
                 // I should be able to simply pass a variable through as many
                 // functions as I please and get a processed result out.
                 else {
                     return DoSomething4(DoSomething3(result2));
                 }
             } else if (!MeetsOtherCondition(result)) {
                 // I should be able to dynamically generate lists
                 // of results and evaluate them.
                 list = ComputeList(result);
                 // I should be able to easily iterate through lists of results.
                 foreach (element in list) {
                     elementResult = DoSomething2(element);
                     if (MeetsCondition(elementResult)) {
                         return elementResult;
                     }
                 }
                 // Here, nothing in the list met the condition.
                 return error1; // (or throw an exception)
             }
             // I should be able to evaluate error conditions.
             else {
                 return error2; // (or throw an exception)
             }
         }

     I.e., I like to think somewhat functionally, with a very clear flow of data into and out of functions. I like to be able to iterate over the results of functions. I like to be able to return whatever I please from a function. If I wanted to do something like the above in a Behavior Tree, I would need to do one of two things: 1. turn this into a complex leaf behavior and throw away the return value (or write it to some kind of shared state), or 2. spend hours upon hours trying to refactor it so it works with the three or four operations I have in a behavior tree, given that I simply can't pass the result of one behavior into another without serious work.
  12. In AI, it seems very common to want to make some kind of "behavior architecture" for ordering and building the different ways that agents can interface with the world. I see a lot of state machines, Behavior Trees, HFSMs, etc. I've implemented these for many different projects (and for my research in robotics as well), but recently I've started to doubt their usefulness.

      What benefit do I get out of using such an architecture over just coding the entirety of the behavior using modular, native functions and built-in control structures (or, if we're concerned about compilation or runtime modification, with simple scripts in a scripting language)? Am I missing something conceptually here?

      EDIT: By the way, what I mean by this is literally interpreting the concept of a behavior architecture as a series of explicit "State" or "Behavior" nodes that interact with each other dynamically, as opposed to just straight code calling and evaluating functions in any way the programmer desires. I've been on so, so many projects where development starts with "class Behavior ..." or "class State ..." and then proceeds to link the nodes up in data files dynamically or (much worse) using C macros.