best methods: data driven code

Started by Norman Barrows, 8 comments, last by Norman Barrows 9 years, 2 months ago


Data-driven design is a way to reduce iteration time by eliminating re-compiles when constants change. But truly data-driven software would be able to read in both constants and code from an external source such as a hard drive, network connection, etc. Code would be in some form such as interpreted scripts, p-code for a built-in VM, etc.
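For the constants half, a minimal sketch (file name and function names here are just illustrative, not from any particular engine) could be a plain "name value" text file parsed into a map at startup, so changing a value means editing the file rather than recompiling:

// minimal sketch: load tunable constants from a plain text file of "name value" pairs
#include <fstream>
#include <map>
#include <string>

std::map<std::string, float> g_constants;

bool load_constants(const std::string& path)      // e.g. "constants.txt"
{
    std::ifstream in(path);
    if (!in) return false;
    std::string name;
    float value;
    while (in >> name >> value)                   // each line: "walk_speed 4.5"
        g_constants[name] = value;
    return true;
}

float get_const(const std::string& name, float fallback)
{
    auto it = g_constants.find(name);
    return it != g_constants.end() ? it->second : fallback;
}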

Given the various ways to get data-driven code into a piece of software and execute it, what tends to work best for games?

A. interpreted scripts?

B. a VM and compiled p-code? If the VM compiler is fast, the impact on iteration time would be negligible; my own experiments with VM-based games have shown this. Aside: I stopped that avenue of exploration when it came time to implement parameter passing. I concluded that a macro processor built on top of C++ gave you pretty much all the advantages of both a custom language and a standard C++ development environment (debugger, profiler, lint, optimization, etc.) with MUCH less work - although it does not reduce compile times. As for VMs, their use in mission-critical code is still questionable at the current performance level of today's PCs. Someday we may all be able to use VM-interpreted code and pretty much eliminate compiling from the development iteration cycle (like BASIC and DOS back in the day <g>), but it seems that day is not today - at least for games of any significant size.

C. something else entirely?

D. all of the above? <g>

E. none of the above? <g>

- sorry, I couldn't resist; it looked too much like a multiple-choice test! <g>

But seriously, what tends to work best?

I don't have any specific cases in mind; this is more about looking toward the future and how to handle possibilities going forward - always think ahead!

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php


Most games I have seen tend to go with a set of config files combined with scripts to implement a data-driven facility.

Such a system lends itself to easy modding too, because you don't need some kind of proprietary VM compiler or assembler to make behaviour and config changes...

Aside: I stopped that avenue of exploration when it came time to implement parameter passing. I concluded that a macro processor built on top of C++ gave you pretty much all the advantages of both a custom language and a standard C++ development environment (debugger, profiler, lint, optimization, etc.) with MUCH less work - although it does not reduce compile times.

Unless I'm mistaken, it sounds as though you're just dressing up C++ with some macros, which isn't really scripting. At that point you might as well have an engineer write the C++ directly. Plus I'm not sure whether or not we're to infer that you load and execute arbitrary assembly code at run-time, which scares me!

You want something that runs inside a VM because it provides you with a nice, safe sandbox you can easily control and maintain. You can provide users with the exact data and interface they should have to the game and isolate them from touching things they shouldn't and breaking the game (or worse). It's also typically easier to control their lifetimes independently of everything else, since many VM-based languages allow you to place them into their own island of memory that you can toss away wholesale when you're done with them. We've also used this to our benefit in the past as a simple error-recovery mechanism -- if someone writes a broken script that puts the VM in a bad state, you can often just tear down and rebuild the VM on the spot and keep going (if you've already reported the issue and can't debug it immediately yourself, of course). Granted, you need to write such scripts in a way that they can auto-recover when re-loaded, but my point was to highlight another potential advantage of a VM. As to whether you're interpreting scripts directly or compiling them into bytecode first, that's more of an optimization and obfuscation thing than anything else.
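A rough sketch of that "toss the island away and rebuild it" recovery idea, using Lua purely as one example of an embeddable VM (the post doesn't name a specific language), might look like this:

// tear down a misbehaving script VM and rebuild it from scratch
#include <cstdio>
extern "C" {
#include <lua.h>
#include <lualib.h>
#include <lauxlib.h>
}

lua_State* create_script_vm(const char* script_path)
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);   // open the standard libraries (a real sandbox would expose only a safe subset)
    if (luaL_dofile(L, script_path) != 0) {           // load and run the script file
        std::printf("script error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return nullptr;
    }
    return L;
}

void recover_script_vm(lua_State*& L, const char* script_path)
{
    // after reporting the bad state, throw the whole VM away and rebuild it
    if (L) lua_close(L);
    L = create_script_vm(script_path);
}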


'Norman Barrows', on 23 Jan 2015 - 6:16 PM, said:
Aside: I stopped that avenue of exploration when it came time to implement parameter passing. I concluded that a macro processor built on top of C++ gave you pretty much all the advantages of both a custom language and a standard C++ development environment (debugger, profiler, lint, optimization, etc.) with MUCH less work - although it does not reduce compile times.

Unless I'm mistaken, it sounds as though you're just dressing up C++ with some macros, which isn't really scripting. At that point you might as well have an engineer write the C++ directly. Plus I'm not sure whether or not we're to infer that you load and execute arbitrary assembly code at run-time, which scares me!

That VM experiment I referred to was a test of the feasibility of a pure p-code VM game. While I was able to get 60 fps no problem, the additional work required to create a fully functional VM language, suspicions that it might not be fast enough as project size grew, and the loss of things like compiler optimizations made me abandon the project - but it's possible I did so prematurely. Instead I opted for a macro processor that translates to C++ code. This was OK, because one of my primary motivations in creating a VM language was to reduce the number of keystrokes required for code entry, and a macro processor did this while still giving me the power and benefits of C++.
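As a purely hypothetical illustration of that flavor of approach (these are not Norman's actual macros), a handful of terse aliases can expand to ordinary C++, so the compiler, debugger, and profiler still see plain C++:

// hypothetical keystroke-saving macros that expand to plain C++
#define loop(i, n)   for (int i = 0; i < (n); ++i)
#define ifnot(x)     if (!(x))
#define until(x)     while (!(x))

// usage: fewer keystrokes, but still just C++ after preprocessing
void clear_scores(int* scores, int count)
{
    loop(i, count)
        scores[i] = 0;
}

A standalone macro processor that emits C++ source (as described above) works the same way in spirit, just as a separate pre-pass rather than via the C preprocessor.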

My question today is more about how to do things going forward.

So it seems it boils down to interpreted scripts or p-code, with p-code executing faster but requiring a compiler. I'd probably opt for p-code personally - but then again, I'm a performance junkie! <g>

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Having done the script-compiled-to-pcode thing I can tell you from experience that it was the only way we were going to get our design team to actually be able to customize interactions and provide fast iteration on game systems and features. Sure, sometimes we needed to move a scripted system into code for speed reasons, but by that time we had a blueprint to follow and most of the questions and problems with said system were known and overcome.

A scripting language can also do a lot of crazy stuff that you'd never want to do in C code by hand, like hot-loading changed scripts and patching up the object data at runtime, all while the script itself is waiting for a function that takes 10 seconds to run to return.

In our case, scripts were used in places where the player wouldn't notice small delays in interaction ("small delays" being anything from a single frame to a second or two, depending on the system) and where iteration time was key to being able to make something fun.


Having done the script-compiled-to-pcode thing I can tell you from experience that it was the only way we were going to get our design team to actually be able to customize interactions and provide fast iteration on game systems and features.

So p-code let you put more hands to the task of scripting/coding behavior - AND it let those hands work without the delays of a re-build.

It seems that so many things come down to whether or not you have non-coders on the team - and to build times.


A scripting language can also do a lot of crazy stuff that you'd never want to do in C code by hand.

goto, anyone? <g>

Self-modifying code?

FYI: as part of its anti-crack protection, Caveman includes a small p-code VM that uses self-modifying p-code and is part of the main game loop! Needless to say, it's a very small bit of code for performance reasons, and the entire VM is coded in a RISC-like manner. But it's got all the basics - memory, fetch, an instruction pointer, arithmetic and logic instructions, etc.
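For what it's worth, a VM like that can be surprisingly little code. The sketch below is an invented illustration of the general shape (instruction pointer, small memory array, fetch/decode/execute loop), not Caveman's actual opcode set:

// tiny RISC-style p-code interpreter: fetch, decode, execute
enum Op { OP_LOAD, OP_ADD, OP_STORE, OP_JNZ, OP_HALT };

struct Instr { int op, a, b, c; };   // simple three-operand format

void run_vm(const Instr* code, int* mem, int* reg)
{
    int ip = 0;                                // instruction pointer
    for (;;) {
        const Instr& i = code[ip++];           // fetch
        switch (i.op) {                        // decode and execute
        case OP_LOAD:  reg[i.a] = mem[i.b];            break;   // reg[a] = mem[b]
        case OP_ADD:   reg[i.a] = reg[i.b] + reg[i.c]; break;   // reg[a] = reg[b] + reg[c]
        case OP_STORE: mem[i.a] = reg[i.b];            break;   // mem[a] = reg[b]
        case OP_JNZ:   if (reg[i.a] != 0) ip = i.b;    break;   // branch if reg[a] != 0
        case OP_HALT:  return;
        }
    }
}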

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Across various projects we've had several different systems, some working better than others.

Whenever possible and reasonable, a live update is best. Everybody loves it. It feels so much better than forcing a recompile, even if it is a partial recompile of scripted data.

Most projects had several different ways to reload and reparse data. Some subsystems would use one kind of adjustable data, other subsystems would use a different kind of adjustable data.

Some built all the data through tools. There was no hot-loading of data. Designers could make changes, but in order to see them in game they needed to save their stuff, stop the game, build the game, and re-launch the game. The incremental process was much faster than a code build, but still required several minutes. This was fairly common on certain consoles like the DS, where it was somewhat difficult (but not impossible) to link between the PC and the console. One such editor worked well but was occasionally frustrating. That level/event editor, as just one example, had a transition trigger: the character entering a doorway region on the source map triggered a load and transition to the target location on the destination map. It could take 2-3 hours before the level designer was completely happy with both ends of the doorway trigger, each time requiring a new DS build but not a rebuild of the source code. It would likely have taken 2-3 days if this were in code, so the fine-tuning time was an improvement, but it could have been better as far as the designer was concerned. (When offered the choice of cutting features to improve the tool, designers preferred the annoyance.)

Some have required no tool changes; the executable monitored the source directory and automatically applied changes. This is particularly easy to do for models and textures when the game is built with proxy items. We've had this on every PC game and about half of the console games. Artists can have Maya up, modify the model, hit export, and watch on the other monitor as the model vanishes and gets replaced instantly. I've seen (but never worked on) projects where the tools link directly into Maya, so moving a vertex in Maya resulted in the model's vertex moving on the game screen. That seems harder than a file system monitor, but if you decide it is cost effective, it is a thing people do.
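A directory monitor of that kind can be sketched with nothing more than periodic polling of file write times (a real implementation might use OS change notifications instead); reimport_asset here is a stand-in for whatever reload path the engine already has:

// poll an asset directory once per frame (or on a timer) and re-import anything that changed
#include <filesystem>
#include <map>
#include <string>
namespace fs = std::filesystem;

std::map<std::string, fs::file_time_type> g_stamps;

void poll_asset_dir(const fs::path& dir, void (*reimport_asset)(const fs::path&))
{
    for (const auto& entry : fs::directory_iterator(dir)) {
        if (!entry.is_regular_file()) continue;
        auto stamp = entry.last_write_time();
        auto& known = g_stamps[entry.path().string()];
        if (known != stamp) {                 // new or modified file
            known = stamp;
            reimport_asset(entry.path());     // swap the proxy/old asset for the new one
        }
    }
}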

Some have used Excel spreadsheets that designers could make changes to. There was an "export to xml" button they could press that ran several Excel macros, potentially checking out files, then exporting to XML data files. These files were picked up in two different ways. First, the build system would compile them into an efficient format that gets loaded directly when the app started. Second, the game monitored that directory for file changes; when any file changed, the simulator stopped, the XML was parsed, and the modified values were re-imported and applied. This had the potential to break the game in progress. Some changes worked well for this type of editing, some changes required a restart. An example of this was adjusting our story tree in one game. The designer or writer could modify the spreadsheet, hit the button, switch over to the game which detected the file change and reloaded, then use a cheat code to jump to a specific node of the story tree. Usually this worked quite well.

Some have had built-in systems that allowed live tuning of data. On the PC you could hit a debug key, on the consoles this meant hitting a magic key combination; both opened a bunch of cascading menus on screen exposing tens of thousands of options and variables that had been created over the years. As an example, on Tiger Woods I built a fancy grid to show the surface of the putting green. It had little animated knots along the grid to help show the direction and speed of the slope. In this case designers could view and modify the data on the console but needed to write them down and hand-enter the final values. Designers could live-modify the tunable constants controlling the speed of the knots, write down the new values, and modify the script. They could also live-modify the spacing of the grid lines both horizontally and vertically, the number of grid lines in each direction, and several other values. IIRC there were about 10 different adjustable values I put in there. The designers could rapidly modify items and see them on screen instantly.
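A rough sketch of the registration side of such a live-tuning system might look like the following - purely illustrative, not the actual Tiger Woods code - with the debug menu (not shown) walking the list, displaying values, and writing edits back through the stored pointers:

// register named, range-limited float variables for an in-game tuning menu
#include <string>
#include <vector>

struct Tunable { std::string name; float* value; float min, max; };

std::vector<Tunable>& tunables()
{
    static std::vector<Tunable> list;
    return list;
}

void register_tunable(const std::string& name, float* value, float min, float max)
{
    tunables().push_back({ name, value, min, max });
}

// usage: one line per value exposed to the menu
float g_knot_speed = 1.0f;
float g_grid_spacing = 0.5f;

void register_green_grid_tunables()
{
    register_tunable("green.knot_speed",   &g_knot_speed,   0.0f, 10.0f);
    register_tunable("green.grid_spacing", &g_grid_spacing, 0.1f,  5.0f);
}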

Still others used a tool in a side window that allowed modification in game, with the option to save. Designers could make changes to live data in another window, watch as they took effect, and save when they were happy. An example of this was on Sims 3: designers could change values for how much interest an object had. They could adjust values, perhaps making "evil" sims slightly more interested in sabotaging an item, then set the game to a fast speed and watch how often it happened. If they weren't satisfied, they could change the number, which was instantly reflected, and watch for another several sim-days to see how often it happened. Once satisfied, hit the button to save the file, and all is well.

Some games have allowed modifying the script code itself; this one was rather advanced. It would stop the simulator, reset all of those objects that were in use (in turn resetting any Sims that were using the item), unload the items, replace the script code, replace the items, then restart the simulator. I never looked into the mechanics, but I'm pretty sure they relied on the save/load system for it: saving off those objects to a serializer stream, deleting them from the world, rebuilding and recompiling the script, and then loading the objects back into the world with a deserializer stream. Once you get used to having this type of feature it is painful to not have it.
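Since the mechanics above are explicitly a guess, here is an equally hypothetical sketch of that save/swap/reload cycle, with all of the types and helper functions invented for illustration:

// hypothetical stand-ins: a byte-stream snapshot and a script-backed object class
#include <string>
#include <vector>

struct Snapshot    { std::vector<unsigned char> bytes; };
struct ScriptClass { std::string source; /* plus compiled form, live instances, ... */ };

Snapshot save_objects_of_class(const ScriptClass&);          // serialize the live objects
void     destroy_objects_of_class(ScriptClass&);             // remove them from the world
void     recompile_script(ScriptClass&, const std::string&); // rebuild the script code
void     load_objects_from(const Snapshot&, ScriptClass&);   // deserialize them back in

void hot_swap_script(ScriptClass& cls, const std::string& new_source)
{
    Snapshot snap = save_objects_of_class(cls);  // 1. snapshot affected objects
    destroy_objects_of_class(cls);               // 2. pull them out of the world
    recompile_script(cls, new_source);           // 3. swap in the new script
    load_objects_from(snap, cls);                // 4. restore objects from the snapshot
}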

Exactly what kind of system you use is heavily dependent on the system you are tying it to.

Hot-reload of models, textures, and animations is not too difficult to implement on the PC, and it saves artists cumulatively hundreds of hours, possibly thousands of hours, over the course of a project when you multiply the number of people by the number of reloads over 12 or 18 or 24 months. Usually a low cost, usually a high benefit: do it.

A more complex live preview system like we used in Tiger Woods, with a comprehensive menu with graphical sliders and such, has a fairly high burden to create; sometimes the cost of creating tunable values for a specific feature is large and never fully recovered. In that game, since the engine has lasted for a decade, the large up-front investment becomes worth it over the long tail. Moderate to high cost, moderate to high benefit: it may or may not be a good fit.

Some data has complex interactions and is very difficult to update live. Other times, making it adjustable introduces penalties or forbids certain important optimizations. High cost, low benefit: bad idea.

Most games achieve data-driven design through configuration files and scripts, but I think that increases the programmer's maintenance costs. In my project, I designed a virtual logic class module (the C++ code is in the repository linked below). I can add new configuration dynamically through an XML file without changing any C++ code, and the C++ code is still notified of any changes to that configuration: the class implements a set of mechanisms driven by XML configuration files, and you can register callback functions that are called when properties change at run time, so it's not just a static configuration file. This technology has been used in my game; if you're interested, you can download it and have a look.

https://github.com/ketoo/NoahGameFrame [A fast, scalable, distributed game server framework for C++]
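As a generic illustration of that register-a-callback-on-property-change pattern (this is not NoahGameFrame's actual API, just the general idea), a small property bag might look like:

// notify registered C++ callbacks whenever a named property changes at run time
#include <functional>
#include <map>
#include <string>
#include <vector>

class PropertyBag {
public:
    using Callback = std::function<void(const std::string& name,
                                        const std::string& old_value,
                                        const std::string& new_value)>;

    void set(const std::string& name, const std::string& value)
    {
        std::string old = values_[name];
        values_[name] = value;
        for (auto& cb : callbacks_[name])
            cb(name, old, value);            // tell interested C++ code about the change
    }

    void on_change(const std::string& name, Callback cb)
    {
        callbacks_[name].push_back(std::move(cb));
    }

private:
    std::map<std::string, std::string> values_;                   // e.g. loaded from XML
    std::map<std::string, std::vector<Callback>> callbacks_;
};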


When offered the choice of cutting features to improve the tool, designers preferred the annoyance.

Good for them!

Users generally prefer more features to more/better unreleased in-house tools.


Some built all the data through tools. There was no hot-loading of data. Designers could make changes, but in order to see them in game they needed to save their stuff, stop the game, build the game, and re-launch the game.

Caveman uses this for some of the stats for animals. Newer parts of the animal (monster) type definitions are data-driven; older parts are still hard-coded.


Some have had built-in systems that allowed live tuning of data.


In this case designers could view and modify the data on the console but needed to write them down and hand-enter the final values.

Caveman has that, but in the form of a generic editor that can be hooked up to as many as 10 objects at once with just one line of code each, then used in-game to adjust things.

Later I discovered I could do the same thing with the built-in rigid-body modeler/animator.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

I was thinking about this the other day, and it seems to me the biggest problem is passing data back and forth between game and VM code - or perhaps the fact that you have to define an API (parameter list) for each VM function.

For example:

In Caveman, there's an action called "inspect hut". It's an action like an action in The Sims, such as "water plant" or "kick garden gnome". In this case, the action handler code increments a counter; when the counter hits a limit, it displays a message saying what the quality of the hut is.

So I was thinking about how to write the action handler using VM code. The game code owns the counter, so it would have to pass that by reference, and the VM code would increment it and return it. The VM code would also need access to all the other game variables required: hut quality passed by value, and current action passed by reference so it can be set to NONE when the action completes. The message string would live in the VM code.

So that's counter, hut quality, and current action that have to be passed between the game and the VM. Three params - not so bad.
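A sketch of what that handler might look like with exactly those three pieces of state crossing the boundary - the names, the limit, and the message routine are all illustrative, not Caveman's actual code:

// sketch of the "inspect hut" handler as it might be exposed across a game/VM boundary
enum Action { ACTION_NONE, ACTION_INSPECT_HUT /* , ... */ };

void show_message(const char* text);   // assumed game-side message routine

// counter and current_action are owned by the game and passed by reference;
// hut_quality is passed by value.
void inspect_hut_handler(int& counter, int hut_quality, Action& current_action)
{
    const int limit = 100;                       // ticks required to finish inspecting
    if (++counter >= limit) {
        show_message(hut_quality > 50 ? "This hut is sturdy."
                                      : "This hut is falling apart.");
        current_action = ACTION_NONE;            // action complete
        counter = 0;
    }
}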

But I also have much more complex action handlers that use as many as one to two dozen parameters, many of which are constants stored in global tables. I suppose that if access were non-global, they'd have lots of parameters too.

To get around this passing of lots of info between game and VM, you implement more and more in the VM.

So you'd implement start_inspect_hut_action AND inspect_hut in VM code, and it would store the counter, limit, and message string on the VM side. Only the current action would be a parameter - and you could simply make it return 1 if the action completed and zero otherwise, set current action accordingly in the calling code, and eliminate parameters entirely.
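From the game side, that slimmed-down interface might reduce to something like this, where run_vm_inspect_hut is a hypothetical bridge into the VM and the VM keeps the counter, limit, and message string internally:

// game-side caller: only the action state crosses the game/VM boundary
enum Action { ACTION_NONE, ACTION_INSPECT_HUT };

int run_vm_inspect_hut();                 // returns 1 when the action completes, else 0

void update_current_action(Action& current_action)
{
    if (current_action == ACTION_INSPECT_HUT && run_vm_inspect_hut() == 1)
        current_action = ACTION_NONE;     // VM reported completion
}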

But following this trend, more and more code becomes VM code. At some point, performance may become an issue, and the project becomes less "data driven" and more "written in a language that compiles faster". The ultimate evolution of this would probably be a game done almost entirely in VM code, with just the mission-critical stuff in C++ (or whatever) - or perhaps even a pure VM game where the VM language included high-level mission-critical routines such as builtin_render_queue.render_all.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

