

Member Since 05 Jan 2009

Posts I've Made

In Topic: How to store data for reflection in preprocessor stage

24 February 2015 - 01:37 PM

Olof Hedman has the right idea. C++ lets you approach reflection in a unique way: do (almost) everything at compile time.
I've experimented with this a lot in a couple of pet projects, as well as in some production code at my previous company. The solution I ended up liking the most is to describe everything at compile time using template specialization, much like type traits. Judging from the various compile-time reflection proposals for the C++ standard that I've seen, other people have converged on similar approaches, since what they propose works in broadly the same way.
I'm going to oversimplify this, but this approach is very extensible. I've used it to automatically generate language bindings completely at compilation time, to serialize graphs of objects, and a few other things.

Basically, I have a reflection::traits template that I specialize for every item I want my reflection system to know about. It is defined like so:

namespace reflection
{
    template< typename T > struct traits {};
}

I then have a specialization of it for each thing I want to describe, containing static methods, typedefs, etc., depending on what kind of thing it describes.

For instance, if I have a class named Foo, I'll have a specialization of the traits template that looks like this:

template<> struct reflection::traits< Foo >
{
    static const char* Name() { return "Foo"; }
};

At runtime, I can now get the name of class Foo by calling reflection::traits< Foo >::Name().
Of course, just getting the name isn't really useful. What I really want to do is enumerate all the members. I do it using a compile-time visitor pattern. I guess it would be possible to use a more functional programming style, with tuples or something similar, but I find the visitor approach less daunting.
In my previous company we only used this to serialize things, so to describe member variables I was doing something like this:

template<> struct reflection::traits< Foo >
{
    static const char* Name() { return "Foo"; }

    template< typename V > static void accept( V& visitor )
    {
        visitor.memberVar( "Blah", &Foo::m_Blah );
        visitor.memberVar( "Thing", &Foo::m_Thing );
    }
};

It was all wrapped up in a bunch of macros, so in practice each class description was just a few short lines.
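A rough sketch of what such macros could look like (all macro names here are invented for illustration; this is not the original code):

```cpp
#include <cassert>
#include <cstring>

namespace reflection
{
    template< typename T > struct traits {};
}

// Hypothetical macros expanding to a traits specialization like the one above.
#define REFLECT_CLASS_BEGIN( Class )                              \
    template<> struct reflection::traits< Class >                 \
    {                                                             \
        using Type = Class;                                       \
        static const char* Name() { return #Class; }              \
        template< typename V > static void accept( V& visitor )   \
        {

#define REFLECT_MEMBER( Member )                                  \
            visitor.memberVar( #Member, &Type::Member );

#define REFLECT_CLASS_END                                         \
        }                                                         \
    };

struct Foo
{
    int   m_Blah  = 1;
    float m_Thing = 2.0f;
};

REFLECT_CLASS_BEGIN( Foo )
    REFLECT_MEMBER( m_Blah )
    REFLECT_MEMBER( m_Thing )
REFLECT_CLASS_END

// A trivial visitor that just counts the described members.
struct CountingVisitor
{
    int count = 0;
    template< typename T, typename C >
    void memberVar( const char*, T C::* ) { ++count; }
};
```

The stringizing operator (#Class, #Member) gives the names for free, so each class description is just a begin/end pair with one line per member.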
The reflection::traits specialization had to be declared a friend, so I had another macro for that. That friend declaration is the only thing I need to insert into the definition of my classes; other than that, the approach is non-intrusive and all the reflection machinery lives completely on the side.


It is possible to do much more complicated things, though: just create a dummy type for each member variable / property, and create specific specializations of reflection::traits for those, where you can then put whatever you need, like pointers to setter/getter member functions.
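A sketch of that idea, assuming C++17 (the Player class and tag name are made up for illustration): a dummy tag type stands in for a property, and its traits specialization carries the accessor pointers.

```cpp
#include <cassert>

namespace reflection
{
    template< typename T > struct traits {};
}

class Player
{
public:
    int  GetHealth() const      { return m_health; }
    void SetHealth( int value ) { m_health = value; }
private:
    int m_health = 100;
};

// Dummy type identifying the "Health" property of Player.
struct Player_Health_tag {};

template<> struct reflection::traits< Player_Health_tag >
{
    using Class = Player;
    using Type  = int;
    static const char* Name() { return "Health"; }
    // Pointers to the getter/setter member functions.
    static constexpr Type ( Class::*Getter )() const = &Player::GetHealth;
    static constexpr void ( Class::*Setter )( Type ) = &Player::SetHealth;
};
```

Generic code (a property editor, a script binding) can then read or write the property through the traits without knowing anything about Player itself.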

Likewise, macros are just one way to go about it. In my pet project I do more complicated things, so I have a generator that produces all those templates from descriptions written in a simple language (I just don't like inserting annotations into the C++ code itself; I find that both more complicated and less neat).


Then I can, for instance, print the member variables of any class described with this system by doing something like this:

template< class C >
class PrintObjectVisitor
{
public:
    PrintObjectVisitor( const C& object, std::ostream& output ) :
        m_object( object ),
        m_output( output )
    {}

    template< typename T > void memberVar( const char* pName, T C::* mpVar )
    {
        m_output << "  " << pName << ": " << m_object.*mpVar << "\n";
    }

private:
    const C& m_object;
    std::ostream& m_output;
};

template< typename C >
void PrintObject( const C& obj )
{
    PrintObjectVisitor< C > v( obj, std::cout );
    reflection::traits< C >::accept( v );
}
The visitor can have methods besides "memberVar" to describe other aspects of the class, using template functions to pass along the required type (so that the visitor can use reflection on that type in the same way, and so on). For instance, you could have a method that tells the visitor about the superclasses of the class; it can then recursively visit them to print their member variables too.
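That superclass visiting might be sketched like this (the "baseClass" method name is my own invention, not from the original post):

```cpp
#include <cassert>

namespace reflection
{
    template< typename T > struct traits {};
}

struct Base           { int m_Id = 1; };
struct Derived : Base { int m_Extra = 2; };

template<> struct reflection::traits< Base >
{
    template< typename V > static void accept( V& visitor )
    {
        visitor.memberVar( "Id", &Base::m_Id );
    }
};

template<> struct reflection::traits< Derived >
{
    template< typename V > static void accept( V& visitor )
    {
        visitor.template baseClass< Base >();  // let the visitor recurse
        visitor.memberVar( "Extra", &Derived::m_Extra );
    }
};

// A visitor that counts every member variable, following base classes.
struct RecursiveCounter
{
    int count = 0;

    template< typename B > void baseClass()
    {
        reflection::traits< B >::accept( *this );
    }

    template< typename T, typename C >
    void memberVar( const char*, T C::* ) { ++count; }
};
```

The recursion happens entirely at compile time: the compiler inlines the Base visit into the Derived visit, so a flat list of member accesses comes out the other end.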
You can use this to attach reflection information and implement visitor patterns for things other than classes. For namespaces, I just create a dummy type in the namespace:

namespace Bar
{
    struct reflection_tag {};
}
Then specialize reflection::traits for "Bar::reflection_tag" to describe reflection information about that namespace, including a function that goes through all the classes and nested namespaces it contains, using a compile-time visitor like above.
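A sketch of that namespace side (the "nestedClass" visitor method and the Widget/Gadget classes are invented for illustration):

```cpp
#include <cassert>
#include <string>

namespace reflection
{
    template< typename T > struct traits {};
}

namespace Bar
{
    struct reflection_tag {};
    struct Widget {};
    struct Gadget {};
}

template<> struct reflection::traits< Bar::reflection_tag >
{
    static const char* Name() { return "Bar"; }

    // Enumerate the classes contained in the namespace.
    template< typename V > static void accept( V& visitor )
    {
        visitor.template nestedClass< Bar::Widget >( "Widget" );
        visitor.template nestedClass< Bar::Gadget >( "Gadget" );
    }
};

// A visitor that collects the names of the classes in the namespace.
struct ClassLister
{
    std::string names;

    template< typename C > void nestedClass( const char* name )
    {
        names += name;
        names += ';';
    }
};
```

Because the visitor receives each class as a template argument, it can turn around and apply reflection::traits to that class in the same way, recursing through the whole namespace tree.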
Likewise, I create dummy structs in the class's reflection traits to identify methods, properties and so on, and then specialize reflection::traits for those to describe everything I need about them, depending on what I'm doing.
The nice thing is that for most of this, you pay no runtime cost. That PrintObject function above, for instance, compiles into completely straightforward code that prints each variable of the object without performing any lookup through a container at runtime. Furthermore, you don't get a bunch of data you don't need compiled into your binaries. If you only need to serialize data into a binary blob, you don't need the class and variable names, as long as you can identify the format version. (I did that by using the same system to generate a CRC of all the class descriptions; that was enough for us, since we only used this for network communication, and it let us make sure that client and server were able to serialize and deserialize the exact same types.) By the way, with a modern C++ compiler, things like computing such CRCs could also be done completely at compile time using constexpr.
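A minimal sketch of such a compile-time hash, using standard CRC-32 (not necessarily the exact scheme the post describes; the description string is a made-up example):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Bytewise CRC-32 (reflected, polynomial 0xEDB88320), written so the
// compiler can evaluate it entirely at compile time (C++14 constexpr).
constexpr std::uint32_t crc32( const char* data, std::size_t len )
{
    std::uint32_t crc = 0xFFFFFFFFu;
    for( std::size_t i = 0; i < len; ++i )
    {
        crc ^= static_cast< unsigned char >( data[i] );
        for( int bit = 0; bit < 8; ++bit )
            crc = ( crc >> 1 ) ^ ( ( crc & 1u ) ? 0xEDB88320u : 0u );
    }
    return crc ^ 0xFFFFFFFFu;
}

// Hashing a (hypothetical) class description string costs nothing at runtime:
constexpr std::uint32_t kFooDescriptionCrc =
    crc32( "Foo{int m_Blah;float m_Thing;}", 30 );
static_assert( kFooDescriptionCrc != 0, "evaluated by the compiler" );
```

Client and server can then exchange just the 32-bit hash up front to verify they were built against identical type descriptions.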
Another plus of this method is that you don't need to add anything to the classes themselves; it all lives on the side. You can write macros to build all those template specializations, which is what I did at my previous company. However, in my personal projects I'm doing more sophisticated things with this approach than just serialization (like scripting language bindings), so I wrote a tool that generates the specializations. I didn't want a Qt- or Unreal-like approach of inserting special annotations throughout my C++ classes; it's just a matter of taste, but I find that messy. Instead, I have simple interface descriptions living in their own files, written in a description language that resembles a very simplified subset of C++, where I describe namespaces, classes, methods and properties. A tool then spits out a bunch of header files containing all those reflection::traits specializations, and from that I can generate serialization, language bindings and such entirely through templates.
It's also possible to use all this to build a more classical system that allows introspection at runtime, but I'd only use that for things like property editor UIs and the like.

In Topic: 1997 game graphic files

07 February 2015 - 09:40 AM

The third byte might just be a format version number. It's usually good practice to include such a thing in a binary format.


As for the bytes in grey, think about what information you are missing: if this file, BLATT.RES, holds animation frames that are supposed to be played together but which are of different sizes, then there is probably a relative position for each frame of the animation. There might also be some animation speed information somewhere, perhaps a per-frame display duration.


I'd try to see whether the bytes in grey might contain those things.
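To make that concrete, here is a purely hypothetical layout for one frame record; every field name and size here is a guess meant to show what the unknown bytes could encode, not the actual BLATT.RES format:

```cpp
#include <cassert>
#include <cstdint>

#pragma pack( push, 1 )
struct FrameHeader               // hypothetical; not the real format
{
    std::uint16_t width;         // frame dimensions (already known from the pixels)
    std::uint16_t height;
    std::int16_t  offsetX;       // relative position of this frame
    std::int16_t  offsetY;
    std::uint8_t  duration;      // per-frame display time, e.g. in ticks
};
#pragma pack( pop )
```

Checking whether the grey bytes, interpreted as small signed integers, move smoothly from frame to frame is a quick way to test the "relative position" theory.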

In Topic: Unworkable project

06 February 2015 - 04:06 PM

I've been in a similar situation at my previous job: very nice coworkers, but clueless/toxic management and the worst code base I've ever seen. (Imagine a C++ code base written using only the worst possible aspects of C, in the worst possible ways, by an egotistical self-taught medical doctor who reinvented everything from strcpy to his own "database engine", which had all the referential integrity of an Alzheimer's patient. And it was used to manage a hospital: everything from medical records to prescriptions.)


You made the right choice (although I myself waited until I had a new job lined up before putting in my resignation). There's no use being miserable in a job where you're the sucker stuck working on some horrible shit that management is unwilling to give the time and resources to fix.

In Topic: Ever completely lost a game and its source code?

31 January 2015 - 10:53 AM

It happens even to publishers.


I used to work for a studio that was doing handheld games for Atari (Infogrames Atari, not Atari Atari). One day, they decided to bundle two GBA games into one cartridge: one that our studio had done recently, and an older game that had been developed elsewhere. We asked for the sources of that other game and, of course, they were thoroughly unable to locate them. All we could get was a binary image of the game's ROM.


Since we had to add a launcher to select which of the two games to launch, we ended up placing the older game at the beginning of the ROM (without the sources, we couldn't relocate it to a different address), overwriting its first instruction with a jump to start the launcher, and jumping back there if the user chose that game. Our own game, for which we still had the source, was simply recompiled to run from whichever address we ended up putting it at.

In Topic: Game content source repository?

26 August 2014 - 05:43 AM

We use Perforce here, on a large project with multiple teams around the world, to store both the code and the data.

Some things that make Perforce good for this are:

- the possibility to have different locking policies per file type (you want to allow concurrent checkouts for source files, but exclusive checkout for binary data such as art assets)

- the possibility (again, per file type) to limit the number of revisions for which the actual data is kept. For instance, you can have it store only the last 5 revisions of PNG files and discard earlier ones. This is vital for very large projects that deal with a lot of data, to keep the size of the repository under control.

- Perforce lets you set up proxy servers, and this works really well: the dozens or hundreds of people working at each studio just talk to their local proxy, which in turn incrementally synchronizes itself with the central repository. This way, the (large) data committed elsewhere in the world is downloaded only once by your local proxy, and everyone then gets it on their PC through the LAN. Imagine if a team of 100 people had to download the same latest art assets directly over the internet...
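The per-file-type policies above are configured through "p4 typemap"; a sketch of such a spec (the depot paths are just examples):

```
# Excerpt of a "p4 typemap" spec (depot paths are examples)
TypeMap:
        text       //depot/....cpp    # source files: concurrent checkout
        text       //depot/....h
        binary+l   //depot/....psd    # +l: exclusive checkout for art assets
        binary+S5  //depot/....png    # +S5: keep only the 5 most recent revisions
```

New files picked up by these mappings get the listed type automatically, so artists and programmers don't have to remember to set it themselves.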

Despite all this, it is very responsive in practice: someone on the other side of the world pushes a commit through their local proxy, and you see it almost immediately. Of course, when large operations such as branching or big commits are underway, it tends to slow down a bit, but nothing really crippling.