
Protodata—a language for binary data


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

12 replies to this topic

#1 EvincarOfAutumn   Members   -  Reputation: 327


Posted 11 February 2013 - 03:20 AM

I recently rewrote and improved some old software of mine, and figured the next step is to put it in the hands of people who might use it.

 

Protodata lets you write binary data using a textual markup language. I’ve found this really useful in game development when I want a custom format, would rather not use plain text or XML, and don’t want to invest the time to make a good custom editor. Protodata supports signed and unsigned integers, floating-point numbers, and Unicode strings, all with a choice of bit width and endianness.

 

Edit: here’s an example document describing a cube mesh.

 

# File endianness and magic number
big u8 "MESH"

# Mesh name, null-terminated
utf8 "Cube" u8 0

# Using an arbitrary-width integer for zero padding
u24 0

# Vertex count
u32 8

# Vertex data (x, y, z)
f32
+1.0 +1.0 -1.0
+1.0 -1.0 -1.0
-1.0 -1.0 -1.0
-1.0 +1.0 -1.0
+1.0 +1.0 +1.0
-1.0 +1.0 +1.0
-1.0 -1.0 +1.0
+1.0 -1.0 +1.0

# Number of faces
u32 6

# Face data (vertex count, vertex indices)
u32
4 { u16 0 1 2 3 } # Back
4 { u16 4 5 6 7 } # Front
4 { u16 0 4 7 1 } # Right
4 { u16 1 7 6 2 } # Bottom
4 { u16 2 6 5 3 } # Left
4 { u16 4 0 3 5 } # Top

 

Please tell me what you think and offer suggestions for improvement. :)


Edited by EvincarOfAutumn, 11 February 2013 - 03:40 AM.


#2 C0lumbo   Crossbones+   -  Reputation: 2194


Posted 11 February 2013 - 05:53 AM

Have you considered including support for pointers?

 

In the textual form you would need some syntax for creating a label (a thing you can point at) and some syntax for adding a pointer (a thing that points at a label). You would probably have to define whether pointers are 64 bits or not, either at the top of the text file or through a command-line parameter when you convert text to binary.

 

In the binary form, labels would become byte offsets from the start of the binary blob, and the file would also store a table of the pointer locations. The load code could then iterate through the pointer table and fix up the offsets into real pointers.

 

That way you'll have support for in-place loading of potentially quite complicated structures.
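The label/pointer scheme described above can be sketched roughly as follows (this is just an illustration of the idea, not Protodata or any existing tool; all names are hypothetical):

```python
import struct

def build_blob(chunks):
    """Assemble a blob from (kind, value) chunks.

    kind "bytes" -> raw bytes, emitted as-is
    kind "label" -> record the current offset under the given name
    kind "ptr"   -> reserve a 32-bit slot to be patched with a label's offset

    Returns (blob, pointer_table): the blob with offsets patched in, plus the
    table of pointer slot locations a loader would walk for in-place fixup.
    """
    labels = {}    # label name -> byte offset from start of blob
    pointers = []  # (slot offset, target label name)
    out = bytearray()
    for kind, value in chunks:
        if kind == "bytes":
            out += value
        elif kind == "label":
            labels[value] = len(out)
        elif kind == "ptr":
            pointers.append((len(out), value))
            out += b"\x00\x00\x00\x00"  # placeholder for a 32-bit offset
    # Patch each pointer slot with the offset of its target label.
    for slot, target in pointers:
        struct.pack_into("<I", out, slot, labels[target])
    pointer_table = [slot for slot, _ in pointers]
    return bytes(out), pointer_table

blob, table = build_blob([
    ("ptr", "mesh"),    # header points at the mesh data
    ("bytes", b"PAD!"),
    ("label", "mesh"),
    ("bytes", b"\x01\x02"),
])
```

At load time, the reader would iterate over `table`, read each stored offset, and rewrite it as `base_address + offset`, giving in-place loading of linked structures.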



#3 EvincarOfAutumn   Members   -  Reputation: 327


Posted 11 February 2013 - 06:05 AM

Have you considered including support for pointers?

 

I hadn’t really, but that’s a good idea. Macros and expressions are in the works—the main challenge there is just designing something nice, not so much the actual implementation. I’ll be sure to include a way to make absolute and relative references.



#4 KidsLoveSatan   Members   -  Reputation: 492


Posted 11 February 2013 - 06:12 AM

Support for something like typedef, and perhaps structs.

 

typedef s32 int

int 1
int 2
int 3

struct something
{
  s16
  u32
  utf8
  u8 0
}

something { 0 1 "x" }

 

 



#5 EvincarOfAutumn   Members   -  Reputation: 327


Posted 11 February 2013 - 06:26 AM

Support for something like typedef, and perhaps structs.

 

Yeah, there definitely needs to be some way to avoid repetition. Both your examples would be covered by a macro system. For now the C preprocessor works:

 

#define int s32

int 1 2 3

#define something(foo, bar, text) \
  u16 foo \
  u32 bar \
  utf8 text \
  u8 0

something(0, 1, "x")

 

But I intend to add something more concise and typesafe. In particular, something that feels more native to the language.



#6 Bacterius   Crossbones+   -  Reputation: 8468


Posted 11 February 2013 - 07:29 AM

Being able to store hierarchical structures would be useful, but textual data may not be the best support for very large datasets.




#7 Aardvajk   Crossbones+   -  Reputation: 5934


Posted 11 February 2013 - 07:48 AM

I wrote something like this a few years back, primarily as an assembler for virtual machines but then ended up using it to prototype level data for games prior to writing an editor.

 

I found in the end that it was useful as a temporary tool, but that the time invested in writing the scripts correctly was actually much greater overall than the time it would have taken to write a quick and dirty tool specific to the job I needed the binary data for.

 

Still, nice to see someone else ended up thinking the same way I did back then. :)



#8 ApochPiQ   Moderators   -  Reputation: 14885


Posted 11 February 2013 - 12:03 PM

I think conceptually this is a good idea, but it's really hard to make something like this into a "killer app" for basically the reasons that Aardvajk already articulated.

I would see this sort of thing being far more powerful in an environment where you have a lot of platforms that need to speak a common protocol, and where having a readable spec of the protocol sitting in front of you is very valuable, versus trying to reverse-engineer the protocol from some code.

Unfortunately, someone else beat you to the killer app in that space.

I honestly have a hard time coming up with things I would change about protocol buffers for the specific purposes they're used for; but if nothing else, you could start by looking over their work and seeing what sorts of enhancements scratch your itches.

#9 EvincarOfAutumn   Members   -  Reputation: 327


Posted 11 February 2013 - 01:07 PM

I think conceptually this is a good idea, but it's really hard to make something like this into a "killer app" for basically the reasons that Aardvajk already articulated.

 

Yeah, I’m well aware of the limitations. Just trying to decide where to go with it.

 

I think what I’d really like for Protodata to become is the killer DSL for two things:

  1. Writing a legible, executable file format spec; and
  2. Doing simple reporting and transforming of data in existing files.

Think “binary sed with external modules”.



#10 ApochPiQ   Moderators   -  Reputation: 14885


Posted 11 February 2013 - 01:14 PM

Transformation is definitely an interesting area. Have you thought about versioning at all?

#11 EvincarOfAutumn   Members   -  Reputation: 327


Posted 11 February 2013 - 02:25 PM

Have you thought about versioning at all?

 

What do you mean exactly? First-class support for format versions?



#12 ApochPiQ   Moderators   -  Reputation: 14885


Posted 11 February 2013 - 03:36 PM

Well I'd see format version control as just a special case of data transformation. It's also a common enough need that I could see it being very cool to have a system that can automatically version up old data into new data (or backwards if needed).

Where that gets tricky in your current model is that format metadata is interleaved with the actual content. IMHO what makes protocol buffers so nice is that the two are cleanly separated; versioning is trivial to compute based on just metadata inspection.

Honestly I can only see a relatively small niche case for having the two interleaved as in your approach - which is rapid prototyping. For long-lived and stable systems, I see the interleaving as more of a liability than anything, although it'd still be handy to support transformations between formats for on-the-fly "dynamic" compatibility/interoperation.
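The "versioning as a special case of transformation" idea might look something like this, sketched as a chain of upgrade steps (purely illustrative; not ApochPiQ's design or any real tool, and all names are hypothetical):

```python
# Registry of upgrade steps: version N -> version N + 1.
UPGRADES = {}

def upgrade(from_version):
    """Decorator registering a transform from one format version to the next."""
    def register(fn):
        UPGRADES[from_version] = fn
        return fn
    return register

@upgrade(1)
def v1_to_v2(doc):
    # v2 renamed the "verts" field to "vertices".
    doc = dict(doc)
    doc["vertices"] = doc.pop("verts")
    doc["version"] = 2
    return doc

@upgrade(2)
def v2_to_v3(doc):
    # v3 added a mesh name field with a default.
    doc = dict(doc)
    doc.setdefault("name", "unnamed")
    doc["version"] = 3
    return doc

def migrate(doc, target):
    """Apply registered upgrade steps until the document reaches `target`."""
    while doc["version"] < target:
        doc = UPGRADES[doc["version"]](doc)
    return doc

old = {"version": 1, "verts": [(0, 0, 0)]}
new = migrate(old, 3)
```

Note this only works cleanly because the transforms operate on metadata-described fields rather than raw interleaved bytes, which is exactly the distinction being drawn with protocol buffers above.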

#13 AllEightUp   Moderators   -  Reputation: 4179


Posted 11 February 2013 - 09:18 PM

I have also done something very much like this in the past.  The reason for it was a bit different, but the explanation may be useful for further thought.  First off, I wrote the binary data as text just like this, but I did so in a manner which made the data tables compatible with Lua.  Eep, a scripting language for my binary data.  Oh, how useful it is though...  I had three goals:

 

1)  Source control.  Need I mention that binary sucks for source control? :)

2)  Rebuild from source.  I personally think of two levels of source: the DCC content (Max/Maya/etc. files) and the "exported" data.  If that exported data is in a basic text format and you change only how you pack it into a file, you can rebuild the binary outputs without re-exporting from the third-party DCC tools, which speeds up turnaround considerably (and avoids having to implement versioned readers).

3)  Variable input data formats.  Max and Maya at the time didn't have similar curve types, bits of data were exported in different orders, and all those hassles you generally have.  (And FBX wasn't really a common format at the time.)  So I just tagged the file with "exported from Max" or "exported from Maya" and wrote Lua scripts, loaded during conversion to binary, which dealt with the variations.  As such the tools pipeline was solid and fixed; 90+% of the time when a problem was found or a format changed, only the little bits of Lua here and there were modified.  I.e., using a Lua state, you load the exported file, do a single table lookup ("data { exporter="Max" }"), then read in "Max.lua"/"Maya.lua"/etc. to create the accessors which translate from text to binary.
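That tag-and-dispatch pattern, sketched here in Python rather than Lua for brevity (translator details are invented for illustration):

```python
# Each exporter tag maps to a translator that normalizes that tool's
# quirks (axis conventions, attribute order) into one canonical layout,
# so the rest of the pipeline never has to know which tool exported it.
def from_max(data):
    # Hypothetical quirk: Max exported positions as (x, z, y); swap to (x, y, z).
    return {"positions": [(x, y, z) for x, z, y in data["positions"]]}

def from_maya(data):
    # Maya output already matches the canonical layout in this sketch.
    return {"positions": list(data["positions"])}

TRANSLATORS = {"Max": from_max, "Maya": from_maya}

def convert(exported):
    """Dispatch on the 'exporter' tag embedded in the exported file.

    When a tool's format changes, only its translator is touched;
    the pipeline itself stays solid and fixed.
    """
    return TRANSLATORS[exported["exporter"]](exported)

canonical = convert({"exporter": "Max", "positions": [(1, 3, 2)]})
```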

 

In this way I was able to do everything I believe you intend, in a manner which was not only simple to read and modify by hand, but also able to deal with the long tail of overall maintenance of the data.  Not to mention some metadata saying where the Lua file came from; so if artist X didn't check in the DCC source, the nightly build screamed at them when it realized it had only a Lua file and no DCC source data.

 

I do suggest looking into Lua or other scripting languages for the format of the text, because it is of huge benefit to later have a "language" available when going from text to binary.  I personally like Lua due to its freeform mixing of maps and vectors in a single data structure.  For later access it just simplified things not to have to worry about the order of data and such.  (Not to mention it is damned fast for what it is. :))





