
Will push pixels for food

Losing the Training Wheels: Some thoughts on moving from D3D11 to D3D12 – Part 1

Posted by , 04 June 2016 · 1,469 views

Note: This article was originally posted on my blog, go check it out! Also, the article editor might have eaten some of my formatting, my apologies for that.

 


 

A new generation of graphics APIs has been released into the wild. With it come promises of improved performance through lean and mean drivers, an API that gives you – the developer – raw and virtually unhindered access to your graphics hardware, and fever dreams of pushing hundreds of thousands of draw calls per frame through your rendering pipeline without any of your systems showing any signs of breaking a sweat. Industry professionals and hobbyist developers around the world working on their very own next big thing rejoice, and all is good in the world of video game (graphics) development.

 

Sounds amazing, doesn’t it? You bet it does!

 

There’s a catch though. As with most of these big technological promises there’s a price to pay, and in this particular case the price can be steep. The philosophy behind DirectX 12 (and Vulkan and Mantle for that matter) is for the graphics driver to get out of the developer’s way, and it does this by only doing the minimum amount of work necessary to translate the commands you provide into a predictable output on screen. This has some consequences, as by default there are no more training wheels to help you out like in DirectX 11 (and there were probably a lot more training wheels in place than you were aware of!). You are now in charge of managing the video memory backing your resources, you are in charge of making sure that your GPU resources are in valid states at all times, you are responsible for closely managing video memory budgets and consumption and telling the operating system about them, you are responsible for generating efficient resource binding layouts for your shaders, and you are responsible for so many more fun and exciting things that I couldn’t list them all here without going on forever. Failing to live up to these responsibilities will relentlessly get you kicked into the wondrous world of undefined behavior, whose side effects can range from nothing at all going wrong to your machine completely locking up or causing a kernel panic at seemingly random times. Fun times!

 

This series of blog posts (which I’ve been wanting to write for a very long time, but I just never found the time to do so) goes over some problem points and general findings I’ve encountered during the time I’ve spent with DirectX 12, both professionally and in my spare time. I want to spend time especially on problem scenarios which might not be explained too well in other documentation. Additionally I’d like to talk at some point about design and architecture when building a DirectX 12 application from the ground up without a legacy D3D11 codebase to start from. Do note that this series of posts is not meant as an all-inclusive guide on how to move your existing applications to this new API or as a tutorial on how to build D3D12 applications.

 

Some topics we’ll go over first will be related to memory management, residency management, resource lifetime management, resource transition strategies and pipeline state object and root signature strategies. Today we’ll start off with some memory and alignment related talk.

 

Enjoy!

 

THINGS YOU MIGHT WANT TO CONSIDER DOING UP FRONT: DEALING WITH RESOURCE ALIGNMENT

 

DirectX 12 does away with specialized resource interfaces such as ID3D11Buffer, ID3D11Texture2D, etc. in favor of a generalized ID3D12Resource interface with a matching D3D12_RESOURCE_DESC structure used for resource creation.

 

typedef struct D3D12_RESOURCE_DESC {
  D3D12_RESOURCE_DIMENSION Dimension;
  UINT64                   Alignment;
  UINT64                   Width;
  UINT                     Height;
  UINT16                   DepthOrArraySize;
  UINT16                   MipLevels;
  DXGI_FORMAT              Format;
  DXGI_SAMPLE_DESC         SampleDesc;
  D3D12_TEXTURE_LAYOUT     Layout;
  D3D12_RESOURCE_FLAGS     Flags;
} D3D12_RESOURCE_DESC;

 

If you’re familiar with descriptor structures in DirectX 11, nothing in here should be a big surprise. One element in this structure, the Alignment entry, might carry more weight than you’d assume at first.

 

D3D12 wants video memory allocations to be straightforward and fast – remember: the driver has to get out of the developer’s way – so it makes a lot of sense to tie the size of its allocations to a value or set of values it can deal with quickly, values closely tied to video memory page sizes for example. Because of this, D3D12 only allows you to specify one of four (technically three) specific alignment values: 64 KB for general purpose allocations, 4 KB for small texture resources (note: I’m generalizing heavily here), 4 MB for multisampled textures, and 0 to let the runtime decide for you.
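To make those values concrete, here is a minimal sketch of an align-up helper using the placement alignments (the values are reproduced from d3d12.h so the snippet stands alone; the constant and helper names are mine):

```cpp
#include <cassert>
#include <cstdint>

// Placement alignment values as defined in d3d12.h, reproduced here so
// the snippet is self-contained:
//   D3D12_SMALL_RESOURCE_PLACEMENT_ALIGNMENT        =  4 KB
//   D3D12_DEFAULT_RESOURCE_PLACEMENT_ALIGNMENT      = 64 KB
//   D3D12_DEFAULT_MSAA_RESOURCE_PLACEMENT_ALIGNMENT =  4 MB
constexpr uint64_t kSmallAlignment   = 4ull * 1024;
constexpr uint64_t kDefaultAlignment = 64ull * 1024;
constexpr uint64_t kMsaaAlignment    = 4ull * 1024 * 1024;

// Round a size up to the next multiple of a power-of-two alignment.
constexpr uint64_t AlignUp(uint64_t size, uint64_t alignment)
{
    return (size + alignment - 1) & ~(alignment - 1);
}
```

Note that even a one-byte allocation at the default alignment rounds up to a full 64 KB slot, which is exactly the problem buffers run into below.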

 

For textures these alignment values don’t pose a lot of problems, and in many cases it’s safe to let the driver choose an alignment for you, as textures tend to be on the larger side when it comes to memory footprints. The amount of space wasted due to alignment overhead is generally tiny compared to the actual texture size.

 

Buffers on the other hand pose a very big and inconvenient problem, making it almost impossible to directly (and naively) map ID3D11Buffer objects onto ID3D12Resource counterparts. As you might have noticed above, buffers have no option other than a 64 KB alignment value (a value of 0 will always result in the runtime choosing 64 KB anyway). Lots of titles tend to have a large number of smaller ID3D11Buffer objects (think small vertex buffers, index buffers, constant buffers, etc.), which was absolutely fine in D3D11 as the driver would manage your memory for you under the hood. Taking the same approach in D3D12 will cause every single buffer, no matter how small, to take up a minimum of 64 KB of memory.

 

Oops.

 

Not dealing with this problem right away might cause your title to suddenly consume hundreds of megabytes, or even gigabytes, more video memory than necessary, putting an immense amount of memory pressure on the operating system, which at this point will frantically try to deal with the situation. If you have a system for dealing with residency (a topic I’ll elaborate on in my next post) your application will work hard (and fail) to stay within its provided memory budget, causing your title to stutter, slow down and maybe eventually crash when the situation becomes too dire.
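To put rough numbers on that claim, a back-of-the-envelope sketch (the buffer count and size here are hypothetical, but representative of a title with many small constant buffers):

```cpp
#include <cstdint>

// Hypothetical workload: 10,000 small constant buffers of 256 bytes,
// each naively placed in its own 64 KB-aligned allocation.
constexpr uint64_t kBufferAlignment = 64 * 1024;
constexpr uint64_t kNumBuffers      = 10000;
constexpr uint64_t kBufferSize      = 256;

constexpr uint64_t kPayloadBytes   = kNumBuffers * kBufferSize;      // ~2.4 MiB actually needed
constexpr uint64_t kAllocatedBytes = kNumBuffers * kBufferAlignment; // 625 MiB actually consumed
```

That's a 256x blow-up: roughly 2.4 MiB of useful data ends up occupying 625 MiB of video memory.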

 

Solving this problem is going to require some work, as you’re going to want to implement a mechanism for doing sub-allocations on buffer resources. A lot of D3D11 titles already provide some form of buffer sub-allocation in the form of linear allocators or ring buffers which sit on top of an underlying ID3D11Buffer resource. These are very fast and very useful, especially for video memory allocations with a lifetime of a single frame or less, and are an essential tool in any graphics codebase, so porting them to D3D12 is a good first step if you have them. If you haven’t implemented any of these yet, they’re very similar to their system memory counterparts, only now you’re dealing with pointers acquired through calls to ID3D12Resource::Map (or ID3D11DeviceContext::Map in D3D11), and in certain scenarios with D3D12_GPU_VIRTUAL_ADDRESS values, which act as pointers into virtual GPU memory. If you are looking for a reference implementation of these constructs, the MiniEngine project, part of the DirectX Graphics Samples on GitHub, provides a great linear allocator which can be found in MiniEngine/Core/LinearAllocator.h and MiniEngine/Core/LinearAllocator.cpp.
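For a feel of the mechanics before diving into the MiniEngine code, here is a minimal sketch of the offset arithmetic such a linear allocator performs (all names are mine; a real implementation would pair each returned offset with the buffer's mapped CPU pointer and its GPU virtual address):

```cpp
#include <cassert>
#include <cstdint>

// Minimal linear (bump) allocator operating on byte offsets into one
// big buffer resource. Allocations are aligned and simply bump a
// cursor; individual frees are not supported.
class LinearAllocator
{
public:
    explicit LinearAllocator(uint64_t capacity) : m_capacity(capacity), m_offset(0) {}

    // Returns the offset of the allocation, or UINT64_MAX on exhaustion.
    uint64_t Allocate(uint64_t size, uint64_t alignment)
    {
        uint64_t aligned = (m_offset + alignment - 1) & ~(alignment - 1);
        if (aligned + size > m_capacity)
            return UINT64_MAX;
        m_offset = aligned + size;
        return aligned;
    }

    // Typically called once per frame, after the GPU has finished
    // consuming the previous frame's allocations.
    void Reset() { m_offset = 0; }

private:
    uint64_t m_capacity;
    uint64_t m_offset;
};
```

The per-allocation cost is a handful of integer operations, which is what makes this construct so attractive for per-frame transient data.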

 

To solve our alignment problem completely however we’ll need to go a step further by implementing a general purpose allocation system. The allocation strategies used here can be similar to the ones you’d implement for system memory allocations (look at it as implementing malloc for buffer resources). Feel free to get creative based on your specific allocation requirements; a construct such as a simple small block allocator can already solve a lot if not most of your alignment problems.
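As an illustration of the small block idea, here is a minimal fixed-size-block sketch over a single backing buffer (all names are mine, and this is a toy, not a production design):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Minimal fixed-size small block allocator: one backing buffer is
// carved into equal blocks up front, and free blocks are tracked on a
// free list. Returned values are byte offsets into the backing buffer.
class SmallBlockAllocator
{
public:
    SmallBlockAllocator(uint64_t capacity, uint64_t blockSize)
    {
        for (uint64_t offset = 0; offset + blockSize <= capacity; offset += blockSize)
            m_freeList.push_back(offset);
    }

    // Returns a free block's offset, or UINT64_MAX if the buffer is full.
    uint64_t Allocate()
    {
        if (m_freeList.empty())
            return UINT64_MAX;
        uint64_t offset = m_freeList.back();
        m_freeList.pop_back();
        return offset;
    }

    // Returns a block to the free list; unlike the linear allocator,
    // individual allocations can be released here.
    void Free(uint64_t offset) { m_freeList.push_back(offset); }

private:
    std::vector<uint64_t> m_freeList;
};
```

A real small block allocator would keep several of these around, one per size class, and fall back to a general allocator for anything larger.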

 

One additional thing to keep in mind when implementing a general purpose allocator is that the buffer resources used as a backing store for your allocations are created with specific usage flags (see D3D12_RESOURCE_FLAGS) and specific heap types (see D3D12_HEAP_TYPE), meaning that you’ll want separate allocators for separate use cases. Additionally, this will affect how you deal with resource state transitions (see D3D12_RESOURCE_STATES), as a state transition on a specific sub-allocated buffer can now affect other allocations on the same backing resource.

 

Take the following pseudocode example:

 

void Foo(ID3D12Resource* buffer)
{
    // Sub-allocate two buffer resources
    BufferSubAlloc bufferA = MallocBuffer(buffer, 512);
    BufferSubAlloc bufferB = MallocBuffer(buffer, 256);

    // Transition buffer A into a vertex/constant buffer state
    ResourceStateTransition(bufferA, D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER);

    // Transition buffer B into an index buffer state
    ResourceStateTransition(bufferB, D3D12_RESOURCE_STATE_INDEX_BUFFER);

    DoSomeStuffWithBuffers(bufferA, bufferB); // What state is buffer A in at this point?
}

 

Two buffer sub-allocations, Buffer A and Buffer B, are allocated off of the same buffer. Buffer A gets transitioned into a vertex/constant buffer state, which results in the underlying ID3D12Resource being transitioned into that same state. Next up, Buffer B gets transitioned into an index buffer state, causing that same ID3D12Resource to be transitioned into that state. Both Buffer A and Buffer B are now in the index buffer state because their backing resource is in that state, making Buffer A invalid for use as a vertex or constant buffer.
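The hazard can be modeled outside of D3D12 entirely. In this toy sketch (all types and names are mine), state lives on the backing resource rather than on the sub-allocation, so transitioning Buffer B silently changes the state observed through Buffer A:

```cpp
#include <cassert>
#include <cstdint>

// Toy model of the aliasing hazard: a sub-allocation is just a slice of
// a backing resource, and resource state is a property of the backing
// resource, not of the slice.
enum class ResourceState { Common, VertexAndConstantBuffer, IndexBuffer };

struct BackingResource
{
    ResourceState state = ResourceState::Common;
};

struct BufferSubAlloc
{
    BackingResource* backing;
    uint64_t offset;
    uint64_t size;

    ResourceState State() const { return backing->state; }
};

void ResourceStateTransition(BufferSubAlloc& alloc, ResourceState newState)
{
    // The transition applies to the whole backing resource, so every
    // other sub-allocation on it changes state too.
    alloc.backing->state = newState;
}
```

Running the scenario from the pseudocode above through this model, Buffer A ends up reporting the index buffer state after Buffer B's transition.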

 

Separating allocators based on intended use cases can save you the headache of having to deal with situations like these and will allow you to use buffer sub-allocations like you would use ID3D11Buffer objects.

 

CLOSING THOUGHTS

 

I hope this post didn’t turn into too much of a rant; I sometimes have trouble staying coherent in these kinds of posts or discussions (as many who know me can tell you). I’m looking forward to writing the rest of these, and I hope that someone somewhere finds these overviews useful. Next time I’ll probably be talking about the MakeResident and Evict calls and how to use them to manage resource residency. I might throw in more stuff if I find the time, who knows.

 

Thank you for reading, until next time!




The CRI Project

Posted by , 07 September 2014 · 771 views
This journal entry was originally written for my homepage. Go check it out!


THE CRI PROJECT

CRI, short for Common Rendering Interface – a name which, as I hinted at in my last blog post, is no longer relevant – is a high-performance 3D rendering toolkit which started development in late 2013.

WHAT IS A RENDERING TOOLKIT EXACTLY?

So glad you asked! The CRI project is a piece of software which allows a developer to prototype and build graphical systems and effects in an intuitive manner. It is an extensible research and experimentation tool targeted more towards building technical demos and proofs of concepts (read: really really cool shiny stuff) rather than actual games (although using CRI in a game project would definitely be feasible). It is a sandbox environment for rapid implementation and iteration of anything graphics and rendering related.

At the core of the project lies a powerful, lightweight, multithreaded rendering engine built on top of Direct3D 11 and AMD’s new Mantle API, with Mantle being the primary focus. Support for OpenGL 4 is currently also in the works but is treated as a “nice to have” feature for now, since CRI relies heavily on multithreaded rendering job execution, something OpenGL does not play nicely with. Whether OpenGL will actually show up in later revisions of the project is far from certain.

On top of this rendering engine a set of tools will be provided which allow a developer to build custom lighting and shading pipelines, render queues, post-processing effects and much much more in a data-driven and native plugin-based fashion. A basic scriptable scene compositor and asset manager will also be provided to create scenes and easily set up graphics configurations. With this an extensive profiling suite will also be provided as a way to get exact performance details of your setup.

Aside from all of this a C++ API will also be exposed for both the higher level tools and the low-level rendering engine. This allows for the development of plugins as well as integration into other C++ based projects such as games.



WHY BUILD SOMETHING LIKE THIS?

In the past I’ve written quite a few rendering engines, both professionally and as personal side projects. The last of my personal rendering projects was part of a larger project called RainEngine, which up until now has been my most ambitious and advanced personal project to date. As soon as I started my current job I pretty much stopped development on this project, due to it being very large in scope and me not being all too pleased with its overall design. I have been able to build some very cool stuff with it however, and it served as the centerpiece of my portfolio before I started doing graphics programming professionally; I am still very proud of what I achieved.

During the development of the RainEngine project I learned that even though I had grander ideas in mind, I always got most of my enjoyment out of building cool graphical systems and effects. Often this led me to spend weeks or even months tweaking, fine-tuning and experimenting with advanced lighting systems or post-processing effects, even though I should have been focusing on how they would work best in the game I wanted to build. It is at this point that I realized that I didn’t care so much about building games with the technology I had implemented, but rather about building breathtaking or lifelike and dynamic scenes which one could traverse in real-time. Combine this with the excitement of reading papers about the latest and greatest AAA techniques and implementing them within my own environment, and I was all set.



This is why I am taking on the CRI project.



Does a tool like this already exist? Probably.

Do I care? No.

This is all about me building something I want to build: a cool and exciting project which will allow me to easily do even more cool and exciting stuff. If people end up being interested in it, that’s great! If people want to try using it, even better! (Although I’ll probably hold off on releasing anything until certain software components I’m using are widely accessible and no longer under any NDAs, *cough* Mantle *cough*. This is also a heads-up that I won’t be discussing anything Mantle specific on this blog.) I’m just saying that I’m not planning on creating the world’s best AAA rendering engine or the most revolutionary application ever; I just want to build a tool I can enjoy working on and working with while pushing my own skills even further.



WHICH STATE IS THE PROJECT IN RIGHT NOW?

I’ve been working on this project for several months now. Due to work and life in general I often have days or even weeks where I don’t touch any CRI code at all, which means development isn’t exactly going at breakneck speed. What I have right now is a core library with platform abstractions and utility code, a math library (which has been with me for years and has improved with every graphics related project I’ve done), a task-based multithreading system, a fully functional D3D11 low-level renderer and partially functional Mantle and OpenGL renderers. Since I did not anticipate getting access to Mantle at the start of the project, I had designed the rendering infrastructure around D3D11. The focus has however shifted towards a Mantle-based design, so I am currently revising the architecture of the renderer to be more future-proof (as it seems future generations of both OpenGL and D3D will follow a Mantle-like approach).

As for design documents and architectural concepts for the rest of the project, a lot more work has been done. I’d like to save the details of these for future blog posts though, as this one seems like a wall of text already.



FINISHING UP

If you’ve made it to this point: Congratulations! Apparently my post wasn’t too boring! If I had cake I’d give you a slice!

I’d like to write a lot more about this project as development continues. Even if nobody really reads any of this, it’s nice to write about this stuff to clear out my mind a bit!



Thanks for reading, and until next time!

(Again, you can also read this entry at my homepage!)


The death of a project

Posted by , 30 December 2013 · 1,099 views

2013 has been a completely insane year for me, it was the year in which I dropped out of university (with my degree not too far off!), got a job at a game development studio as a (rendering) engineer in Canada, and moved away from Europe to go do said job.

This combination of events of course makes a huge impact on your day-to-day life. You leave behind friends, family, your classmates, and in my case, projects.

For the past few years I have been working on a couple of things - either on my own or with friends - under the moniker Rainware. The biggest of these projects was the RainEngine project. (Random note: holy shit, if you check out the forums it seems like everyone and their mom wants to build the next greatest game engine out there these days.)

This project has been my baby for the last 4 or 5 years and has gone through many iterations. In its current state it is a very powerful and neat engine in which I can do my crazy, crazy experiments, but it is far from finished, and actually finishing it according to the specification I wrote up for it would still take years in my current situation (having a full-time job which quite often stretches the daily 9-to-5 time frame - not that I'm complaining of course, I love it!).

The idea of working on this project for another couple of years is not what's bothering me; it's the question of how relevant this project will still be after spending that much more development time on it. Four years ago I also didn't have the skills and experience I have now, which means there are a couple of components of this engine which are not designed the way I'd like them to be (read: they're a pain in the ass!). Replacing these would be a huge task in itself as they are deeply embedded into the core of the engine.

At this point I also feel like I have learned most of what there is to be learned from doing a project like this on your own. Although I gained a massive amount of knowledge (knowledge which got me where I am today), I feel as if it doesn't have anything to offer to me anymore, and the fact that the cool game projects we were working on are all dead now (seeing as there's this huge ocean in between me and the guys I was working with) doesn't get me excited about this project's future either.

So, that being said, I now officially declare the RainEngine dead.
So long! We had a great run! I had an awesome time working on you! But now it's time for something else...


Since I started working at my current job I've grown a lot more confident about my programming skills, especially when it comes to writing clean and well thought-out code. When working on personal projects the last few years I only had myself and maybe a handful of classmates to review the code I wrote, not what you'd call a trustworthy group of reviewers. This kind of kept me from releasing code into the wild, because I would always be afraid of doing something really stupid somewhere in my code.

These days I work with a team of people who have been working as software engineers for years; some of my coworkers even started out when I was still in elementary school. Getting confirmation from these people that what I write actually is quality code has boosted my confidence quite a bit, and this will be an important factor in how I tackle future projects.

In these past few months I have also learned a lot about what it actually is to manage and develop these huge software systems without them becoming a huge chaotic mess of code, half implemented features and indecipherable road maps.

In addition to this my skills in writing rendering code have improved as well.


All of these things have motivated me to start on something new. Something more thought out and doable.

All in all this new project will be:
  • More manageable in scale: I wouldn't say it's a small project, but it is something that can be actually finished by a single developer in a reasonable amount of time.
  • Closely related to my job: My job has actually motivated me to do more of the same in my personal projects!
  • More open: Once this project gets off the ground I want to actually be more open about it. Maybe I'll throw the code out in the open under some permissive license to see what people can do with it. Or maybe I'll just do frequent binary releases, who knows.
  • Properly structured: This applies to both the code itself and the way it's being developed.
  • Awesome: Speaks for itself, right?

Of course I don't want to get ahead of myself. I'm still finishing up the documentation on what this project is going to be exactly and where its limits are going to be. In addition, I'm also only writing boilerplate and support code right now; the actual meat of the project is something I'll be starting on in the next couple of weeks.

As soon as this thing actually does something I'll be more than happy to write about it.

I hope this journal post didn't come off as me rambling about a bunch of stuff, I'm just really excited to work on something new to go together with my new life!

Until next time!


Yes, that definitely happened!

Posted by , 24 November 2013 · 1,074 views

Note: This is kind of a follow-up to my last journal entry; read that one first if you want to be able to make sense of this one.


So I have been living and working in Vancouver for about three weeks now as a software engineer at a local game studio, and it's been absolutely amazing. During this time I was able to do a bunch of fixes for one project and witness its launch on Steam, and I've been able to dive head first into another project as well.

This job is pretty much everything I had hoped it would be; I get to do what was basically my hobby for years as my job, and I get to do this in an awesome location with an awesome group of people with years of experience (seriously, some of my coworkers have been working in this industry since I was still a little kid!)

One of the best things about this job is maybe that while I'm technically an intern, I'm not being treated like an intern. I have a solid position within my team with my own share of real responsibilities and real tasks just like anyone else, and being able to live up to those responsibilities only confirms even more that I made the right decision in taking this opportunity and that this is what I want to do with my life.

Of course I can't go into specifics of what I'm working on, so let's just say that it's pretty damn cool and exciting.


Besides work I've been able to meet a bunch of really cool people here in Vancouver and surroundings, especially everyone in AIESEC SFU, which is the organisation which set me up with my internship in the first place (On that note I should definitely give a shout out to the person who made all of this possible, you know who you are! You're awesome!).

The city and surroundings themselves are quite different from where I used to live in Belgium, but I'm absolutely loving it.


I think that's about all I wanted to say for now. I'll probably do some more journal entries in the future if anyone is interested.

Until next time!


Did that just happen?

Posted by , 10 October 2013 · 2,849 views

About a month ago I was a computer science student in Belgium going into his last year before finishing his degree. Due to a whole bunch of coincidences this is no longer the case today.

As some of you may or may not know, my main passion is rendering. I can get extremely excited about implementing fancy lighting systems, next-gen rendering pipelines and everything in between. Obviously my goal was to find a job doing just this once I had my degree; the only downside to this idea was that Belgium doesn't really have an active game industry. Conclusion: I'd probably need to get my degree, get a software engineering job at a local company and try to work my way into a game development company abroad (preferably across the Atlantic Ocean) one way or another after gaining some professional experience.


I went to visit my cousin in Vancouver last month together with a good friend of mine. This cousin is doing a year-long marketing internship at a local company and is basically having the time of his life. Of course I had to go over there to join in on the fun.
Some time before I departed from Belgium my cousin and I started talking about the organisation through which he set up his internship and all the other internship opportunities which were available in Vancouver. Apparently this organisation was in talks with a game development studio, and this studio in particular was looking for someone with experience in rendering.

Of course, this piqued my interest, but I didn't give myself any illusions. I was still in university, I did not have a degree nor any professional software development experience whatsoever, and my portfolio was made up of an engine project and a bunch of small tech demos. Since there were talks about possibly setting up a meeting with this company I collected some of my best demos, wrote up a résumé and made contact cards, but I still wasn't really optimistic about my odds of actually impressing anyone. The reaction I was expecting from this meeting was something along the lines of "that looks really nifty, why don't you come back in a couple of years and we might be able to talk business", and I would've been really happy with that (having contacts in this industry means a lot!).


So after a week of being in Vancouver I had my first job interview. Ever. 8000 kilometers from home. And it was awesome!
I was able to show off my work and talk about the engine project I'd been working on for the past four years, and to my amazement the discussion quickly turned to talks about an intern position for the coming year and a possible full-time placement afterwards. All in all it was just a very exciting and surreal experience.


After this day my life was pretty much turned upside down. I'd be leaving Belgium and moving to Vancouver for at least a year. I would be giving up my studies (although my university guaranteed me that I could pick them back up from where I left them if I wanted to). I really couldn't refuse this offer, since it was basically everything I ever wanted in a job.


A couple of days after the meeting I arrived back home in Belgium, and the events of the past couple of days just seemed way too surreal. The following weeks brought tons of paperwork, a technical interview including a programming test over Skype (which also went really, really well), apartment hunting, and telling friends and family that I'd be moving to the other side of the world very soon.


This brings us to the present day. I'm finishing up all of the required paperwork (work permit, rental contract, etc.) and I hope to be able to depart for Canada in about three weeks.

I couldn't be more excited.


Decals, finally!

Posted by , 26 July 2013 · 2,799 views

I finally have a decal implementation working in the graphics component of my engine project. This really was a simple implementation and I've had this on my to-do list for a while, but it just got buried by a ton of other things I wanted to get out of the way first. Anyway, here's a screenshot using some art I made some time ago (click to enlarge):

[Screenshot]

Decals have access to all the features of the engine's material system and are completely managed on the GPU, so there's no need to involve the physics engine to calculate decal positions for example.

Both 2D decals and volume decals are supported.

The scene in the screenshot I posted is still a work in progress, hence the absence of shadows, indirect/ambient lighting, anti-aliasing and the crude texturing. I'll probably post a journal entry when it's completed.


Tile-based deferred shading

Posted by , 25 June 2013 · 5,527 views
I've been absent from this site for a while now, mainly for academic reasons (tons of deadlines, you know the drill).

I have dedicated some small amounts of spare time to my personal projects as well though, and last night I was finally able to complete something which had been on my to-do list for quite a while: tile-based deferred shading.

Here's a result image (click to enlarge):
[Result image]

This is a small scene with 1024 quite densely packed active dynamic point lights using a physically based BRDF with support for a wide range of materials (including pretty accurate rendering of metals).
This frame was rendered in 9.34 ms (107.7 frames/sec) at a resolution of 1280x720 on a mid-range DX11 graphics card with all engine systems enabled and actually doing work (e.g. audio, physics, etc.). This was rendered without applying any form of culling on the lights besides the tile-based culling in the compute shader.

Note: Point light intensities and cutoff distances are generated randomly, hence the sharp edges for some lights.

The implementation supports both point and spot lights. Directional lights are rendered in a separate pass.

It still needs some work when it comes to optimization, but I'm already pretty damn satisfied with the results.

EDIT:

Got the frame time of the exact same scene down to 7.4 ms.


We are live!

Posted by , 09 February 2013 · 884 views


Today our new website over at www.rainwarestudios.com went live! Check it out!



Data exchange formats!

Posted by , 11 December 2012 · 2,072 views

Importing and exporting data to and from your game - be it save data, resource data or anything in between - can be tricky to get right in a flexible manner.

While developing your game you might want to be able to make sense of the data your game is working with, so you'll want to store data in a compact, easy to parse and human readable format. However when you release your game you might also want to be able to store this exact same data in a binary format without breaking compatibility. On top of that you might want to store your data in such a way that the overhead of building your in-game data structures from these files becomes as small as possible, with a 1-to-1 mapping of data being the ideal case.

I've been working on solving these problems in my own implementations for a while, and while I haven't found the "perfect solution" just yet, I've come across some interesting techniques for working with data. In this journal entry I'd like to share an overview of some work I've done the last couple of months, primarily focusing on human readable representation of in-game data.


Before I begin I'd like to share an article posted last month on #AltDevBlogADay (and reposted on Gamasutra) about 'A formal language for data definitions'. I've drawn some inspiration from it while developing my solutions, so it might be an interesting read.



1. The first attempt: XML and the 'Generic attribute system':

As most of you probably know, XML (eXtensible Markup Language) is a simple and popular language for storing data in a format that is both human and machine readable. Because of its popularity and widespread use there are plenty of third-party libraries for reading and writing XML data in most major programming languages, so it might not come as a surprise that my journey started off in the realm of XML.

While it's technically possible to store pretty much any data representable as text in XML, the language itself has no concept of primitive data types. As an example of the issues this presents: when reading numerical data from XML, it is up to your program to decide whether that data is an actual number or a string that merely looks like one. This can be resolved by providing a so-called schema for the data you're trying to represent, and by using a parser which can validate your XML document against this schema.

Using a schema presents some overhead of its own, however, both for the actual parsing (you're parsing two files now) and for overall data maintenance. Seeing this added overhead, I decided not to go with schemas and went for a more "brute force" approach: the generic attribute system.

The attribute system itself was really simple: a single attribute contains two string values, a name and a value. Attributes are stored in so-called attribute sets, which can in turn contain other attribute sets as subsets. This creates a very primitive hierarchical data structure for storing anything that can be represented as text, so mapping XML data to this intermediate format was very simple.

To solve the problem of determining which datatype an attribute contained I went with a very primitive approach: let some factory system deal with it. An object factory would first check whether all attributes needed for creating an object were present in the attribute set, and then attempt to parse each attribute's string value as the expected datatype. If the data parsed successfully the factory could run a constraint check (e.g. verifying that a value was within acceptable ranges) and construct the requested object.

This worked, that it did, but I don't think I have to explain why this wasn't exactly an ideal system (brute force approaches seldom are). The parsing stage for getting data from attribute sets into actual objects pretty much forced me to maintain a completely separate code path for parsing binary files, which is something we really wanted to avoid.

So XML and attributes went into the trashcan, and I set some prerequisites for my next approach:
  • The language for defining data should support some basic primitive types.
  • It should allow a direct mapping of most types defined by the game/engine.
  • It should allow structuring data in such a way that it maps almost directly to a binary representation of the same data, while still remaining readable.

2. The second attempt: JSON... or something that used to be JSON

I've always liked JSON (JavaScript Object Notation); I think of it as a clean, no-bullshit way of storing data. As opposed to XML, JSON does support a couple of primitive types: strings, numbers, booleans, and null. JSON also provides objects (which are regular ol' associative arrays) and lists. On top of that, JSON syntax is ridiculously easy to parse.

I don't like everything about JSON though. The lack of a syntax for comments bothered me most, as I like to write and document some files by hand, although I understand the decision not to include one in the language itself. Some developers write comments as elements in an object, but that means those values get parsed and loaded as actual data, which is something I wanted to avoid.

As mentioned above, JSON has a really easy syntax, so I decided to write my own JSON parser just for the fun of it. I had no previous experience with writing parsers, except for systems that read binary data (which don't really qualify as parsers), but after an hour or two I had a complete JSON parser built from the ground up. After throwing a bunch of huge JSON files at it to verify it actually worked as intended (it did), I started to experiment with it.

Since I had no previous experience with writing parsers, I didn't have a clue about best practices or how to approach complex languages, so I don't know whether my approach would make sense to someone more experienced in these matters. What I did was create a parser system which accepts 'rule objects'. Each rule object describes the syntax of a single primitive datatype or data structure and knows how to parse it, optionally mapping it to a native (in my case C++) representation of that type or structure.
The parser itself just remembers where it is in a file, checks whether it can find a rule which applies at that position, and executes the parser for that rule.

So my original JSON parser contained rules for objects, lists, strings, numbers, booleans and null values. Of course, the first thing I thought was: why stop here? I also realized that a rule doesn't necessarily have to map to an internal data value, so I could write additional rules purely for adding language features, like comments.

So I wrote a very simple rule for C-style line and block comments and registered it with the parser. This worked perfectly, which meant I now had a language incompatible with regular JSON, but one which supported all of JSON's features with the added benefit of comments.

Of course, additional rules followed, adding even more supported datatypes. Some examples include data structures like vectors and matrices, support for directly assigning binary data (found in external files) to object or list entries, and more game-specific functionality such as resource references.

The result looks something like this:
[source lang="jscript"]/*
 * This structure describes a material
 */
{
    // Global material info
    "name": "some_material",
    "shader_program": @resource("deferred.rsh"),

    // Material parameters
    "parameters": {
        "Color": @color( 0.0, 1.0, 1.0, 1.0 )
    },

    // Texture resources
    "textures": {
        "Diffuse": @resource("diffuse.rtex")
    }
}[/source]
[source lang="jscript"]/*
 * This structure describes a shader
 */
{
    // Global shader info
    "name": "some_shader",
    "shader_setups": [
        {
            // Standard shader setup info
            "name": "default_d3d11",
            "layer": "solid",
            "platform": "win_d3d11",
            "shader_target": 5.0,
            "shaders": [
                {
                    "shader_type": @enum("vertex"),
                    "shader_source": @file("some_shader_source.hlsl"),
                    "entry_point": "VS",
                    "flags": [ "DEBUG" ]
                },
                {
                    "shader_type": @enum("pixel"),
                    "shader_source": @file("some_other_shader_source.hlsl"),
                    "entry_point": "PS",
                    "flags": [ "DEBUG" ]
                }
            ],
            "samplers": [
                {
                    "name": "some_sampler",
                    "filter": @enum("anisotropic"),
                    "address_u": @enum("wrap"),
                    "address_v": @enum("wrap")
                }
            ]
        }
    ]
}[/source]
(note: These are just dummy structures written for example purposes.)

So now we have an extensible language which is easy to read, easy to parse, and which can be parsed directly into a binary representation from which we can construct objects in our game, just as if we had loaded binary files.
This is a massive improvement over the XML-based approach, but there's still work to be done.



That, however, will be for another entry.


Let's talk audio

Posted by , 24 October 2012 - - - - - - · 5,542 views
audio development, game audio and 2 more...
When you look at the different disciplines involved in game programming, I would have to say that graphics is the field which interests me most. I thoroughly enjoy designing and implementing efficient rendering systems and I love to mess around with the endless possibilities which programmable graphics hardware offers.

Today I'm stepping out of my comfort zone and into the wondrous and vastly less documented world of audio programming for games.
For me, getting into an audio mindset was quite hard coming from mostly graphics development, as you try to translate the concepts of rendering pipelines, shaders, materials, etc. to possible audio counterparts. This isn't a realistic approach, of course, as many concepts found in graphics development simply don't translate to audio programming, and vice versa.

Throughout the years I've developed and maintained a fairly simple cross-platform 3D audio 'engine' for use in games. This engine has all the fundamental features you'd expect in an audio engine: 2D and 3D audio sources with different falloff models, data streaming, support for multiple input formats, multi-channel/surround output, support for multiple back-ends (e.g. OpenAL), etc. It has no concept of anything more advanced than these features, so there are no filters or effects, no occlusion models, no fancy volume regulation systems, and so on.

This system has suited my needs throughout the years because I honestly considered audio as a secondary feature, and because I was quite happy with just some basic audio sources here and there for environmental audio and some interactive audio playback.

These days I'm working on a collaborative project of a much larger scope than pretty much any game project I've done before, so the requirements for both my graphics and audio systems have gone up quite a bit. I'm very happy to say that I completed my graphics system overhaul last week and that it is faster, more flexible and generally better than ever before.

For my audio system I considered a couple of options:
  • Completely ditch my current system and start from scratch with 'advanced features' in mind from the very beginning, or
  • Work with the current system and extend it, or
  • Forget the idea of implementing an audio system from scratch and go with a solution like FMOD.

I ruled out the third option because I don't like having libraries with more restrictive licenses in my projects, and because I suffer from a bad case of enjoying wheel-reinventing. Since our project is more of a proof-of-concept type of deal and we have an "it's done when it's done" mentality without any fixed deadlines, I can take all the time I need for one of the first two options.
After some code review I decided to go with extending the current system after making some alterations which allow for an easier integration of newer features and some general code cleanup.

So now I'll have to take a look at our project requirements so I can sketch out a feature list and a very rough roadmap. Some features I'd definitely like to see are a data-driven audio pipeline setup, proper audio occlusion, and filters and effects.

There's another concept I've been toying with for the last couple of days: programmable filters (in graphics terms: shaders for audio data), where you write audio filters or effects as small programs which get applied to a chunk of audio data before it gets buffered. I have some basic knowledge of discrete-time systems and signal processing from engineering classes I took a couple of years ago, which could provide a starting point, but I have no idea how feasible this idea would be for real-time systems or whether it has been done before. I guess I'll have to do some prototyping to find out whether this would even remotely work.


I suppose that's about all I needed to ramble about today. If anyone has relevant audio-related papers, case studies, post-mortems, etc., feel free to share.





