
 

Spigots

I'm currently working on yet another tool to grease the wheels of the Team UnrealSP content pipeline. The latest problem is texture packages. For those unfamiliar with the Unreal archive format: each Unreal archive file lists a number of exported objects and a number of imported objects. So a map file will export the geometry of the map and import textures, decorations etc. An added complication exists in that imports are referenced via the archive filename, minus extension. So no two Unreal archives can share the same base filename, as they would conflict and one would be inaccessible.

We have a good texture artist on the team who has produced (and continues to produce) a number of texture packages. Unfortunately some of those texture packages have ended up needing to be renamed. Worse, some of our maps already depend on the texture packages in question. Up until now the only way to fix this has been for the mapper to load the map in UnrealEd and manually switch all the textures involved, then close and reopen UnrealEd and reload the map to check that the dependency on the old texture package had been removed. Any decorations that used old packages would need to be rebuilt completely.

Since this is obviously undesirable, and since I already had a lot of the base code written from other utilities, I decided to put together a small utility to automate the replacement of packages. The first version went together really quickly, but unfortunately I'd not accounted for one thing: if the map uses a texture from the package being replaced for which there is no equivalent texture with the same name in the replacement package, you end up with the default texture. The mapper would then have to go through the map after replacement looking for default textures and manually replacing them, which would be no better and maybe even worse than replacing all the textures manually to start with. So I've been working to identify such situations and require a replacement texture to be specified. It's taking longer than I'd hoped but I'm getting there. By the time Battle for Na Pali is finished we shall probably have quite an extensive set of tools available to us.
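The rule the new version enforces is simple enough. A minimal sketch of the resolution logic (hypothetical names, not the actual tool's code):

#include <map>
#include <set>
#include <stdexcept>
#include <string>

// Sketch of the replacement rule: every texture the map pulls from the
// old package must either exist under the same name in the replacement
// package or have an explicit replacement specified by the mapper.
std::string resolve_texture(const std::string & texture,
                            const std::set<std::string> & replacement_package,
                            const std::map<std::string, std::string> & explicit_replacements)
{
    if (replacement_package.count(texture))
    {
        return texture; // same-named equivalent exists
    }
    std::map<std::string, std::string>::const_iterator replacement =
        explicit_replacements.find(texture);
    if (replacement != explicit_replacements.end())
    {
        return replacement->second; // mapper-specified replacement
    }
    // refuse to continue rather than leave default textures in the map
    throw std::runtime_error("no replacement specified for " + texture);
}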

In other news, deque is now officially pronounced "de-queue" in the UK and not "deck". I was talking about deques at work and, thinking to err on the side of caution, used the (apparently) more common pronunciation - "deck". Nobody had a clue what I was talking about until I switched to my normal pronunciation - "de-queue". So there you have it. I do love working for a British company. Colour and normalise are spelt correctly and now even deque is pronounced correctly. On the flip side, if I ever work for an American company I'm going to lose a good few percent of productivity just through all the misspellings I'll make!

?nigma


 

Swings & Roundabouts

Sorry you didn't get a journal entry last week; I've been pretty busy recently and am struggling to get back into the swing of writing weekly journal entries since the big GDNet downtime (it's all their fault really, not just me being bone-idle).

Not really much I can talk about at the moment. I'm seriously looking forward to reaching the point where I can actually talk about the stuff I'm working on for the Team UnrealSP mod team, but for now it's all very hush-hush, which makes for rather boring journal entries.

I looked into the nVidia instrumented graphics drivers this past week. Getting them installed and hooking an application up to read the counters was pretty easy, but unfortunately my graphics card doesn't have many counters available - only one on the GPU, the rest in the drivers.

I still haven't gotten around to the PC upgrade I've been planning since the beginning of January, although on the positive side my procrastination has resulted in the components dropping in price by around £60 total. As a result I shall probably get both XP and Vista for my new machine. The only question is whether to go 32-bit XP and 64-bit Vista, or 64-bit for both? My only concern about the latter is compatibility and drivers. Does anyone have any tales of woe/joy to steer me one way or the other?

Work continues as normal. We had one amusing incident after getting some crash report code written. One of the artists left the game running overnight. It crashed and, due to a small bug in the crash report code, proceeded to write out around eighty gigabytes of crash dump data!

Less amusing but more satisfying, we managed to get to the bottom of an obscure vtable size mismatch warning in one of our dlls. It turned out we were compiling one of our static libs with RTTI enabled (accidentally) and everything else (deliberately) without.

I'm due to finish my probation period at work this coming week, so hopefully that will all go smoothly.

?nigma


 

If You Can't Join 'em, Beat 'em

This journal entry was written a fortnight ago, but couldn't be posted then due to GDNet's downtime. Not much of interest has happened between then and now, so I'm posting this old entry tonight and will get back on track next week.

Saving vertices in the problem map turned out not to be possible. Various possible workarounds were mooted, but before taking any potentially drastic decisions I decided to have one last go at fixing things. As I said before, 128 000 seemed a rather arbitrary limit. The UT public headers include a templated dynamic array class, so my first thought was that the 128 000 vertex limit must have been a static array. The problematic map was failing to load by hitting an assert, so I started by searching the UnrealTournament exe and dlls for the text string in the assert message. That narrowed me down to one highly probable dll.

Next I pulled out my downloaded PE format document (including covenant not-to-sue with Microsoft) and started parsing through the headers. The data segments weren't large enough to contain a 128 000 vertex static array, which left either a stack array (unlikely), a dynamically allocated array, or the possibility that I was looking in the wrong file. If it was a dynamically allocated array then odds were the allocation size would be stored either in the data segment or as an immediate operand. I therefore tried scanning the file for any four consecutive bytes which could be interpreted as a non-zero multiple of 128 000. The results were very promising - although there were a good fifty or so matches, most of them were clearly irrelevant. Only six or seven of the results seemed plausible.
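The scan itself is trivial; something along these lines (a sketch, assuming the file has already been slurped into memory):

#include <cstddef>
#include <iostream>
#include <vector>

// Report every offset at which four consecutive bytes, read as a
// little-endian 32 bit integer, form a non-zero multiple of 128 000.
void find_candidates(const std::vector<unsigned char> & file)
{
    for (std::size_t offset = 0; offset + 4 <= file.size(); ++offset)
    {
        unsigned long value = file[offset]
            | (static_cast<unsigned long>(file[offset + 1]) << 8)
            | (static_cast<unsigned long>(file[offset + 2]) << 16)
            | (static_cast<unsigned long>(file[offset + 3]) << 24);
        if (value != 0 && value % 128000 == 0)
        {
            std::cout << "candidate at offset " << offset << ": " << value << '\n';
        }
    }
}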

From earlier testing I knew that one of the 128 000 entries was from the test which triggered the assert (I'd tried suppressing the assert previously on the off chance, but unsurprisingly that led to a crash). With so few possibilities to choose from I decided to use educated guess work to find the values I needed. I patched the file by doubling selected multiples of 128 000 and tried running the map. After a few false starts I hit pay dirt. Although there was some significant rendering corruption the map was loading and rendering. I tried a few more similar combinations and quickly found one which fixed the remaining issues. Vertex limit? What vertex limit?



I'm not sure if I've mentioned it before, but at work our coding standards for our current project disallow exceptions. I don't know the reasons for this, although I can think of several reasonable possibilities, and the decision is a slightly contentious one. Anyway, as a result we have our own exception-free implementations of some parts of the standard library. One of those is the standard list class. Unfortunately I ran across a slight problem with it a couple of weeks ago: the end iterator was implemented using a null pointer, which meant that you couldn't use --end() to get an iterator to the last element. I decided to fix this by adding sentinel nodes to the list implementation.
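For anyone who hasn't implemented one, a minimal sketch of the sentinel idea (not our actual implementation): end() wraps a permanently allocated node rather than a null pointer, so --end() is well defined and lands on the last element.

// Sketch only: the sentinel sits between the last and first elements,
// so even an empty list has a real node for end() to refer to.
template<typename Type>
class list
{
    struct node
    {
        node * previous;
        node * next;
    };
    struct value_node : node // only real elements carry a payload
    {
        Type value;
    };
    node sentinel_;
public:
    list()
    {
        // an empty list: the sentinel links to itself in both directions
        sentinel_.previous = &sentinel_;
        sentinel_.next = &sentinel_;
    }
    // iterators wrap node pointers: end() wraps &sentinel_, and
    // decrementing end() follows sentinel_.previous to the last element
};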

Now every competent programmer should be able to write a linked list implementation. I've done it myself several times. It turns out modifying somebody else's implementation is a bit harder. Add to this the fact that all this was taking place while my computer was out of action (see my previous journal entry), leaving me working to a tight deadline: my changes had to be checked in before the end of the day because I was working on somebody else's box while they were away. And on top of that our distributed build system wasn't set up on that machine for me, and every change to list required a rebuild of practically the entire project.

I worked as quickly as possible and got my changes checked in at the end of day. I knew there were a couple of issues remaining, but I thought they were minor. Turns out I was wrong. I came in the following Monday to find that I'd basically broken half the project and spent half a day fixing bugs in at least half the list member functions. Moral of the story? If at all possible use an existing standard library implementation. Don't write your own!

A few days later I found a curious problem with some usage of our list template. Compilation of one function was failing with an error that the compiler couldn't convert from pointer to reference. Fair enough, I thought, except that it shouldn't have been trying to convert to reference. I played around with it a bit and managed to boil it down to roughly the following snippet:
typedef list<Type *>::const_reverse_iterator iterator;
typedef iterator::reference reference;
Type * p = 0;
reference r = p;
reference (iterator::* f)() const = &iterator::operator*;

list<Type *>::const_reverse_iterator was a typedef of std::reverse_iterator<list<Type *>::const_iterator>, of which the relevant bits of implementation are:
template<class _RanIt>
class reverse_iterator
    : public _Iterator_base_secure
{ // wrap iterator to run it backwards
    /* snip */
    typedef typename iterator_traits<_RanIt>::reference reference;
    /* snip */
    reference __CLR_OR_THIS_CALL operator*() const
    { // return designated value
        _RanIt _Tmp = current;
        return (*--_Tmp);
    }
    /* snip */
};

list<Type *>::const_iterator::reference was Type * &.

The confusing thing was that the test code snippet was compiling the line reference r = p; fine, thus proving that Type * was convertible to reference, but was choking on the following line, complaining that it could not convert type Type & (iterator::*)() const to Type * (iterator::*)(). I don't understand how iterator::reference can be Type & in the iterator class scope and Type * outside it. The only possibility I can think of is that this is another ODR violation error, but I wasn't able to find any reason why the ODR might have been violated. I'm going to have another look when I have some time to try and figure out what's going on, but for now this one has me baffled. If anyone has any ideas please let me know.

?nigma


 

One Step Forwards, Two Steps Back

I spent my free time this week modifying my old Unreal map reader so that it could rebuild the file after parsing it into memory. I then went about investigating whether those vertices I thought were unused really were redundant. Unfortunately it turns out they aren't. I'd forgotten about the completely brain-dead manner in which Unreal handles its texture coordinates. For every polygon Unreal stores the texture coordinates by storing the world-space origin of the repeat-textured infinite plane which coincides with the polygon, plus x and y vectors within that infinite plane to represent the texture axes. Like I said, brain-dead. So the vertices I thought were unused were actually the texture coordinate origins. I'm now searching for alternative ways to save precious vertices in the map.
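To spell the scheme out (a rough sketch with my own field names, ignoring Unreal's panning and scaling details), the origin is itself an entry in the vertex pool, which is exactly why those vertices looked unused:

struct Vector
{
    float x, y, z;
};

float dot(const Vector & a, const Vector & b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Each polygon stores a world-space origin plus two world-space texture
// axes; a vertex's texture coordinates fall out of two dot products.
void texture_coordinates(const Vector & vertex,
                         const Vector & origin,    // the "unused" vertex!
                         const Vector & texture_u,
                         const Vector & texture_v,
                         float & u, float & v)
{
    Vector relative;
    relative.x = vertex.x - origin.x;
    relative.y = vertex.y - origin.y;
    relative.z = vertex.z - origin.z;
    u = dot(relative, texture_u);
    v = dot(relative, texture_v);
}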

I had some "fun" with Visual Studio at work this week too. Due to reasons I won't go into our network is not as good as it might be. Having made a few changes to a utility class I hit recompile only for IncrediBuild to decide it was only going to build on my machine. Since this change meant recompiling practically the entire project this was going to take a while. One of my colleagues suggested rebooting my machine just to see if I could get IncrediBuild into a more cooperative mood, so I did. I stopped the build, closed Visual Studio, restarted and hit compile. Immediately I got a C1902 error (Program database mismatch: please check your installation). I couldn't build anything. We tried just about everything to try and fix it, including reinstalling Visual Studio. Finally, just as we were waiting for tech. support to show up to completely rebuild the machine I thought to Google the error. Some of the hits were talking about mspdb80.dll, so I tried replacing it. Lo and behold everything started working again. Why on Earth a full uninstall and reinstall of Visual Studio didn't fix the problem I can't begin to guess.

?nigma


 

Magic Constants

One of Team UnrealSP's mappers came across an interesting problem this weekend. We already knew about UT's zone limit (64, because a zone mask is stored in a 64 bit integer) and bsp node limit (65535, because bsp node indices are stored in unsigned shorts), but now for the first time we've hit the vertex limit. The limit is 128 000, which seems a little arbitrary. I've been looking into the issue and it looks to me like UnrealEd isn't cleaning up after itself very well. As near as I can tell there are a good 50 thousand unreferenced vertices in the map data, so I'm hoping I'll be able to write a small utility this coming week to clear those unused vertices out of the map file and bring the map back under the limit.
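The utility I have in mind would amount to a mark-and-sweep over the vertex pool; a sketch, with a hypothetical data layout standing in for the real map structures:

#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };
struct Polygon { std::vector<std::size_t> vertex_indices; };

// Mark every vertex referenced by a polygon, compact the vertex pool,
// then remap the surviving indices.
void remove_unused_vertices(std::vector<Vertex> & vertices,
                            std::vector<Polygon> & polygons)
{
    std::vector<bool> referenced(vertices.size(), false);
    for (std::size_t p = 0; p != polygons.size(); ++p)
        for (std::size_t v = 0; v != polygons[p].vertex_indices.size(); ++v)
            referenced[polygons[p].vertex_indices[v]] = true;

    std::vector<std::size_t> remap(vertices.size());
    std::size_t next = 0;
    for (std::size_t v = 0; v != vertices.size(); ++v)
    {
        if (referenced[v])
        {
            remap[v] = next;
            vertices[next++] = vertices[v];
        }
    }
    vertices.resize(next);

    for (std::size_t p = 0; p != polygons.size(); ++p)
        for (std::size_t v = 0; v != polygons[p].vertex_indices.size(); ++v)
            polygons[p].vertex_indices[v] = remap[polygons[p].vertex_indices[v]];
}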

We changed source control systems at work this week, which was great fun. We're now using Perforce, or at least trying to - we're still finding our feet a little bit. The diff viewer and merge tool certainly look funky, with their multicoloured displays and variable speed scrolls.

?nigma


 

Like a Hot Knif through Butter

I've finished (the first pass of) my Jpeg2000 loader. I think I'm going to opt to call it Jackknif. That's Jackknife, the only English word I could find which contains a 'J' followed by two 'k's (J2K, get it?), with the e knocked off to indicate that it's not a complete implementation. Yes, there is method to my madness (or should that be madness to my method?).

It turns out I did manage to get a finished version of the code down to less than a thousand lines, which shows that it really isn't that complicated an algorithm. Speed-wise I was competing against two open source reference implementations - JasPer (written in C) and JJ2000 (written in Java). My reference image (2048x2048 rgb) took approximately nine seconds to load under JasPer and approximately six seconds to load under JJ2000.

The first complete version of Jackknif was taking around fourteen seconds. I thought this was pretty reasonable and whipped out a profiler, only to be rather confused by the results. The hotspot was showing up as 120 million calls to fill_n, but I only used fill_n in a couple of places: one which should only have amounted to a few thousand calls, and another, in static initialisation, which should only have involved about twenty or so calls. I took a careful look through the source and spotted a minor bug in my static initialisation code. It looked something like:
static int array[size];
static bool initialised = false;
if (!initialised)
{
    function_which_initialises_array(array);
}
// code which uses array

I'd forgotten to set the boolean flag to true, so my array was being repeatedly initialised, to the tune of ~6 million times. Fixing that minor bug, along with a couple of very minor optimisations (changing arrays of ints to arrays of shorts), brought Jackknif down to just under six seconds.
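For completeness, the corrected version is exactly the one-liner you'd expect:

static int array[size];
static bool initialised = false;
if (!initialised)
{
    function_which_initialises_array(array);
    initialised = true; // the line I'd forgotten
}
// code which uses array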

I was very pleased with this. My fairly naive implementation was outperforming even the "optimised" JJ2000 implementation. The next bottleneck was the filtering. The way it was implemented wasn't very cache friendly: I looped through every component, and for each component looped through every row and then every column. To demonstrate, a 4 pixel square image would have been processed something like:
Image:
r11 g11 b11 r12 g12 b12 r13 g13 b13 r14 g14 b14
r21 g21 b21 r22 g22 b22 r23 g23 b23 r24 g24 b24
r31 g31 b31 r32 g32 b32 r33 g33 b33 r34 g34 b34
r41 g41 b41 r42 g42 b42 r43 g43 b43 r44 g44 b44

Visitation order (components):
r11 r12 r13 r14 r21 r22 r23 r24 r31 r32 r33 r34 r41 r42 r43 r44
r11 r21 r31 r41 r12 r22 r32 r42 r13 r23 r33 r43 r14 r24 r34 r44
g11 g12 g13 g14 g21 g22 g23 g24 g31 g32 g33 g34 g41 g42 g43 g44
g11 g21 g31 g41 g12 g22 g32 g42 g13 g23 g33 g43 g14 g24 g34 g44
b11 b12 b13 b14 b21 b22 b23 b24 b31 b32 b33 b34 b41 b42 b43 b44
b11 b21 b31 b41 b12 b22 b32 b42 b13 b23 b33 b43 b14 b24 b34 b44

Visitation order (array indices):
0 3 6 9 12 15 18 21 24 27 30 33 36 39 42 45
0 12 24 36 3 15 27 39 6 18 30 42 9 21 33 45
1 4 7 10 13 16 19 22 25 28 31 34 37 40 43 46
1 13 25 37 4 16 28 40 7 19 31 43 10 22 34 46
2 5 8 11 14 17 20 23 26 29 32 35 38 41 44 47
2 14 26 38 5 17 29 41 8 20 32 44 11 23 35 47

I switched the order to loop through the components of each pixel one after another, and processed the first pixel of each column in order before processing the next column:
Visitation order (components):
r11 g11 b11 r12 g12 b12 r13 g13 b13 r14 g14 b14
r21 g21 b21 r22 g22 b22 r23 g23 b23 r24 g24 b24
r31 g31 b31 r32 g32 b32 r33 g33 b33 r34 g34 b34
r41 g41 b41 r42 g42 b42 r43 g43 b43 r44 g44 b44
r11 g11 b11 r12 g12 b12 r13 g13 b13 r14 g14 b14
r21 g21 b21 r22 g22 b22 r23 g23 b23 r24 g24 b24
r31 g31 b31 r32 g32 b32 r33 g33 b33 r34 g34 b34
r41 g41 b41 r42 g42 b42 r43 g43 b43 r44 g44 b44

Visitation order (array indices):
0 1 2 3 4 5 6 7 8 9 10 11
12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47
0 1 2 3 4 5 6 7 8 9 10 11
12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47

I expected that might bring the execution time down to around four and a half seconds, maybe four if I was lucky. I underestimated. With that simple optimisation the execution time plummeted to around 2.2 seconds.
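Schematically the change looks like this (a sketch - the real filter does rather more per sample than a pass-through, and the column pass gets the analogous treatment):

// Stub standing in for the actual filtering work on one sample.
inline void process(float & sample)
{
    sample *= 1.0f; // real filtering would happen here
}

// Original order: one component at a time, so every access strides
// over the other components and most of each cache line is wasted.
void filter_original(float * image, int width, int height, int components)
{
    for (int c = 0; c != components; ++c)
        for (int y = 0; y != height; ++y)
            for (int x = 0; x != width; ++x)
                process(image[(y * width + x) * components + c]);
}

// Reordered: all components of each pixel together, so the buffer is
// walked through memory once, sequentially.
void filter_reordered(float * image, int width, int height, int components)
{
    for (int y = 0; y != height; ++y)
        for (int x = 0; x != width; ++x)
            for (int c = 0; c != components; ++c)
                process(image[(y * width + x) * components + c]);
}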

I still have a few more optimisations to apply. I'm not sure when that will happen since I shall probably be working on something else this next week for a bit of a break. My target though is to bring that execution time down to no more than 1.5 seconds for my reference image.

I intend to release the final source code, both a cleaned up unoptimised version so people can see how the algorithm works, plus the final optimised version, under a permissive open source license when I'm done. The only thing I intend to disallow is patenting of techniques used in derivative works. I'm sure there exists an open source license with this kind of restriction. If anyone knows of a license with this restriction, please let me know - it'll save me a few minutes searching.

Finally, the obligatory screenshot, actually four screenshots in one:


The top left shows the fully decoded image, minus horizontal and vertical filtering, resized from 2048x2048 to 256x256. The top right shows the same image, but only the top left corner of it, at normal size. The bottom left shows the fully decoded image with horizontal and vertical filtering, again, resized from 2048x2048 to 256x256. The bottom right shows the same image, but again only the top left corner of it, at normal size.

?nigma


 

Graphic Violence

I've broken the back of the Jpeg2000 decoding algorithm. What really rankles is that all the publicly available implementations are at least a hundred thousand lines of code, and my nearly complete implementation (admittedly with only a subset of the functionality) is only just about to hit a thousand lines. Here's what I have so far, the first code block, which equates to the red channel of the image reduced by a factor of 32 in each dimension:



All that's left now is to add the loops and two additional lookup tables to allow me to decode the remaining 3071 code blocks for that image, add filtering code to recombine the code blocks into the finished image, optimise, and then clean up and replace hard-coded values with the appropriate variables. I have a few ideas how I can optimise which, if they work, should result in a significant speed-up.

I came across yet another interesting code issue at work this week. I had some code roughly like this:
class Base1
{
public:
    Base1()
    {
        // code
    }
    void function()
    {
        // code
    }
protected:
    int variable1_;
    int variable2_;
    bool variable_;
};

class Base2
{
protected:
    bool variable_;
};

class Derived1
    : public Base1
{
public:
    Derived1()
    {
        function();
        // code
    }
};

Where Base1 and Base2 were bases of classes with similar interfaces, used for similar purposes (think static polymorphism). The code in the Derived1 constructor, after the call to function, was failing with a very odd error (an invalid Windows error message). Stepping through the code we discovered that although execution correctly stepped through the Base1 constructor and Base1::function, the debugger seemed to think that Derived1 was inherited from Base2, not Base1. It wasn't just a debugger fault either. The error was occurring because access to variable_ was actually accessing variable1_, which happened to be where variable_ would have been if the base class really had been Base2, not Base1. Something obviously got very confused somewhere. Eventually I resorted to getting a completely clean version of the entire project from source control, which fixed the issue. I still don't know what was wrong.

?nigma


 

Happy New Year

Not really much to talk about what with Christmas and the new year. I've spent a little free time looking further into Jpeg2000 over the last couple of weeks. I'm now trying to get my head round entropy decoding and the MQ arithmetic decoder. I printed out 22 pages of source code to take away with me over Christmas. I think I understand enough about the arithmetic decoder, which was about three of those 22 pages. The entropy decoder which took up the remainder of the space appears to be a very complicated "optimised" implementation of a relatively simple algorithm. I put the word optimised in quotes because I'm pretty confident that it was a bad choice of optimisation strategy. I shall find out if I'm right in the new year.

I thought I'd leave you with a couple of snippets from the JJ2000 source which made me laugh, when they didn't make me cry.

Check the JavaDoc comments:
/**
 * Returns the reversibility of the filter. A filter is considered
 * reversible if it is suitable for lossless coding.
 *
 * @return true since the 9x7 is reversible, provided the appropriate
 * rounding is performed.
 * */
public boolean isReversible() {
    return false;
}

Tricky stuff, that dyadic decomposition:
for (rl=0; rl<=mrl; rl++) {
    // Find the number of subbands in the resolution level
    if (rl == 0) { // Only the LL subband
        minb = 0;
        maxb = 1;
    }
    else {
        // Dyadic decomposition
        hpd = 1;
        // Adapt hpd to resolution level
        if (hpd > maxrl-rl) {
            hpd -= maxrl-rl;
        }
        else {
            hpd = 1;
        }
        // Determine max and min subband index
        minb = 1<<((hpd-1)<<1);
        maxb = 1<<(hpd<<1);
    }
    // Allocate array for subbands in resolution level
    exp[rl] = new int[maxb];
    for(j=minb; j<maxb; j++) {
        tmp = ehs.readUnsignedByte();
        exp[rl][j] = (tmp>>SQCX_EXP_SHIFT)&SQCX_EXP_MASK;
    }// end for j
}// end for rl

I've also decided, after reading far too much bad C++ code on the internet and far too many bad tutorials, that come 2007 I shall try to find time to write my own modern C++ tutorial. Watch this space.

All that remains now is for me to wish you a Happy New Year. Farewell 2006, I hardly knew you.

?nigma


 

Obscure, Incomprehensible and just plain Broken

I was off work this week. Tomorrow could be interesting, since I think the code I checked in just before I left might have broken the build. It should only be a small break, but unfortunately build success is a binary state - the build is either broken or it's not. I did email them about it with steps to fix it, so hopefully it won't have been much of a problem.

I spent my week working on the Jpeg2000 loader again, working through the new source code I talked about last month. Also Christmas shopping, playing DHTML Lemmings and various other random activities, so not actually as much time on the loader as I'd been intending.

The new source code is still pretty awful:
int i,k1,k2,k3,k4,l; // counters
int tmp1,tmp2,tmp3,tmp4; // temporary storage for sample values
int mv1,mv2,mv3,mv4; // max value for each component
int ls1,ls2,ls3,ls4; // level shift for each component
int fb1,fb2,fb3,fb4; // fractional bits for each component
int[] data1,data2,data3,data4; // references to data buffers
final ImageConsumer[] cons; // image consumers cache
int hints; // hints to image consumers
int height; // image height
int width; // image width
int pixbuf[]; // line buffer for pixel data
DataBlkInt db1,db2,db3,db4; // data-blocks to request data from src
int tOffx, tOffy; // Active tile offset
boolean prog; // Flag for progressive data
Coord nT = src.getNumTiles(null);
// 38 lines which don't modify nT or src
// getNumTiles is a non-modifying getter
nT = src.getNumTiles(null);

Not to mention the seemingly ever-present "what, you mean some people don't use the same size tabs as me?" interchange of tabs and spaces for indentation. I really ought to find a beautifier. Still, it's easier to work through than the JasPer source. It feels a bit strange to be working with Java again though.

Next week's installment will either be a day early or won't get written, since I'm away for Christmas as of next Sunday.

?nigma


 

Nooks & Crannies

The observant amongst you will have noticed that it's not Sunday. The even more observant amongst you will have noticed that it's not Sunday and I'm posting a journal entry. The really observant amongst you will notice something odd about this. There is a reason for this. Drum roll please... I wasn't feeling too good last night and had an early night instead of writing this. So it's a day late.

I had another interesting compiler incident at work last week. I had a piece of code performing a number of floating-point operations including some basic trigonometry. It was all working fine until I made a slight modification. After said modification the code worked fine in debug mode but failed with a floating-point stack check error in release mode. Investigations led to much confusion since doing anything differently seemed to result in the code working fine. Even just reading the floating-point operating environment at the start of the function caused the code to stop failing. I hunted through the source code and the generated assembly to see what could be wrong and while the source code looked OK the assembly looked a bit odd. Eventually our lead programmer took a look and after a bit of poking said he'd seen something similar before and it was probably an optimiser bug involving inline assembly (we have our own trig function implementations since our base library is portable across PC and console(s)). If he's right then I'm beginning to lose faith in compilers. That would be two genuine bugs in less than a month!

Outside of work I've been poking around some more obscure parts of the C++ standard. Such knowledge sometimes comes in useful, like when a co-worker was trying to suppress a lint error in a macro and wondering why he couldn't get it to work. Lint errors can be suppressed by adding comments of the form //lint -eXXX, but adding one to a macro definition won't do anything, since comments are replaced by a single space before macro expansion takes place.
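To illustrate (a made-up example, not the actual macro in question):

// The lint comment below is stripped - replaced by a single space - in
// translation phase 3, before macro expansion happens in phase 4, so
// expansions of DODGY carry no trace of the directive and the warning
// on the expanded line is not suppressed.
#define DODGY(pointer) (*(int *)(pointer)) //lint -eXXX

int read_int(void * p)
{
    return DODGY(p); // expands to (*(int *)(p)) with no comment attached
}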

In the course of my poking I came across the macro examples in Section 16.3.5, Paragraphs 5 & 6, beautifully obscure examples intended to demonstrate as many macro combinations and effects as possible with the minimum quantity of code:

Just trying to follow through the expansions to verify the result took me a good five minutes. Any errors in transcribing the above excerpts are my own.

I'm also trying to figure out if the following code is valid C++:
int main()
{
    int @ = 1;
    return @;
}
I've not found a compiler that will accept it, but '@' is not in the basic source character set (Section 2.2, Paragraph 1) and the first phase of translation includes:

And an identifier is defined as:
Anyone wanting to argue for or against the validity of @ as a C++ identifier, speak now or forever hold your peas. (Yes, that was a terrible pun. You should be used to them by now).

Finally, unless anyone has any other recommendations, I'm planning on adding Journal of EasilyConfused to my list of regularly-read GDNet journals, to replace EDI's journal. (Always three there are, a master, an apprentice, and a very talented indie team).

I almost forgot, this week I found myself writing two oddly named functions: consume_hash and consume_carrot. The latter was a typo indirectly caused by the former (No, not for the reasons you're thinking). Who needs drugs when your brain is capable of such nonsense unaided?

?nigma


 

Things that go tweet in the night

So, it's been three months already since I started my journal. I had hoped that this would become a fairly regular record of my work for Team UnrealSP, with occasional interludes of randomness. Instead it's turning out kind of the opposite. It's been another slow week. I seem to have acquired a sparrow or similar small bird that likes landing outside my window at half six in the morning and waking me up by chattering away for ten minutes before flying off again. As a result, due to tiredness, I haven't managed to do any work on the mod this week.

Not much of interest going on at work that I can tell you about either. We had a couple of new programmers start last week which means I'm now officially not the most junior programmer on the team! I've also booked all my holiday now since our leave year runs from January to December. As a result I'll only be in the office for another seven and a half days this year. Even better, half a day will probably be spent doing "research" - a couple of the guys are getting Wiis on Friday and bringing them into the office. I just hope we don't break the company's large flat screen TV.

I found some more Visual C++ weirdness at work this past week, though not nearly as bad as the last one:
struct Object
{
    Object(int i)
        : i(i)
    {
    }
    int i;
};

struct Array
{
    Object & operator[](unsigned int index)
    {
        return array[index];
    }
    Object array[1];
};

Array objects = {Object(2001)};

int main()
{
    return objects[0].i;
}

When compiled under Visual C++ 8.0 at warning level 4 this produces the following warnings:

spuriouswarning.cpp(18) : warning C4510: 'Array' : default constructor could not be generated
spuriouswarning.cpp(12) : see declaration of 'Array'
spuriouswarning.cpp(18) : warning C4610: struct 'Array' can never be instantiated - user defined constructor required

It appears somebody forgot about aggregate initialisation when writing that second warning.

In other news, I feel I need a new GDNet journal to read since EDI stopped updating regularly. Anybody have any suggestions?

Finally, since it's Advent and I feel bad about not giving you anything interesting to read about I'm going to let you in on a little secret. I have another project slowly ongoing. It's not directly game related but hopefully it will be of interest to some people here. It's a long term project, years most likely, especially at the rate I'm going. That's all I'm going to tell you for now. Still, I'm told the first step is admitting you have a problem. Err, I mean secret project. Let the rampant speculation commence.

?nigma


 

The Source Code Challenge

I managed to find a bit of free time to get back to my Jpeg2000 decoder this week. Unfortunately it's so long since I last worked on it that I've lost track of where I was. I did have lots of notes, but the source code is so impenetrable it would still take me a while to get back up to speed. I say "would". Instead I found another open source Jpeg2000 codec, this time written in Java. Hopefully between the two sources I'll be able to get a more solid grasp on the format and accelerate my progress. I keep wondering whether I ought to just buy a copy of the spec.

Because two major concurrent projects aren't enough, I also keep getting distracted by other random issues. This week I decided to look into parsing. I've written parsers before, even a very simple parser generator. I've also used Boost.Spirit a few times (and I'd love to learn how to use it better - maybe something to get distracted by some other month). This time however I decided to forget everything I knew and research from scratch. It's funny what you can learn when you do this. I didn't research in too much depth, but it didn't take me too long to come across Parsing Expression Grammars (PEGs) and Packrat parsers, neither of which I remember coming across before.

I'm not completely sold on Packrat parsing - it looks great for simple parsing but not so good for more complex parsing due to the complexity of changing state - but Parsing Expression Grammars seem really useful. I quickly hacked together a simple recursive descent parser for mathematical expressions, along with a generator using an equivalent grammar. After getting the parser working I spent a bit of time working on error handling, for which some of the details mentioned in the Packrat parser paper I was reading were very useful. All-in-all it was an interesting diversion and next time I need a parser I'll have a slightly better foundation to start from.
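The parser I hacked together was along these lines - a minimal sketch rather than my actual code, with whitespace and error handling omitted; each member function corresponds to one rule of a PEG-style grammar:

#include <cstdlib>

// expression <- term (('+' / '-') term)*
// term       <- factor (('*' / '/') factor)*
// factor     <- number / '(' expression ')'
class parser
{
public:
    explicit parser(const char * text) : position_(text) {}
    double expression()
    {
        double result = term();
        while (*position_ == '+' || *position_ == '-')
        {
            char operation = *position_++;
            double rhs = term();
            result = (operation == '+') ? result + rhs : result - rhs;
        }
        return result;
    }
private:
    double term()
    {
        double result = factor();
        while (*position_ == '*' || *position_ == '/')
        {
            char operation = *position_++;
            double rhs = factor();
            result = (operation == '*') ? result * rhs : result / rhs;
        }
        return result;
    }
    double factor()
    {
        if (*position_ == '(')
        {
            ++position_; // consume '('
            double result = expression();
            ++position_; // consume ')'
            return result;
        }
        char * end = 0;
        double result = std::strtod(position_, &end);
        position_ = end;
        return result;
    }
    const char * position_;
};

So parser("1+2*(3-4)").expression() yields -1.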

Finally, I present you with The Source Code Challenge(TM). If you remember (assuming anyone actually reads this drivel regularly), a few weeks ago I was freeing up hard drive space to install Medieval II: Total War. I vaguely wondered at the time how much of my hard disk was filled with source code. This week I decided to find out. The Source Code Challenge(TM) is for you to do the same. The target to beat is 27 505 files (.c, .cpp, .h & .hpp) or 537 175 479 bytes (512MB!). Admittedly a large proportion of that comes from six compilers with attendant include folders, plus boost, but it's still an awful lot of source code!

?nigma


 

Iterative Ranting

There was another mini SC++L versus do-it-yourself debate in the Beginners forum this week, which got me thinking (again) about the issue of vector and reading from a file. The issue is that there is a simple solution using new which cannot be expressed as efficiently using vector:
std::ifstream reader("file.dat", std::ios::binary);
reader.seekg(0, std::ios::end);
std::size_t data_size = reader.tellg();
reader.seekg(0, std::ios::beg);
boost::scoped_array<char> data(new char[data_size]);
reader.read(&data[0], data_size);

The closest you can come is to use istreambuf_iterators:
std::ifstream reader("file.dat", std::ios::binary);
std::istreambuf_iterator<char> begin(reader);
std::istreambuf_iterator<char> end;
std::vector<char> data(begin, end);

which performs multiple allocations and copies within the vector constructor for all but the smallest of files, since istreambuf_iterator is only a model of InputIterator. Alternatively you can pre-allocate and read:
std::ifstream reader("file.dat", std::ios::binary);
reader.seekg(0, std::ios::end);
std::vector<char> data(reader.tellg());
reader.seekg(0, std::ios::beg);
reader.read(&data[0], data.size());

which redundantly initialises the entire vector to zero before overwriting it with the file contents.

Let's be clear. This is not a major issue from a performance point of view. We're talking about file I/O, which is orders of magnitude slower than anything that vector is doing. The issue is simply that there is a clear deficiency in the standard library, and worse, one without apparently any good reason for being there.

I messed around a bit looking for an alternative solution using vector. It seemed clear to me that an iterator based solution would be cleanest, and that the only real problem with istreambuf_iterator is that it is unnecessarily generic. File streams support random access, whereas generic streams only support input/output iteration. I tried writing my own random access ifstreambuf_iterator and discovered that I couldn't. More surprisingly, the reason I couldn't implement it is that the C++ iterator abstraction is fundamentally broken. An ifstreambuf_iterator is not a model of RandomAccessIterator. It's a model of RandomAccessInputIterator, a concept unsupported by the C++ iterator abstraction model. The C++ iterator abstraction combines two orthogonal concepts, iteration and read/write access, into a single concept, without supporting all possible combinations.

Armed with this new-found knowledge I trawled the web (well, I typed a couple of brief queries into Google, but the former sounds more impressive) and unsurprisingly discovered that I was not the only one to have come to this conclusion. Reassuringly there is already a paper at the C++ Standards Working Group site that, also unsurprisingly, is much better thought out and goes into much more depth. The vector-from-file issue is one that has bugged me for quite some time. Now I can finally stop wondering if I'm missing some simple construct and rest in the knowledge that it simply needs fixing.
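In the meantime, the least-bad vector workaround I know of is to reserve rather than resize and then assign through the istreambuf_iterators; the element-by-element copy remains, but the reserve removes both the repeated reallocations and the redundant zero-initialisation:

#include <cstddef>
#include <fstream>
#include <iterator>
#include <vector>

std::ifstream reader("file.dat", std::ios::binary);
reader.seekg(0, std::ios::end);
std::size_t data_size = reader.tellg();
reader.seekg(0, std::ios::beg);
std::vector<char> data;
data.reserve(data_size);
data.assign(std::istreambuf_iterator<char>(reader),
            std::istreambuf_iterator<char>());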

One of my co-workers stumbled across something interesting at work this week that had a few of us scratching our heads. Without consulting a compiler, what would you expect to result from the following code snippet?

#include <iostream>

struct Base1
{
    Base1()
        : a(1)
    {
    }
    int a;
};

template<typename Type>
struct Base2
{
    Base2()
        : b(2),
          c(3)
    {
    }
    void function()
    {
        std::cout << b << " " << c << std::endl;
    }
    Type b;
    Type c;
};

struct Derived
    : public Base1,
      public Base2<int>
{
    void function()
    {
        Base2::function();
    }
};

int main()
{
    Derived d;
    d.function();
}

?nigma


 

De frmenta ging...

Medieval II: Total War was released on Friday. I've spent a fair bit of time today clearing out my hard drive to find the 11GB(!) needed to install it. I did get a look at it at work on Friday and it looks very nice. I particularly like the movies you get for certain events. I never played the original Medieval: Total War (or Shogun: Total War, for that matter) so I'm looking forward to seeing how Medieval compares to Rome in terms of gameplay.

Speaking of freeing up hard drive space, after doing so I ran Disk Defragmenter to make sure everything was cleaned up nicely before I install (which I intend to get started on right after this journal entry). As the defragmenter was running I wondered: why do we even need to defragment hard drives? I mean sure, I understand the process of disk fragmentation and the need to clean it up, but why does defragmentation have to be an occasional time-consuming activity?

When I was doing my degree we had a coursework for an Operating Systems module which required us to implement a file system emulator (just as a simple Java program). Most of the class implemented simple FAT-based solutions, apart from the Unix Crew, who used Linux-like implementations. I was one of the few who went for something a bit more exotic. I don't remember all the details but I think it was vaguely NTFS-based and I know that one of the things I considered was automatic defragmentation. The plan was that every filesystem operation (or certain types of operations) would perform a partial defragmentation of the filesystem, making full defrags unnecessary. Unfortunately I never got around to implementing this feature so I never found out what complications would arise from it. It seems an interesting idea though.

In Language Shootout news, some of my new programs have been accepted now. The new faster programs move C++ g++ above both D Digital Mars and C gcc in the overall charts on execution time for the first time since I started submitting programs.
benchmark-name | C++ speed as % of fastest other | status
-------------------+---------------------------------+-------
nsieve | 101 | 1 program accepted, 1 program rejected
regex-dna | 165 | 1 program accepted
nsieve-bits | 97* | 1 program accepted*
recursive | 98 | 1 program accepted
mandelbrot | 97 | 1 program accepted
nbody | 102 | 1 program accepted
fasta | 103 | Not attempted
cheap-concurrency | 1852 | 1 program accepted, 1 program rejected, 1 program pending
spectral-norm | 102 | 1 program accepted
k-nucleotide | 93 | 1 program accepted, 1 program rejected
chameneos          | 770                             | 1 program pending
pi-digits | 102 | Not attempted
partial-sums | 139 | 1 program accepted
reverse-complement | 85 | 1 program accepted
binary-trees | 405 | 1 program pending
fannkuch | 170 | Not attempted
sum-file | 224 | Not attempted
startup            | 294                             | Not attempted

*one of the optimisations used may now be considered illegal, which will affect several programs in the benchmark, including mine

I have to finish by saying I'm disappointed that nobody spotted my not-so-little typo in last week's journal entry. Or perhaps you all thought it was one of my bad plays on words.

?nigma


 

The Googles... They Do Nothing!

I've nearly gotten my triangulation code ported. It's been fun ripping out unused variables, re-factoring five-hundred line functions and modifying everything to work off of dynamically sized zero-based arrays instead of fixed-size one-based arrays. I say fun. What I really mean is it made me want to claw my eyes out on multiple occasions. That sort of fun. Still, it's nearly over now and the rewritten version is actually readable. Mostly. On the plus side I got a nice shiny metal Total War keyring at work this week to distract me.

I've been going through a bit of a quiet patch on the mod front recently. Messing around with incomprehensible C code at work has left me less inclined to do the same in my free time. The rest of the team has been getting on well though. A recent recruitment drive has attracted some talented new mappers to the project. I look forward to seeing what they all come up with.

Finally, work on the Language Shootout continues. I've submitted a number of new programs for various benchmarks. I currently have five programs pending and am about to submit a sixth. Hopefully I'll be able to update the table next week with a lot more green! Hopefully I'll also be able to celebrate a thousand journal views. Maybe I'll try to think up something interesting to talk about next week...

?nigma


 

Compulsive Obsolescence

I was looking into polygon triangulation algorithms at work this week. It didn't take long to find reference to the optimal version, developed by a guy named Bernard Chazelle, which can triangulate a simple polygon in linear time. The reference also came with a warning that it was not a simple algorithm. They weren't kidding. I found a copy of the original paper (pdf) on Chazelle's home page. Understanding the algorithm itself isn't impossible. Figuring out how to actually implement the algorithm on the other hand had me well and truly stumped. If anyone knows of any publicly available implementation of this algorithm (I haven't been able to find any) I'd be interested to learn of them.

So linear time was unworkable in the time I have available to devote to the task, but fortunately there are alternatives. My next choice was an n log n algorithm that claimed to usually run in roughly linear time. Even better, this one had source code. Unfortunately the link to the source code was dead. I eventually found a mirror, only to find what I'd been dreading: sample implementation source code which is 50% incomprehensible. int i1, i2, i3, i4, i5, i6, i7. Yes, I know exactly what those are going to be used for! Also, I was under the impression that K&R style function definitions of the form:
void function(a, b)
int a;
int b;
{
}

had been deprecated in the very first C standard, back in 1989. Yet here was code written in 1994, five years later, still using it. Sometimes. Even better though has to be this gem of a comment:
int usave, uside; /* I forgot what this means */

And after a weekend off, tomorrow morning I get to go back to work and try to finish porting that mess to C++ so that I can extend the algorithm for my own purposes. Joy.

I also thought I'd update you on my progress with the language shootout:
benchmark-name | C++ speed as % of fastest other | status
-------------------+---------------------------------+-------
nsieve | 113 | 1 program rejected
regex-dna | 204 | 1 program pending
nsieve-bits | 97* | 1 program accepted*
recursive | 98 | 1 program accepted
mandelbrot | 97 | 1 program accepted
nbody | 102 | 1 program accepted
fasta | 103 | Not attempted
cheap-concurrency | 1852 | 1 program accepted, 1 program rejected
spectral-norm | 102 | 1 program accepted
k-nucleotide | 222 | 1 program rejected
chameneos          | 770                             | Not attempted
pi-digits | 102 | Not attempted
partial-sums | 178 | Not attempted
reverse-complement | 169 | Not attempted
binary-trees | 405 | Not attempted
fannkuch | 170 | Not attempted
sum-file | 224 | Not attempted
startup            | 294                             | Not attempted

*one of the optimisations used may now be considered illegal, which will affect several programs in the benchmark, including mine

Remember, the target is to get all of them down to 101 or below. Long way to go yet, but the benchmarks I've submitted programs for are doing a lot better than the ones I haven't.

?nigma


 

T t HR h E readi ADI n N g G

Continuing my language-shootout mission I spent what little free time I had this week looking at the issue of threading. This seems to be one area where C++ falls badly behind other languages currently in the shootout. As I understand it this is partly because the standard threading implementation for C++ under *nix, pthreads, uses kernel-space threads, as opposed to user-space threads. Kernel-space threads are also known as "heavyweight" threads, as opposed to "lightweight" user-space threads and the cost of a context switch is much higher.

In an effort to produce a much more efficient C++ concurrency implementation I investigated a couple of alternative threading libraries. Eventually, however, the fact that for shootout development I'm running Gentoo from a LiveCD with a RAM drive (I don't trust it sufficiently to let it write to my NTFS partition), the added complexity that adds to downloading and installing a threading library, and a growing interest in the subject led me to decide to try to write my own extremely minimal threading library.

I decided to aim as low as possible and therefore targeted a non-pre-emptive user-space system on x86 with a de-facto standard C++ stack. Non-pre-emptive threading simply means that instead of each thread running for a certain period of time and then being pre-empted in favour of another thread, each thread runs until it decides to give up the processor. This has obvious advantages in terms of reduced synchronisation and scheduling burdens, and obvious disadvantages in terms of threads potentially never giving up the processor. Also, as I understand it, a purely user-space thread library cannot take advantage of multiple processors; kernel-space threads are required for this, although multiple user-space threads can be multiplexed onto multiple kernel-space threads.

After a bit of reading around interprocessor interrupts, x86 segments and global and local descriptor tables amongst others I eventually managed to implement a very simple system which currently works under a basic test program on two compilers under windows, with or without optimisations:
#include <iostream>
#include "test_threads.h"

void thread_function(int i)
{
    std::cout << "thread " << i << '\n';
    yield();
    std::cout << "thread " << i << '\n';
}

int main()
{
    thread_t * thread1 = create_thread(thread_function, 1);
    thread_t * thread2 = create_thread(thread_function, 2);
    while (thread1->running && thread2->running)
    {
        std::cout << "main thread\n";
        yield();
    }
    std::cout << "main thread\n";
}

output:

main thread
thread 1
thread 2
main thread
thread 1
thread 2
main thread

I'm now working on porting the system to gcc, which is the target compiler for the language shootout. Unfortunately gcc introduces extra complications with its alternative assembler syntax and lack of a __declspec(naked) directive.

If people are sufficiently interested I may try to write up my experiences if and when I manage to develop a working language-shootout program with my own threading library. I'm sure I'm not doing things the best way possible, but it's an interesting journey and I've learnt a lot already.

?nigma


 

Ready, Aim, Fire!

I got distracted this week by a couple of threads in General Programming. The first was my "c++ d c# benchmark" thread, particularly the links to the Language Shootout. I think benchmarking languages is a fairly nonsensical concept, even benchmarking language implementations. Do you benchmark the speed of the compiled code? Time to write it? Do you have programmers of equal experience and ability in each language? To prove this point I decided I would participate in the Language Shootout and try to get C++ to within 101% of the execution time of the first-placed program in every benchmark.

A couple of abortive attempts using the MinGW port of GCC 3.3.1, and trying to extrapolate how much of an optimisation my various changes would be under GCC 4.1.1, led me to finally think again about installing a *nix variant on my computer. I've had half of my hard disk unpartitioned since I got this computer nearly three years ago. The intent was always to put a *nix on it, but I never got around to it. I still haven't. Instead I downloaded and burned a Gentoo LiveCD (Gentoo because that's one of the *nix variants used by the Language Shootout). I've come to the conclusion that LiveCDs are pretty awesome. The ability to boot a modern OS without a hard drive is just plain fun. As to the Shootout, well, my mission could take a while.

The other thread I got distracted by was Why do [some] people end up despising c++? limits of c++?. I must confess I don't really understand the apparent sudden shift away from C++. There are certainly some good points made as to weaknesses of C++ in that thread, but I personally don't find them to be significant problems. I rarely run into issues in my day-to-day use of C++ that are the fault of the language. Yes, the compilation model with header files is archaic, but it doesn't hinder me at all. I'm quite happy to admit to being a C++ fanboi. I've tried a number of languages and C++ just fits my way of thinking best. Perhaps I'm just odd and other people really are having serious issues using C++ to solve real problems.

?nigma


 

Slow Week

As the title says, this has been a bit of a slow week for me, so I don't have much to talk about. I did manage to embarrass myself at work this week. I coded a fairly simple function which didn't work correctly at first. Nothing new there. A lot of simple code fails to work perfectly first time. So I set a breakpoint and checked out what was happening in the function. The result I got was:
if (current_value > persistent_value)
{
->  // code
}

where the -> indicates the current execution point. The debugger watch window showed the current values of current_value and persistent_value as (roughly):

current_value   : 4.1
persistent_value: 3.8x10^38

The answer turned out to be exceedingly simple and was discovered after a co-worker looked through the code with me. A simple copy-paste error had left me with:
float persistent_value = std::numeric_limits<float>::max();
for (some_condition)
{
    float current_value = get_current_value();
    if (current_value > persistent_value)
    {
        // code
        float persistent_value = std::numeric_limits<float>::max();
    }
    // code that modifies persistent_value
}

The intent had been to reset persistent_value to the maximum float value, but by copying and pasting I'd actually declared a new local variable instead of modifying the more broadly scoped persistent_value. The debugger couldn't distinguish between the two persistent_values, so as soon as execution entered the if block it reported the value of persistent_value as that of the local variable, whereas the more broadly scoped persistent_value still had a value of about 3.1.

So, that was my moment of severe embarrassment this week. Feel free to make me feel better by documenting your own foul-ups in reply, or make me feel worse by claiming never to make such stupid mistakes.

On the mod front I'm still working through the JasPer source, and likely will be for some time to come. I'm finally starting to get into the meat of the code now. I'm sure things would be much easier if I had a copy of the Jpeg2000 standard, but I don't. Since I'm not going to be able to show you a screenshot of what I'm working on for a while yet, and because my inane babblings must be quite tedious without any graphical interludes, I got permission to post an existing screenshot from the mod here, showing the distance fog I quickly hacked into an existing Unreal Tournament renderer:



EDIT: In other news, phantom's treatise on trees should be required reading this week.

?nigma


 

Brevity is Not a Virtue

The ability to explain something quickly and accurately with a minimum of fuss is usually an advantage. There are exceptions however. My continuing explorations of the JasPer source code are a prime example. Too often programmers abbreviate names in source code as if somehow this makes their code better. The thinking is usually that shorter names are quicker to type, and that therefore shorter names mean shorter development times. This is a false economy.

Code is read many more times than it is written and most programmers are relatively fast typists. The time saved not typing a few characters is usually lost when you return to the code a little while later to fix or extend it and have to remember or work out what your abbreviations stood for. Things are even worse when it's a different developer trying to understand the code. Abbreviations which made perfect sense to the original developer may make no sense to the person now reading the code. Not only does this require the new developer to look up the abbreviations, it also increases their cognitive load as they must remember the abbreviations and mentally translate them as they go.

All of this is true in spades of the JasPer source. With its qmfbids (Quadrature Mirror-Image Filter Bank ID), mctranss (Multicomponent Transform), cblkstys (Code Block Style) and pis (Packet Iterator, not the number of radians in half a circle), understanding JasPer often feels like an exercise in decryption in itself. Fortunately, the more I work through, the more I find that trivially reduces away for the specific limited functionality I require.

Brevity can be a virtue in many areas. When it comes to variable, function and class names however I'll take verbosity over terseness any day of the week. Just imagine a world of lex_comp and set_sym_diff.

?nigma


 

I feel dirty

This week at work I was asked to profile something from an older codebase. So I opened the project and tried to build it, but ended up with compiler errors. After trying a few things to see if it was a problem with my project setup, the guy next to me thinks for a bit and then turns to the project lead: "Has anybody managed to build that in 2005?" Turns out it could only be compiled in VC6. Shudder.

I borrowed an older box with VC6 on it, finally got everything set up and got the project built. Then came time to do the profiling. We didn't need anything clever, just some basic call counts and very rough percentage timings, so I just stuck some static counters in, plus a very rough profiler using C time functions and writing its output to a file. Which led to the following conversation*:
Me:        What file output and timing functions do we have in this code base? I was going to use fstream and ctime, but I'm getting compile errors in xlocale.
Co-worker: Just use a FILE pointer.
Me:        I'm getting errors about clock too.
Co-worker: Really? What sort of errors?
Me:        It says clock is not a member of std.

Me:        Wait a minute. This is VC6. It's probably not a member of std.

I hate VC6! (*conversation may not be word-for-word accurate)
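The instrumentation itself was nothing more sophisticated than this sort of thing (a sketch with hypothetical names; C headers and unqualified names because, as established, VC6 doesn't put the C library in namespace std):

#include <stdio.h>
#include <time.h>

static unsigned long g_calls = 0; // call count for the function of interest
static clock_t g_total = 0;       // accumulated time inside it

void function_being_profiled()
{
    ++g_calls;
    clock_t start = clock();
    // ... the code actually being measured ...
    g_total += clock() - start;
}

void dump_profile()
{
    FILE * out = fopen("profile.txt", "w");
    fprintf(out, "%lu calls, %.3f seconds\n",
            g_calls, (double)g_total / CLOCKS_PER_SEC);
    fclose(out);
}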

Having finally got everything working the results were certainly interesting. It just goes to prove that even experts are pretty bad at guessing bottlenecks. Everyone expected the function I was profiling to be called a lot and to be taking up a large proportion of the execution time. The profiling results showed it was being called relatively infrequently and taking up a relatively small proportion of the total execution time. This is why you should always profile before spending time optimising.

Actually that's not the entire story. I was coming down with a mild flu-like bug whilst doing all this and so wasn't at my sharpest. After thinking about it over the weekend I suspect there might be another related function that also needs to be profiled. I'll probably have a look at that tomorrow.

I also got my first batch of free games this week. One of the advantages of working for a company that's owned by Sega is that I get a free copy of every game Sega publish! The initial batch wasn't overwhelming, Rome: Total War - Alexander (PC), Virtual Pro Football (PS2) and Let's Make a Soccer Team (PS2), but there are more interesting titles coming up.

I think that's all the news this week. I didn't manage to get much done on the mod front due to the aforementioned flu-like bug. Hopefully one day in the not-too-distant future I'll be able to post some screenshots of the mod stuff I'm working on, since I'm sure you're all bored to death of reading my inane ramblings.

Ænigma


Something Old, Something New

I actually did some work at work this week. Nothing heavy, I'm still getting used to things, but I should get some more complicated things to do from here on in. It's interesting to be working in a real team environment after so long working solo on projects. The only team coding projects I've done before were really just toy projects at uni.

On the mod front I've been looking into the JasPer source code to decipher the Jpeg2000 format. It might sound like a serious case of NIH to be writing my own Jpeg2000 reader, but in reality I only need a very small subset of the functionality, so it should be possible to make some serious performance gains. I've made quite a bit of progress on the headers. Now I just need to trace through JasPer properly to figure out how things are set up so I can start deciphering the actual image data. Here's some pseudocode for what I have so far:
{
    uint16 marker_segment
    assert(marker_segment == 0xff4f) // SOC - start of codestream
}
{
    uint16 marker_segment
    assert(marker_segment == 0xff51) // SIZ - image and tile size
    uint16 segment_length
    uint16 capabilities
    uint32 image_width
    uint32 image_height
    uint32 image_origin_x
    uint32 image_origin_y
    uint32 tile_width
    uint32 tile_height
    uint32 tile_origin_x
    uint32 tile_origin_y
    uint16 number_of_components
    assert(segment_length == 38 + (number_of_components * 3))
    {bitfield:1, bitfield:7, uint8, uint8} components[number_of_components] // signedness, precision, horizontal_sampling_distance, vertical_sampling_distance
    for_each(component in components)
    {
        uint8(component.precision) += 1
    }
}
repeat
{
    uint16 marker_segment
    switch (marker_segment)
    {
        case 0xff52: // COD - coding style default
            uint16 segment_length
            uint8 general_coding_style
            uint8 progression_order
            uint16 number_of_layers
            uint8 multicomponent_transform
            uint8 number_of_decomposition_layers
            uint8 code_block_width
            uint8 code_block_height
            uint8 code_pass_style
            uint8 uses_qmfb
        case 0xff5c: // QCD - quantisation default
            uint16 segment_length
            bitfield:3 number_of_guard_bits
            bitfield:5 quantisation_style
            uint8 stepsizes[segment_length - 3]
            for_each(stepsize in stepsizes)
            {
                assert(stepsize < 32)
                uint16(stepsize) = stepsize << 11
            }
        case 0xff5d: // QCC - quantisation, per component
            uint16 segment_length
            uint8 component_number
            bitfield:3 number_of_guard_bits
            bitfield:5 quantisation_style
            uint8 stepsizes[segment_length - 3]
            for_each(stepsize in stepsizes)
            {
                assert(stepsize < 32)
                uint16(stepsize) = stepsize << 11
            }
        case 0xff64: // comment
            uint16 segment_length
            uint16 registration_id
            uint8 comment[segment_length - 4]
        case 0xff90: // SOT - start of tile part
            uint16 segment_length
            uint16 tile_number
            uint32 length
            uint8 tile_part_instance
            uint8 number_of_tile_parts
        case 0xff93: // SOD - start of data
    }
}
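As a rough illustration of how that pseudocode might translate into real code, here's a minimal sketch of the marker-reading loop. It only walks the marker segments rather than decoding them, and readBigEndianUint16, skipSegment and readCodestreamHeaders are names I've made up for the occasion:

#include <cassert>
#include <cstdio>

// Jpeg2000 codestream values are big-endian, so 16-bit reads are assembled by hand.
unsigned int readBigEndianUint16(std::FILE * file)
{
    int high = std::fgetc(file);
    int low = std::fgetc(file);
    assert(high != EOF && low != EOF);
    return (static_cast<unsigned int>(high) << 8) | static_cast<unsigned int>(low);
}

// Every marker segment except SOC and SOD carries a length which includes the
// two length bytes themselves but not the marker.
void skipSegment(std::FILE * file)
{
    unsigned int segment_length = readBigEndianUint16(file);
    std::fseek(file, static_cast<long>(segment_length) - 2, SEEK_CUR);
}

void readCodestreamHeaders(std::FILE * file)
{
    assert(readBigEndianUint16(file) == 0xff4f); // SOC - start of codestream

    for (;;)
    {
        unsigned int marker_segment = readBigEndianUint16(file);
        switch (marker_segment)
        {
            case 0xff93: // SOD - start of data; compressed image data follows
                return;
            case 0xff51: // SIZ - image and tile size
            case 0xff52: // COD - coding style default
            case 0xff5c: // QCD - quantisation default
            case 0xff5d: // QCC - quantisation, per component
            case 0xff64: // comment
            case 0xff90: // SOT - start of tile part
            default:
                skipSegment(file); // a real reader would parse these as above
                break;
        }
    }
}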


I also received an e-mail this week from somebody asking for the source to my old Furries Demo from the NeHe Creative contest. Looking through the source as I packaged it up, I was a bit shocked to see how bad it was. It looks like at that time I was still transitioning to the SC++L. I ripped out my poorly written Array class and replaced my raw char *s with std::strings, among other changes, before mailing it off. Hopefully people won't pick up as many bad coding habits from it as they might otherwise have done.

Finally, I came across an interesting game design problem this week. How do you design an online multiplayer system for casual gamers for a game which consists of relatively short real-time action segments separated by (potentially non-real-time) setup phases? The aim is to have a persistent "world" in which the player can see real gains and losses over a long period (say a month), yet which remains playable by individuals with limited and varied amounts of free time.

Ænigma


And now, the Continuation...

My first week of work was interesting. In a not all that interesting kind of way. It's a new office - I started the day they moved in - so it's taken a little while to get things sorted out, and some key people have been off, so although I've been there a week I've yet to actually do any work.

I spent all of Monday playing Rome: Total War and getting paid for it! The rest of the week I've been reading through the project documents and familiarising myself with the existing codebase. Some of the code is pretty poorly written (thousand-line functions with ten levels of indentation, anyone?) but the company's definitely trying to move in the right direction. The coding standards for the current project are heavily based on the book C++ Coding Standards by Herb Sutter and Andrei Alexandrescu. That was another reason I had nothing to do - they were going to get me to read the book, but I already own a copy!

On the mod side I've managed to get the antialiased high-res screenshots working. I eventually used an old hack to hook the main rendering function and replace it with a function that calls the function it replaced multiple times. A little ugly, but it works. Now I'm working on loading the screenshots into the screensaver using the Jpeg2000 format. Unfortunately loading a 2048x2048 resolution Jpeg2000 image is taking ~7500ms with JasPer, which is a tad long, so I'm going to have to investigate either purchasing a j2k-codec license or writing my own Jpeg2000 decoder. I'll probably try the second route first as I feel like I haven't done much serious coding recently. Plus I haven't received my first pay cheque yet.
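In case that sounds mysterious, the hook boils down to something like the sketch below. It assumes the renderer's main draw routine is reachable through a function pointer - the real hack operates on the UT renderer's innards and is rather messier - and RenderFunction, installHook and SUBFRAME_COUNT are all illustrative names:

// Illustrative sketch only: assumes the draw routine sits behind a function pointer.
typedef void (*RenderFunction)();

static RenderFunction originalRender = 0;
static const int SUBFRAME_COUNT = 4; // e.g. four passes for a 2x2 oversized capture

// The replacement simply calls the function it displaced several times,
// once per tile of the oversized screenshot.
void multiPassRender()
{
    for (int pass = 0; pass < SUBFRAME_COUNT; ++pass)
    {
        // ... adjust the viewport/projection for this pass ...
        originalRender();
    }
}

// Installing the hook: remember the old pointer, swap in the wrapper.
void installHook(RenderFunction * renderEntryPoint)
{
    originalRender = *renderEntryPoint;
    *renderEntryPoint = &multiPassRender;
}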

Ænigma


Because I Can...

So, I have this shiny new GDNet journal just sitting here unused, which seemed a bit of a waste. I guess I'd better try and find something interesting to fill it with. I've added a small section of resources to the header of my journal (hopefully it looks as beautiful under other browsers as it does under Firefox; I haven't had the time to check right now). If anybody has any additional resources they think I may be interested in then let me know!

Right, now that's out of the way I have to ask myself what vaguely gamedev-related stuff can I talk about that people might be vaguely interested in reading (vaguely)? Unfortunately at the moment the answer is not a great deal, but hopefully it won't be like that all the time. I start a new job tomorrow, so I might be able to talk a little about that in the future but for now I'll waffle on briefly about the long term mod project I've been involved with for a while now.

Most of you have probably moved on from the original Unreal Tournament. After all, we've had two sequels already and a third is imminent. There is a small but not insignificant group of people, however, who still love the original Unreal and, lacking a decent platform to create single player adventures amongst the newer tech, rely on Unreal Tournament to create Unreal-themed single player mods. One such team is led by a good friend of mine, and so I offered my services.

While my activities for this mod team have been many and varied, recently I've been concentrating on one specific aim. In order to increase publicity I've been developing a screensaver to showcase some scenes from our first map. Sounds simple, doesn't it? Well, it is. And it isn't. The design of the screensaver calls for zooms over static screenshots. This is fine, but in order to avoid pixellation we need very large screenshots. As a result I've taken the source to the utglr renderer written by Chris Dohnal and hacked in GL_EXT_framebuffer_object support to allow me to capture screenshots at 2048x2048 resolution. Unfortunately anti-aliasing doesn't appear to work with framebuffer objects, and my card can't cope with anything higher than 2048x2048, so I shall probably have to try to freeze the timer or convince the renderer to render the same frame multiple times. That way I can render a larger screenshot in multiple passes and downsize it without suffering motion artefacts. The fact that the UT engine is inherently a software engine, and that I only have access to the front-end rendering subsystem, after T&L, complicates matters.
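For anyone curious what the framebuffer object route looks like, here's a minimal sketch of the capture path. It assumes the GL_EXT_framebuffer_object entry points have already been loaded (via wglGetProcAddress or similar) and it glosses over error handling and state restoration; captureScreenshot is my own name for it, not what the renderer actually calls it:

#include <GL/gl.h>
#include <GL/glext.h>
#include <vector>

// Illustrative sketch: assumes the EXT entry points are already loaded.
bool captureScreenshot(int width, int height, std::vector<GLubyte> & pixels)
{
    // Create a framebuffer with colour and depth renderbuffers at the target size.
    GLuint framebuffer = 0, colourBuffer = 0, depthBuffer = 0;
    glGenFramebuffersEXT(1, &framebuffer);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, framebuffer);

    glGenRenderbuffersEXT(1, &colourBuffer);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, colourBuffer);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_RGBA8, width, height);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                 GL_RENDERBUFFER_EXT, colourBuffer);

    glGenRenderbuffersEXT(1, &depthBuffer);
    glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthBuffer);
    glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, width, height);
    glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                 GL_RENDERBUFFER_EXT, depthBuffer);

    // Cards that can't cope with the requested size fail the completeness check.
    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
    {
        return false;
    }

    glViewport(0, 0, width, height);
    // ... render the scene into the framebuffer here ...

    // Read the rendered pixels back for saving to disk.
    pixels.resize(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, &pixels[0]);

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0); // back to the window framebuffer
    return true;
}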

Ænigma
