  1. Enigma


    I'm currently working on yet another tool to grease the wheels of the Team UnrealSP content pipeline. The latest problem is texture packages. For those unfamiliar with the Unreal archive format, each Unreal archive file lists a number of exported objects and a number of imported objects. So a map file will export the geometry of the map and import textures, decorations etc. An added complication is that imports are referenced via the archive filename, minus extension, so no two Unreal archives can share the same base filename - they would conflict and one would be inaccessible.

    We have a good texture artist on the team who has produced (and continues to produce) a number of texture packages. Unfortunately some of those texture packages have ended up needing to be renamed. Worse, some of our maps already depend on the texture packages in question. Up until now the only way to fix this has been for the mapper to load the map in UnrealEd and manually switch all the textures involved, then close and reopen UnrealEd and reload the map to check that the dependency on the old texture package had been removed. Any decorations that used old packages would need to be rebuilt completely. Since this is obviously undesirable, and since I already had a lot of the base code written from other utilities, I decided to put together a small utility to automate the replacement of packages.

    The first version went together really quickly, but unfortunately I'd not accounted for one thing. If the map uses a texture from the package being replaced for which there is no equivalent texture with the same name in the replacement package, you end up with the default texture. The mapper would then have to go through the map after replacement looking for default textures and manually replacing them, which would be no better, and maybe even worse, than replacing all the textures manually to start with.
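    The check described above boils down to a set difference: the texture names the map uses, minus the names present in the replacement package. A minimal sketch of the idea (the function name and data structures are mine, not the actual tool's):

```cpp
#include <algorithm>
#include <cassert>
#include <iterator>
#include <set>
#include <string>
#include <vector>

// Hypothetical sketch: any texture used by the map with no same-named
// equivalent in the replacement package would fall back to the default
// texture, so it needs an explicit replacement specified instead.
std::vector<std::string> texturesNeedingExplicitReplacement(
    const std::set<std::string>& usedByMap,
    const std::set<std::string>& inReplacementPackage)
{
    std::vector<std::string> unmatched;
    // Both inputs are sorted sets, so a single linear pass suffices.
    std::set_difference(usedByMap.begin(), usedByMap.end(),
                        inReplacementPackage.begin(), inReplacementPackage.end(),
                        std::back_inserter(unmatched));
    return unmatched;
}
```

    Anything the function returns is a texture the mapper must map explicitly before the package swap goes ahead.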
    So I've been working to identify such situations and require a replacement texture to be specified. It's taking longer than I'd hoped, but I'm getting there. By the time Battle for Na Pali is finished we shall probably have quite an extensive set of tools available to us.

    In other news, deque is now officially pronounced "de-queue" in the UK and not "deck". I was talking about deques at work and, thinking to err on the side of caution, used the (apparently) more common pronunciation, "deck". Nobody had a clue what I was talking about until I switched to my normal pronunciation, "de-queue". So there you have it. I do love working for a British company. Colour and normalise are spelt correctly and now even deque is pronounced correctly. On the flip side, if I ever work for an American company I'm going to lose a good few percent productivity just through all the misspellings I'll make!

    Σnigma
  2. Enigma

    Swings & Roundabouts

    Sorry you didn't get a journal entry last week - I've been pretty busy recently and struggling to get back into the swing of writing weekly journal entries since the big GDNet downtime (it's all their fault really, not just me being bone-idle). Not really much I can talk about at the moment. I'm seriously looking forward to reaching the point where I can actually talk about the stuff I'm working on for the Team UnrealSP mod team, but for now it's all very hush-hush, which makes for rather boring journal entries.

    I looked into the nVidia instrumented graphics drivers this past week. Getting them installed and hooking an application up to read the counters was pretty easy, but unfortunately my graphics card doesn't have many counters available - only one on the GPU, the rest in the drivers.

    I still haven't gotten around to the PC upgrade I've been planning since the beginning of January, although on the positive side my procrastination has resulted in the components dropping in price by around GBP60 total. As a result I shall probably get both XP and Vista for my new machine. The only question is whether to go 32bit XP and 64bit Vista, or 64bit for both? My only concern about the latter is compatibility and drivers. Does anyone have any tales of woe/joy to steer me one way or the other?

    Work continues as normal. We had one amusing incident after getting some crash report code written. One of the artists left the game running overnight. It crashed and, due to a small bug in the crash report code, proceeded to write out around eighty gigabytes of crash dump data! Less amusing and more satisfying, we managed to get to the bottom of an obscure vtable size mismatch warning in one of our dlls. It turns out we were compiling one of our static libs with RTTI enabled (accidentally) and everything else (deliberately) without. I'm due to finish my probation period at work this coming week, so hopefully that will all go smoothly.

    Σnigma
  3. Enigma

    New version!

    Snowman Village is awesome. It should also come with a government health warning. Lots of great improvements in the new version, although I did prefer the cloud-style buttons. I came across a number of bugs in the last version. At least one of them still exists in the new version: There were no hazards visible, I rolled off the edge of the ledge to land on the ground below and never made it. Lost a good 4m+ snowball :( Likely other bug reports to come (I'll wait until they reappear in the latest version before reporting them). Keep up the good work! Σnigma
  4. Enigma

    If You Can't Join 'em, Beat 'em

    This journal entry was written a fortnight ago, but couldn't be posted then due to GDNet's downtime. Not much of interest has happened between then and now, so I'm posting this old entry tonight and will get back on track next week.

    Saving vertices in the problem map turned out not to be possible. Various possible workarounds were mooted, but before taking any potentially drastic decisions I decided to have one last go at fixing things. As I said before, 128 000 seemed a rather arbitrary limit. The UT public headers include a templated dynamic array class, so my first thought was that the 128 000 vertex limit must have been a static array. The problematic map was failing to load by hitting an assert, so I started by searching the UnrealTournament exe and dlls for the text string in the assert message. That narrowed me down to one highly probable dll. Next I pulled out my downloaded PE format document (including covenant not-to-sue with Microsoft) and started parsing through the headers. The data segments weren't large enough to contain a 128 000 vertex static array, which left either a stack array (unlikely), a dynamically allocated array, or I was looking in the wrong file. If it was a dynamically allocated array then odds were the allocation size would be stored either in the data segment or as an immediate operand. I therefore tried scanning the file for any four consecutive bytes which could be interpreted as a non-zero multiple of 128 000. The results were very promising - although there were a good fifty or so matches, most of them were clearly irrelevant. Only six or seven of the results seemed plausible. From earlier testing I knew that one of the 128 000 entries was from the test which triggered the assert (I'd tried suppressing the assert previously on the off chance, but unsurprisingly that led to a crash). With so few possibilities to choose from I decided to use educated guesswork to find the values I needed.
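    The scan itself is simple; here is a sketch of the idea (my own reimplementation, not the original code), reading each four-byte window as a little-endian 32-bit value:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch of the scan described above: return every offset at
// which four consecutive bytes, read as a little-endian 32-bit integer,
// form a non-zero multiple of 128 000.
std::vector<std::size_t> findMultiplesOf128000(const std::vector<std::uint8_t>& bytes)
{
    std::vector<std::size_t> offsets;
    for (std::size_t i = 0; i + 4 <= bytes.size(); ++i)
    {
        // Assemble the little-endian value; cast the top byte so the
        // shift happens in unsigned arithmetic.
        std::uint32_t value = bytes[i]
                            | (bytes[i + 1] << 8)
                            | (bytes[i + 2] << 16)
                            | (static_cast<std::uint32_t>(bytes[i + 3]) << 24);
        if (value != 0 && value % 128000 == 0)
            offsets.push_back(i);
    }
    return offsets;
}
```

    In practice the candidate offsets would then be patched by hand, which is exactly the educated guesswork described above.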
    I patched the file by doubling selected multiples of 128 000 and tried running the map. After a few false starts I hit pay dirt. Although there was some significant rendering corruption, the map was loading and rendering. I tried a few more similar combinations and quickly found one which fixed the remaining issues. Vertex limit? What vertex limit?

    I'm not sure if I've mentioned it before, but at work our coding standards for our current project disallow exceptions. I don't know the reasons for this, although I can think of several reasonable possibilities, and the decision is a slightly contentious one. Anyway, as a result we have our own exception-free implementations of some parts of the standard library. One such implementation is the standard list class. Unfortunately I ran across a slight problem with it a couple of weeks ago. The end iterator was implemented using a null pointer, which meant that you couldn't use --end() to get an iterator to the last element. I decided to fix this and add sentinel nodes to the list implementation. Now every competent programmer should be able to write a linked list implementation. I've done it myself several times. It turns out modifying somebody else's implementation is a bit harder. Add to this the fact that all this was taking place while my computer was out of action (see my previous journal entry), leaving me working to a tight time limit to be checked in before the end of the day, because I was working on somebody else's box as they were away for the day. And on top of that our distributed build system wasn't set up on that machine for me and every change to list required a rebuild of practically the entire project. I worked as quickly as possible and got my changes checked in at the end of the day. I knew there were a couple of issues remaining, but I thought they were minor. Turns out I was wrong.
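    The sentinel idea itself is straightforward in isolation, which is rather the point of the story: writing a fresh list is easy, retrofitting someone else's is not. A minimal toy sketch of the sentinel approach (my own, not the work code): end() points at a permanent sentinel node, so --end() is always valid and yields the last element.

```cpp
#include <cassert>

// Toy singly-typed doubly-linked list with a sentinel node.
struct Node
{
    Node* prev;
    Node* next;
    int value;
};

struct IntList
{
    Node sentinel;  // end() == &sentinel; an empty list points it at itself

    IntList() { sentinel.prev = sentinel.next = &sentinel; }

    void push_back(int v)
    {
        // Link the new node in just before the sentinel.
        Node* n = new Node{sentinel.prev, &sentinel, v};
        sentinel.prev->next = n;
        sentinel.prev = n;
    }

    Node* end() { return &sentinel; }
    Node* last() { return sentinel.prev; }  // i.e. --end()

    ~IntList()
    {
        Node* n = sentinel.next;
        while (n != &sentinel) { Node* next = n->next; delete n; n = next; }
    }
};
```

    With a null-pointer end() there is no node to step back from, which is exactly why --end() couldn't work in the original implementation.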
    I came in the following Monday to find that I'd basically broken half the project and spent half a day fixing bugs in at least half the list member functions. Moral of the story? If at all possible use an existing standard library implementation. Don't write your own!

    A few days later I found a curious problem with some usage of our list template. Compilation of one function was failing with an error that the compiler couldn't convert from pointer to reference. Fair enough, I thought, except that it shouldn't have been trying to convert to reference. I played around with it a bit and managed to boil it down to roughly the following snippet:

    typedef list< Type * >::const_reverse_iterator iterator;
    typedef iterator::reference reference;
    Type * p = 0;
    reference r = p;
    reference (iterator::* f)() const = &iterator::operator*;

    list< Type * >::const_reverse_iterator was a typedef of std::reverse_iterator< list< Type * >::const_iterator >, of which the relevant bits of implementation are:

    template<class _RanIt>
    class reverse_iterator : public _Iterator_base_secure
    {	// wrap iterator to run it backwards
        /* snip */
        typedef typename iterator_traits<_RanIt>::reference reference;
        /* snip */
        reference __CLR_OR_THIS_CALL operator*() const
        {	// return designated value
            _RanIt _Tmp = current;
            return (*--_Tmp);
        }
        /* snip */
    };

    list< Type * >::const_iterator::reference was Type * &. The confusing thing was that the test code snippet was compiling the line reference r = p; fine, thus proving that Type * was convertible to reference, but was choking on the following line, complaining that it could not convert type Type & (iterator::*)() const to Type * (iterator::*)(). I don't understand how iterator::reference can be Type & in the iterator class scope and Type * outside it. The only possibility I can think of is that this is another ODR violation error, but I wasn't able to find any reason why the ODR might have been violated.
    I'm going to have another look when I have some time to try and figure out what's going on, but for now this one has me baffled. If anyone has any ideas please let me know.

    Σnigma
  5. I spent my free time this week modifying my old Unreal map reader so that it could rebuild the file after parsing it into memory. I then went about investigating whether those vertices I thought were unused really were redundant. Unfortunately it turns out they aren't. I'd forgotten about the completely brain-dead manner in which Unreal handles its texture coordinates. For every polygon, Unreal stores the texture coordinates by storing the world-space origin of the repeat-textured infinite plane which coincides with the polygon, plus x and y vectors within that infinite plane to represent the texture axes. Like I said, brain-dead. So the vertices I thought were unused were actually the texture coordinate origins. I'm now searching for alternative ways to save precious vertices in the map.

    I had some "fun" with Visual Studio at work this week too. Due to reasons I won't go into, our network is not as good as it might be. Having made a few changes to a utility class I hit recompile, only for IncrediBuild to decide it was only going to build on my machine. Since this change meant recompiling practically the entire project, this was going to take a while. One of my colleagues suggested rebooting my machine just to see if I could get IncrediBuild into a more cooperative mood, so I did. I stopped the build, closed Visual Studio, restarted and hit compile. Immediately I got a C1902 error (Program database mismatch: please check your installation). I couldn't build anything. We tried just about everything to try and fix it, including reinstalling Visual Studio. Finally, just as we were waiting for tech support to show up to completely rebuild the machine, I thought to Google the error. Some of the hits were talking about mspdb80.dll, so I tried replacing it. Lo and behold, everything started working again. Why on Earth a full uninstall and reinstall of Visual Studio didn't fix the problem I can't begin to guess.

    Σnigma
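    The texture-coordinate scheme described above (a world-space plane origin plus two in-plane axis vectors) reduces to two dot products per point. A sketch under my own simplified names (the real format also involves pans and scaling, which I've left out):

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Texture coordinates of world-space point p on the polygon's plane:
// project the offset from the stored origin onto the two texture axes.
void textureCoords(const Vec3& origin, const Vec3& uAxis, const Vec3& vAxis,
                   const Vec3& p, float& u, float& v)
{
    Vec3 d = { p.x - origin.x, p.y - origin.y, p.z - origin.z };
    u = dot(d, uAxis);
    v = dot(d, vAxis);
}
```

    The origin is a full vertex in the map data even though it carries no geometric meaning, which is why those "unused" vertices couldn't simply be stripped.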
  6. Enigma

    Magic Constants

    One of Team UnrealSP's mappers came across an interesting problem this weekend. We already knew about UT's zone limit (64, because a zone mask is stored in a 64 bit integer) and bsp node limit (65535, because bsp node indices are stored in unsigned shorts), but now for the first time we've hit the vertex limit. The limit is 128 000, which seems a little arbitrary. I've been looking into the issue and it looks to me like UnrealEd isn't cleaning up after itself very well. As near as I can tell there are a good 50 thousand unreferenced vertices in the map data, so I'm hoping I'll be able to write a small utility to clear those unused vertices out of the map file this coming week and bring the map back under the limit.

    We changed source control systems at work this week, which was great fun. We're now using Perforce, or at least trying to - we're still finding our feet a little bit. The diff viewer and merge tool certainly look funky, with their multicoloured displays and variable speed scrolls.

    Σnigma
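    Finding the unreferenced vertices boils down to a mark-and-count pass. A sketch with simplified stand-in structures (the real map data is rather more involved than an index list per polygon):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: mark every vertex index referenced by any polygon,
// then count how many vertices are never used at all.
std::size_t countUnreferencedVertices(
    std::size_t vertexCount,
    const std::vector<std::vector<std::size_t>>& polygons)  // vertex indices per polygon
{
    std::vector<bool> used(vertexCount, false);
    for (const auto& poly : polygons)
        for (std::size_t index : poly)
            used[index] = true;

    std::size_t unused = 0;
    for (bool u : used)
        if (!u) ++unused;
    return unused;
}
```

    A clean-up utility would then rewrite the vertex table without the unmarked entries and remap every surviving index.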
  7. Enigma

    Jpeg 2000

    Quote: Original post by Ysaneya
        Quote: Original post by Jotaf
        Looks like Enigma here is working on a Jpeg2000 loader, and even at a very early stage he claims it's quite fast. I wouldn't be surprised, the libraries you mentioned are full of bloat :)
    Pretty cool :) I don't think it's faster than J2K. He mentions 1.5 seconds to load a 2048x2048 image. J2K loads a 1024x1024 image in 253 ms. Assuming linear scaling, a 2048x2048 would take about 1 second to load in J2K.

    I don't expect to be faster than J2K-Codec yet, but then I have a list of optimisations still to implement. Eventually I expect/hope to be pretty competitive speed-wise with J2K-Codec, with the following advantages/disadvantages:

    Advantages:
      • Free (as in beer)
      • Free (as in speech)
      • Portable
      • Static linking

    Disadvantages:
      • Not a complete Jpeg2000 implementation
      • No technical support
      • Naff name

    Also, I don't see anywhere that mentions whether or not J2K-Codec offers a multi-threaded solution, but the Jpeg2000 decoding algorithm should be heavily parallelisable and I hope to take advantage of that. Anyway, keep up the good work on Infinity and the interesting journal entries,

    Σnigma
  8. Enigma

    Like a Hot Knif through Butter

    I've finished (the first pass of) my Jpeg2000 loader. I think I'm going to opt to call it Jackknif. That's Jackknife, the only English word I could find which contains a 'J' followed by two 'k's (J2K, get it?), with the e knocked off to indicate that it's not a complete implementation. Yes, there is method to my madness (or should that be madness to my method?).

    It turns out I did manage to get a finished version of the code down to less than a thousand lines, which shows that it really isn't that complicated an algorithm. Speed-wise I was competing against two open source reference implementations - JasPer (written in C) and JJ2000 (written in Java). My reference image (2048x2048 rgb) took approximately nine seconds to load under JasPer and approximately six seconds to load under JJ2000. The first complete version of Jackknif was taking around fourteen seconds. I thought this was pretty reasonable and whipped out a profiler, only to be rather confused by the results. The hotspot was showing up as 120 million calls to fill_n, but I only used fill_n in a couple of places. One place which should only have amounted to a few thousand calls and another, in static initialisation, which should only have involved about twenty or so calls. I took a careful look through the source and spotted a minor bug in my static initialisation code. It looked something like:

    static int array[size];
    static bool initialised = false;
    if (!initialised)
    {
        function_which_initialises_array(array);
    }
    // code which uses array

    I'd forgotten to set the boolean flag to true, so my array was being repeatedly initialised, to the tune of ~6 million times. Fixing that minor bug, along with a couple of very minor optimisations (changing arrays of ints to arrays of shorts), brought Jackknif down to just under six seconds. I was very pleased with this. My fairly naive implementation was outperforming even the "optimised" JJ2000 implementation. The next bottleneck was the filtering.
    The way it was implemented wasn't very cache friendly. I looped through every component, and for each component looped through every row and then every column. To demonstrate, a 4 pixel square image would have been processed something like:

    Image:
    r11 g11 b11  r12 g12 b12  r13 g13 b13  r14 g14 b14
    r21 g21 b21  r22 g22 b22  r23 g23 b23  r24 g24 b24
    r31 g31 b31  r32 g32 b32  r33 g33 b33  r34 g34 b34
    r41 g41 b41  r42 g42 b42  r43 g43 b43  r44 g44 b44

    Visitation order (components):
    r11 r12 r13 r14  r21 r22 r23 r24  r31 r32 r33 r34  r41 r42 r43 r44
    r11 r21 r31 r41  r12 r22 r32 r42  r13 r23 r33 r43  r14 r24 r34 r44
    g11 g12 g13 g14  g21 g22 g23 g24  g31 g32 g33 g34  g41 g42 g43 g44
    g11 g21 g31 g41  g12 g22 g32 g42  g13 g23 g33 g43  g14 g24 g34 g44
    b11 b12 b13 b14  b21 b22 b23 b24  b31 b32 b33 b34  b41 b42 b43 b44
    b11 b21 b31 b41  b12 b22 b32 b42  b13 b23 b33 b43  b14 b24 b34 b44

    Visitation order (array indices):
    0 3 6 9  12 15 18 21  24 27 30 33  36 39 42 45
    0 12 24 36  3 15 27 39  6 18 30 42  9 21 33 45
    1 4 7 10  13 16 19 22  25 28 31 34  37 40 43 46
    1 13 25 37  4 16 28 40  7 19 31 43  10 22 34 46
    2 5 8 11  14 17 20 23  26 29 32 35  38 41 44 47
    2 14 26 38  5 17 29 41  8 20 32 44  11 23 35 47

    I switched the order to loop through the components of each pixel one after another and processed the first pixel of each column in order before processing the next column:

    Visitation order (components):
    r11 g11 b11  r12 g12 b12  r13 g13 b13  r14 g14 b14
    r21 g21 b21  r22 g22 b22  r23 g23 b23  r24 g24 b24
    r31 g31 b31  r32 g32 b32  r33 g33 b33  r34 g34 b34
    r41 g41 b41  r42 g42 b42  r43 g43 b43  r44 g44 b44
    r11 g11 b11  r12 g12 b12  r13 g13 b13  r14 g14 b14
    r21 g21 b21  r22 g22 b22  r23 g23 b23  r24 g24 b24
    r31 g31 b31  r32 g32 b32  r33 g33 b33  r34 g34 b34
    r41 g41 b41  r42 g42 b42  r43 g43 b43  r44 g44 b44

    Visitation order (array indices):
    0 1 2 3  4 5 6 7  8 9 10 11  12 13 14 15  16 17 18 19  20 21 22 23  24 25 26 27  28 29 30 31  32 33 34 35  36 37 38 39  40 41 42 43  44 45 46 47
    0 1 2 3  4 5 6 7  8 9 10 11  12 13 14 15  16 17 18 19  20 21 22 23  24 25 26 27  28 29 30 31  32 33 34 35  36 37 38 39  40 41 42 43  44 45 46 47

    I expected that might bring the execution time down to around four and a half seconds, maybe four if I was lucky. I underestimated. With that simple optimisation the execution time plummeted to around 2.2 seconds.

    I still have a few more optimisations to apply. I'm not sure when that will happen, since I shall probably be working on something else this next week for a bit of a break. My target though is to bring the execution time down to no more than 1.5 seconds for my reference image. I intend to release the final source code, both a cleaned up unoptimised version so people can see how the algorithm works, plus the final optimised version, under a permissive open source license when I'm done. The only thing I intend to disallow is patenting of techniques used in derivative works. I'm sure there exists an open source license with this kind of restriction. If anyone knows of a license with this restriction, please let me know - it'll save me a few minutes searching.

    Finally, the obligatory screenshot, actually four screenshots in one: The top left shows the fully decoded image, minus horizontal and vertical filtering, resized from 2048x2048 to 256x256. The top right shows the same image, but only the top left corner of it, at normal size. The bottom left shows the fully decoded image with horizontal and vertical filtering, again resized from 2048x2048 to 256x256. The bottom right shows the same image, but again only the top left corner of it, at normal size.

    Σnigma
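    The reordering above can be expressed as a visitation-order generator for an interleaved image: process the components of each pixel together, pixel after pixel in memory order, so every access is sequential. A toy illustration of that ordering (not the Jackknif filtering code itself):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Generate the array-index visitation order for one pass over an
// interleaved image (e.g. components == 3 for RGB). Visiting pixels in
// row-major order, components innermost, yields strictly sequential
// indices, which is what makes the pass cache friendly.
std::vector<std::size_t> sequentialVisitOrder(std::size_t width,
                                              std::size_t height,
                                              std::size_t components)
{
    std::vector<std::size_t> order;
    order.reserve(width * height * components);
    for (std::size_t y = 0; y < height; ++y)
        for (std::size_t x = 0; x < width; ++x)
            for (std::size_t c = 0; c < components; ++c)
                order.push_back((y * width + x) * components + c);
    return order;
}
```

    The original per-component, per-column traversal jumps through memory with large strides (3 for components, 12 for columns in the 4x4 example), which is where the cache misses came from.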
  9. Enigma

    Graphic Violence

    I've broken the back of the Jpeg2000 decoding algorithm. What really rankles is that all the publicly available implementations are at least a hundred thousand lines of code and my nearly complete implementation (admittedly with only a subset of the functionality) is only just about to hit a thousand lines. Here's what I have so far, the first code block, which equates to the red channel of the image reduced by a factor of 32 in each dimension: All that's left now is to add the loops and two additional lookup tables to allow me to decode the remaining 3071 code blocks for that image, add filtering code to recombine the code blocks into the finished image, optimise, and then clean up and resolve hard-coded values to the appropriate variables. I have a few ideas how I can optimise which, if they work, should result in a significant speed-up.

    I came across yet another interesting code issue at work this week. I had some code roughly like this:

    class Base1
    {
        public:
            Base1()
            {
                // code
            }

            void function()
            {
                // code
            }

        protected:
            int variable1_;
            int variable2_;
            bool variable_;
    };

    class Base2
    {
        protected:
            bool variable_;
    };

    class Derived1 : public Base1
    {
        public:
            Derived1()
            {
                function();
                // code
            }
    };

    Base1 and Base2 were bases of classes with similar interfaces, used for similar purposes (think static polymorphism). The code in the Derived1 constructor, after the call to function, was failing with a very odd error (invalid Windows error message). Stepping through the code we discovered that although execution correctly stepped through the Base1 constructor and Base1::function, the debugger seemed to think that Derived1 was inherited from Base2, not Base1. It wasn't just a debugger fault either. The error was occurring because access to variable_ was actually accessing variable1_, which happened to be where variable_ would have been if the base class really was Base2, not Base1. Something obviously got very confused somewhere.
    Eventually I resorted to getting a completely clean version of the entire project from source control, which fixed the issue. I still don't know what was wrong.

    Σnigma
  10. Enigma

    Happy New Year

    Not really much to talk about, what with Christmas and the new year. I've spent a little free time looking further into Jpeg2000 over the last couple of weeks. I'm now trying to get my head round entropy decoding and the MQ arithmetic decoder. I printed out 22 pages of source code to take away with me over Christmas. I think I understand enough about the arithmetic decoder, which was about three of those 22 pages. The entropy decoder, which took up the remainder of the space, appears to be a very complicated "optimised" implementation of a relatively simple algorithm. I put the word optimised in quotes because I'm pretty confident that it was a bad choice of optimisation strategy. I shall find out if I'm right in the new year.

    I thought I'd leave you with a couple of snippets from the JJ2000 source which made me laugh, when they didn't make me cry. Check the JavaDoc comments:

    /**
     * Returns the reversibility of the filter. A filter is considered
     * reversible if it is suitable for lossless coding.
     *
     * @return true since the 9x7 is reversible, provided the appropriate
     * rounding is performed.
     */
    public boolean isReversible()
    {
        return false;
    }

    Tricky stuff, that dyadic decomposition:

    for (rl=0; rl maxrl-rl) {
        hpd -= maxrl-rl;
    } else {
        hpd = 1;
    }
    // Determine max and min subband index
    minb = 1
  11. I was off work this week. Tomorrow could be interesting, since I think the code I checked in just before I left might have broken the build. It should only be a small break, but unfortunately build success is a binary state - the build is either broken or it's not. I did email them about it with steps to fix, so hopefully it won't have been much of a problem.

    I spent my week working on the Jpeg2000 loader again, working through the new source code I talked about last month. Also Christmas shopping, playing DHTML Lemmings and various other random activities, so not actually as much time on the loader as I'd been intending. The new source code is still pretty awful:

    int i,k1,k2,k3,k4,l;           // counters
    int tmp1,tmp2,tmp3,tmp4;       // temporary storage for sample values
    int mv1,mv2,mv3,mv4;           // max value for each component
    int ls1,ls2,ls3,ls4;           // level shift for each component
    int fb1,fb2,fb3,fb4;           // fractional bits for each component
    int[] data1,data2,data3,data4; // references to data buffers
    final ImageConsumer[] cons;    // image consumers cache
    int hints;                     // hints to image consumers
    int height;                    // image height
    int width;                     // image width
    int pixbuf[];                  // line buffer for pixel data
    DataBlkInt db1,db2,db3,db4;    // data-blocks to request data from src
    int tOffx, tOffy;              // Active tile offset
    boolean prog;                  // Flag for progressive data

    Coord nT = src.getNumTiles(null);
    // 38 lines which don't modify nT or src
    // getNumTiles is a non-modifying getter
    nT = src.getNumTiles(null);

    Not to mention the seemingly ever-present "what, you mean some people don't use the same size tabs as me" interchange of tabs and spaces for indentation. I really ought to find a beautifier. Still, it's easier to work through than the JasPer source. Feels a bit strange to be working with Java again though.

    Next week's instalment will either be a day early or won't get written, since I'm away for Christmas as of next Sunday.

    Σnigma
  12. Enigma

    Nooks & Crannies

    The observant amongst you will have noticed that it's not Sunday. The even more observant amongst you will have noticed that it's not Sunday and I'm posting a journal entry. The really observant amongst you will notice something odd about this. There is a reason for this. Drum roll please... I wasn't feeling too good last night and had an early night instead of writing this. So it's a day late.

    I had another interesting compiler incident at work last week. I had a piece of code performing a number of floating-point operations, including some basic trigonometry. It was all working fine until I made a slight modification. After said modification the code worked fine in debug mode but failed with a floating-point stack check error in release mode. Investigations led to much confusion, since doing anything differently seemed to result in the code working fine. Even just reading the floating-point environment at the start of the function caused the code to stop failing. I hunted through the source code and the generated assembly to see what could be wrong, and while the source code looked OK the assembly looked a bit odd. Eventually our lead programmer took a look and, after a bit of poking, said he'd seen something similar before and it was probably an optimiser bug involving inline assembly (we have our own trig function implementations since our base library is portable across PC and console(s)). If he's right then I'm beginning to lose faith in compilers. That would be two genuine bugs in less than a month!

    Outside of work I've been poking around some more obscure parts of the C++ standard. Such knowledge sometimes comes in useful, like when a co-worker was trying to suppress a lint error in a macro and wondering why he couldn't get it to work. Lint errors can be suppressed by adding comments of the form //lint -eXXX, but adding that to a macro won't do anything since comments are replaced with a single space before preprocessing.
    In the course of my poking I came across the macro examples in Section 16.3.5, Paragraphs 5 & 6, beautifully obscure examples intended to demonstrate as many macro combinations and effects as possible with the minimum quantity of code:

    Quote: C++ Standard, Section 16.3.5, Paragraph 5
    To illustrate the rules for redefinition and reexamination, the sequence

    #define x 3
    #define f(a) f(x * (a))
    #undef x
    #define x 2
    #define g f
    #define z z[0]
    #define h g(~
    #define m(a) a(w)
    #define w 0,1
    #define t(a) a

    f(y + 1) + f(f(z)) % t(t(g)(0) + t)(1);
    g(x+(3,4)-w) | h 5) & m
    (f)^m(m);

    results in

    f(2 * (y+1)) + f(2 * (f(2 * (z[0])))) % f(2 * (0)) + t(1);
    f(2 * (2+(3,4)-0,1)) | f(2 * (~ 5)) & f(2 * (0,1))^m(0,1);

    Quote: C++ Standard, Section 16.3.5, Paragraph 6
    To illustrate the rules for creating character string literals and concatenating tokens, the sequence

    #define str(s) # s
    #define xstr(s) str(s)
    #define debug(s, t) printf("x" # s "= %d, x" # t "= %s", \
                   x ## s, x ## t)
    #define INCFILE(n) vers ## n /* from previous #include example */
    #define glue(a, b) a ## b
    #define xglue(a, b) glue(a, b)
    #define HIGHLOW "hello"
    #define LOW LOW ", world"

    debug(1, 2);
    fputs(str(strncmp("abc\0d", "abc", '\4') /* this goes away */
          == 0) str(: @\n), s);
    #include xstr(INCFILE(2).h)
    glue(HIGH, LOW);
    xglue(HIGH, LOW);

    results in

    printf("x" "1" "= %d, x" "2" "= %s", x1, x2);
    fputs("strncmp(\"abc\\0d\", \"abc\", '\\4') == 0" ": @\n", s);
    #include "vers2.h" (after macro replacement, before file access)
    "hello";
    "hello" ", world"

    or, after concatenation of the character string literals,

    printf("x1= %d, x2= %s", x1, x2);
    fputs("strncmp(\"abc\\0d\", \"abc\", '\\4') == 0: @\n", s);
    #include "vers2.h" (after macro replacement, before file access)
    "hello";
    "hello, world"

    Space around the # and ## tokens in the macro definition is optional.

    Just trying to follow through the expansions to verify the result took me a good five minutes. Any errors in transcribing the above excerpts are my own.
    I'm also trying to figure out if the following code is valid C++:

    int main()
    {
        int @ = 1;
        return @;
    }

    I've not found a compiler that will accept it, but '@' is not in the basic source character set (Section 2.2, Paragraph 1) and the first phase of translation includes:

    Quote: C++ Standard, Section 2.1, Paragraph 1, excerpt
    Any source file character not in the basic source character set (2.2) is replaced by the universal-character-name that designates that character.

    And an identifier is defined as:

    Quote: C++ Standard, Section 2.10
    identifier:
        nondigit
        identifier nondigit
        identifier digit
    nondigit: one of
        universal-character-name
        _ a b c d e f g h i j k l m
        n o p q r s t u v w x y z
        A B C D E F G H I J K L M
        N O P Q R S T U V W X Y Z
    digit: one of
        0 1 2 3 4 5 6 7 8 9

    Anyone wanting to argue for or against the validity of @ as a C++ identifier, speak now or forever hold your peas. (Yes, that was a terrible pun. You should be used to them by now.)

    Finally, unless anyone has any other recommendations, I'm planning on adding Journal of EasilyConfused to my list of regularly-read GDNet journals, to replace EDI's journal. (Always three there are, a master, an apprentice, and a very talented indie team.)

    I almost forgot, this week I found myself writing two oddly named functions: consume_hash and consume_carrot. The latter was a typo indirectly caused by the former (no, not for the reasons you're thinking). Who needs drugs when your brain is capable of such nonsense unaided?

    Σnigma
  13. So, it's been three months already since I started my journal. I had hoped that this would become a fairly regular record of my work for Team UnrealSP, with occasional interludes of randomness. Instead it's turning out kind of the opposite.

It's been another slow week. I seem to have acquired a sparrow or similar small bird that likes landing outside my window at half six in the morning and waking me up by chattering away for ten minutes before flying off again. As a result, due to tiredness, I haven't managed to do any work on the mod this week.

Not much of interest going on at work that I can tell you about either. We had a couple of new programmers start last week, which means I'm now officially not the most junior programmer on the team! I've also booked all my holiday now, since our leave year runs from January to December. As a result I'll only be in the office for another seven and a half days this year. Even better, half a day will probably be spent doing "research" - a couple of the guys are getting Wiis on Friday and bringing them into the office. I just hope we don't break the company's large flat screen TV.

I found some more Visual C++ weirdness at work this past week, though not nearly as bad as the last one:

    struct Object
    {
        Object(int i) : i(i) { }
        int i;
    };

    struct Array
    {
        Object & operator[](unsigned int index)
        {
            return array[index];
        }
        Object array[1];
    };

    Array objects = {Object(2001)};

    int main()
    {
        return objects[0].i;
    }

When compiled under Visual C++ 8.0 at warning level 4 this produces the following warnings:

    spuriouswarning.cpp(18) : warning C4510: 'Array' : default constructor could not be generated
    spuriouswarning.cpp(12) : see declaration of 'Array'
    spuriouswarning.cpp(18) : warning C4610: struct 'Array' can never be instantiated - user defined constructor required

It appears somebody forgot about aggregate initialisation when writing that second warning.
In other news, I feel I need a new GDNet journal to read since EDI stopped updating regularly. Anybody have any suggestions?

Finally, since it's Advent and I feel bad about not giving you anything interesting to read about, I'm going to let you in on a little secret. I have another project slowly ongoing. It's not directly game related, but hopefully it will be of interest to some people here. It's a long term project - years, most likely, especially at the rate I'm going. That's all I'm going to tell you for now. Still, I'm told the first step is admitting you have a problem. Err, I mean secret project. Let the rampant speculation commence.

Σnigma
  14. Enigma

    The Source Code Challenge

    I managed to find a bit of free time to get back to my Jpeg2000 decoder this week. Unfortunately it's so long since I last worked on it that I've lost track of where I was. I did have lots of notes, but the source code is so impenetrable it would still take me a while to get back up to speed. I say "would". Instead I found another open source Jpeg2000 codec, this time written in Java. Hopefully between the two sources I'll be able to get a more solid grasp on the format and accelerate my progress. I keep wondering whether I ought to just buy a copy of the spec.

Because two major concurrent projects aren't enough, I also keep getting distracted by other random issues. This week I decided to look into parsing. I've written parsers before, even a very simple parser generator. I've also used Boost.Spirit a few times (and I'd love to learn how to use it better - maybe something to get distracted by some other month). This time however I decided to forget everything I knew and research from scratch. It's funny what you can learn when you do this. I didn't research in too much depth, but it didn't take me too long to come across Parsing Expression Grammars (PEGs) and Packrat parsers, neither of which I remember coming across before. I'm not completely sold on Packrat parsing - it looks great for simple parsing but not so good for more complex parsing due to the complexity of changing state - but Parsing Expression Grammars seem really useful. I quickly hacked together a simple recursive descent parser for mathematical expressions, along with a generator using an equivalent grammar. After getting the parser working I spent a bit of time working on error handling, for which some of the details mentioned in the Packrat parser paper I was reading were very useful. All-in-all it was an interesting diversion and next time I need a parser I'll have a slightly better foundation to start from.

Finally, I present you with The Source Code Challenge(TM).
If you remember (assuming anyone actually reads this drivel regularly), a few weeks ago I was freeing up hard drive space to install Medieval II: Total War. I vaguely wondered at the time how much of my hard disk was filled with source code. This week I decided to find out. The Source Code Challenge(TM) is for you to do the same. The target to beat is 27 505 files (.c, .cpp, .h & .hpp) or 537 175 479 bytes (512MB!). Admittedly a large proportion of that comes from six compilers with attendant include folders, plus boost, but it's still an awful lot of source code!

Σnigma
  15. Enigma

    Iterative Ranting

    Not a bad guess. My guess before encountering this would have been one of:

On first reading, expect the code to print the output 2.
On second reading, expect the code to fail to compile with an error that Base2 is not a valid identifier (you can only use a template name without template parameters within the definition or a specialisation of that template class).

Borland 5.8.2 and GCC 3.3.1 agree with my second reading:

    Error E2102 example.cpp 37: Cannot use template 'Base2<Type>' without specifying specialization parameters in function Derived::function()
    Error E2379 example.cpp 37: Statement missing ; in function Derived::function()

    example.cpp: In member function `void Derived::function()':
    example.cpp:37: error: use of class template `template<class Type> struct Base2' as expression
    example.cpp:37: error: syntax error before `;' token

Visual C++ 8.0, on the other hand, goes for option three. Literally. It compiles with no errors and produces the output 3. It appears that there is a compiler bug which accepts the incorrect explicit scoping and then generates an incorrect this pointer offset. As you can imagine, this was quite an interesting bug to track down in real code.

Σnigma