1. The new Windows Search thrashes your hard disk something chronic, and it's so aggressive that it completely ignores whatever the rest of the system is doing at the time. Bearing in mind I've got 3x hard disks in a SATA RAID array, I really don't expect them to be busy for several hours every time I install a new piece of software.
2. It feels unfinished. Sure, it *looks* finished, but that's probably the only bit they did finish!! Even then, it's so horribly inefficient that the OS is eating up 1GB (!!!) of RAM before I even boot anything. I could cut out gadgets and Aero Glass, but then what you're left with, essentially, is a really buggy Vista that thrashes your hard disk looking for files that, as a power user, *I already know the location of*.
3. When Microsoft announced the new driver model in Vista, I figured it would be a while before other companies transitioned to it. What I didn't realise, however, was how little those companies would invest in testing their 64-bit Vista code paths. Even the NVidia graphics drivers are lagging behind a little, not to mention that several features are just plain disabled because they don't work. One of my favourite features of the NVidia drivers is the colour calibration - disabled on Vista x64. God, it feels like the dark ages..
4. No hardware sound acceleration? You mean this terribly expensive, extremely high-performance, DSP-processing, all-singing, all-dancing shiny Creative sound card is more or less worthless under Vista?? It's all well and good telling companies to use OpenAL for hardware-accelerated sound, but there's one small problem: OpenAL is open source, which is a big no for a lot of publishers. Not to mention that if OpenAL can have hardware acceleration, why can't DirectSound under Vista? And what about all those amazing games I own that were published depending on DirectSound to provide it? To me, audio is extremely important in games, and I have an expensive sound card to match. It's just not getting better for Vista..
5. Vista nags me more than any person in my life ever has, treating me like an idiot and condescending to me with the utterly moronic UAC. Oh, what a good idea - now we can help idiots stop being so idiotic... NO. If I click on an installer, I *want* it to install, and if you ask me again whether I'm sure I want to do anything, I might just uninstall you! Oh...
6. All of the above make Vista slow, unstable, feature-poor and downright annoying, and to be honest, can you actually think of a single good reason to use it yet? So my final reason is this: there's no reason on this planet that makes me feel downgrading my OS to Vista is justified.
I'm currently running a tri-boot system with XP, SUSE Linux and Mac OS X 10.5.1. Vista was the weakest link. I'm not trying to stick it to the man; I run XP as my primary OS to access the tools I most frequently use.
Ok, I must now admit to ignorance (as I so often have to). Visual Studio 2008 has some really great new features and they're already coming in useful:-
- win32 and x64 are now separate platform build targets, so no more nasty hacks to swap between them.
- the class diagram view is a very powerful new feature which lets you see the design of your software as a UML class diagram. For C++ programmers who, like me, are conscious of software architecture, I'm sure it will prove to be an often-used feature.
- debugger enhancements make debugging multithreaded software a fair bit easier with the ever-so-handy 'Debug Location' toolbar.
I know this doesn't sound like a huge amount, but I just recently upgraded my Visual Assist to 10.4 (with vs2008 integration) and the new VA Outline feature slots in nicely with these new features in VS2008.
Ok so I got a copy of Visual Studio 2008 and to be brutally honest, initial impressions are that it looks promising but it's not quite there yet.
I think it's safe to say that Microsoft have shoehorned into this release a great deal of stuff that a lot of developers will be excited about - particularly the .NET 3.5 support and enhancements, which will make a lot of C# developers happy, I've no doubt.
Trouble is, I ain't a C# programmer, am I?
There were 2 things I was really looking forward to in this release - TR1 support and the MFC update. After reading some more, I started looking forward to the Visual Studio Shell too. But where are they, I hear you ask? The simple answer is, they aren't anywhere yet, because Microsoft haven't finished them. So why did Microsoft release a product that isn't finished? Probably the same reason they released Vista a pinch early. Make that pinch about 12 months big. At least in comparison to that, the C++ enhancements are a bit closer to completion, with the goods being coughed up early next year. Nice.
In other news, the IDE has some enhancements which I'm sure we'll all learn to love or loathe. Enhanced ANSI C++ compatibility and Vista support (my main reason for using VS2008) make using this version worthwhile, but only just.
All in all, I'm disappointed that the most exciting stuff didn't make it into the initial release. I guess I'll have to look to open source libraries to fill the void for now (Boost & wxWidgets should do nicely, and they have the added benefit of being cross-platform). Wink, nudge.
Greetings, it's been a while since I last posted here I notice, so I thought I may as well add an update seeing as several things have happened and I didn't bother to post.
In all honesty, I'm a little bit fecked off with Gamedev.net at the moment, in that my rating has gone down even lower, to < 1000 status. That's really lame, because all I've done is try to offer advice to beginners, and I don't seem to get rated up for it. I would try harder, but I fear I'm nowhere near that agreeable ;-) Oh well, to be honest I must be really narking some people off, I guess, but I don't know why.
No one will have noticed because I didn't post about it, but I've left Stainless Games to go and work for Sumo Digital in their Advanced Technology Group. It's good fun, and the people at Sumo are very nearly as great as the superb Stainless guys. I'm really going to miss working with them, but at the same time I'm really looking forward to working at Sumo Digital.
I've upgraded my home rig. It should be slightly more useful now, and it can play pretty much all modern games, which is always a good sign for a development rig. For those interested, the specs are: AMD Athlon X2 6000+ (3GHz), 2GB DDR2-667 RAM, 3x Seagate 1TB 7200RPM 16MB-cache drives in a RAID 3 array (XFX Revo64 controller), GeForce 8800GT with 512MB GDDR3 VRAM, and a 22" Samsung LCD (3000:1 contrast ratio, 2ms response time).
 Thank you to the individuals who have read this and rated me up out of sympathy ;-) My rating is actually slightly higher than it was before I even posted about it \o/
Brainfuck is one of those amusing little jokes that has been taken way too far. Yes, technically speaking, it is a programming language. But let's face it, what are you going to use it for? If you can program in Brainfuck then, technically speaking, you could probably program a windmill .. or a tree .. and possibly take over the world with your devious creations. I mean, this is a programming language with 8 commands, and that's it. No add, no mul, no mov. Just inc, dec, loop, jnz. To write anything in that you're either a genius or a complete raving maniac!
What kind of sick mutha would inflict such a sick and cruel perversion upon the world? ... Me!
So yesterday I went one step further and wrote a Brainfuck interpreter in C++ (with source) - grab it while it's hot. I'll probably pick this up and drop it as I feel like it, but I've already considered some initial thoughts for improvement. I hope to turn it into an optimising native compiler eventually (hence the parser module). Whether I ever actually bother to get that far is another question entirely :-)
What does it do? .. Well, it runs Brainfuck programs, it catches errors as it runs (output to stderr), and it's jolly fast too! It's written entirely in C++, and if you want to support variations on the 'standard' (bwahaha) then there are handy defines and constants you can change.
The code may look slightly elaborate for what it is, but if you wanted to write a Brainfuck variation (like Brainloller or Braincopter) or a native compiler then I'm sure this would be a nice starting point :-)
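For anyone curious what the core of such a thing looks like, here's a minimal sketch of a Brainfuck interpreter loop - this is not my actual source (which adds error reporting and the tweakable defines mentioned above), just the bare essentials, and `run_bf` is a name I've made up for illustration:

```cpp
#include <string>
#include <vector>

// Minimal Brainfuck interpreter: executes 'code' against a fresh 30,000-cell
// tape, reading from 'input' and returning everything the program prints.
std::string run_bf(const std::string& code, const std::string& input = "")
{
    std::vector<unsigned char> tape(30000, 0);
    std::string output;
    std::size_t dp = 0;  // data pointer
    std::size_t in = 0;  // input cursor
    for (std::size_t ip = 0; ip < code.size(); ++ip)
    {
        switch (code[ip])
        {
        case '>': ++dp; break;
        case '<': --dp; break;
        case '+': ++tape[dp]; break;
        case '-': --tape[dp]; break;
        case '.': output += static_cast<char>(tape[dp]); break;
        case ',': tape[dp] = (in < input.size()) ? input[in++] : 0; break;
        case '[': // cell is zero: skip forward past the matching ']'
            if (tape[dp] == 0)
                for (int depth = 1; depth != 0; )
                {
                    ++ip;
                    if (code[ip] == '[') ++depth;
                    if (code[ip] == ']') --depth;
                }
            break;
        case ']': // cell is non-zero: jump back to the matching '['
            if (tape[dp] != 0)
                for (int depth = 1; depth != 0; )
                {
                    --ip;
                    if (code[ip] == ']') ++depth;
                    if (code[ip] == '[') --depth;
                }
            break;
        }
    }
    return output;
}
```

For example, `run_bf("++++++[>++++++++<-]>+.")` computes 6*8+1 = 49 and prints the character '1'. A native compiler would replace the big switch with code generation, which is where the parser module comes in.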
Ok so I haven't really been keeping up with my journal as much as I would like. I won't bother making excuses though, and I'll just cut to the meat.
Volumetric clouds. This is one of those topics I see coming up again and again. The first technique I ever saw was a 'realtime' cloud animation algorithm that ran at about 1fps on my now-ancient Pentium II 200MHz. At the time that was a lot of horsepower :-) Right now there's an IOTD up on the front page of some guy's cloud effect. After following his link for an explanation of his technique, it turns out he's being all hush-hush about it. Why though? You got your own knowledge for free. Ah well, I guess we're all in the rat race.
Anyway. Mega particles. I first found this technique in ShaderX5 and followed up with a Google search. If you watch the video trailers for Windlight in Second Life, you may notice a pattern in the cloud animation that looks somewhat uniform .. Yes, it looks suspiciously like mega particles to me. When you see mega particles in action (as you can if you buy ShaderX5, because there's a demo with source on the CD), you'll notice that the effect is pretty much equivalent. Throw in some physics to better simulate the sunlight - some ray casts and ray-sphere collisions later - and all of a sudden you could have a really nice atmospheric simulation solution which will even run on some pretty old hardware.
Wow. To think Second Life actually bought the whole company to get a hold of that tech.. The mind boggles :-)
Anyway, so there was that really cool thread in the graphics theory forum here on Gamedev.net, all about Carmack's virtualised textures technology. As a spin-off from thinking about implementing this technology, I found this little gem whilst researching the possibilities for streaming textures in real time.
Nice eh? There's a huge chunk of research in that paper, and it's written by one of the guys at id Software. From this paper, I think it's fairly safe to say you can speculate with more insight about how the new virtualised textures technology works in the tech5 engine. For a start, they're using a custom image format, which makes sense given the research in the article: one of the prerequisites of a virtualised texture system is that you must be able to load chunks of an image as they are needed - not the whole image at once - which means a wavelet-based compression technique works best. JPEG2000 would look at first glance to be a good candidate for this, except that its compression rates aren't the best and decompression in open libraries is unoptimised for SIMD / multi-core. It also looks as though id may perform on-the-fly DXT compression, though possibly only when video memory budgets are low, as even SIMD-optimised DXT compression routines require a large amount of processing power. It looks as though id's technology is aimed squarely at the multi-core / multi-CPU trends of the future, then.
So where this leaves me is that I don't really have the time or resources to implement all of this technology on my own, so I'm most likely going to go with JPEG2000, which can be decompressed with GPU acceleration. This is because the DWT can be computed on the GPU, and the nice guys at the link given have integrated their solution with JasPer and provide full source code. Nice guys!
Another question that was pretty much answered: it looks like id also generate mipmaps on the fly, using a simple box filter. This reduces the amount of data that needs to be streamed off disk and decoded.
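A box filter really is as simple as mipmap generation gets - each destination texel is just the average of the 2x2 source texels above it. A sketch for a single greyscale channel (assuming even dimensions; `box_downsample` is my own name for illustration, not anything from the paper):

```cpp
#include <vector>

// Produce the next mip level of a w x h greyscale image by averaging each
// 2x2 block of source texels into one destination texel (simple box filter).
// Assumes w and h are even.
std::vector<unsigned char> box_downsample(const std::vector<unsigned char>& src,
                                          int w, int h)
{
    const int dw = w / 2, dh = h / 2;
    std::vector<unsigned char> dst(dw * dh);
    for (int y = 0; y < dh; ++y)
        for (int x = 0; x < dw; ++x)
        {
            // sum the 2x2 block of source texels
            int sum = src[(2 * y)     * w + 2 * x]
                    + src[(2 * y)     * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x]
                    + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * dw + x] = static_cast<unsigned char>(sum / 4);
        }
    return dst;
}
```

Run it repeatedly and you get the whole mip chain, which is why it's cheap enough to do at streaming time.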
I've been toying with the idea of doing actual hardware decoding on the fly, with a fragment program that decompresses the image data. To be honest though, this doesn't really make sense unless you store the mipmaps in the compressed format too. Also, the only reason to keep compressed textures in video memory is to save texture memory - which DXT is better suited for. And the problem with that is doing it quickly enough - even on crap hardware.
I got working on my resource management system today. I've made some pretty good progress too but that's not really interesting is it? To be honest that's just donkey work.
Something slightly more interesting I got working today was redirecting the C++ standard streams cout and cerr to my console window. The console window is rendered in-game using a heavily customised CEGUI. Not too shabby eh? It's fast and simple - I think the whole thing is only in the region of 12-14 lines of code.
It's funny, actually, how little the majority of programmers seem to know about the STL. I don't claim to know everything about it, but I'm confident enough with templates and STL syntax to step through the library a bit and figure out what's going on. I have some rather beefy reference books at my disposal too. But I think a lot of programmers just treat the STL as a 'black box' that always works. I was pleasantly surprised today to work out how to do this, and by how simple and clean the solution was.
The moral of today's blog entry? Learn your STL, learn it well!
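The usual way to pull this trick off - and I'd guess mine is a variation on it - is to swap cout's stream buffer for your own via `rdbuf()`. Here's a minimal sketch (the `ConsoleBuf` name is mine, and the sink is just a string here rather than a CEGUI window):

```cpp
#include <iostream>
#include <sstream>
#include <string>

// A streambuf that forwards buffered text to a 'console' sink whenever the
// stream is flushed. Installing it on std::cout redirects all cout output.
class ConsoleBuf : public std::stringbuf
{
public:
    explicit ConsoleBuf(std::string& sink) : m_sink(sink) {}
protected:
    int sync() override          // called on std::flush / std::endl
    {
        m_sink += str();         // hand the buffered text to the console
        str("");                 // clear for the next batch
        return 0;
    }
private:
    std::string& m_sink;
};
```

Usage is just `std::streambuf* old = std::cout.rdbuf(&myBuf);` ... and restoring the old buffer before the ConsoleBuf dies, otherwise cout is left pointing at a dead object. Do the same with `std::cerr` and you've covered both streams.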
Well ok, work isn't really the root of all evil. To be honest, work is brilliant at the moment, but it's been pretty tough going for the past few weeks. The workload was ramped up for a milestone - which, against all odds, we managed to meet. The producer was very happy with our work, and I think it's pretty safe to say that the entire team really busted a gut over it, and it's really paid off. The downside has been working over the weekend, some very late nights spent programming my ass off, and takeaways. So really no different to normal then! The only difference is that the only work I've done on my game engine project has been in the early hours of the morning.
So I bought a laptop second-hand - a Philips Freevents X52. It features the Intel GMA 950 DX9 (SM2.0) graphics chip (software vertex processing, 4 pixel pipelines), an Intel Core Duo T2300 (1.66GHz), an 80GB SATA hard disk and 1GB of DDR2 memory. That's almost better processing power than my desktop computer! I got the laptop at a real bargain price off eBay, as the seller really didn't make any effort at all - a 2-line description and a title that didn't mention the make, product line or model number. It was obvious when I received it that it had been given a good thrashing, but I hadn't had a chance to really clean it up. The screen was smudged, the cover was splattered with sticky label glue, and several letters were missing from the bezel. Also - more annoyingly - the keyboard was completely fudged, with several keys (like Ctrl, F10 and F11) just mysteriously not working! To cap all that off, there's no documentation anywhere for the laptop - like, say, a maintenance manual. Anyway, I decided to gut the thing today and figure out why it was broken, and after some disassembly I found out how to remove the keyboard. It turned out that someone had tried doing a bit of servicing on it in the past. So I managed to fix the keyboard up, and it's easily good enough now for me to work with, which is the main thing.
The laptop represents an important investment for me. The GMA 950, whilst a decent graphics chip in many respects, is absolutely rubbish compared to the GeForce 7 in my desktop. In a nutshell, if I can make all of my graphics code compatible with this chip then - theoretically - it should work on pretty much anything! Also, my desktop doesn't have a dual-core processor, and since I'm implementing a multithreaded game engine, this setup gives me a chance to tap into the extra processing power recent CPUs offer. Pretty much all the performance-related articles I've been reading recently say the same things: batch your draw calls, and the majority of DX9 games are CPU bound.
CPU bound? That's bad.
I get the general feeling that a lot of engine programmers invest heavily in GPU optimisation, but to be honest, if your code is CPU bound then you need to think about some of the higher-level algorithms you're using. I've been monitoring the CPU performance of the IS3 engine quite carefully, and so far it's all fine - but I haven't even started on some of the more processor-intensive algorithms.
Anyway, coming back to IS3 again. The 2D drawing routines are all working great at the moment, but what I really need to think about now is how to lower the CPU requirements the GUI imposes, and I also need to rework the texture management system completely. My next big addition to the engine will be virtualised textures, as in the idTech5 engine.
I'm not usually one to moan, but this past week has just been really tough going. I don't like to talk much about work-related things here because I wouldn't want to get us into trouble. To be honest, I'm sure my peers also have a tough time, but they don't moan, so I should probably just shut up and get on with the work!
So, moving swiftly on to another - more interesting - topic. Well, it's not much more interesting, but it's still an essential step forwards. I've worked with a completely undocumented game engine before, and it's really not too tough when you've got the source code there in front of you. But sometimes, even when it's right there, you end up making annoying mistakes just because you made an assumption, and you only end up tracing into the source when something breaks.
Because of this, I've configured automatic documentation generation for IS3 using the very agreeable doxygen and a few simple macros. I've also configured my tools to generate CHM-format docs, just because I personally find CHM very quick and easy to reference - plus I can review it on my phone, which makes it particularly useful because I can pick up or leave off whenever I have a spare few minutes or I'm sitting idle. Printable docs would also be useful, I feel, for those who are so inclined to waste all that paper :-) I'm also seriously considering Help 2.0-compatible docs for Visual Studio integration. You just can't generate too many help formats! Different developers work in different ways, and it's generally a good idea to try and keep them all happy where possible.
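For anyone wanting to do the same, the CHM output is driven by a handful of settings in the Doxyfile. The option names below are real doxygen settings; the file names and path are just placeholders for illustration:

```
# Hypothetical Doxyfile excerpt for CHM generation
GENERATE_HTML      = YES
GENERATE_HTMLHELP  = YES    # emit the .hhp project for the HTML Help compiler
CHM_FILE           = IS3.chm
HHC_LOCATION       = "C:/Program Files/HTML Help Workshop/hhc.exe"
EXTRACT_ALL        = YES    # document everything, even uncommented members
```

With `HHC_LOCATION` set, doxygen invokes the compiler itself and you get the .chm out in one step.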
Not much else in the way of news. This week I'm going to start work on implementing virtualised textures - not a million miles away from some of the more inspiring technology in the idTech5 engine.
Due to work pressure I've been working all day and all night, so all I've really had a chance to do is a little research. I read a nice article on how Valve have implemented multithreading in the Source engine via a hybrid approach of both coarse and fine threading: separating each major subsystem so it can run on its own thread, then further threading each subsystem so it can take advantage of more cores if available. I wonder whether OpenMP has had a role to play in any of it. Either way, multithreading certainly paves the way for some interesting tricks.
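To make the distinction concrete, the fine-grained half is the easy bit to sketch: a data-parallel loop inside a subsystem that OpenMP spreads across however many cores are free (the coarse-grained half - one thread per subsystem - sits a level above this). `update_particles` is a made-up name for illustration, not anything from Valve's article:

```cpp
#include <vector>

// Fine-grained parallelism inside one subsystem: integrate particle
// positions in parallel. With OpenMP enabled (/openmp in MSVC), the
// iterations are split across cores; without it, the pragma is simply
// ignored and the loop runs serially - same result either way.
void update_particles(std::vector<float>& positions,
                      const std::vector<float>& velocities, float dt)
{
    #pragma omp parallel for
    for (int i = 0; i < static_cast<int>(positions.size()); ++i)
        positions[i] += velocities[i] * dt;
}
```

The nice property is that each iteration is independent, so there's no locking - which is exactly the kind of work that scales as core counts go up.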
Speaking of which, that brings me neatly to my next topic of interest - id's tech5 engine. There has been an interesting discussion on the message boards here regarding virtualised textures.
I'm certain texture atlases come into the equation, because there's no other way you can draw a whole scene in -3- draw calls. These can't be conventional texture atlases though; I think they're generated on the fly. The problem with this is two-fold: one part is a practical consideration, the other is speed. Practically, a renderable object in a traditional renderer needs to know which texture it is using, and the texture UVs need to match that texture - if the atlas is built dynamically, your process may move a texture (and its UVs) to another map. In terms of performance, you'd probably construct the new atlas off the GPU in a thread, then do the upload and swap-over across just a few frames.
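The practical half of that problem boils down to UV fixup: every UV authored against the original [0,1] texture has to be remapped into whatever sub-rectangle the atlas assigned this frame, and remapped again if the texture moves. A trivial sketch (`AtlasRegion` and `remap_uv` are names I've made up for illustration):

```cpp
// A texture's placement within the atlas, in normalised atlas coordinates.
struct AtlasRegion
{
    float u0, v0; // top-left of the sub-rectangle
    float u1, v1; // bottom-right of the sub-rectangle
};

// Remap a [0,1] uv authored against the standalone texture into the
// atlas sub-rectangle it currently occupies.
void remap_uv(const AtlasRegion& r, float& u, float& v)
{
    u = r.u0 + u * (r.u1 - r.u0);
    v = r.v0 + v * (r.v1 - r.v0);
}
```

The catch, of course, is that when the atlas is rebuilt you must re-run this over every affected vertex (or do it in the vertex shader from a per-object region constant), which is exactly why the swap-over has to be carefully scheduled.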
I've heard people saying that the 'mega textures' in tech5 are just geometry clipmaps. Again, I think it may be a variation on a standard technique: the base technique is geometry clipmapping, but maybe with another threaded twist, like dynamically streaming the clipmap data.
Whatever the result turns out to be, I can imagine how much work it would take to put together an engine and supporting toolchain of that magnitude, and the previews are nothing short of brilliant! Really inspiring work from the id guys, and I can't wait to try out the games!!
This evening I've spent most of my time developing the XML importers for 2D data and doing many of the routine maintenance tasks that I normally ignore for one reason or another.
So I've sorted out all of the compiler warnings, updated the project dependencies, sorted out the release build configuration and resolved a few other minor annoyances. It might sound like a lot but really each of these things has been quite trivial.
The time I spent on my XML importers has highlighted a significant issue with resource management. This whole block of code has really been neglected while I ran away with graphics development, and now is really a good time to get a hold on it.
Resource management is quite an interesting topic in itself. Do you use handles or smart pointers? If you restart a particular subsystem, do you need to destroy and recreate the associated resources? Some resources may be dependent on others also being present. Do you want to support compressed archives? Add multithreading and cross-platform filesystems into the mix and you begin to see my pain, as I can answer 'yes please' to most of the above.
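To illustrate just one answer to the handles-vs-smart-pointers question, here's a sketch of a cache that hands out shared pointers but only holds weak ones itself, so a resource dies when its last user releases it and is transparently "reloaded" on the next request. This isn't IS3's design - `Texture` and `TextureCache` are placeholder names, and the load step is stubbed out:

```cpp
#include <map>
#include <memory>
#include <string>

// Stand-in for a real resource; a real version would own GPU data.
struct Texture
{
    std::string name;
};

// Cache that shares live resources but doesn't keep dead ones alive:
// it stores weak_ptrs, so ownership belongs entirely to the callers.
class TextureCache
{
public:
    std::shared_ptr<Texture> get(const std::string& name)
    {
        if (auto existing = m_cache[name].lock())
            return existing;                  // still alive - share it

        // not loaded (or expired) - 'load' it and remember it weakly
        auto fresh = std::make_shared<Texture>(Texture{name});
        m_cache[name] = fresh;
        return fresh;
    }

private:
    std::map<std::string, std::weak_ptr<Texture>> m_cache;
};
```

The subsystem-restart question then answers itself: dropping every external shared_ptr destroys the resources, and the next `get` recreates them. Multithreading would still need a mutex around the map, and handles would be the better choice if you ever need to relocate resources behind callers' backs.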
To resolve the filesystem issue, I'm going to write a virtual filesystem, so I can think of my data folder as being pretty much the root of all evil. I've got a ton of research and some new ideas to throw into the mix - too much to document in this blog entry. I've been writing these updates on my PDA phone, so I'm deliberately not going into too much detail, otherwise my stylus might melt!
So I'm bringing the 2D engine graphics elements together - specifically animated sprites and vectors. I've finalised my file formats for this data, and it should prove quite useful, as many games make extensive use of 2D visuals. I'm trying to keep the playing field open for special effects, as it's the little bits of visual polish that can turn a good game into an awesome game.
My recent efforts have all been centred around making an efficient graphics and game engine, codenamed IS3. IS3 is a cross-platform engine with a pretty long feature list in the pipeline, and it also features a supporting toolchain for artists / designers.
IS3 is also a C++ engine, and currently supports DX9 & GL2. My recently completed topics of research have included resolving cross-API issues, 2D-on-3D, optimised geometry batching & submission, and vertex cache optimisation, and I have implemented multi-streaming for further optimisation later on. Nearly all of the usual software tools necessary for accelerated development have been completed, including support for NVPerfHUD 5, custom assert macros, memory management for tracking leaks, etc. IS3 so far has one proprietary binary file format for storing 3D meshes, which supports skinned animation and was originally inspired by the unofficial MD4 3D file format.
I've tried to keep the supporting libraries in the engine down to the essentials, but the ones most worthy of note are Lua for scripting, CEGUI for GUI, Expat for XML parsing and zlib for compression.
Ok so now we're more up to speed with where I'm at I guess I can talk about slightly more interesting stuff and finish with my daily development blog entry.
I've most recently finished font output, which turned out to be relatively annoying to get perfect. At the same time, I was authoring the CEGUI renderer module for compatibility with my renderer; it took about a fortnight to make it perfect, but now it's done I have excellent performance and very high quality 2D output. The project came together with the implementation of the engine console, which is essentially a Lua interpreter. CEGUI comes with Lua bindings, and I've redirected Lua output to the console, so that should come in useful for debugging amongst other things.
So today I've started work on a few things. I wrote some standard templates for IS3 source files to try and streamline my workflow a little, and I'm going to document the parts of the engine that aren't likely to change in the near future. I've also started work on 2D sprite and vector file formats, which are XML files for ease of use.
I intend to spend a fair amount of time working on the 2d graphics components as a lot of great games can be made in 2d and I don't want to let an easy part of the engine be crappy just because I want to jump into gpu shader development.
I'll need to write the Expat parser modules and code up the sprite and vector engines, then I can think about adding Lua bindings for scripting. Should be fun!
Ok so seeing as I'm just starting this blog now it's probably a good idea to talk a little about who I am, what I do, and why I'm starting this blog!
Professionally I'm currently a junior programmer for Stainless Games Ltd. (Of Carmageddon fame). I work primarily on PC, X360 and PSP platforms with VS2005 though I have experience of working on several more. By day I'm an Xbox Live Arcade specialist, and by night I'm writing a game engine which is intended for next-gen game development.
That brings me smartly to the main topic of this blog - writing a game engine. I specialise in graphics technology, so hopefully this blog will make for a relatively interesting read for those who are interested. I won't necessarily go into a huge amount of detail about everything, but if a topic deserves a mention, I'll post about it here.