
About this blog

Z80 and C#-related shenanigans - now with added electronics.

Entries in this blog


Site update

I intended to make at least one post in January, but it looks like I missed the opportunity by a day. I hope you all had pleasant breaks (and here I refer to breaks from my inane ramblings, not any religious festivals).

The lack of updates stems from having precious little to talk about; I've been busily trying to redevelop my website into a single, modular, extensible code base rather than a large number of insular sub-webs.

Here's a whistle-stop tour of the various iterations of my website; whilst the underlying technology has improved significantly in quality, it's evident that I couldn't design good-looking websites for toffee.

Initially I just used the site to provide links to and information about the various calculator programs I was working on - hence its name, calc83plus. This was hosted for free, and so was just static HTML with each page having different content copied and pasted in.

I decided that this was a bit inefficient, so in the second version I used JavaScript to draw a consistent header on each page. The content was much the same; I just changed the logo and colour scheme.

I eventually ended up with a real web host, so I changed the name and redesigned the site, adding my VB6 projects to it. I had evidently decided against the JavaScript trick and was now using some ghastly frameset solution.

Hang on, this web host supports ASP scripting, and I know VB, so... why not make it server-side script driven? So I did. This version of the site allowed me to log in and post news articles directly with comments, and used server-side includes to insert the body of pages into a template.

I switched web hosts, and had a static placeholder page up for a fairly lengthy period until I decided to use my new PHP knowledge to build something that I could use to provide an image gallery for some of the VB and C projects I'd been working on. The new host also supported ASP, but I'd decided that I never had anything interesting to say anyway, so a news and comments system wouldn't be of much use. An inextensible system meant that the various new projects I developed ended up scattered all over the place in unrelated sub-webs, and news posts resumed with my GDNet+ account.

Which leads us neatly to this; a design that is quite blatantly "inspired" by GameDev.net's (and provides news updates by screen-scraping GDNet) but which can be used to host information about pictures, projects and a gallery in one centralised location and can be easily extended with new modules.





Scripting with .NET is unbelievably easy. [grin]

I wanted to add scripting support to Brass, and have added it using .NET's excellent powers of reflection and its System.CodeDom.Compiler namespace.

The first thing I need to do is find out which language the source script is written in; I use the file extension to check for this.

string ScriptFile = ...; // Name of main script file to compile.

CodeDomProvider Provider = null;

// Get the extension (eg "cs"):
string Extension = Path.GetExtension(ScriptFile).ToLowerInvariant();
if (Extension.Length > 0 && Extension[0] == '.') Extension = Extension.Substring(1);

// Hunt through all available compilers and dig out one with a matching extension.
foreach (CompilerInfo Info in CodeDomProvider.GetAllCompilerInfo()) {
	if (Info.IsCodeDomProviderTypeValid) {
		CodeDomProvider TestProvider = Info.CreateProvider();
		if (TestProvider.FileExtension.ToLowerInvariant() == Extension) {
			Provider = TestProvider;
			break;
		}
	}
}

if (Provider == null) throw new CompilerException(source, "Script language not supported.");

Now that we have a compiler, we just set some settings, add some references, then compile the source files:

string[] ScriptFiles = ...; // Array of source file name(s) to compile.

// Compiler settings:
CompilerParameters Parameters = new CompilerParameters();
Parameters.GenerateExecutable = false; // Class lib, not .exe
Parameters.GenerateInMemory = true;

// Add references (for example):
Parameters.ReferencedAssemblies.Add("System.dll");

// Compile!
CompilerResults Results = Provider.CompileAssemblyFromFile(Parameters, ScriptFiles);

And that's it! In my case I now pass any errors back up to the assembler, and exit if there were any errors:

// Errors?
foreach (CompilerError Error in Results.Errors) {
	Compiler.NotificationEventArgs Notification = new Compiler.NotificationEventArgs(compiler, Error.ErrorText, Error.FileName, Error.Line);
	if (Error.IsWarning) {
		// ...pass the warning back up to the assembler...
	} else {
		// ...pass the error back up to the assembler...
	}
}

// Do nothing if there were errors.
if (Results.Errors.HasErrors) return;

Now the task is passed on to reflection; I go through the compiled assembly, hunt down methods and wrap them up for use as native Brass functions and/or directives.

// Grab the public classes from the script.
foreach (Type T in Results.CompiledAssembly.GetExportedTypes()) {
	// ...hunt down suitable methods and wrap them...
}

I've used this technique in the release of my PAL demo; a C# script file is used to encode an image to the 18x304 resolution and format required by the routine.




Brass 3 and software PAL

My work with the VDP in the Sega Master System made me more aware of how video signals are generated, so I thought it would be an interesting exercise to try and generate them in software. This also gives me a chance to test Brass 3 by actively developing experimental programs.

I'm using a simple 2-bit DAC based on a voltage divider, using the values listed here. This way I can generate 0V (sync), ~0.3V (black), ~0.6V (grey) and 1V (white).

My first test was to output a horizontal sync pulse followed by black, grey, then white, counting clock cycles (based on a 6MHz CPU clock). That's 6 clock cycles per us.

The fastest way to output data to hardware ports on the Z80 is the outi instruction, which loads a value from the address pointed to by hl, increments hl, decrements b and outputs the value to port c. This takes a rather whopping 16 clock cycles (directly outputting to an immediate port address takes 11 clock cycles, but the overhead comes from loading an immediate value into a which takes a further 7). The time spent creating the picture in PAL is 52us, which is 312 clock cycles. That's 19.5 outi instructions, and by the time you've factored in the loop overhead that gives you a safe 18 pixel horizontal resolution - which is pretty terrible.

Even with this technique, in the best case scenario you output once every 16 clock cycles which gives you a maximum time resolution of 2.67us. This is indeed a problem as vertical sync is achieved by transmitting two different types of sync pulse, made of either a 2us sync followed by 30us black (short) or 30us sync followed by 2us black (long). In my case I plumped for the easiest to time 4us/28us and hoped it would work.
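For the curious, the cycle budget above works out like this (a quick Python sketch of the arithmetic; the variable names are mine and nothing here is calculator-specific):

```python
# Sanity-check the outi cycle budget on a 6MHz Z80 driving a 64us PAL scanline.
CPU_HZ = 6_000_000
CYCLES_PER_US = CPU_HZ / 1_000_000    # 6 clock cycles per microsecond

# Active picture portion of the scanline:
picture_us = 52
picture_cycles = picture_us * CYCLES_PER_US    # 312 cycles

# outi takes 16 clock cycles, so the theoretical pixel budget is:
outi_budget = picture_cycles / 16              # 19.5 outi instructions

# Best case, outputting once every 16 cycles gives a time resolution of:
resolution_us = 16 / CYCLES_PER_US             # ~2.67us

print(picture_cycles, outi_budget, round(resolution_us, 2))
```

Once the loop overhead is subtracted from those 19.5 outi slots, the safe 18-pixel resolution quoted above drops out.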

Anyhow, I made a small three-colour image for testing.

Of course, as I need to output each scanline anyway I end up with a resolution of 304 lines, which gives me rather irregular pixels, so I just stretch the above image up to 20x304. Eagle-eyed readers will have noticed that the horizontal resolution is only 18 pixels, but somewhere in the development process I forgot how to count and so made the image two pixels too wide.

As you can see, it shows (the entire image is shunted to the right). TVs crop the first and last few scanlines (they aren't wasted, though, and can be used for Teletext) so that's why that's missing. [smile] A widescreen monitor doesn't help the already heavily distorted pixels either, but it does (somewhat surprisingly) work.

With a TI-83+ SE (or higher) you have access to a much faster CPU (15MHz) and more accurate timing (crystal timers rather than an RC circuit that changes speed based on battery levels) as well as better interrupt control, so on an SE calculator you could get at least double the horizontal resolution and output correct vertical sync patterns. You also have better control over the timer interrupts, so you could probably drive hsync via a fixed interrupt, leaving you space to insert a game (the only code I had space for checks to see if the On key is held so you can quit the program - more clock cycle counting). I only have the old 6MHz calculator, though, so I'm pleased enough that it works at all, even if the results are useless!




Brass Beta 1

Brass 3 Website.

I've released a beta version of the new assembler. It comes with the compiler, a GUI builder (see the above screenshot) and the help viewer; it also comes bundled with a number of plugins.

I've also knocked together a quick demo project that can be built directly from Explorer once Brass is installed.

There are a number of missing features (such as a project editor, project templates and multiple build configurations) and no doubt broken, incomplete or untested components - but at least it's out in the wild now, which gives me an incentive to fix it!




Emulating TI-OS 1.15 and a greyscale LCD

OS 1.15 appears to boot, and if I run an OS in Pindur TI, archive the files (copy them to Flash ROM) then use that ROM dump in my emulator the files are still there, where they can be copied to RAM.

Trying to re-archive them results in a fairly unhelpful message, as I haven't implemented any Flash ROM emulation (nor can I find any information on it)...

Applications (which are only ever stored and executed on Flash ROM) work well, though.

I've also updated the LCD emulation a little to simulate the LCD delay; greyscale programs (that flicker pixels on and off) work pretty well now.




Brass 3 and TI-83+ Emulation

Brass 3 development continues; the latest documentation (automatically generated from plugins marked with attributes via reflection) is here. The compiler is becoming increasingly powerful - labels can now directly store string values, resulting in things like an eval() function for powerful macros (see also clearpage for an example where a single function is passed two strings of assembly source and uses the smaller one when compiled).

Thanks to a series of hints posted by CoBB and Jim e I rewrote my TI-83+ emulator (using the SMS emulator's Z80 library) and it now boots and runs pretty well. The Flash ROM archive isn't implemented, so I'm stuck with OS 1.12 for the moment (later versions I've dumped lock up at "Defragmenting..."). I also haven't implemented software linking, and so to transfer files I need to plug in my real calculator to the parallel port and send files manually.




Brass 3

Quake isn't dead, but I've shifted my concentration to trying to get Brass 3 (the assembler project) out.

Brass 2 didn't really work, but I've taken a lot of its ideas - namely the plugin system - and kept some of the simplicity from Brass 1. The result works, and is easy to extend and maintain. Last night I got it to compile all of the programs I used for testing Brass 1 against TASM successfully.

I'm taking advantage of .NET's excellent reflection capabilities; one such example is marking plugin functions with attributes for documentation purposes, meaning that all you need to get Brass documentation is to drop your plugin collection assemblies (DLLs) into the Brass directory then open the help viewer app.

The source code examples are embedded as text, but compiled by the viewer (and thus syntax-highlighted) so you can click on directives or functions and it'll jump to their definitions automatically.

Native function support and a much-improved parser means that complex control structures can be built up, like:

file = fopen("somefile.txt")

#while !feof(file)
	.db fread(file)
#loop

The compiler invokes the plugins, and the plugins talk back to the compiler ("remember your current position", "OK, we need to loop, so go back to this position", "this loop fails, so switch yourself off until you hit the #loop directive again").

The compiler natively works with project files (rather than some horrible command-line syntax) which specify which plugins to load, which include directories to search and so on and so forth. There are a number of different plugin classes:
IAssembler - CPU-specific assembler.
IDirective - assembler directive.
IFunction - functions like abs() or fopen().
IOutputWriter - writes the object file to disk (eg raw, Intel HEX, TI-83+ .8xp).
IOutputModifier - modifies each output byte (eg "unsquishing" bytes to two ASCII characters for the TI-83).
IStringEncoder - handles the conversion of strings to byte[] arrays (ASCII, UTF-8, arcane mappings for strange OS).
Unlike Brass 2, though, I actually have working output from this, so hopefully it'll get released!

As a bonus, to compare outputs between this and TASM (to check it was assembling properly) I hacked together a binary diff tool from the algorithm on Wikipedia (with the recursion removed) - it's not great, but it's been useful to me. [smile]

using System;
using System.Collections.Generic;
using System.Text;
using System.IO;

namespace Differ {
	class Program {
		static void Main(string[] args) {

			// Prompt syntax:
			if (args.Length != 2) {
				Console.WriteLine("Usage: Differ <file1> <file2>");
				return;
			}

			// Load both files into byte arrays (sloppy, but whatever).
			byte[][] Data = new byte[2][];

			for (int i = 0; i < 2; ++i) {
				try {
					byte[] Source = File.ReadAllBytes(args[i]);
					Data[i] = new byte[Source.Length + 1];
					Array.Copy(Source, 0, Data[i], 1, Source.Length);
				} catch (Exception ex) {
					Console.WriteLine("File load error: " + args[i] + " (" + ex.Message + ")");
					return;
				}
			}

			// Quick-and-dirty equality test:
			if (Data[0].Length == Data[1].Length) {
				bool IsIdentical = true;
				for (int i = 0; i < Data[0].Length; ++i) {
					if (Data[0][i] != Data[1][i]) {
						IsIdentical = false;
						break;
					}
				}
				if (IsIdentical) {
					Console.WriteLine("Files are identical.");
					return;
				}
			}

			if (Data[0].Length != Data[1].Length) {
				Console.WriteLine("Files are different sizes.");
			}

			// Analysis (longest common subsequence table):
			int[,] C = new int[Data[0].Length, Data[1].Length];
			for (int i = 1; i < Data[0].Length; ++i) {
				if ((i - 1) % 1000 == 0) Console.Write("\rAnalysing: {0:P}...", (float)i / (float)Data[0].Length);
				for (int j = 1; j < Data[1].Length; ++j) {
					if (Data[0][i] == Data[1][j]) {
						C[i, j] = C[i - 1, j - 1] + 1;
					} else {
						C[i, j] = Math.Max(C[i, j - 1], C[i - 1, j]);
					}
				}
			}
			Console.WriteLine("\rResults:".PadRight(Console.BufferWidth - 1));

			List<DiffData> CollectedDiffData = new List<DiffData>(Math.Max(Data[0].Length, Data[1].Length));

			// Walk the table backwards, collecting the diff:
			for (int i = Data[0].Length - 1, j = Data[1].Length - 1; ; ) {
				if (i > 0 && j > 0 && Data[0][i] == Data[1][j]) {
					CollectedDiffData.Add(new DiffData(DiffData.DiffType.NoChange, Data[0][i], i, j));
					--i; --j;
				} else {
					if (j > 0 && (i == 0 || C[i, j - 1] >= C[i - 1, j])) {
						CollectedDiffData.Add(new DiffData(DiffData.DiffType.Addition, Data[1][j], i, j));
						--j;
					} else if (i > 0 && (j == 0 || C[i, j - 1] < C[i - 1, j])) {
						CollectedDiffData.Add(new DiffData(DiffData.DiffType.Removal, Data[0][i], i, j));
						--i;
					} else {
						break; // Done!
					}
				}
			}

			// The walk was backwards, so reverse before printing:
			CollectedDiffData.Reverse();

			DiffData.DiffType LastType = (DiffData.DiffType)(-1);

			int PrintedData = 0;
			foreach (DiffData D in CollectedDiffData) {
				if (LastType != D.Type) {
					Console.WriteLine();
					Console.Write("{0:X4}:{1:X4}", D.AddressA - 1, D.AddressB - 1);
					LastType = D.Type;
					PrintedData = 0;
				} else if (PrintedData >= 16) {
					Console.WriteLine();
					Console.Write("         ");
					PrintedData = 0;
				}
				ConsoleColor OldColour = Console.ForegroundColor;

				switch (D.Type) {
					case DiffData.DiffType.NoChange:
						Console.ForegroundColor = ConsoleColor.White;
						break;
					case DiffData.DiffType.Addition:
						Console.ForegroundColor = ConsoleColor.Green;
						break;
					case DiffData.DiffType.Removal:
						Console.ForegroundColor = ConsoleColor.Red;
						break;
				}
				Console.Write(" " + D.Data.ToString("X2"));
				Console.ForegroundColor = OldColour;
				++PrintedData;
			}
			Console.WriteLine();
		}

		private struct DiffData {

			public enum DiffType {
				NoChange,
				Addition,
				Removal,
			}

			public DiffType Type;

			public byte Data;

			public int AddressA;
			public int AddressB;

			public DiffData(DiffType type, byte data, int addressA, int addressB) {
				this.Type = type;
				this.Data = data;
				this.AddressA = addressA;
				this.AddressB = addressB;
			}
		}
	}
}
Removals are shown in red, additions are shown in green, data that's the same is in white.




Dynamic Lighting

Quake has a few dynamic lights - some projectiles, explosions and the fireballs light up their surroundings.

Fortunately, the method used is very simple: take the brightness of the projectile, divide it by the distance between the point on the wall and the light and add it to the light level for that point. This is made evident by pausing the early versions of Quake; when paused the areas around dynamic lights get brighter and brighter as the surface cache gets overwritten multiple times.
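That model is simple enough to sketch in a few lines (Python here rather than Quake's C; the function name and the zero-distance guard are mine):

```python
# Minimal sketch of the dynamic light model described above: the light level at a
# point on a surface is increased by brightness / distance-to-the-light.
import math

def add_dynamic_light(surface_light, surface_point, light_origin, brightness):
    dx = surface_point[0] - light_origin[0]
    dy = surface_point[1] - light_origin[1]
    dz = surface_point[2] - light_origin[2]
    distance = math.sqrt(dx * dx + dy * dy + dz * dz)
    # Guard against dividing by zero when the light sits on the surface.
    return surface_light + brightness / max(distance, 1.0)

# A light of brightness 200 ten units away adds 20 to the light level:
print(add_dynamic_light(100.0, (0, 0, 0), (0, 0, 10), 200.0))  # 120.0
```

Note that the result is *added* to the existing light level, which is exactly why the surface cache keeps getting brighter when the game is paused and the same surfaces are relit every frame.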

The next problem is that a light on the far side of a thin wall will light up both sides! Happily, this bug wasn't fixed in Quake, and is very evident in the start level where you can see the light spots from the fireballs in the Hard hall on the walls of the Medium hall.

I'd noticed that fireball-spawning entities would occasionally spawn a still fireball then remove it some seconds later. Looking over the Quake source code, it would appear that in each update the game iterates over all entities, checks their movetype property, then applies physics as applicable. Fireballs don't seem to have any real physics to speak of (they can pass through walls) beyond adding gravity to their velocity each update.

This required some further changes to get the VM working, including console variable support (gravity is defined in the sv_gravity console variable - this allows for the special low-gravity Ziggurat Vertigo).

For some reason the pickups seem to have their movetype set to TOSS resulting in all of the pickups flying away when the level started (not to mention abysmal performance). I added a hack to the droptofloor() QuakeC function that sets their on-floor flag (and hence disables their physics), but I'm not sure what the best course of action is going to be. I'm having to dig deeper and deeper into Quake's source, now...




Bounding Box Bouncing

I've updated the collision detection between points and the world with Scet's idea of only testing with the faces in the leaf containing the start position of the point.

ZiggyWare's Simple Vector Rendering GameComponent is being very useful for testing the collision routines. [smile]

I have also extended the collision routines to bounding boxes. This fixes a few issues where a point could drop through a crack in the floor between two faces.

My technique is to test the collision between the eight corners of the box and then to pick the shortest collision and use that as the resultant position. This results in a bounding box that is not solid; you can impale it on a spike, for example.
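The corner-casting idea can be sketched like this (illustrative Python, not the project's code; `cast_point` stands in for the real point-vs-world routine, and the flat-floor world below is invented for the example):

```python
# Cast each of the eight corners of an axis-aligned box along the motion vector,
# then move the whole box by the shortest allowed fraction of the move.

def corners(box_min, box_max):
    xs = (box_min[0], box_max[0]); ys = (box_min[1], box_max[1]); zs = (box_min[2], box_max[2])
    return [(x, y, z) for x in xs for y in ys for z in zs]

def move_box(box_min, box_max, velocity, cast_point):
    # cast_point(start, velocity) -> fraction of the move (0..1) completed before a hit.
    t = min(cast_point(c, velocity) for c in corners(box_min, box_max))
    moved = lambda p: tuple(a + v * t for a, v in zip(p, velocity))
    return moved(box_min), moved(box_max), t

# Hypothetical world: a solid floor at z = 0 that clips downward motion.
def floor_cast(start, velocity):
    if velocity[2] >= 0:
        return 1.0
    return min(1.0, max(0.0, start[2] / -velocity[2]))

new_min, new_max, t = move_box((0, 0, 4), (1, 1, 5), (0, 0, -8), floor_cast)
print(new_min, t)  # box stops with its base on the floor: t = 0.5
```

Because only the corners are tested, anything that fits between them (a spike, or a crack lined up with all four bottom corners) passes straight through, which is exactly the "not solid" problem described above.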

In the above screenshot, the cube comes to rest with the corners all lined up on cracks, which means that it slips through as its base isn't solid.

Happily, object-to-object collisions are simple bounding-box affairs, with the bounding boxes set by QuakeC code, which should simplify things somewhat.




Goodness Gracious, Great Balls of Lava

I've reworked the VM completely to use an array of a union of a float and an int, rather than the MemoryStream kludge I was using before. This also removes a lot of the multiply-or-divide-by-four mess I had with converting indices to positions within the memory stream.

There are (as far as I know) three ways to invoke QuakeC methods. The first is when an entity is spawned, and this is only valid when the level is being loaded (the function that matches the entity's classname is called). The second is when an entity is touched (its touch function is called) and the third is when its internal timer, nextthink, fires (its think function is called).

The third is the easiest one to start with. On a monster-free level, there are still "thinking" items - every powerup item has droptofloor() scheduled to run.

A strange feature of custom levels has been that all of the power-up items have been floating in mid-air.

By hacking together some crude collision detection (face bouncing boxes and axial planes only) I could make those objects sit in the right place:

With many thanks to Zipster pointing me in the right direction I have extended this to perform collision detection of a moving vertex against all world faces.

Here I fire a vertex (with a lava ball drawn around it to show where it is) through the world. When I hit a face I reflect the velocity by the face normal.

It looks much more realistic if I decrease the velocity z component over time (to simulate gravity) and reduce the magnitude of the velocity each time it hits a surface, so I can now bounce objects around the world:

Performing the collision detection against every single face in the world is not very efficient (though on modern hardware I'm still in the high nineties). There are other problems to look into - such as collisions with invisible brushes used for triggers and collision rules when it comes to water and the sky, not to mention that I should really be performing bounding-box collision detection, not just single moving points. Points also need to slide off surfaces, not just stop dead in their tracks.

Once a vertex hits a plane and has been reflected I push it out of the surface very slightly to stop it from getting trapped inside the plane. This has the side effect of lifting the vertex a little above the floor, which it then drops back against, making it slide down sloping floors.
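Put together, the bounce step looks something like this (a toy Python version; the gravity, damping and push-out constants are made up for illustration):

```python
# Reflect the velocity about the face normal, damp it, and nudge the point out
# of the plane so it can't get trapped inside the surface.

def reflect(v, n):
    # v - 2(v.n)n, assuming n is unit length.
    d = sum(a * b for a, b in zip(v, n))
    return tuple(a - 2 * d * b for a, b in zip(v, n))

GRAVITY = -0.5   # added to velocity z each update (simulated gravity)
DAMPING = 0.8    # fraction of velocity magnitude kept per bounce
EPSILON = 0.01   # push-out distance along the face normal

def bounce(position, velocity, normal):
    velocity = tuple(c * DAMPING for c in reflect(velocity, normal))
    position = tuple(p + n * EPSILON for p, n in zip(position, normal))
    return position, velocity

def apply_gravity(velocity):
    return (velocity[0], velocity[1], velocity[2] + GRAVITY)

pos, vel = bounce((0, 0, 0), (0, 0, -10), (0, 0, 1))
print(pos, vel)  # nudged to z = 0.01, now moving upwards at 8
```

The EPSILON push-out is what produces the pleasant side effect described above: on a sloping floor the nudge lifts the vertex slightly off the surface, it falls back, and the repeated bounce-and-nudge makes it slide downhill.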




Entities Spawned by QuakeC Code

After all that time spent trying to work out how the QuakeC VM works I finally have some real-world results. [smile]

Apart from the obvious boring stuff going on in the background as far as parsing and loading entities goes, two functions in particular are of note. The first is a native function, precache_model(string), which loads and caches a model of some description (sprites, Alias models or BSP models). The QuakeC VM I've written raises an event (passing the name of the model to load), which the XNA project can interpret and use to load the model into a format it's happy with.

Inspiration for the Strogg in Quake 2?

With a suitable event handler and quick-and-dirty model renderer in place, the above models are precached (though they probably shouldn't be drawn at {0, 0, 0}).

The second function of interest is another native one, makestatic(entity). Quake supports three basic types of entity - dynamic entities (referenced by an index, move around and can be interacted with - ammo boxes, monsters and so on), temporary entities (removes itself from the game world automatically - point sprites) and static entities. Static entities are the easiest to handle - once spawned they stay fixed in one position, and can't be accessed (and hence can't be removed). Level decorations such as flaming torches are static. Here's the QuakeC code used to spawn a static small yellow flame:

void() light_flame_small_yellow =
{
	precache_model ("progs/flame2.mdl");
	setmodel (self, "progs/flame2.mdl");
	FireAmbient ();
	makestatic (self);
};
That ensures that the model file is precached (and loaded), assigns the model to itself, spawns an ambient sound (via the FireAmbient() QuakeC function) then calls makestatic() which spawns a static entity then deletes the source entity. In my case this triggers an event that can be picked up by the XNA project:

// Handle precache models:
void Progs_PrecacheModelRequested(object sender, Quake.Files.QuakeC.PrecacheFileRequestedEventArgs e) {
	switch (Path.GetExtension(e.Filename)) {
		case ".mdl":
			if (CachedModels.ContainsKey(e.Filename)) return; // Already cached.
			CachedModels.Add(e.Filename, new Renderer.ModelRenderer(this.Resources.LoadObject(e.Filename)));
			break;
		case ".bsp":
			if (CachedLevels.ContainsKey(e.Filename)) return; // Already cached.
			CachedLevels.Add(e.Filename, new Renderer.BspRenderer(this.Resources.LoadObject(e.Filename)));
			break;
	}
}

// Spawn static entities:
void Progs_SpawnStaticEntity(object sender, Quake.Files.QuakeC.SpawnStaticEntityEventArgs e) {
	// Grab the model name from the entity.
	string Model = e.Entity.Properties["model"].String;

	// Get the requisite renderer:
	Renderer.ModelRenderer Renderer;
	if (!CachedModels.TryGetValue(Model, out Renderer)) throw new InvalidOperationException("Model " + Model + " not cached.");

	// Add the entity's position to the renderer:
	Renderer.EntityPositions.Add(new Renderer.EntityPosition(e.Entity.Properties["origin"].Vector, e.Entity.Properties["angles"].Vector));
}

The result is a light sprinkling of static entities throughout the level.

As a temporary hack I just iterate over the entities, checking if each one is still active, and if so lumping them with the static entities.

If you look back a few weeks you'd notice that I already had a lot of this done. In the past, however, I was simply using a hard-coded entity name to model table and dumping entities any old how through the level. By parsing and executing progs.dat I don't have to hard-code anything, can animate models correctly, and even have the possibility of running the original game logic.

An example of how useful this is relates to level keys. In some levels you need one or two keys to get to the exit. Rather than use the same keys for each level, or use many different entity classes for keys, the worldspawn entity is assigned a type (Mediaeval, Metal or Base) and the matching key model is set automatically by the key spawning QuakeC function:

/*QUAKED item_key2 (0 .5 .8) (-16 -16 -24) (16 16 32)
GOLD key
In order for keys to work
you MUST set your maps
worldtype to one of the
following:
0: medieval
1: metal
2: base
*/
void() item_key2 =
{
	if (world.worldtype == 0)
	{
		precache_model ("progs/w_g_key.mdl");
		setmodel (self, "progs/w_g_key.mdl");
		self.netname = "gold key";
	}
	if (world.worldtype == 1)
	{
		precache_model ("progs/m_g_key.mdl");
		setmodel (self, "progs/m_g_key.mdl");
		self.netname = "gold runekey";
	}
	if (world.worldtype == 2)
	{
		precache_model2 ("progs/b_g_key.mdl");
		setmodel (self, "progs/b_g_key.mdl");
		self.netname = "gold keycard";
	}
	self.touch = key_touch;
	self.items = IT_KEY2;
	setsize (self, '-16 -16 -24', '16 16 32');
	StartItem ();
};

Mediaeval Key

Metal Runekey

Base Keycard

One problem is entities that appear at different skill levels. Generally higher skill levels have more monsters, but there are other level design concerns such as swapping a strong enemy for a weaker one in the easy skill mode. In deathmatch mode entities are also changed - keys are swapped for weapons, for example. At least monsters are kind - their spawn function checks the deathmatch global and they remove themselves automatically, so adding the (C#) line Progs.Globals["deathmatch"].Value.Boolean = true; flushes them out nicely.

Each entity, however, has a simple field attached - spawnflags - that can have bits set to inhibit the entity from spawning at the three different skill levels.
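The inhibit test boils down to one bit mask per skill level; the constants below follow the convention in the Quake source (256/512/1024 for "not in easy/medium/hard"), but treat this as an illustrative sketch rather than the project's actual code:

```python
# Skill-inhibit bits in an entity's spawnflags field.
SPAWNFLAG_NOT_EASY   = 256
SPAWNFLAG_NOT_MEDIUM = 512
SPAWNFLAG_NOT_HARD   = 1024

def spawns_at_skill(spawnflags, skill):
    # skill: 0 = easy, 1 = medium, 2+ = hard.
    inhibit = (SPAWNFLAG_NOT_EASY, SPAWNFLAG_NOT_MEDIUM, SPAWNFLAG_NOT_HARD)[min(skill, 2)]
    return (spawnflags & inhibit) == 0

# An entity flagged "not in easy" appears on medium but not easy:
print(spawns_at_skill(256, 0), spawns_at_skill(256, 1))  # False True
```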




Regrettably, whilst the Quake 1 QuakeC interpreter source code is peppered with references to Quake 2 it would appear that Quake 2 used native code rather than QuakeC to provide gameplay logic, so I've dropped development on the Quake 2 side at the moment.





This journal hasn't been updated for a while, I know. That doesn't mean that work on the Quake project has dried up - on the contrary, a fair amount of head-way has been made!

The problem is that screenshots like the above are really not very interesting at all. [rolleyes]

As far as I can tell, Quake's entity data (at runtime) is stored in a different chunk of memory to the memory used for global variables. I've had numerous problems getting the code to work - most of which were caused by pointer confusion. Four bytes are generally used for a field (vectors use three singles, so take 12 bytes), so I've been multiplying and dividing offsets by four to try and get it all to work.

The basic entity data is stored in the .bsp file. It takes the following form:

"worldtype" "2"
"sounds" "6"
"classname" "worldspawn"
"wad" "gfx/base.wad"
"message" "the Slipgate Complex"
"classname" "info_player_start"
"origin" "480 -352 88"
"angle" "90"
"classname" "light"
"origin" "480 96 168"
"light" "250"

classname describes the type of the entity, and the other key-value pairs are used to adjust the entity's properties. All entities share the same set of fields, which are declared in a special section of the progs.dat file.
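A minimal parser for that format might look like this (illustrative Python; the real decoding happens in the C# project, and I'm assuming the usual brace-delimited entity blocks of the .bsp entity lump):

```python
# Parse the quoted key-value entity text into a list of dictionaries.
import re

def parse_entities(text):
    entities, current = [], None
    for line in text.splitlines():
        line = line.strip()
        if line == "{":
            current = {}
        elif line == "}":
            entities.append(current)
            current = None
        else:
            m = re.match(r'"([^"]+)"\s+"([^"]*)"', line)
            if m and current is not None:
                current[m.group(1)] = m.group(2)
    return entities

sample = '''
{
"classname" "light"
"origin" "480 96 168"
"light" "250"
}
'''
print(parse_entities(sample))
```

Each dictionary's classname then tells you which QuakeC spawn function to invoke, with the remaining pairs overriding that entity's default field values.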

For each classname there is a matching QuakeC function. As far as I can tell the trick is to decode an entity then invoke its QuakeC method.

void() light =
{
	if (!self.targetname)
	{	// inert light
		remove(self);
		return;
	}

	if (self.style >= 32)
	{
		self.use = light_use;
		if (self.spawnflags & START_OFF)
			lightstyle(self.style, "a");
		else
			lightstyle(self.style, "m");
	}
};

As you can probably guess, if a light entity is not attached to a particular target it is automatically removed from the world as it serves no useful function (the lightmaps are prerendered, after all). A similar example comes from the monsters which remove themselves if deathmatch is set. The QuakeC code also contains instructions on setting which model file to use for each entity and declares the animation frame sequences, so is pretty important to get working. [smile]

I have a variety of directories stuffed with images on my website, and (thankfully) I have used a faintly sane naming convention for some of these. I knocked together a PHP script which reads the files from these directories and creates a nifty sort of gallery. It automatically generates thumbnails and is quite fast. As I don't have an internet connection at home it's more practical to be able to just drop files into a directory and have the thing automatically update rather than spend time updating a database.

PHP source code (requires GD).

It's INI file driven. The two files are config.ini:

base_dir=../../ ; Base directory of the site
; I put this in bin/gallery, so need to go up two levels :)

valid_extensions=jpg,gif,png ; Only three that are supported

quality=90 ; Thumbnail JPEG quality

...and galleries.ini:

[VB6 Terrain]
key=te ; Used in gallery=xyz parameter
path=projects/te ; Image location, relative to base_dir
date_format=ddmmyyyyi ; Filename format, where i = index.

ignore_extensions=jpg ; Can be used to ignore extensions.

The index in the date format is for files that fall on the same date. Historically I used a single letter (01012003a, 01012003b); currently I use a two-digit integer (2003.01.01.01).

If a text file with the name of the image + ".txt" is found, that is used as a caption. (eg, /images/quake/2003.01.01.01.png and /images/quake/2003.01.01.01.png.txt).
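The two naming conventions can be decoded with a couple of patterns (hypothetical Python helper for illustration; the real script is PHP):

```python
# Decode a gallery filename stem into (date, zero-based index on that date).
import re
from datetime import date

def parse_gallery_name(stem):
    # Historic style: ddmmyyyy plus a letter, eg "01012003b".
    m = re.fullmatch(r"(\d{2})(\d{2})(\d{4})([a-z])", stem)
    if m:
        d, mo, y, i = m.groups()
        return date(int(y), int(mo), int(d)), ord(i) - ord("a")
    # Current style: yyyy.mm.dd.ii, eg "2003.01.01.01".
    m = re.fullmatch(r"(\d{4})\.(\d{2})\.(\d{2})\.(\d{2})", stem)
    if m:
        y, mo, d, i = m.groups()
        return date(int(y), int(mo), int(d)), int(i) - 1
    return None

print(parse_gallery_name("01012003b"))      # (datetime.date(2003, 1, 1), 1)
print(parse_gallery_name("2003.01.01.01"))  # (datetime.date(2003, 1, 1), 0)
```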

It's not designed to be bullet-proof (and was written very very quickly) but someone might find it a useful base to work on. [smile]




8-bit Raycasting Quake Skies and Animated Textures

All of this Quake and XNA 3D stuff has given me a few ideas for calculator (TI-83) 3D.

One of my problems with calculator 3D apps is that I had never managed to even get a raycaster working - and raycasters aren't exactly very tricky things to write.

So, to help me, I wrote a raycaster in C#, limiting myself to the constraints of the calculator engine - 96x64 display, 256 whole angles in a full revolution, 16x16 map, that sort of thing. This was easy as I had floating-point maths to fall back on.

With that done, I went and ripped out all of the floating-point code and replaced it with fixed-point integer arithmetic; I'm using 16-bit values, 8 bits for the whole part and 8 bits for the fractional part.
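The 8.8 representation is easy to sketch outside of Z80 land (an unsigned-only Python version for brevity; the helper names are mine):

```python
# 16-bit 8.8 fixed point: top 8 bits whole part, bottom 8 bits fraction.

def to_fixed(x):
    return int(round(x * 256)) & 0xFFFF

def from_fixed(f):
    return f / 256.0

def fixed_mul(a, b):
    # A 16x16 multiply yields a 16.16 intermediate; shift right 8 to get back to 8.8.
    return ((a * b) >> 8) & 0xFFFF

half = to_fixed(0.5)    # 0x0080
three = to_fixed(3.0)   # 0x0300
print(hex(half), from_fixed(fixed_mul(three, half)))  # 0x80 1.5
```

On the Z80 the same shift falls out for free by picking which bytes of the 32-bit product you keep, but the software multiply itself is what makes 16-bit fixed point so much slower than the 8-bit variant mentioned below.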

From here, I just rewrote all of my C# code in Z80 assembly, chucking in debugging code all the way through so that I could watch the state of values and compare them with the results from my C# code.

The result is rather slow, but on the plus side the code is clean and simple. [smile] The screen is cropped for three reasons: it's faster to only render 64 columns (naturally), you get some space to put a HUD and - most importantly - it limits the FOV to 90°, as the classic fisheye distortion becomes a more obvious problem above this.

I sneaked a look at the source code of Gemini, an advanced raycaster featuring textured walls, objects and doors. It is much, much faster than my engine, even though it does a lot more!

It appears that the basic raycasting algorithm is pretty much identical to the one I use, but gets away with 8-bit fixed point values. 8-bit operations can be done significantly faster than 16-bit ones on the Z80, especially multiplications and divisions (which need to be implemented in software). You can also keep track of more variables in registers, and restricting the number of memory reads and writes can shave off some precious cycles.

Some ideas that I've had for the raycaster, that I'd like to try and implement:

- Variable height floors and ceilings. Each block in the world is given a floor and ceiling height. When the ray intersects the boundary, the camera height is subtracted from these values, they are divided by the length of the ray (for projection) and the visible section of the wall is drawn. Two counters would track the upper and lower rows drawn to so far, recording the last block's extent (for occlusion), and floor/ceiling colours could be filled in between blocks.
- No texturing: wall faces and floors/ceilings would be assigned dithered shades of grey. I think this, combined with lighting effects (flickering, shading), would look better than monochrome texture mapping - and would be faster!
- Ray-transforming blocks. For example, you could have two 16x16 maps with a tunnel: the tunnel would contain a special block that would, when hit, tell the raycaster to start scanning through a different level. This could be used to stitch together large worlds from small maps (16x16 is a good value as it lets you reduce level pointers to 8-bit values).
- Adjusting floors and ceilings for lifts or crushing ceilings.
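The variable-height idea can be sketched for a single screen column - this is a hypothetical Python illustration of the projection and clip-counter scheme described above (the projection scale and 64-row screen are my assumptions, not working calculator code):

```python
SCREEN_H = 64          # screen rows, as on the calculator's 96x64 display
HORIZON = SCREEN_H // 2
SCALE = 48             # assumed projection scale

def draw_column(blocks, camera_h):
    """blocks: (floor_h, ceil_h, distance) tuples, ordered near to far.
    Returns (top_row, bottom_row, kind) spans for one screen column."""
    upper, lower = 0, SCREEN_H        # rows still open for drawing
    spans = []
    for floor_h, ceil_h, dist in blocks:
        # Subtract the camera height and divide by the ray length to project:
        ceil_row = HORIZON - (ceil_h - camera_h) * SCALE // dist
        floor_row = HORIZON - (floor_h - camera_h) * SCALE // dist
        # A wall is visible wherever this block's opening is narrower
        # than what nearer blocks have left open:
        if ceil_row > upper:
            spans.append((upper, min(ceil_row, lower), 'upper wall'))
        if floor_row < lower:
            spans.append((max(floor_row, upper), lower, 'lower wall'))
        # Narrow the clip window (occlusion by nearer geometry):
        upper, lower = max(upper, ceil_row), min(lower, floor_row)
        if upper >= lower:
            break                     # column is fully drawn
    return spans
```

Each block can only ever narrow the open window, so nearer geometry automatically occludes farther geometry without a depth buffer.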

As far as the Quake project, I've made a little progress. I've added skybox support for Quake 2:

Quake 2's skyboxes are simply made up of six textures (top, bottom, front, back, left, right). Quake doesn't use a skybox at all. Firstly, its sky texture comes in two parts - one half is the sky background, and the other half is a cloud overlay (the two layers scroll at different speeds). Secondly, it is warped in a rather interesting fashion - rather like a squashed sphere, reflected in the horizon:

For the moment, I'm just using the Quake 2 box plus a simple pixel shader to mix the two halves of the sky texture.
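The two-layer mix boils down to something like this - a Python approximation of what the pixel shader does, where treating palette index 0 in the cloud layer as "empty" is my assumption:

```python
def sky_pixel(background, overlay, x, y, time, w, h):
    """Sample the cloud overlay, scrolled faster than the background;
    where the overlay is 'empty' (palette index 0), the slower-moving
    background layer shows through."""
    cloud = overlay[y % h][(x + int(time * 2)) % w]
    if cloud != 0:
        return cloud
    return background[y % h][(x + int(time)) % w]
```

The different scroll speeds for the two layers are what give the parallax effect as time advances.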

I daresay something could be worked out to simulate the warping.

The above is from GLQuake, which doesn't really look very convincing at all.

I've reimplemented the texture animation system in the new BSP renderer, including support for Quake 2's animation system (which is much simpler than Quake 1's - rather than have magic texture names, all textures contain the name of the next frame in their animation cycle).




QuakeC VM

I've started serious work on the QuakeC virtual machine.

The bytecode is stored in a single file, progs.dat. It is made up of a number of different sections:

- Definitions data - an unformatted block of data containing a mixture of floating-point values, integers and vectors.
- Statements - individual instructions, each made up of four short integers. Each statement has an operation code and up to three arguments. These arguments are typically pointers into the definitions data block.
- Functions - these provide a function name, a source file name, storage requirements for local variables and the address of the first statement.

On top of that are two tables that break down the definitions table into global and field variables (as far as I'm aware this is only used to print "nice" names for variables when debugging, as it just attaches a type and name to each definition) and a string table.

The first few values in the definition data table are used for predefined values, such as function parameters and return value storage.
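Given that description, unpacking the statements section might look something like this - a hypothetical Python sketch (the four-short layout is as described above, but the field names and little-endian assumption are mine):

```python
import struct

def read_statements(data, offset, count):
    """Each statement is four 16-bit integers: an operation code and
    up to three arguments (usually offsets into the definitions data)."""
    statements = []
    for i in range(count):
        op, a, b, c = struct.unpack_from('<4h', data, offset + i * 8)
        statements.append((op, a, b, c))
    return statements
```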

Now, a slight problem is how to handle these variables. My initial solution was to read and write types strictly as particular types using the definitions table, but this idea got scrapped when I realised that the QuakeC bytecode uses the vector store opcode to copy string pointers, and a vector isn't much use when you need to print a string.

I now use a special VariablePointer class that internally stores the pointer inside the definition data block, and provides properties for reading and writing using the different formats.

/// Defines a variable.
public class VariablePointer {

	private readonly uint Offset;

	private readonly QuakeC Source;

	private void SetStreamPos() { this.Source.DefinitionsDataReader.BaseStream.Seek(this.Offset, SeekOrigin.Begin); }

	public VariablePointer(QuakeC source, uint offset) {
		this.Source = source;
		this.Offset = offset;
	}

	#region Read/Write Properties

	/// Gets or sets a floating-point value.
	public float Float {
		get { this.SetStreamPos(); return this.Source.DefinitionsDataReader.ReadSingle(); }
		set { this.SetStreamPos(); this.Source.DefinitionsDataWriter.Write(value); }
	}

	/// Gets or sets an integer value.
	public int Integer {
		get { this.SetStreamPos(); return this.Source.DefinitionsDataReader.ReadInt32(); }
		set { this.SetStreamPos(); this.Source.DefinitionsDataWriter.Write(value); }
	}

	/// Gets or sets a vector value.
	public Vector3 Vector {
		get { this.SetStreamPos(); return new Vector3(this.Source.DefinitionsDataReader.BaseStream); }
		set { this.SetStreamPos(); value.Write(this.Source.DefinitionsDataWriter.BaseStream); } // (Assumes Vector3.Write serialises X, Y and Z back to the stream.)
	}

	#endregion

	#region Extended Properties

	public bool Boolean {
		get { return this.Float != 0f; }
		set { this.Float = value ? 1f : 0f; }
	}

	#endregion

	#region Read-Only Properties

	/// Gets a string value.
	public string String {
		get { return this.Source.GetString((uint)this.Integer); }
	}

	public Function Function {
		get { return this.Source.Functions[this.Integer]; }
	}

	#endregion
}

Not too elegant, but it works!

If a function's first statement offset is negative, that means the function is an internally-implemented one (handled by the engine rather than by bytecode). The source code for the test application in the screenshot at the top of this entry is as follows:
float testVal;

void() test = {
	dprint("This is a QuakeC VM test...\n");

	testVal = 100;
	dprint(ftos(testVal * 10));

	while (testVal > 0) {
		testVal = testVal - 1;
	}
	dprint("Lift off!");
};

Both dprint and ftos are internal functions; I use a simple array of delegates to reference them.

There's a huge amount of work to be done here, especially when it comes to entities (not something I've looked at at all). All I can say is that I'm very thankful that the .qc source code is available and the DOS compiler runs happily under Windows - they're going to be handy for testing.




Vista and MIDI

I have a Creative Audigy SE sound card, which provides hardware MIDI synthesis. However, under Vista, there was no way (that I could see) to change the default MIDI output device to this card, meaning that all apps were using the software synthesiser instead.

Vista MIDI Fix is a 10-minute application I wrote to let me easily change the default MIDI output device. Applications which use MIDI device 0 still end up with the software synthesiser, unfortunately.

To get the hardware MIDI output device available I needed to install Creative's old XP drivers, and not the new Vista ones from their site. This results in missing CMSS, but other features - such as bass redirection, bass boost, 24-bit/96kHz output and the graphical equaliser - now work.

The Creative mixer either crashes or only displays two volume sliders (master and CD audio), which means that (as far as I can tell) there's no easy way to enable MIDI Reverb and MIDI Chorus.




Quake 2 PVS, Realigned Lightmaps and Colour Lightmaps

Quake 2 stores its visibility lists differently to Quake 1: as leaves that are close together on the BSP tree will usually share the same visibility information, the lists are grouped into clusters (Quake 1 stored a visibility list for every leaf). Rather than going from the camera's leaf directly to all of the other visible leaves, you use the camera leaf's cluster index to look up which other clusters are visible, then search through the leaves to find those that reference a visible cluster.
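The lookup can be sketched like this - a hypothetical Python illustration using a plain set per cluster, skipping the run-length compression Quake 2 actually applies to the visibility data on disk:

```python
def visible_leaves(camera_leaf, leaf_clusters, cluster_vis):
    """leaf_clusters: one cluster index per leaf (-1 = no cluster).
    cluster_vis: for each cluster, the set of clusters visible from it."""
    cluster = leaf_clusters[camera_leaf]
    if cluster < 0:
        # Outside the map: no visibility information, draw everything.
        return list(range(len(leaf_clusters)))
    visible = cluster_vis[cluster]
    # Search the leaves for those referencing a visible cluster:
    return [leaf for leaf, c in enumerate(leaf_clusters)
            if c >= 0 and c in visible]
```

Anything not in the returned list can be culled before it ever reaches the renderer, which is where the 18FPS-to-90FPS jump comes from.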

In a nutshell, I now use the visibility cluster information in the BSP to cull large quantities of hidden geometry, which has raised the framerate from 18FPS (base1.bsp) to about 90FPS.

I had a look at the lightmap code again. Some of the lightmaps appeared to be off-centre (most clearly visible when there's a small light bracket on a wall casting a sharp inverted V shadow on the wall underneath it, as the tip of the V drifted to one side). On a whim, I decided that if the size of the lightmap was rounded to the nearest 16 diffuse texture pixels, one could assume that the top-left corner was not at (0,0) but offset by 8 pixels to centre the texture. This is probably utter nonsense, but plugging in the offset results in almost completely smooth lightmaps, like the screenshot above.

Before and after - coloured lightmaps.

I quite like Quake 2's colour lightmaps, and I also quite like the chunky look of the software renderer. I've modified the pixel shader for the best of both worlds. I calculate the three components of the final colour individually, taking the brightness value for the colourmap from one of the three channels in the lightmap.

float4 Result = 1;

// The colour map's palette index comes from the diffuse texture's alpha channel:
float2 ColourMapIndex;
ColourMapIndex.x = tex2D(DiffuseTextureSampler, vsout.DiffuseTextureCoordinate).a;

ColourMapIndex.y = 1 - tex2D(LightMapTextureSampler, vsout.LightMapTextureCoordinate).r;
Result.r = tex2D(ColourMapSampler, ColourMapIndex).r;

ColourMapIndex.y = 1 - tex2D(LightMapTextureSampler, vsout.LightMapTextureCoordinate).g;
Result.g = tex2D(ColourMapSampler, ColourMapIndex).g;

ColourMapIndex.y = 1 - tex2D(LightMapTextureSampler, vsout.LightMapTextureCoordinate).b;
Result.b = tex2D(ColourMapSampler, ColourMapIndex).b;

return Result;
There is no impact on framerate at this stage (the rest of the code is the problem - I'm not even batching by texture at the moment).




Journals need more animated GIFs

Pixel shaders are fun.

I've implemented support for decoding mip-maps from mip textures (embedded in the BSP) and from WAL files (external).

Now, I know that non-power-of-two textures are naughty. Quake uses a number of them, and when loading textures previously I've just let Direct3D do its thing which has appeared to work well.

However, now that I'm directly populating the entire texture, mip-maps and all, I found that Texture2D.SetData was throwing exceptions when I was attempting to shoe-horn in a non-power-of-two texture. Strange. I hacked together a pair of extensions to the Picture class - GetResized(width, height) which returns a resized picture (nearest-neighbour, naturally) - and GetPowerOfTwo(), which returns a picture scaled up to the next power-of-two size if required.
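The GetPowerOfTwo() helper boils down to rounding each dimension up to the next power of two; a minimal sketch of the rounding:

```python
def next_power_of_two(n):
    """Round n up to the next power of two (left unchanged if it
    already is one)."""
    p = 1
    while p < n:
        p <<= 1
    return p

# A 96x64 texture would be padded out to 128x64.
```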

All textures now load correctly, and I can't help but notice that the strangely distorted textures - which I'd put down to crazy texture coordinates - now render correctly! It turns out that all of the distorted textures were non-power-of-two.

The screenshots above demonstrate that Quake 2 is also handled by the software-rendering simulation. The current effect file for the world is as follows:

uniform extern float4x4 WorldViewProj : WORLDVIEWPROJECTION;

uniform extern float Time;
uniform extern bool Rippling;

uniform extern texture DiffuseTexture;
uniform extern texture LightMapTexture;

uniform extern texture ColourMap;

struct VS_OUTPUT {
	float4 Position : POSITION;
	float2 DiffuseTextureCoordinate : TEXCOORD0;
	float2 LightMapTextureCoordinate : TEXCOORD1;
	float3 SourcePosition : TEXCOORD2;
};

sampler DiffuseTextureSampler = sampler_state {
	texture = <DiffuseTexture>;
	mipfilter = POINT;
};

sampler LightMapTextureSampler = sampler_state {
	texture = <LightMapTexture>;
	mipfilter = LINEAR;
	minfilter = LINEAR;
	magfilter = LINEAR;
};

sampler ColourMapSampler = sampler_state {
	texture = <ColourMap>;
	addressu = CLAMP;
	addressv = CLAMP;
};

VS_OUTPUT Transform(float4 Position : POSITION0, float2 DiffuseTextureCoordinate : TEXCOORD0, float2 LightMapTextureCoordinate : TEXCOORD1) {

	VS_OUTPUT Out;

	// Transform the input vertex position:
	Out.Position = mul(Position, WorldViewProj);

	// Copy the other values straight into the output for use in the pixel shader.
	Out.DiffuseTextureCoordinate = DiffuseTextureCoordinate;
	Out.LightMapTextureCoordinate = LightMapTextureCoordinate;
	Out.SourcePosition = Position;

	return Out;
}

float4 ApplyTexture(VS_OUTPUT vsout) : COLOR {

	// Start with the original diffuse texture coordinate:
	float2 DiffuseCoord = vsout.DiffuseTextureCoordinate;

	// If the surface is "rippling", wobble the texture coordinate.
	if (Rippling) {
		float2 RippleOffset = { sin(Time + vsout.SourcePosition.x / 32) / 8, cos(Time + vsout.SourcePosition.z / 32) / 8 };
		DiffuseCoord += RippleOffset;
	}

	// Calculate the colour map look-up coordinate from the diffuse and lightmap textures:
	float2 ColourMapIndex = {
		tex2D(DiffuseTextureSampler, DiffuseCoord).a,
		1 - tex2D(LightMapTextureSampler, vsout.LightMapTextureCoordinate).r
	};

	// Look up and return the value from the colour map.
	return tex2D(ColourMapSampler, ColourMapIndex).rgba;
}

technique TransformAndTexture {
	pass P0 {
		vertexShader = compile vs_2_0 Transform();
		pixelShader = compile ps_2_0 ApplyTexture();
	}
}
It would no doubt be faster to have two techniques; one for rippling surfaces and one for still surfaces. It is, however, easier to use the above and switch the rippling on and off when required (rather than group surfaces and switch techniques). Given that the framerate rises from ~135FPS to ~137FPS on my video card if I remove the ripple effect altogether, it doesn't seem worth it.

Sorting out the order in which polygons are drawn looks like it's going to get important, as I need to support alpha-blended surfaces for Quake 2, and there are some nasty areas of Z-fighting cropping up.

Alpha-blending in 8-bit? Software Quake didn't support any sort of alpha blending (hence the need to re-vis levels for use with GLQuake, as the areas underneath the opaque water surfaces were marked as invisible), and Quake 2 has a data file that maps 16-bit colour values to 8-bit palette indices. Quake 2's software renderer also had a "stipple alpha" mode that used a dither pattern to handle the two translucent surface opacities (1/3 and 2/3 ratios).
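Stipple alpha amounts to drawing only a fraction of a surface's pixels in a fixed screen-space pattern instead of blending colour values. A Python sketch of the idea - the particular repeating pattern here is my own, not necessarily the one Quake 2 used:

```python
def stipple_keep(x, y, opacity):
    """Decide whether to draw a translucent pixel at (x, y): draw
    roughly `opacity` of the pixels in a repeating dither pattern
    rather than blending any colour values."""
    return (x + 2 * y) % 3 < round(opacity * 3)

# At 1/3 opacity, a third of the pixels in a region get drawn:
drawn = sum(stipple_keep(x, y, 1 / 3) for y in range(30) for x in range(30))
```

The appeal for an 8-bit renderer is that no new colours are ever produced, so the palette is never a limitation.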





Following sirob's prompting, I dropped the BasicEffect for rendering and rolled my own effect. After seeing the things that could be done with them (pixel and vertex shaders) I'd assumed they'd be hard to put together, and that I'd need to change my code significantly.

In reality all I've had to do is copy and paste the sample from the SDK documentation, load it into the engine (via the content pipeline), create a custom vertex declaration to handle two sets of texture coordinates (diffuse and lightmap) and strip out all of the duplicate code I had for creating and rendering from two vertex arrays.

public struct VertexPositionTextureDiffuseLightMap {

	public Xna.Vector3 Position;
	public Xna.Vector2 DiffuseTextureCoordinate;
	public Xna.Vector2 LightMapTextureCoordinate;

	public VertexPositionTextureDiffuseLightMap(Xna.Vector3 position, Xna.Vector2 diffuse, Xna.Vector2 lightMap) {
		this.Position = position;
		this.DiffuseTextureCoordinate = diffuse;
		this.LightMapTextureCoordinate = lightMap;
	}

	public readonly static VertexElement[] VertexElements = new VertexElement[] {
		new VertexElement(0, 0, VertexElementFormat.Vector3, VertexElementMethod.Default, VertexElementUsage.Position, 0),
		new VertexElement(0, 12, VertexElementFormat.Vector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 0),
		new VertexElement(0, 20, VertexElementFormat.Vector2, VertexElementMethod.Default, VertexElementUsage.TextureCoordinate, 1)
	};
}

uniform extern float4x4 WorldViewProj : WORLDVIEWPROJECTION;

uniform extern texture DiffuseTexture;
uniform extern texture LightMapTexture;

uniform extern float Time;

struct VS_OUTPUT {
	float4 Position : POSITION;
	float2 DiffuseTextureCoordinate : TEXCOORD0;
	float2 LightMapTextureCoordinate : TEXCOORD1;
};

sampler DiffuseTextureSampler = sampler_state {
	Texture = <DiffuseTexture>;
	mipfilter = LINEAR;
};

sampler LightMapTextureSampler = sampler_state {
	Texture = <LightMapTexture>;
	mipfilter = LINEAR;
};

VS_OUTPUT Transform(float4 Position : POSITION, float2 DiffuseTextureCoordinate : TEXCOORD0, float2 LightMapTextureCoordinate : TEXCOORD1) {

	VS_OUTPUT Out;

	Out.Position = mul(Position, WorldViewProj);
	Out.DiffuseTextureCoordinate = DiffuseTextureCoordinate;
	Out.LightMapTextureCoordinate = LightMapTextureCoordinate;

	return Out;
}

float4 ApplyTexture(VS_OUTPUT vsout) : COLOR {
	float4 DiffuseColour = tex2D(DiffuseTextureSampler, vsout.DiffuseTextureCoordinate).rgba;
	float4 LightMapColour = tex2D(LightMapTextureSampler, vsout.LightMapTextureCoordinate).rgba;
	return DiffuseColour * LightMapColour;
}

technique TransformAndTexture {
	pass P0 {
		vertexShader = compile vs_2_0 Transform();
		pixelShader = compile ps_2_0 ApplyTexture();
	}
}
Of course, now I have that up and running I might as well have a play with it...

By adding up the individual RGB components of the lightmap texture and dividing by three you can simulate the monochromatic lightmaps used by Quake's software renderer. Sadly I know not of a technique to go the other way and provide colourful lightmaps for Quake 1. [smile] Not very interesting, though.

I've always wanted to do something with pixel shaders as you get to play with tricks that are a given in software rendering with the speed of dedicated hardware acceleration. I get the feeling that the effect (or a variation of it, at least) will be handy for watery textures.

float4 ApplyTexture(VS_OUTPUT vsout) : COLOR {

	float2 RippledTexture = vsout.DiffuseTextureCoordinate;

	RippledTexture.x += sin(vsout.DiffuseTextureCoordinate.y * 16 + Time) / 16;
	RippledTexture.y += sin(vsout.DiffuseTextureCoordinate.x * 16 + Time) / 16;

	float4 DiffuseColour = tex2D(DiffuseTextureSampler, RippledTexture).rgba;
	float4 LightMapColour = tex2D(LightMapTextureSampler, vsout.LightMapTextureCoordinate).rgba;

	return DiffuseColour * LightMapColour;
}

My code is no doubt suboptimal (and downright stupid).

Naturally, I needed to try and duplicate Scet's software rendering simulation trick. [smile]

The colour map (gfx/colormap.lmp) is a 256x64 array of bytes. Each byte is an index to a colour palette entry, on the X axis is the colour and on the Y axis is the brightness: ie, RGBColour = Palette[ColourMap[DiffuseColour, Brightness]]. I cram the original diffuse colour palette index into the (unused) alpha channel of the ARGB texture, and leave the lightmaps untouched.
float2 LookUp = 0;
LookUp.x = tex2D(DiffuseTextureSampler, vsout.DiffuseTextureCoordinate).a;
LookUp.y = (1 - tex2D(LightMapTextureSampler, vsout.LightMapTextureCoordinate).r) / 4;
return tex2D(ColourMapTextureSampler, LookUp);

As I'm not loading the mip-maps (and am letting Direct3D handle generation of mip-maps for me) I have to disable mip-mapping for the above to work, as otherwise you'd end up with non-integral palette indices. The results are therefore a bit noisier in the distance than in vanilla Quake, but I like the 8-bit palette look. At least the fullbright colours work.




Less Colourful Quake 2

I've transferred the BSP rendering code to use the new level loading code, so I can now display correctly-coloured Quake 2 levels. [smile] The Quake stuff is in its own assembly, and is shared by the WinForms resource browser project and the XNA renderer.

I'm also now applying lightmaps via multiplication rather than addition, so they look significantly better.

A shader solution would be optimal. I'm currently just drawing the geometry twice, the second time with some alpha blending enabled.




Keyboard Handler Fix

ArchG indicated a bug in the TextInputHandler class I posted a while back - no reference to the delegate instance used for the unmanaged callback is held, so as soon as the garbage collector kicks in things go rather horribly wrong.

/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* XnaTextInput.TextInputHandler - benryves@benryves.com *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* This is quick and very, VERY dirty. *
* It uses Win32 message hooks to grab messages (as we don't get a nicely wrapped WndProc). *
* I couldn't get WH_KEYBOARD to work (accessing the data via its pointer resulted in access *
* violation exceptions), nor could I get WH_CALLWNDPROC to work. *
* Maybe someone who actually knows what they're doing can work something out that's not so *
* kludgy. *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
* This quite obviously relies on a Win32 nastiness, so this is for Windows XNA games only! *
* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */

#region Using Statements
using System;
using System.Runtime.InteropServices;
using System.Windows.Forms; // This class exposes WinForms-style key events.
#endregion

namespace XnaTextInput {

	/// A class to provide text input capabilities to an XNA application via Win32 hooks.
	class TextInputHandler : IDisposable {

		#region Win32

		/// Types of hook that can be installed using the SetWindowsHookEx function.
		public enum HookId {
			WH_CBT = 5,
			WH_GETMESSAGE = 3,
			WH_MAX = 11,
			WH_MIN = -1,
			WH_SHELL = 10,
		}

		/// Window message types.
		/// Heavily abridged, naturally.
		public enum WindowMessage {
			WM_KEYDOWN = 0x100,
			WM_KEYUP = 0x101,
			WM_CHAR = 0x102,
		}

		/// A delegate used to create a hook callback.
		public delegate int GetMsgProc(int nCode, int wParam, ref Message msg);

		/// Install an application-defined hook procedure into a hook chain.
		/// idHook: Specifies the type of hook procedure to be installed.
		/// lpfn: Pointer to the hook procedure.
		/// hmod: Handle to the DLL containing the hook procedure pointed to by the lpfn parameter.
		/// dwThreadId: Specifies the identifier of the thread with which the hook procedure is to be associated.
		/// If the function succeeds, the return value is the handle to the hook procedure. Otherwise returns 0.
		[DllImport("user32.dll", EntryPoint = "SetWindowsHookExA")]
		public static extern IntPtr SetWindowsHookEx(HookId idHook, GetMsgProc lpfn, IntPtr hmod, int dwThreadId);

		/// Removes a hook procedure installed in a hook chain by the SetWindowsHookEx function.
		/// hHook: Handle to the hook to be removed. This parameter is a hook handle obtained by a previous call to SetWindowsHookEx.
		/// If the function fails, the return value is zero. To get extended error information, call GetLastError.
		[DllImport("user32.dll")]
		public static extern int UnhookWindowsHookEx(IntPtr hHook);

		/// Passes the hook information to the next hook procedure in the current hook chain.
		/// hHook: Ignored.
		/// ncode: Specifies the hook code passed to the current hook procedure.
		/// wParam: Specifies the wParam value passed to the current hook procedure.
		/// lParam: Specifies the lParam value passed to the current hook procedure.
		/// This value is returned by the next hook procedure in the chain.
		[DllImport("user32.dll")]
		public static extern int CallNextHookEx(int hHook, int ncode, int wParam, ref Message lParam);

		/// Translates virtual-key messages into character messages.
		/// lpMsg: Pointer to a Message structure that contains message information retrieved from the calling thread's message queue.
		/// If the message is translated (that is, a character message is posted to the thread's message queue), the return value is true.
		[DllImport("user32.dll")]
		public static extern bool TranslateMessage(ref Message lpMsg);

		/// Retrieves the thread identifier of the calling thread.
		/// Returns the thread identifier of the calling thread.
		[DllImport("kernel32.dll")]
		public static extern int GetCurrentThreadId();

		#endregion

		#region Hook management and class construction.

		/// Handle for the created hook.
		private readonly IntPtr HookHandle;

		private readonly GetMsgProc ProcessMessagesCallback;

		/// Create an instance of the TextInputHandler.
		/// whnd: Handle of the window you wish to receive messages (and thus keyboard input) from.
		public TextInputHandler(IntPtr whnd) {
			// Create the delegate callback (and keep a reference to it so it can't be garbage-collected):
			this.ProcessMessagesCallback = new GetMsgProc(ProcessMessages);
			// Create the keyboard hook:
			this.HookHandle = SetWindowsHookEx(HookId.WH_GETMESSAGE, this.ProcessMessagesCallback, IntPtr.Zero, GetCurrentThreadId());
		}

		public void Dispose() {
			// Remove the hook.
			if (this.HookHandle != IntPtr.Zero) UnhookWindowsHookEx(this.HookHandle);
		}

		#endregion

		#region Message processing

		private int ProcessMessages(int nCode, int wParam, ref Message msg) {
			// Check if we must process this message (and whether it has been retrieved via GetMessage):
			if (nCode == 0 && wParam == 1) {

				// We need character input, so use TranslateMessage to generate WM_CHAR messages.
				TranslateMessage(ref msg);

				// If it's one of the keyboard-related messages, raise an event for it:
				switch ((WindowMessage)msg.Msg) {
					case WindowMessage.WM_CHAR:
						this.OnKeyPress(new KeyPressEventArgs((char)(int)msg.WParam));
						break;
					case WindowMessage.WM_KEYDOWN:
						this.OnKeyDown(new KeyEventArgs((Keys)(int)msg.WParam));
						break;
					case WindowMessage.WM_KEYUP:
						this.OnKeyUp(new KeyEventArgs((Keys)(int)msg.WParam));
						break;
				}
			}

			// Call next hook in chain:
			return CallNextHookEx(0, nCode, wParam, ref msg);
		}

		#endregion

		#region Events

		public event KeyEventHandler KeyUp;
		protected virtual void OnKeyUp(KeyEventArgs e) {
			if (this.KeyUp != null) this.KeyUp(this, e);
		}

		public event KeyEventHandler KeyDown;
		protected virtual void OnKeyDown(KeyEventArgs e) {
			if (this.KeyDown != null) this.KeyDown(this, e);
		}

		public event KeyPressEventHandler KeyPress;
		protected virtual void OnKeyPress(KeyPressEventArgs e) {
			if (this.KeyPress != null) this.KeyPress(this, e);
		}

		#endregion
	}
}

I wrote a crude ZSoft PCX loader (only handles 8-bit per plane, single-plane images, which is sufficient for Quake 2).

Using this loader I found colormap.pcx, which appears to perform the job of palette and colour map for Quake II.

.wal files now open with the correct palette. I've also copied over most of the BSP loading code, but it needs a good going-over to make it slightly more sane (especially where the hacks for Quake II support have been added).




Loader Change

I've started rewriting the underlying resource loading code to better handle multiple versions of the game.

To help with this I'm writing a WinForms-based resource browser.

(That's the only real Quake-related change visible in the above screenshot: I've written a cinematic (.cin, used in Quake 2) loader.)

To aid loading resources I've added a number of new generic types. For example, the Picture class always represents a 32-bit per pixel ARGB 2D picture. The decoders for various formats will always have access to the resource manager, so they can request palette information if they need it. To simplify matters further, there are some handy interfaces that a specific format class can implement - for example, a class (such as WallTexture for handling .wal files) implementing IPictureLoader will always have a GetPicture() method.

The loader classes are also given attributes specifying which file extensions they handle. (This project uses quite a bit of reflection now.) The only issue I can see with this is files that use the same extension but have different types, such as the range of .lmp files.

In addition, certain single files within the packages have multiple sub-files (for example, the .wad files in Quake). I'm not sure how I'll handle this, but I'm currently thinking of having the .wad loader implement IPackage so you could access files via gfx/somewad.wad/somefileinthewad, but some files don't have names or extensions.




Quake 2 and Emulation

The current design of the Quake project is that there are a bunch of classes in the Data namespace that are used to decode Quake's structures in a fairly brain-dead manner. To do anything useful with it you need to build up your own structures suitable for the way you intend on rendering the level.

The problem comes in when you try to load resources from different versions of Quake. Quake 1 and Quake 2 have quite a few differences. One major one is that every BSP level in Quake contains its own mip textures: you can call a method in the BSP class which returns sane texture coordinates, as it can inspect the texture dimensions inside itself. Quake 2 stores all of its textures externally in .wal resources, so the BSP class can no longer calculate texture coordinates - it has no way of knowing how large the textures are, as it can't see outside itself.

I guess the only sane way to work this out is to hide the native types from the end user and wrap everything up, but I've never liked this much as you might neglect to wrap up something that someone else would find very important, or you do something that is unsuitable for the way they really wanted to work.

Anyhow. I've hacked around the BSP loader to within an inch of its life and it seems to be (sort of) loading Quake 2 levels for brute-force rendering. Quake 2 boasts truecolour lightmaps, improving the image quality quite significantly!

The truecolour lightmaps show off the Strogg disco lighting to its best effect. One of the problems with the Quake II BSP file format is that the indexing of lumps inside the file has changed. Not good.

That's a bit better. [smile] Quake II's lightmaps tend to stick to the red/brown/yellow end of the spectrum, but that is a truecolour set of lightmaps in action!

The lightmaps tend to look a bit grubby where they don't line up between faces. Some trick to join all lightmaps for a plane together into a single texture should do the trick, and reduce the overhead of having to load thousands of tiny textures (which I'm guessing have to be scaled up to a power-of-two). I'll have to look into it.

On to .wal (wall texture) loading - and I can't find a palette anywhere inside the Quake II pack files. I did find a .act (Photoshop palette) that claimed to be for Quake II, but it doesn't quite seem to match. It's probably made up of the right colours, but not in the right order.

Fortunately I have some PAK files with replacement JPEG textures inside them and can load those instead for the moment.

The brightness looks strange due to the bad way I apply the lightmaps - some kludgy forced two-pass affair with alpha blending modes set to something that sort of adds the two textures together in a not-very-convincing manner.

Can anyone recommend a good introduction to shaders for XNA? I'm not really trying to do anything that exciting.

This is a really bad and vague overview of the emulation technique I use in Cogwheel, so I apologise in advance. Emulation itself is very simple when done in the following manner - all you really need is a half-decent knowledge of how the computer you're emulating works at the assembly level. The following is rather Z80-specific.

At the heart of the system is its CPU. This device reads instructions from memory and depending on the value it reads it performs a variety of different actions. It has a small amount of memory inside itself which it uses to store its registers, variables used during execution. For example, the PC register is used as a pointer to the next instruction to fetch and execute from memory, and the SP register points at the top of the stack.

It can interact with the rest of the system in three main ways:

- Read/write memory
- Input/output hardware
- Interrupt requests

I assume you're familiar with memory. [smile] The hardware I refer to are peripheral devices such as video display processors, keypads, sound generators and so on. Data is written to and read from these devices on request. What the hardware device does with that data is up to it. I'll ignore interrupt requests for the moment.

The CPU at an electronic level communicates with memory and hardware using two buses and a handful of control pins. The two buses are the address bus and data bus. The address bus is read-only (when viewed from outside the CPU) and is used to specify a memory address or a hardware port number. It is 16 bits wide, meaning that 64KB of memory can be addressed. Due to the design, only the lower 8 bits are normally used for hardware addressing, giving you up to 256 different hardware devices.

The data bus is 8-bits wide (making the Z80 an "8-bit" CPU). It can be read from or written to, depending on the current instruction.

The exact function of these buses - whether you're addressing memory or a hardware device, or whether you're reading or writing - is relayed to the external hardware via some control pins on the CPU itself. The emulator author doesn't really need to emulate these. Rather, we can do something like this:

class CpuEmulator {

	public virtual void WriteMemory(ushort address, byte value) {
		// Write to memory.
	}

	public virtual byte ReadMemory(ushort address) {
		// Read from memory.
		return 0x00;
	}

	public virtual void WriteHardware(ushort address, byte value) {
		// Write to hardware.
	}

	public virtual byte ReadHardware(ushort address) {
		// Read from hardware.
		return 0x00;
	}
}

A computer with a fixed 64KB RAM, keyboard on hardware port 0 and console (for text output) on port 1 might look like this:

class SomeBadComputer : CpuEmulator {

    private byte[] AllMemory = new byte[64 * 1024];

    public override void WriteMemory(ushort address, byte value) {
        AllMemory[address] = value;
    }

    public override byte ReadMemory(ushort address) {
        return AllMemory[address];
    }

    public override void WriteHardware(ushort address, byte value) {
        switch (address & 0xFF) {
            case 1:
                Console.Write((char)value);
                break;
        }
    }

    public override byte ReadHardware(ushort address) {
        switch (address & 0xFF) {
            case 0:
                return (byte)Console.ReadKey(true).KeyChar;
        }
        return 0x00;
    }
}


This is all very well, but how does the CPU actually do anything worthwhile?

It needs to read instructions from memory, decode them, and act on them. Suppose our CPU had two registers - a 16-bit PC (program counter) and an 8-bit A (accumulator) - and this instruction set:

Opcode  Instruction   Meaning                              Cycles
0x00    LD A, n       Load the literal byte n into A.      8
0x01    OUT (n), A    Write A to hardware port n.          8
0x02    IN A, (n)     Read hardware port n into A.         16
0x03    LD A, (nn)    Load the byte at address nn into A.  16
0x04    LD (nn), A    Write A to the byte at address nn.   24
0x05    JP nn         Jump to address nn.                  24
Other   NOP           Do nothing.                          4

Extending the above CpuEmulator class, we could get something like this:

partial class CpuEmulator {

    public ushort RegPC = 0;
    public byte RegA = 0;

    private int CyclesPending = 0;

    public void FetchExecute() {
        switch (ReadMemory(RegPC++)) {
            case 0x00: // LD A, n
                RegA = ReadMemory(RegPC++);
                CyclesPending += 8;
                break;
            case 0x01: // OUT (n), A
                WriteHardware(ReadMemory(RegPC++), RegA);
                CyclesPending += 8;
                break;
            case 0x02: // IN A, (n)
                RegA = ReadHardware(ReadMemory(RegPC++));
                CyclesPending += 16;
                break;
            case 0x03: // LD A, (nn)
                RegA = ReadMemory((ushort)(ReadMemory(RegPC++) + ReadMemory(RegPC++) * 256));
                CyclesPending += 16;
                break;
            case 0x04: // LD (nn), A
                WriteMemory((ushort)(ReadMemory(RegPC++) + ReadMemory(RegPC++) * 256), RegA);
                CyclesPending += 24;
                break;
            case 0x05: // JP nn
                RegPC = (ushort)(ReadMemory(RegPC++) + ReadMemory(RegPC++) * 256);
                CyclesPending += 24;
                break;
            default:   // NOP
                CyclesPending += 4;
                break;
        }
    }
}


The CyclesPending variable is used for timing. Instructions take a variable length of time to run (depending on complexity, length of opcode, whether it needs to access memory and so on). This time is typically measured in the number of clock cycles taken for the CPU to execute the instruction.

Using the above CyclesPending += x style one can write a function that will execute a particular number of cycles:

partial class CpuEmulator {

    public void Tick(int cycles) {
        CyclesPending -= cycles;
        while (CyclesPending < 0) FetchExecute();
    }
}


For some truly terrifying code, see an oldish version of Cogwheel's instruction decoding switch block. That code was automatically generated from a text file; I didn't hand-type it all.

Um... that's pretty much all there is. The rest is reading datasheets! Your CPU would need to execute most (if not all) instructions correctly, updating its internal state (and registers) as the hardware would. The non-CPU hardware (video processor, sound processor, controllers and so on) would also need to conform to data reads and writes correctly.

As far as timing goes, various bits of hardware need to run at their own pace. One scanline (of the video processor) is a good value for the Master System. Cogwheel provides this method to run the emulator for a single frame:

public void RunFrame() {
    this.VDP.RunFramePending = false;
    while (!this.VDP.RunFramePending) {
        // Run the CPU and other hardware for one scanline at a time.
    }
}

In the Master System's case, one scanline is displayed every 228 clock cycles. Some programs update the VDP on every scanline (eg changing the background horizontal scroll offset to skew the image in a driving game).
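Some back-of-the-envelope arithmetic shows how this scanline timing hangs together. The 262 scanlines per frame and 60 frames per second are my assumptions (standard NTSC figures), not from the post itself:

```csharp
// Scanline-based timing arithmetic, assuming NTSC: 262 scanlines per frame
// at roughly 60 frames per second (both figures are assumptions).
const int CyclesPerScanline = 228;
const int ScanlinesPerFrame = 262;
const int FramesPerSecond = 60;

int cyclesPerFrame = CyclesPerScanline * ScanlinesPerFrame;
int cyclesPerSecond = cyclesPerFrame * FramesPerSecond;

System.Console.WriteLine($"{cyclesPerFrame} cycles per frame, roughly {cyclesPerSecond} Hz");
```

That works out to 59,736 cycles per frame and roughly 3.58 million cycles per second, which lands satisfyingly close to the Master System Z80's actual clock speed.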

The above is embarrassingly vague, so if anyone is interested enough to want clarification on anything I'd be happy to give it.





The unofficial Quake specs are a bit confusing on this matter. To get the size of the texture, find the bounding rectangle for the face (using the horizontal and vertical vectors to convert the 3D vertices to 2D in the same way as it's done for the texture coordinates). Then divide by 16 and add one, like this:

Width = (Ceiling(Max.X / 16) - Floor(Min.X / 16)) + 1
Height = (Ceiling(Max.Y / 16) - Floor(Min.Y / 16)) + 1
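The formula above translates directly into a small helper. This is a sketch based on my reading of the unofficial specs; Min and Max are the face's bounds in 2D texture space:

```csharp
using System;

// Lightmap extents from a face's 2D texture-space bounding rectangle,
// following the divide-by-16 formula from the unofficial Quake specs.
static (int Width, int Height) LightmapSize(float minX, float minY, float maxX, float maxY) {
    int width  = (int)(Math.Ceiling(maxX / 16) - Math.Floor(minX / 16)) + 1;
    int height = (int)(Math.Ceiling(maxY / 16) - Math.Floor(minY / 16)) + 1;
    return (width, height);
}
```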

It'd certainly work, but it'd probably look a bit odd (the mixing and matching, that is - emulating the appearance of the software renderer on its own is very cool). [smile]

Quake II appears to contain a lump that maps 16-bit colour values to values in the palette. I don't know what that was used for, but you could probably use something similar to convert the truecolour textures to 8-bit textures.

The rest of this journal post goes off on a bit of a historical tangent.

Before Quake was released, ID released a deathmatch test program, QTEST. This featured the basic engine, three deathmatch levels and not a whole lot else.

However, the PAK files contained some rather interesting files, including a variety of models - some of which were later dropped from Quake entirely!

The model version is type 3, and I've made a guess at the differences between type 3 and type 6 (6 is the version of the models in retail Quake). Type 6 has 8 more bytes after the frame count in the model header. I skip the "sync type" and "flags" fields, as I don't know what these do anyway. [rolleyes] Type 3 files don't have a 16 byte frame name, either (between the frame bounding box information and vertices in type 6).
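Given that guess, a loader can cope with both versions by skipping the extra fields. A minimal sketch, assuming the difference really is just those two 4-byte fields after the frame count (the method name is mine):

```csharp
using System.IO;

// Read the frame count from an MDL header, skipping the two extra 4-byte
// fields ("sync type" and "flags") that version 6 has and version 3 lacks.
// This reflects my guess at the format difference, not a documented spec.
static int ReadFrameCount(BinaryReader reader, int version) {
    int frameCount = reader.ReadInt32();
    if (version == 6) {
        reader.ReadInt32(); // Sync type - purpose unknown, skipped.
        reader.ReadInt32(); // Flags - purpose unknown, skipped.
    }
    return frameCount;
}
```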

The most impressive extra model is progs/dragon.mdl. It appears in this early screenshot:

One character which changed design considerably was progs/shalrath.mdl.

Appearing in registered Quake, it became the following (QPed seems to mangle the palette, sorry):

The charmingly-named progs/vomitus.mdl is likewise untextured:

The frame data appears to be corrupted here, so I don't think my model loader is working properly. However, you can still get the rough idea of what progs/serpent.mdl might have looked like:

The fish model is quite different, but like the above its last few frames appear corrupted.

A number of textures changed in the final release. Some model textures, such as the grenade and nail textures, were originally much larger.

The Ogre before and after his makeover

The original 'gib' textures, eventually unidentifiable meat, were also toned down quite a lot from the rather more graphic ones in the QTEST PAKs.

The screenshots (taken directly from QTEST - I can't load those BSPs myself) show a few other changes - billboarded sprites for torches and flames were replaced with 3D models, the teleporter texture was changed and the particle effects for explosions were beefed up with billboarded sprites.




Let There Be Lightmaps

Heh, you might like the Killer Quake Pack. [grin]

I've added some primitive parsing and have now loaded the string table, instructions, functions and definitions from progs.dat but can't do a lot with them until I work out what the instructions are. Quake offers some predefined functions as well (quite a lot of them) so that'll require quite a lot of porting.

I haven't looked at the data structures so could be making this up entirely: Quake does lighting using a colour map (a 2D structure with colour on one axis and brightness on the other). I'm assuming, therefore, that for the fullbright colours they map to the same colour for all brightnesses, rather than fade to black.
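If that guess is right, the lookup itself would be trivial. A sketch under the same assumption (which, as noted above, may be entirely made up): the map is stored as one row of 256 palette indices per brightness level.

```csharp
// Colour map lookup: pick the palette index for a colour at a given
// brightness level. Assumes row-per-brightness layout; for fullbright
// colours, every row would hold the same index.
static byte ShadeColour(byte[] colourMap, byte colourIndex, int brightness) {
    return colourMap[brightness * 256 + colourIndex];
}
```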

How could you simulate that? I guess that the quickest and dirtiest method would be to load each texture twice, once with the standard palette and once with the palette adjusted for the darkest brightness and use that as a luma texture. I believe Scet did some interesting and clever things with pixel shaders for DOOM, but that would end up playing merry Hell with 24-bit truecolour external textures.

Aye, it's fun. [smile]

I think I've cracked those blasted lightmaps.

The lightmaps are small greyscale textures applied to faces to provide high-quality lighting effects with a very small performance overhead. Most of the visible faces have a lightmap.

They are stored in the BSP file. Extracting them has been a little awkward, not helped by a very stupid mistake I made.

Each face has a pointer to its lightmap. To get a meaningful texture out of the BSP we also need to know its width and height, which are based on the bounds of the face's vertices.

However, a lightmap is a 2D texture, and a face occupies three dimensional space. We need to scrap an axis!

Each face is associated with a plane. Each plane has a value which indicates which axis its normal most closely aligns with. I could use this property to pick the component to discard!

This didn't work too well. Most of the textures were scrambled, and most of them were the same width. This should have rung warning bells, but I ignored this and moved on to other things. The problem was that each face (made up of edges) specifies which of the level's global list of edges comes first, and how many edges it uses (edges are stored consecutively).

// My code looked a bit like this:
for (int i = 0; i < Face.EdgeCount; ++i) {
    Edge = level.Edges[i];
}

// It should have looked like this:
for (int i = 0; i < Face.EdgeCount; ++i) {
    Edge = level.Edges[i + Face.FirstEdge];
}

With that all in position, sane lightmap textures appear as if by magic!

The textures aren't really orientated very well. Some are mirrored, some are rotated - and the textures of some are still clearly the wrong width and height. This 3D-to-2D conversion isn't working very well.

Each face references some texture information, including two vectors denoting the horizontal and vertical axes for aligning the texture. This information can surely also be used to align the lightmaps correctly (where 2D.X = Vector3.Dot(3D, Horizontal) and 2D.Y = Vector3.Dot(3D, Vertical))?
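That dot-product projection is simple enough to write as a function. A sketch using System.Numerics.Vector3 (the post's own vector type may well differ):

```csharp
using System.Numerics;

// Project a 3D point into 2D texture space by measuring it along the
// face's horizontal and vertical texture axes.
static (float U, float V) ProjectToTexturePlane(Vector3 point, Vector3 horizontal, Vector3 vertical) {
    return (Vector3.Dot(point, horizontal), Vector3.Dot(point, vertical));
}
```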

I now draw these textures after the main textures and before the luma textures.

Problem: There are typically > 4000 lightmap textures. Rendering all of the lightmaps drops the > 200FPS framerate down to about 35FPS. This isn't great!

Coupling this problem with the other problem that drawing non-world BSP objects (such as ammo crates) isn't very practical at the moment gives me a good excuse to write a new renderer.

Quake uses a BSP tree to speed up rendering. Each leaf has a compressed visibility list attached to it, indicating which other leaves are visible from that one. Each leaf also contains information about which faces are inside it, and so by working out which leaf the camera is in you can easily get a list of which faces are available.

/// <summary>Gets the leaf that contains a particular position.</summary>
/// <param name="position">The position to find a leaf for.</param>
/// <returns>The containing leaf.</returns>
public Node.Leaf GetLeafFromPosition(Vector3 position) {
    // Start from the model's root node:
    Node SearchCamera = this.RootNode;
    for (; ; ) {
        // Are we in front of or behind the partition plane?
        if (Vector3.Dot(SearchCamera.Partition.Normal, position) > SearchCamera.Partition.D) {
            // We're in front of the partition plane.
            if (SearchCamera.FrontLeaf != null) {
                return SearchCamera.FrontLeaf;
            } else {
                SearchCamera = SearchCamera.FrontNode;
            }
        } else {
            // We're behind the partition plane.
            if (SearchCamera.BackLeaf != null) {
                return SearchCamera.BackLeaf;
            } else {
                SearchCamera = SearchCamera.BackNode;
            }
        }
    }
}

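The compressed visibility lists themselves are run-length encoded: non-zero bytes are literal visibility flags (one bit per leaf), and a zero byte is followed by a count of how many zero bytes it replaces. A decompression sketch (names are mine), where rowLength is the number of bytes needed for one leaf's visibility row:

```csharp
// Decompress one leaf's run-length encoded visibility row from the BSP's
// visibility lump. A zero byte means "a run of zeroes follows"; its
// successor gives the run length.
static byte[] DecompressVisibility(byte[] compressed, int offset, int rowLength) {
    byte[] row = new byte[rowLength];
    int outPos = 0;
    int inPos = offset;
    while (outPos < rowLength) {
        byte b = compressed[inPos++];
        if (b != 0) {
            row[outPos++] = b;             // Literal byte: eight leaves' visibility flags.
        } else {
            outPos += compressed[inPos++]; // A run of zero (all-invisible) bytes.
        }
    }
    return row;
}
```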
The following three screenshots show a wireframe view of going around a sharp corner using the visibility list information from the level's BSP file.

Another fix in the new renderer is the correction of faces that don't have a lightmap texture. Some faces - such as those which are completely dark, or the faces used for the sky texture - don't have such information, and are currently rendered at full brightness.

Before and after the fix

If the renderer encounters a lightmap-less face, it enables lighting, sets the ambient light to the face's brightness level, draws the face, then disables lighting again. As you can see from the screenshots this looks a lot better. [smile]

The new renderer not only renders the levels using the BSP tree - it also breaks them down into individual models. A level file contains multiple models. The first model is the main level architecture. Other models form parts of the level that can be moved.

Models 0, 1 and 2 of The Slipgate Complex

Having multiple BSP renderer objects means that I can now render the ammo boxes and health packs.


I'm not sure what advantage there is to using BSP models instead of Alias models for these items.

Place of Two Deaths has a Place of Two Medikits




High-Resolution/Luma Textures and Monsters

Ah, makes sense! According to the Unofficial Quake Specs it's p-code, which at least makes parsing easier. Working out which opcodes do what will (I assume) require a perusal of the Quake/QuakeC compiler source.

One quick and dirty way to (possibly) improve Quake's elderly graphics is to use a modern texture pack, which provides high-resolution reinterpretations of Quake's own offerings.

Personally, I'm not too keen on these packs - high-resolution textures on low-poly structures emphasise their sharp edges, and texture alignment issues - Quake's textures are aligned pretty badly in places - are made even more obvious. In the same vein, low-resolution textures look very bad to me when magnified using a smoothing filter - I'd much rather see chunky pixels.

Anyway, adding support for them is very simple. When loading textures from the BSP, I simply check to see if a file textures/texturename.tga is available, and if so load that instead.

The Slipgate Complex with the original and high-resolution textures.

One advantage of these high-resolution texture packs is their luma variations. These are available for certain textures that need to 'glow', such as lights, computer displays or runes. They are mostly black textures, with only the part that lights up drawn.

I draw the world geometry twice. The first time I draw it normally. The second time I disable lighting and fog, enable alpha blending, use an additive blend function and the luma textures instead of the regular textures.
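Per colour channel, that additive blend function works out to a saturating add of the luma texel onto the already-lit base texel, which the graphics hardware does for free:

```csharp
using System;

// What additive blending computes for each colour channel:
// result = min(base + luma, 255), i.e. a saturating add.
static byte AddSaturated(byte baseChannel, byte lumaChannel) {
    return (byte)Math.Min(baseChannel + lumaChannel, 255);
}
```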

The Abandoned Base before and after the addition of luma textures.

Quake's maps have limited support for animated textures. If the texture is named +0xyz (where xyz is the descriptive name of the texture) then chances are there's a +1xyz, +2xyz (and so on) texture to go with it. Once a level's textures have been loaded, I go through looking for any names starting with +0. From this I can search for the indices of the other frames in each sequence.

Texture2D OldFirstWall = this.WallTextures[ATD.TextureIds[0]];
Texture2D OldFirstLuma = this.LumaTextures[ATD.TextureIds[0]];
for (int i = 0; i < ATD.TextureIds.Length - 1; ++i) {
    this.WallTextures[ATD.TextureIds[i]] = this.WallTextures[ATD.TextureIds[i + 1]];
    this.LumaTextures[ATD.TextureIds[i]] = this.LumaTextures[ATD.TextureIds[i + 1]];
}
this.WallTextures[ATD.TextureIds[ATD.TextureIds.Length - 1]] = OldFirstWall;
this.LumaTextures[ATD.TextureIds[ATD.TextureIds.Length - 1]] = OldFirstLuma;

Once I've collected up the indices of each animated texture's frames, I simply rotate them using the code above.
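The "+0" search could look something like this. A sketch, assuming textureNames holds each loaded texture's name in index order (the helper's name and shape are my own, not from my actual loader):

```csharp
using System.Collections.Generic;

// Given the index of a "+0" texture, find the indices of every frame in
// its animation sequence by looking for "+1", "+2" and so on with the
// same base name.
static List<int> FindAnimationFrames(IList<string> textureNames, int firstFrameIndex) {
    string baseName = textureNames[firstFrameIndex].Substring(2); // Strip the "+0".
    var frames = new List<int>();
    for (int frame = 0; ; ++frame) {
        int index = -1;
        for (int i = 0; i < textureNames.Count; ++i) {
            if (textureNames[i] == "+" + frame + baseName) { index = i; break; }
        }
        if (index == -1) break; // No more frames in the sequence.
        frames.Add(index);
    }
    return frames;
}
```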

The levels do look rather lonely without the various monsters patrolling them. This can be remedied by parsing the entities lump. In the true brute-force tradition, the models are loaded once (at startup), but to render them I just loop through the entire entities block (which contains many irrelevant entities) hunting for types that reference a model; if one is found, I pull out its angle and offset and render the model at the specified position.

Most of the items spread out throughout the level - powerups, weapons and armour - are represented by Alias models, so adding support for those is easy enough:

Some of the other items - such as ammo crates - are represented by BSP models, so are currently not supported.


