
The Mad World Of Me

More Hardware

Posted 19 August 2006

I haven't really progressed much since my last post due to lack of time. However, I have done some stuff: I've hooked up a composite video connector so I can display things on my TV, connected a SNES pad I got off eBay (I decided I didn't need the PS/2 connector for a keyboard; I may add one at a later date), and my sample order arrived from Microchip so I have a 256K EEPROM as well.

Here's a labeled picture of what the setup is now:



What I need to do now is actually write a game for the thing [grin]. What I currently have running is a (highly unimpressive) test program that just draws a rectangle on the TV which you can move around with the SNES pad. It's horribly flickery, but at least it works. Here's a picture of the output on my TV:



Here's a little test program that comes with the Propeller dev environment:



The SNES Pad

The SNES pad is a very simple device to interface with: it basically consists of two 4021 shift registers (the 4021 datasheet is here). A shift register is a very simple device that holds a load of binary digits, and on each clock pulse every bit shifts along by one (you can look at it like this: if the contents of the register were a variable x, every clock pulse would do x = x << 1). The 4021 has a parallel load and a serial input. The parallel load allows you to set every bit in the shift register at once; the serial input is read when everything shifts along and is used as the new bit 0 (when you shift everything along by one there's nothing to the right of bit 0, so it's read from the serial input instead). The 4021 also has 3 parallel outputs which just output the current state of bits 7, 6 and 5. In the SNES pad the bit 7 parallel output of one 4021 is connected to the serial input of the other, so the pair acts like a single 16-bit shift register.
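
Here's a rough sketch in C of that behaviour, purely as an illustration of the mechanics described above (not the actual hardware):

#include <stdio.h>

//Simulate one clock pulse of an 8-bit shift register with
//serial input: every bit moves along by one and the serial
//input becomes the new bit 0
unsigned char Clock(unsigned char reg, unsigned char serialIn)
{
    return (unsigned char)((reg << 1) | (serialIn & 1));
}

int main(void)
{
    //A parallel load sets the whole register at once
    unsigned char reg = 0xB1;
    int i;

    //Eight clock pulses shift the original contents out
    //of the top bit one at a time
    for(i = 0; i < 8; ++i)
    {
        printf("bit: %d\n", (reg >> 7) & 1);
        reg = Clock(reg, 1);
    }
    return 0;
}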

Every button in the SNES pad is just a simple switch, and each button is connected directly to a parallel load input of a shift register. So to read the current button state you pull the parallel load line of the pad high, then read the first bit from the output line, pulse the clock line, read the next bit from the output line, and so on. This gives you a 16-bit number where 1 means the button is up and 0 means the button is down. There are only 12 buttons on the pad, so the last 4 bits are always high.
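
For reference, the buttons come out in a fixed order, so a little lookup table can turn bit positions into button names. Something like this C sketch (the ordering below is the usual SNES one, with bit 0 being the first bit read):

#include <stdio.h>

//Button names in the order the pad shifts them out
//(bit 0 is the first bit read from the output line)
static const char *ButtonNames[12] =
{
    "B", "Y", "Select", "Start",
    "Up", "Down", "Left", "Right",
    "A", "X", "L", "R"
};

//state is the 16-bit number read from the pad;
//a 0 bit means the button is down
void PrintButtonsDown(unsigned short state)
{
    int i;
    for(i = 0; i < 12; ++i)
    {
        if(!(state & (1u << i)))
            printf("%s ", ButtonNames[i]);
    }
    printf("\n");
}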

I decided that as a little side-project I'd build something to connect the SNES pad to my PC. This is really rather simple; I just programmed a PIC to read the state of the pad and send it over the serial link when it receives a lowercase 's'. Here's a picture of the hardware setup:



Here's the complete C code for the PIC program:


#include <p18f1320.h>
#include <stdio.h>
#include <usart.h>

#pragma config WDT=OFF, OSC=INTIO2

unsigned char Read(void)
{
    int i;
    int f;
    unsigned char ret;

    //A4 is the parallel load line
    //A1 is the clock line
    //A0 is the output line

    //Set the parallel load and clock lines as outputs
    //and the pad output line as an input
    TRISAbits.TRISA4 = 0;
    TRISAbits.TRISA1 = 0;
    TRISAbits.TRISA0 = 1;

    //Pulse parallel load to load the button state
    //into the shift registers
    PORTAbits.RA4 = 1;
    PORTAbits.RA4 = 0;
    //Set the clock low
    PORTAbits.RA1 = 0;

    for(f = 0; f < 2; ++f)
    {
        ret = 0;

        for(i = 0; i < 8; ++i)
        {
            unsigned char b = 0;

            //Read a bit from the output line
            b = PORTAbits.RA0;
            //Pulse the clock to shift the next bit along
            PORTAbits.RA1 = 1;
            PORTAbits.RA1 = 0;

            ret |= (unsigned char)(b << i);
        }

        //C18's stdout defaults to the USART, so this sends
        //the byte over the serial connection
        printf("%c", ret);
    }

    return ret;
}

void main(void)
{
    //Run the internal oscillator at 8MHz
    OSCCON = 0x73;

    //Note: RA0 is shared with the ADC and defaults to analogue,
    //so depending on your setup you may need to configure ADCON1
    //for it to read as a digital input

    //Set up serial comms (19200 baud with the 8MHz clock)
    OpenUSART( USART_TX_INT_OFF &
               USART_RX_INT_OFF &
               USART_ASYNCH_MODE &
               USART_EIGHT_BIT &
               USART_CONT_RX &
               USART_BRGH_HIGH,
               25 );

    while(1)
    {
        //If we receive an 's', read the pad and send the
        //two bytes of button state back
        if(DataRdyUSART() && getcUSART() == 's')
        {
            Read();
        }
    }
}



I then wrote a little program which sends an 's' to COM1 and reads back the returned 2-byte number, then prints the names of the buttons that are currently down. Here's a screenshot of it in action:



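In case it's useful to anyone, a minimal sketch of that kind of program using the Win32 serial API might look something like this (COM1, the 19200 baud rate and the byte order are assumptions matching the PIC code above):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DCB dcb = {0};
    COMMTIMEOUTS timeouts = {0};
    unsigned char buf[2];
    DWORD written, read;

    //Open the serial port
    HANDLE port = CreateFileA("COM1", GENERIC_READ | GENERIC_WRITE,
                              0, NULL, OPEN_EXISTING, 0, NULL);
    if(port == INVALID_HANDLE_VALUE)
        return 1;

    //19200 baud, 8 data bits, no parity, 1 stop bit
    dcb.DCBlength = sizeof(dcb);
    GetCommState(port, &dcb);
    dcb.BaudRate = CBR_19200;
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    SetCommState(port, &dcb);

    //Give ReadFile a timeout so it doesn't block forever
    timeouts.ReadTotalTimeoutConstant = 1000;
    SetCommTimeouts(port, &timeouts);

    //Send an 's' to ask the PIC for the button state
    WriteFile(port, "s", 1, &written, NULL);

    //Read back the two bytes of button state; the first byte
    //holds the low 8 bits
    if(ReadFile(port, buf, 2, &read, NULL) && read == 2)
    {
        unsigned short state = (unsigned short)(buf[0] | (buf[1] << 8));
        printf("Pad state: 0x%04X\n", state);
    }

    CloseHandle(port);
    return 0;
}
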
The next step is to write a driver so I can use the pad as a gamepad in games. I'm not entirely sure how I'm gonna go about that though; writing a driver for Windows is probably a fair amount of work (I've never written any drivers for Windows, though I have dabbled with drivers in Linux, where it would probably be pretty easy, so perhaps it might be easy in Windows too). It would be nice if there was a way I could get it to work as a generic game pad without needing to write a full-blown driver (if anyone knows how, please tell me).


Hardware

Posted 21 July 2006

So I've decided to build my own console, of a sort. Originally I was going to buy a Spartan-3E starter kit and do something with an FPGA; however, Xilinx won't sell me one due to EU regulations about hazardous materials in electronics. So I've gone and bought a Propeller chip instead. It's an 8-core chip that can run at 80 MHz, which should be fun to code for [grin].

Here's the circuit I've come up with. It's got stereo sound output, composite video output, a PS/2 connector (for connecting a keyboard) and an EEPROM for program storage, and it's connected to the computer via serial. It's pretty much ended up being like the Propeller demo board, actually.



I ordered all the various bits and they arrived a couple of days ago. Being an idiot, I ordered a male rather than a female serial connector, so currently I have to use the serial cable I had for use with a PICAXE chip, which would be fine except that it lacks a reset line. So I've had to set up a reset switch that I have to hit at exactly the right time when downloading a program to the Propeller, which is a bit of a pain, but it's doable.

Here are a few pictures of what I've currently got set up:





Currently it doesn't do all that much; I've got it to talk to the computer and blink an LED. Next I've got to solder up a composite video connector and see if I can get something on my TV. It's all a bit of a mess of wires and breadboards atm, but it works [grin].


D3D 10

Posted 18 December 2005

Well, the December 2005 update to the DX SDK came out a few days ago, and as you're probably aware it came with a preview of D3D 10, which will run on Vista using the reference device (no actual hardware is available yet, unfortunately). D3D 10 is quite a big change from D3D 9; the entire API has been revamped. Here's an image of the pipeline in D3D 10 taken from the docs (note that the fixed function pipeline has been eliminated in D3D 10).



The rounded boxes indicate programmable stages. IMO the most notable things on this diagram are the geometry shader, the stream output stage and the memory resources block. Let's examine these in turn.

The memory resources block

In D3D 10 memory has become unified: you basically have two resource types, buffers and textures (there are several different types of texture), and these resources can be accessed by any shader. Buffers are generally used for vertex and index data. There is also a new concept called constant buffers. These are, as the name suggests, buffers of constant data, and they replace the constant registers used by D3D 9 shaders. An interesting addition to textures is texture arrays: in a shader you can access an array of textures indexed by an integer (the textures in an array must be homogeneous, that is they share the same format, resolution and number of mip levels). Resources are accessed by the hardware through views. A view describes the way in which the hardware sees the resource, and it is the resource view that you bind to the pipeline.
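
As a quick illustration of how these look in HLSL (the names here are made up for the example):

//A constant buffer replaces the loose constant registers
//of D3D9; the whole buffer is bound to the pipeline at once
cbuffer PerFrame
{
    float4x4 WorldViewProj;
    float4   LightDir;
};

//A texture array: every slice shares the same format,
//resolution and mip count, and you index it with an integer
Texture2DArray DiffuseMaps;
SamplerState   Samp;

float4 SampleSlice(float2 uv, int slice)
{
    //The third texture coordinate selects the array slice
    return DiffuseMaps.Sample(Samp, float3(uv, slice));
}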

The geometry shader

A geometry shader works on per-primitive data instead of the per-vertex data of a vertex shader. It receives the data for a full primitive in its inputs and can then append vertex data to a stream which is output. Thus a geometry shader can create new geometry, or block a primitive from being rasterized by simply not outputting it.
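
A sketch of what that looks like in HLSL (GSInput here is a hypothetical vertex structure):

struct GSInput
{
    float4 Pos : SV_Position;
};

//A geometry shader receives a whole primitive (here a
//triangle) and appends vertices to an output stream;
//appending nothing would cull the primitive entirely
[maxvertexcount(3)]
void GS(triangle GSInput input[3], inout TriangleStream<GSInput> stream)
{
    int i;
    //Pass the triangle through unchanged
    for(i = 0; i < 3; ++i)
        stream.Append(input[i]);
    stream.RestartStrip();
}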

The stream output stage

This allows you to write data from a geometry shader back into a buffer resource without rasterizing it. You can then read this data back into the pipeline in another rendering pass, or use what is known as a staging resource to read it back to the CPU. This has applications in things like GPU particle systems and physics: you can update the system using a vertex and geometry shader in one pass, writing the resulting data back out to a buffer via the stream output stage, and then render from that buffer in the next pass.

Another notable thing not indicated on the diagram is that there is now a unified shader core and no instruction length limit in shaders; also, shaders are now written in HLSL only. The unified shader core means all shaders can perform the same operations, though there are certain instructions that only make sense in a particular stage (e.g. operations on a geometry stream in a geometry shader, or discarding a pixel in the pixel shader stage). Shader semantics have also changed. Instead of having a fixed set of semantics, you can now name them whatever you want (the names are given in the input layout for a buffer, the D3D 10 equivalent of vertex declarations). There is a set of semantics known as system values that have a special meaning. Some of these give per-vertex and per-primitive identifiers to a vertex or geometry shader; these are just sequential numbers generated at the input assembler stage (i.e. the first vertex is numbered 0, the second 1, and primitives are numbered in a similar way). Others allow you to set the index of the render target something is rendered to, and others are the equivalent of semantics in D3D 9 (the SV_Position semantic is the equivalent of POSITION and the SV_Target semantic is the equivalent of COLOR). Shaders also now have an integer data type and allow bitwise operations on integers.
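
For example, in HLSL (the non-SV semantic name is arbitrary and would be declared in the input layout):

struct VSInput
{
    float3 Pos    : Position;    //Name chosen in the input layout
    uint   Vertex : SV_VertexID; //Sequential ID generated by the
                                 //input assembler
};

struct VSOutput
{
    float4 Pos : SV_Position;    //The equivalent of POSITION in D3D9
};

VSOutput VS(VSInput input)
{
    VSOutput output;
    output.Pos = float4(input.Pos, 1);
    return output;
}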

Textures are handled differently in D3D 10 shaders compared to D3D 9. A sampler is now a separate object that is not bound to any particular texture; the sampler object describes things like how to filter the texture. You then sample a texture by giving texture coordinates and a sampler object. The syntax has also changed: textures are now treated as templated objects, which works like this:

//A texture is declared as a templated object and sampled
//through a separate sampler state object
Texture2D<float4> tex;
SamplerState samp;
...

tex.Sample(samp, TexCoord);

There's still a lot more to D3D 10, but hopefully the above has given you a taste of the new things found in it. If you want to learn more I refer you to the D3D 10 docs, and these two articles written by Jack Hoxley (jollyjeffers): Beginning Direct3D 10 programming and An Overview of Microsoft's Direct3D 10 API.


Ramblings on programming

Posted 19 November 2005

Recently I've been thinking about visual programming languages, or rather how to get away from doing programming using just a load of plain text files.

Computers are all about abstraction. You can look at a computer from many levels of abstraction: right now I'm typing in OpenOffice, which is an abstraction that allows me to concentrate on composing my document without worrying about the details of how the keyboard works, how to display the document on screen, how to store it in memory, etc. Indeed, if I had to worry about all these things I'd never actually be able to write a document using my computer; it'd just be too complicated. Yesterday I was at a lecture about basic computer architecture, where we looked at a computer at the level where it's a collection of hardware devices, a processor, memory and so on that we control using machine code. Sometimes you have to think of a computer like this (such as when you're writing a compiler, an operating system or a device driver) and sometimes you have to think of it as a glorified typewriter (such as when you're writing a document); thinking about a computer as a glorified typewriter while you write an operating system, on the other hand, just doesn't work. You have to match what you're trying to do to the correct level of abstraction (or indeed to several levels of abstraction).

We use abstraction in programming as well. APIs such as OpenGL and DirectX allow us to specify polygonal data, states and shader programs without needing to worry about, or know about, how the underlying hardware actually works. We create new abstractions using constructs such as classes in C++ (e.g. we might wrap an API like DirectX in a class so we can think about rendering in terms of materials and meshes instead of polygonal data, states and shader programs). Abstraction is a very important concept; without it we just wouldn't be able to use computers (just imagine having to think about the quantum behavior of a single transistor in your CPU when you want to write Pong).

In electronics you have components such as transistors, capacitors, resistors, integrated circuits, etc. All of these are 'black boxes': you know their inputs, their outputs and how they behave, but you don't actually know how they work (or at least you don't have to know in order to use them); they abstract away the details and allow you to concentrate on your circuit. When you're designing a complex piece of hardware you design it at several different levels of abstraction. You'll have an overall block diagram showing how the high-level systems connect to each other, and further block diagrams for each high-level system. At some point you'll have a circuit diagram, and if you're making an integrated circuit you'll eventually have a diagram showing exactly how every transistor is implemented and connected into the entire system. You're designing at several different levels, and which level you're designing at depends upon the task you want to accomplish.

Now if you take a look at the source code of a program, all you get is text. The text defines the entire program, and when you're working on the program you're going to be working with text. I think the process of creating a program should be more like creating a piece of hardware, where there are many levels at which you can look at the system and alter its design. Currently you may well draw UML diagrams etc. to aid in the design, but once that's done you translate your diagrams into textual code (either manually or with some tool) and that's that. I propose that all the diagrams you might draw in the design stage should be coupled far more tightly to the actual program. So you could look at a diagram showing how all the systems in your program are connected together, rearrange it however you want, and that change would automatically be reflected in the actual program.

Going back to electronics: if you draw a block diagram of a system, the blocks may well represent specific ICs you're going to have in your circuit, with lines connecting them to show how they communicate with one another, and these lines directly translate to wires connecting the pins of different ICs together. That is, the abstract design trivially converts to the implementation. Now back to programming. Let's say I'm drawing a diagram of the design of my new super-duper next-gen engine. I may draw a line between the block marked 'scene graph' and the one marked 'renderer' and then write something next to the line such as 'scene graph sends potentially visible objects to graphics queue'; however, this line and sentence do not trivially convert to the implementation. Now imagine that in the block marked 'scene graph' I have a bunch of fields (basically a load of names separated by lines), one of which is 'visible objects', and in the renderer block I have a field marked 'render queue'. I could now draw a line between 'visible objects' and 'render queue' to symbolise that the visible objects are moved to the render queue, and this could be trivially mapped to the implementation, as I'm basically saying "move data from here to here" (OK, it's pretty much the same as just writing a sentence next to the line, but it's far easier for a computer to interpret and for a human to see at a glance what's going on). What you need to do is define the fields correctly, as well as a way to transfer data.

Think of a programming environment where you build programs using blocks like those described above. You could use them recursively; that is, a single block at one level could be defined using many blocks at another level. The blocks would all have inputs and outputs (like the fields mentioned above) that could be connected to represent data transfer between blocks. I wouldn't remove actual textual code; this would be the lowest level at which to implement a block. I would however get rid of text files: the program would be stored in a format appropriate to the way we're representing programs (probably some kind of complex graph), something like the sketch below.
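
Here's a very rough C sketch of how such a block graph might be stored (purely hypothetical, just to make the idea concrete):

//A port is a named input or output on a block,
//e.g. "visible objects" on the scene graph block
typedef struct Port Port;
typedef struct Block Block;
typedef struct Connection Connection;

struct Port
{
    const char *name;
    Block *owner;
};

//A connection represents data flowing from an
//output port to an input port
struct Connection
{
    Port *from;
    Port *to;
};

struct Block
{
    const char *name;   //e.g. "scene graph"
    Port *inputs;
    int numInputs;
    Port *outputs;
    int numOutputs;

    //A block is either defined recursively as a graph of
    //sub-blocks and connections, or at the lowest level
    //implemented directly as textual code
    Block *children;
    int numChildren;
    Connection *connections;
    int numConnections;
};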

This kind of approach is interesting for multi-threaded programming as well. Seeing as the program wouldn't define a strict execution order, a scheduler would be free to run the code for blocks whenever it deemed appropriate, so you could have several blocks executing in parallel. The connections and data transfers between blocks are explicit, so the scheduler could take them into account to avoid concurrency problems.

I've been thinking quite a bit about this over the past few days and I keep coming up with conflicting arguments for ways to do various things, so the above text is probably pretty confused, though hopefully you get the basic idea of what I'm getting at. I may start playing around with actually creating a programming environment like the one described above during the Christmas holidays, to see if I can actually get anywhere with it.

Oh and if you've actually read this entire thing could you leave a comment telling me what you think even if it's just a single word, thanks. [smile]


Hello

Posted 6 October 2005

Well, I haven't written anything in this journal for several months, and seeing as I'm paying for GDNet+ I may as well start using it [grin].

I've just started doing computer science at Cambridge University (Fitzwilliam College), which has been good so far, though I've only had one day of lectures so I don't know how bad the workload is going to get (apparently they really pile it on). Got my matriculation dinner tonight, which should be good fun (formal meal with decent free food and wine).

I've basically got 3 things I'm working on atm. The main one is the HL2 mod Realms of Valhallon. I used to be working on another mod called Valandil; however, that disappeared and various team members from it are now working on RoV. I'll refer you to the site to find out more about it (I don't know what's public and what's not, so I won't talk about it here).

The second thing is what was a 4E4 entry (I say 'was' because it's only in the very early stages and there's absolutely no chance I'd get it into a fit state in time for 4E4). It's going to be a 2D platformer using physics to make a more interactive world. It's also gonna be using pixel shaders and what-not to give some nice graphics. I'm using DX to handle rendering/input/sound etc. and wxWidgets to create an editor (I plan to integrate the game into the editor so you can play it from the editor, which should make certain things easier). There's a very unimpressive shot of what I've got so far below; as you can see, there's a long way to go until I've got an actual game.

The third and final thing I'm working on is an operating system. I'm not actually expecting to produce anything usable or particularly noteworthy (I lack the time and experience to do so); it's just a little toy project, as I'm quite interested in low-level stuff and operating systems. Here's a screenshot of what I've got so far running in BOCHS:



It does pretty much nothing currently; it doesn't even have a keyboard driver yet (those lines to do with the keyboard init are lying), though I have written a bit of code that puts up a load of info when you press a key, showing what you pressed and the state of various keyboard things (the lines you see are the result of me pressing alt-print scrn). Oh, and the 'GBOS' flashes between green and yellow, rather pointless and annoying but I just wanted to test a few things [smile].

Well, that's it for now. Next time I post it'll probably be about one of the projects listed above, either that or a rant about how they give us far too much work to do at uni [razz]


Things that kick ass

Posted 30 December 2004

Well, I finally got round to seeing The Incredibles a few days ago and I thought it was a (dare I say it) incredible film. It's rather different from Pixar's previous work (think James Bond rather than Finding Nemo) and my favourite Pixar film to date. Technically it's got some really nice stuff in it: top-notch animation and rendering with lots of very well done hair and cloth simulation (apparently before this film Pixar had only done cloth simulation once, in Monsters, Inc.). The plot was rather predictable, but then I was expecting it to be. If you haven't seen this film already, I'd recommend it.

I received a copy of Snow Crash this Christmas, which I've just finished reading and highly enjoyed. It's a cyberpunk novel about a guy called Hiro Protagonist, who's a hacker (and the greatest sword fighter in the world). The book starts off with him working as a pizza delivery guy for the mafia, and he ends up getting involved in a plot surrounding a drug known as Snow Crash. The plot moves incredibly fast and shouldn't be taken too seriously, but it's a lot of fun.

I recently bought a GF 6600 GT with some money I got for Christmas. The performance of games doesn't seem to have changed much, but I think that's down to my CPU (an Athlon XP 1800+). The main reason I bought this card was so I could start playing around with PS 3.0 anyway. SM 3.0 hardware has some very nice features; I can see multiple render targets and texture reads in vertex shaders being very useful. These allow you to do things like complex particle simulations done entirely on the GPU. You store per-particle info in textures (stuff like position, velocity, etc.) and keep two copies of each data texture. You then use a pixel shader to process the data, setting one set of textures as render targets and using the other set as the data to process (i.e. this set contains the particle positions, velocities, etc. from the last frame). Finally you use texture reads in a vertex shader so this data can be used to transform vertices and actually display the particles (this Gamasutra article has more info).
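
As a rough vs_3_0 sketch of the display pass (the names are made up for illustration; PositionTex stands for one of the data textures written by the update pass):

float4x4 WorldViewProj;

//One of the ping-pong data textures holding per-particle
//positions from the last update pass
sampler2D PositionTex;

float4 VS(float2 dataUV : TEXCOORD0) : POSITION
{
    //Vertex texture reads in vs_3_0 go through tex2Dlod,
    //which takes an explicit mip level in the w component
    float4 particlePos = tex2Dlod(PositionTex, float4(dataUV, 0, 0));
    return mul(float4(particlePos.xyz, 1), WorldViewProj);
}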

Oh, and I finally received a reply from Cambridge: I've been put into the inter-collegiate pool, which means my application could be picked up by another college; I may have to go for another interview and may be made an offer, so fingers crossed [grin].


Things

Posted 12 December 2004

Well, the full HL2 SDK came out just under two weeks ago, so I've started doing the coding for the HL2 mod I'm working on, Valandil. It's coming along rather slowly, mainly due to my unfamiliarity with the HL2 SDK codebase, which could really do with better documentation and commenting. Valve have supplied some basic documentation, but it's in the form of articles/tutorials on how to do a few basic things rather than a decent reference, and I'm not overly fond of the structure of the code itself. However, I want to make an HL2 mod so I'm stuck with it, though I may have a go at writing my own version sometime [grin].

All the universities I've applied to (Warwick, Bristol, Cambridge, York and Edinburgh, all for Computer Science) have now sent me offers (with the exception of Cambridge; I had an interview there last week and should be getting a letter from them in January). So I've got to start deciding which offer I'm actually going to accept. I've got exams coming up in January which are going to be pretty crucial, so my Christmas holiday is going to involve lots of revision. I really need to think of something to do for my chemistry and physics coursework as well....


Long time, no update

Posted 19 September 2004

Well, it's been ages since I've written anything in here, mainly because I've had nothing to write about [smile]. Now I'm back at school I've got loads of work and less time to spend on other things. Anyway, here's a nice screenshot of the editor for my 2D RPG engine (alluded to in my first journal post).



Still have a lot of work to do on it though. Currently it's only got basic map editing capability. Once it's finished it will be able to edit every single aspect of an RPG that you'd create with my engine (called SimpleRPG). I'm hoping to get the map editor portion running with a half-decent set of features by the end of the week.





