
About this blog

Ramblings of programmyness, tech, and other crap.

Entries in this blog


Farewell GDNet.

I figured before I bow out of this excellent community I would give an appropriate farewell. I have been a member of this site since 2006 and enjoyed a lot of moments here, but it is time to move on. The first thing bringing me to this conclusion is that I no longer have the passion for game development I once did. I have fallen into a world where I am more enthused learning the inner workings of various data structures and languages, as well as security and computer architecture. These are things that will always give me something to strive for that does not involve needing art, music, or even gameplay. The next thing bringing me to this conclusion is that although I truly love the new site design, the overall community has taken quite an awkward turn from the one I grew up in. This site shaped me into an effective programmer and problem solver, because I learned when to ask the proper questions to receive the right answers. Those kinds of questions seem to have long expired on this site. Despite the few great questions and discussions, most questions anymore fall under the answer of "learn to Google" or RTFM. This has become the norm because a lot of the questions out there are horribly thought out and downright inept.

I miss the days of the old GDNet where we would break threads because of truly great intellectual discussions. I also miss the days of the great news posts written by our own, which are now nothing more than ad blurbs. But most of all I miss the debates about implementation and algorithms. Those days are gone, because there is a new generation of wannabe game developers with less prior experience and less enthusiasm for self-research.

I have a huge passion for learning. Because of it, I force myself to tackle the same problem in different ways, even if it means diving into the inner workings and reinventing the wheel. For every answer I receive I have to know WHY it is that way, WHY it works that way, and then HOW that conclusion is reached. This is the very essence that has left the community.

Thank you for all of the help and great discussions. Maybe I will poke in here and there, but I am moving on to a place where I can dig a hole of learning that will never end.

If anyone out there has enjoyed my twisted musings of doing things the hard way on purpose, and other generally geek-oriented thoughts and questions, stop by my new blog at http://partsaneprog.blogspot.com/. There is nothing up yet, but keep checking in; I am working on something totally crazy at the moment.




Current Project + HashTables part 1

Hello GDNet.

First I would like to bring up the project I am working on, because it is the inspiration behind this multi-part hashtable blog entry set. I am working on a game called Orbis, which first came to mind a few years ago. It is basically an Asteroids clone with a twist of pattern matching added in to bring a little complexity and immersion to the gameplay. I chose pattern matching because it is not so difficult or complicated that it dulls and frustrates the gameplay; instead it adds some challenge to the Asteroids game type.

When I originally chose to do Orbis I was going to do it in C#/XNA or Python. However, that was years ago, and I have since found that I like to use games as a way to feed my need to know how things work internally at a lower level. This is not lower level as in language, but low level as in API/functionality. So, coming back to my original point, I chose to return to Orbis and develop it using pure ANSI C and SDL. This really gives me a chance to dive into the heart of algorithms, which is a real peek into my interests, as well as a chance to develop software in a non-OOP manner. And this brings us to the topic of HashTables.

When designing Orbis I realized I need a way to cache media resources such as images and music, so that I can minimize the amount of memory that needs to be allocated; there is no need to keep duplicates of such resources around when you can just reuse them. It just so happens that HashTables are a great way to do this, so let's explore why.

What is a HashTable?

A HashTable is a data structure that maps keys to values. If you are a C++ programmer you would know this functionality as std::unordered_map. When you ask a HashTable for the value stored under some key, it goes into the table, finds the value, and returns it to the caller.


The main pro to HashTables is that they are ridiculously fast. In the ideal case a HashTable has O(1) access, meaning a lookup finds the value essentially instantly, regardless of how many entries are stored.


The main con is that O(1) access is not guaranteed. When keys collide (hash to the same index), the HashTable has to walk the chain of entries stored at that index, and in the degenerate case where every key collides the lookup degrades to O(N). With a decent hash function, though, chains stay short and lookups remain far better than a typical linear array search of O(N).

So how do they work?

HashTables are a very interesting data structure. When someone uses std::unordered_map, a Dictionary, or some other HashTable type in another language without knowing the inner workings, you might assume that the HashTable stores the actual key and value side by side and that the key is the actual index into the "list". This is actually quite false. The general idea behind a HashTable is to store the key and the value together as a node, and to index that node with a standard integer index.

So in reality this kind of HashTable is an array of linked lists. An index is generated from the key to determine which slot, and therefore which chain, a node should be placed in. To do this, internally every HashTable uses a hash function. The rule is that a HashTable is only as good as its hash function and how well it handles or avoids duplicate indexes.

In this post I want to focus on the basics of my HashTable implementation, covering my creation function, hash function, and lookup function.

The first thing we need, in my case, are two structures. The first is the node for a singly linked list, which stores the key, the value, and a pointer to the next node in the chain. This is what the definition looks like.

[source lang="cpp"]
/* single linked hashtable node */
typedef struct htnode {
    char *key;               /* the node key */
    void *value;             /* the node value */
    struct htnode *next;     /* the next node in the chain */
} HashNode;

Next is the structure for the actual HashTable. It contains a fixed-size array of node pointers and a function pointer to a node disposal function, because some nodes may contain dynamically allocated memory, like an SDL_Surface, which needs to be freed in a special way.

[source lang="cpp"]
/* a hashtable containing an array of pointers to its nodes */
typedef struct htable {
    HashNode *table[HASHSIZE];
    void (*disp)(HashNode *); /* function pointer to dispose func */
} HashTable;

Now that the structures are out of the way, we can look at the creation function for the actual HashTable. This function lets us dynamically allocate a HashTable. Technically this is not necessary; we could easily declare a HashTable structure as a variable somewhere and do the setup manually. But because this HashTable deals with dynamically allocated memory, for my purposes I feel it is better to design it this way so we can make sure everything gets cleaned up. With this function present, the caller will know that the HashTable is dynamic and that they should call the appropriate cleanup function for the table, which will ensure the nodes get disposed of properly.

[source lang="cpp"]
/* create_hashtable:
 * Creates a new hashtable returning said hashtable.
 * The function pointer argument is for a dispose function;
 * if no dispose function is needed for the stored nodes pass NULL. */
HashTable *create_hashtable(void (*disp)(HashNode *))
{
    int i;
    HashTable *tbl = malloc(sizeof(HashTable));

    if (tbl == NULL)
        return NULL;
    for (i = 0; i < HASHSIZE; i++) /* empty chains so lookups are safe */
        tbl->table[i] = NULL;
    tbl->disp = disp; /* disp is already a function pointer, no & needed */
    return tbl;
}

I now want to look at the most important function, which is the hash function. The hash function I am using is not 100% foolproof, but it is simple to understand and hopefully it will be efficient enough for my purposes. I will do some extensive testing on it as the game progresses to make sure things stay sound. It is easy to modify if the need arises, but I find it best to start simple and avoid premature optimization. This hash function generates an index from the key string by mangling the characters: each character is added to the running index multiplied by a seed value. Once the loop finishes, the function returns the accumulated index modulo the size of the HashTable. I am hoping this is enough to prevent collisions for my purposes, but if they do occur I will have to make some modifications, because my intention is to discard collisions when they occur. The hash function is very important because it is the core of how the actual lookup, insertion, and removal functions operate.

[source lang="cpp"]
/* hash:
 * A hash function for turning a null terminated string into an
 * unsigned index. */
unsigned hash(char *key)
{
    unsigned index;

    /* generate a salted index */
    for (index = 0; *key != '\0'; key++)
        index = *key + 31 * index;
    return index % HASHSIZE;
}

Finally for this article we are going to look at the actual lookup function. Of the lookup, insertion, and removal functions, lookup is the most straightforward. We take the HashTable and the key as arguments and iterate through the linked list starting at the index the key corresponds to. The iteration is very straightforward linked list navigation: if the key is found we return a pointer to the node, and if not we return NULL.

[source lang="cpp"]
/* lookup:
 * Look up a node in the given hashtable. */
HashNode *lookup(HashTable *tbl, char *key)
{
    HashNode *nptr;

    for (nptr = tbl->table[hash(key)]; nptr != NULL; nptr = nptr->next)
        if (strcmp(key, nptr->key) == 0)
            return nptr;
    return NULL;
}

If you noticed, yes, there are a lot of pointers being thrown around. Welcome to the world of ANSI C. Without references, all we have are pointers and various pointer arithmetic tricks. In this case no pointer arithmetic is actually involved, so hopefully this post is straightforward to understand.

In the next post I will cover the rest of the implementation, and I may do a third about how to put the code to use. When this series is all said and done, we will be able to thank all of the glorious programmers who implemented complete standard library versions of this data structure so we don't have to do it ourselves all the time.




Taking High Level Programming for Granted

Recently there have been some posts around about people considering using C over C++ for god knows what reasons they have. As per usual, the forum crowd advises them to stay away from C and just learn C++. This is great in theory because, overall, despite the insane complexity of C++, it is a much safer language to use than C. C is a very elegant language because of its simplicity. It is a very tiny language that is extremely cross platform (more so than C++) and has a very straightforward and tiny standard library. This makes C very easy to learn, but at the same time very difficult to master, because you have to do a lot of things by hand; there is no standard library equivalent or language feature that covers all the bases. C++ is safer in a lot of situations because of type safety: C++ keeps track of the type of everything, while C effectively discards type information at compile time.

With that little blurb aside, I personally feel a lot of programmers should learn C. Not as a first language, but at some point. This is because it allows you to understand how the high level features of modern languages actually work, and people take these things for granted nowadays. Today there are not many programmers who really understand what templates are doing for them and what disadvantages/advantages they have. The same goes for objects: a lot of programmers fail to understand how objects work internally. This information can make you a much better programmer overall.

Over the last few years I have spent a lot of time in C compared to C++ or other high level languages. This is not only to understand the internals of high level features I have used in the past, but also because I am preparing for an upcoming project I am designing. This project almost has to be done in C for portability, performance, and interoperability reasons. It will not only be targeted at the Linux desktop but possibly embedded devices as well. So today I am going to show something that C++ gives you that C does not, and how to get the same functionality in C anyway. Then I will explain why the C version is more efficient than the C++ version but at the same time not as safe, because of programmer error potential. We will keep the example simple: instead of making a generic stack we are going to make a generic swap function, and for simplicity's sake I am going to keep the two examples as close as possible.

C++ gives us a feature known as templates. Templates are a powerful metaprogramming feature that generates code for us based on the type of data it receives. They can do more than just this, but this is a very common use. The main downfall of this particular method of creating swap is that if you pass in over 50 different types to swap during the course of the application, you actually generate over 50 different functions that have to be added by the compiler. So with that said, here is a generic swap function in C++ using templates.

[source lang="cpp"]
template <typename T>
void swap(T &v1, T &v2)
{
    T temp;
    temp = v1;
    v1 = v2;
    v2 = temp;
}

There are two specific C++ features in use here. First we are using templates to generalize the type we are swapping, and second we are using references so that we actually swap the variables passed in rather than copies of them. When this is compiled, C++ generates a type-specific function for each different version of swap we call.

Now we need to make the equivalent of this function in C. The first things to note are that C does not have templates, does not have the concept of references, and does not retain type information after compile time. So with some C trickery, and some assumptions grounded in the spec, we can achieve the same result. There are other ways to do this, but I am going to do it the 100% portable way; this is compliant with both the ANSI and POSIX standards. Here is the code; the explanation of why I can do what I am doing comes afterwards.

[source lang="cpp"]
void swap(void *vp1, void *vp2, size_t size)
{
    char *buffer = (char *)malloc(size); /* sizeof(char) is 1 by definition */

    assert(buffer != NULL);
    memcpy(buffer, vp1, size); /* save the first value's bit pattern */
    memcpy(vp1, vp2, size);
    memcpy(vp2, buffer, size);
    free(buffer);              /* release the temp storage */
}

Ok, so there is a lot there. First, a void pointer is a generic pointer. We can do this because all data pointers on a given platform are the same size (4 bytes on a typical 32-bit system), so the compiler does not care what is pointed to; we are just recording a storage location. Since we also don't know how big the pointed-to data is, we need to pass in the size. Next we need a replacement for the temp variable we used in C++. We don't know what type is stored behind our void pointer, and we don't care; we just want to hold its bit pattern. Because we know that in C a char is exactly 1 byte, we can dynamically allocate an array of size chars to store that bit pattern. We also do an assertion to make sure the allocation did not return NULL before we attempt to copy data into it; the assertion bails if we have no space allocated. Then memcpy moves the bit patterns around for us. Lastly, we make sure to free our temp storage.

The main advantage of this is that the application does not generate a new function for each type we pass through it; the same machine code runs no matter what we pass in. This efficiency comes at a price. If swap is not called properly, we don't know what we will get back. Because we are using void pointers, the compiler will not complain; we have actually suppressed what compile-time checking we had. Also keep in mind that if the two things being swapped are actually different types, say a double and an int, or an int and a char *, we enter the realm of undefined behavior and have no idea what will happen.

When calling swap with 2 ints you would call it as

swap(&val1, &val2, sizeof(int));

If you are swapping 2 character strings you need to call it as

swap(&val1, &val2, sizeof(char *));

With the character strings you still need to pass the addresses of the variables, and the size you pass is the size of a pointer. This is important because a character string, or char *, is actually a pointer to an array of characters, so what gets swapped is the two pointers, not the character data itself.

With all that said, you can see how C++ makes things like this very easy, at the price of generating duplicate instructions. With the C version you see a very efficient way to do the same thing, with its own set of drawbacks on the caller side. It is very similar to what C++ does internally behind the scenes; the difference is that C++ carries hidden type information through, generating exact casts so you retain your type safety. This is a great and simple demonstration of what we take for granted when we use the various high level features of different programming languages. So next time you use these features, stop and say thank you to the designers, because without their efforts your features would not exist and you would have to do pointer arithmetic on a daily basis.

Last note: if you read this and you are still thinking of using C over C++, the decision is ultimately up to you. Personally I love C; it is a very elegant and clean language and I really enjoy using it. However, ask yourself if it is the right tool for the job, because in C you have to reinvent the wheel constantly to achieve functionality that newer languages give you almost for free.




Linux saves my day again

Why hello there, GDNet. Once again the oddball me gets to share something that not many GDNet people get to experience all that often. Today's topic is how Linux saved the day for me. I am sure many people here already know I am a very avid Linux user. I don't have anything against Windows; I have that too, after all. I like to play AAA games, and to do that effectively I just dual boot. Despite this, I still do a majority of my work under Linux, because I find it very, very productive. I find the POSIX interface to be a life saver in many circumstances, like the one I will explain today.

First, as you may know if you read my blog, I am in the process of learning OpenGL. This is a huge step for me because I have been working with 2D for way too long. I feel this is the next logical step for my interests. To do this I am using the OpenGL SuperBible, 5th edition, which covers the GL 3 core profile. Throughout the book the author eases you into OpenGL by introducing concepts through a library they developed called GLTools. As you progress through the book they start to strip away GLTools so they can introduce each concept a little at a time.

Now the problem: I need to set up this library on Linux. The first issue was getting the code. As mentioned in my last blurb on the blog, I was using git-svn to pull down the SVN repo. This took forever, about 20 minutes or so. For such a small amount of code this was shocking. I realized later the slowness was compounded because, even though SVN is slow on its own, git had to rebuild the entire repo. Oh well, task one done.

Task two: I need to build the GLTools library and the Glew library. So I navigate into the repo and stop dead. Wait a second, there is no Makefile. So I shoot back to the Linux directory and look there. Wait, no Makefile. They had Makefiles for every project but none for GLTools/Glew. Then I saw it: they were building GLTools and Glew for every project and storing the libs local to each project. Ew. So now I need to write a new Makefile for this stuff.

Step three: ok, so I fire up Emacs and hack up a Makefile. Once it is done I type make all, it starts, and KABOOM: cannot find header glew.h. WTF. At this point nothing built, because GLTools uses Glew to fire up the extensions required for the OGL 3 core profile. So I navigate up and see that the glew.h file is present, so I go look at my Makefile to see if I made a mistake. I did not. It turns out that in the ifdef preprocessor block for Linux they are including <GL/glew.h> instead of <glew.h>, which is where they had the file stored. So I moved the header into the GL directory and tried again. KABOOM, can't find glew.h: the other headers still include <glew.h>, while Glew itself looks for glew.h in the GL directory. Oh bugger. Now how are we going to fix this? Before we get into that, here is the Makefile, if anyone else actually needs to go through this.

# Compiles the OpenGL SuperBible 5th Edition GLTools Library including
# the Glew library.

# Below are project specific variables to setup the proper directories
# to pass to the compiler and linker.
GLTOOLS = libGLTools
GLEW = libglew
SRCPATH = ./src/
INCPATH = -I./include

# Below are variables for the compiler, linker, and also the flags
# pertaining to the compiler and linker.
CXX = g++
AR = ar
ARFLAGS = -rcs

# The actual compilation and linking process goes on down here.

# Compile and link everything
all : $(GLTOOLS) $(GLEW) $(GLTOOLS).a $(GLEW).a

# Basic setup of object file dependencies
GLBatch.o : $(SRCPATH)GLBatch.cpp
GLShaderManager.o : $(SRCPATH)GLShaderManager.cpp
GLTriangleBatch.o : $(SRCPATH)GLTriangleBatch.cpp
GLTools.o : $(SRCPATH)GLTools.cpp
math3d.o : $(SRCPATH)math3d.cpp
glew.o : $(SRCPATH)glew.c

# Compile GLTools
$(GLTOOLS) :
	$(CXX) $(CXXFLAGS) $(INCPATH) -c $(SRCPATH)*.cpp

# Archive GLTools
$(GLTOOLS).a : GLBatch.o GLShaderManager.o GLTriangleBatch.o GLTools.o math3d.o
	$(AR) $(ARFLAGS) $(GLTOOLS).a *.o

# Compile Glew
$(GLEW) :
	$(CXX) $(CXXFLAGS) $(INCPATH) -c $(SRCPATH)glew.c

# Archive Glew
$(GLEW).a : glew.o
	$(AR) $(ARFLAGS) $(GLEW).a glew.o

# Cleanup
clean :
	rm -rf *.o

Ok, now that this is out of the way, how do we fix it? Well, POSIX + Linux to the rescue. Here is the problem: we have a directory of 11 header files. We do not know which header files include glew.h, because make is bailing on us before it tries the others, due to the dependencies needed to continue the build. We don't want to open all 11 files in an editor and manually change them. For one, we are programmers, and programmers are lazy. That would be a total waste of time, so let's use the power of our POSIX-based command line. BOOYAH.

Here is what we need to do. We first need to find all the header files, then search each header file for <glew.h> and replace it with <GL/glew.h>. I know you are asking: how are you going to do that? Well, let me explain. On POSIX-based systems, each command you use at the terminal has three standard streams: stdin, stdout, and stderr. The nice thing is that since every command has a proper in, out, and err, we can, by definition in the POSIX standard, "pipe" different commands together to transfer data from one process to another. For this task there are two commands we need. The first is find, which reads the specified directory structure and outputs a list of the files in it. The second is sed, which is a data stream manipulation command; it basically lets you hack and modify data streams to bits. So we find the headers and hand them to sed for modification, making the correction in one swoop without needing to open all the files and type the fixes by hand. Here is how this is done.

find . \( -name GL -prune \) , \( -name '*.h' -type f \) -exec sed -i 's|<glew\.h>|<GL/glew.h>|g' '{}' +

Basically what is going on here is we are telling find we want all of the header files in the current directory structure, minus the GL directory, and for every file in the list that find produces, sed runs a regex search that finds <glew.h> and changes it to <GL/glew.h> in place.

Cool stuff: one line fixes all of the appropriate files, and boom, make all compiles everything I need. Go Go POSIX and Linux.
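If you want to watch the one-liner work without touching a real source tree, here is a throwaway reproduction; the directory and file names are made up for the demo:

```shell
# set up a fake tree with a header that has the bad include
mkdir -p demo/GL demo/src
printf '#include <glew.h>\n' > demo/src/GLTools.h
printf 'real glew header\n' > demo/GL/glew.h   # the GL dir must be left alone

# same shape as the fix above: prune GL, rewrite every other header in place
find demo \( -name GL -prune \) , \( -name '*.h' -type f \) \
    -exec sed -i 's|<glew\.h>|<GL/glew.h>|g' '{}' +

cat demo/src/GLTools.h   # prints: #include <GL/glew.h>
```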





Not really journal worthy or at least typical for my Journal.

So I am sitting here right now on my Linux Dual boot getting ready to setup some stuff for the OpenGL SuperBible 5th Edition that I am working my way through. I wanted to get everything set up on my Linux development side of things because unless I am playing Eve or doing school work I am using Linux anyway.

My first step is to pull down the code, and I am ready to fall asleep. Basically, all of the latest code for the book, with bug fixes, is located in a Google Code Subversion repository. As of late I am really hammering home on git, because in reality it is a really nice VCS once you get used to it. So I decided to pull down the code with git-svn. *SLEEP* It has been pulling the code down now for the last 10 minutes or so. Still not done. Before you say anything, it is not git-svn's fault. Believe it or not, git-svn is going a lot faster than my first attempt at pulling it down with the svn client.

This is just ridiculous, and all of this is just so I can compile GLTools :S




Eclipse CDT 8.0!

This is kind of hilarious. After I went through all that effort and made that post about tool chains for the stubborn I am no longer stubborn. Let me explain why.

First and foremost, that whole post was pretty much how I have been working for the last year or so. This is mainly because of how much I feel Visual Studio gets in my way when I am coding. The features it has are nice, but I don't like the way certain things feel with it, mainly because I had to use Visual Studio Express. When you are working with a tool chain, the less interruption you have the better. With Visual Studio Express it is very easy to get interrupted, because every time you need to do something the IDE does not support (because it is "Express") you have to break concentration and go to another tool. When I worked with vim and Makefiles from the CLI, my flow was never interrupted, thanks to the various scripts and other things I had set up to do my work. But like a typical Linux junkie I am constantly looking for better ways to do things. When I heard that Eclipse Indigo launched, I just had to go try it out, because I like Eclipse a lot and used to use it all the time for my Java development. When it came to C++, though, Eclipse was kind of stale. UNTIL NOW.

I introduce you to Eclipse CDT 8.0, the wonderful piece of software that it is. So let's go over some of the new features this beast has.

1. Reworked new project wizard.
- This is really nice. As you go through the wizard there is an option to click Advanced and set up your extra includes, libs, and linker settings. What I like about this the most is that once it is done in the wizard, as soon as you create that new file you are ready to go.
2. Reworked build settings.
- Long gone are the convoluted build settings; everything was streamlined and placed where it makes sense. This work is not done; they have more plans to make it even better in the next updates.

Now for the Big Ones that I love.

3. Full static code analysis.
- This is amazing. They did some really sweet work on their C++ parser, and this gives near-instantaneous feedback that makes coding a breeze, to the point where it even knows how it probably should fix your screw-up. It even gives logical suggestions for all sorts of things if you wish to look at them. All this helps prevent a lot of commonly made errors. They also have more plans for making this even better.

4. Actual refactoring.
- Yes, the parser is that good. It can actually do full refactorings. Right now it is limited in what kinds of refactoring can be done, because they did not have time to add more before release. They have a lot more of these planned, and because the grunt work is done they can add them quickly.

5. Git/Mylyn GitHub Integration
- Git integration is finally here and functional. It even comes with a nice Mylyn plugin that will hook right up into the GitHub bug tracking system.

All these features are great, and the nice thing is that the work on their parser made code completion phenomenal and ridiculously fast. The best part about the code completion is that it does not get in your way unless you invoke it with Ctrl+Space. Once you have your header files included, all you need to do is save and you are good to go; the headers you included can now be seen by the code completion system. It also builds code completion from your file as you go, and it does this without the notoriously sluggish behavior people have come to expect.

To give you an idea of how good this code completion is at filtering, here is an example: if you type GL_COLOR_B and hit Ctrl+Space, it will give you the choice of GL_COLOR_BUFFER and GL_COLOR_BUFFER_BIT. The filtering is also slick on the fly, in case there are a lot of functions with similar names.

This is a great IDE now. You don't even have to use MinGW; if you want, you can install the Windows SDK and use the Microsoft compilers with Eclipse instead of Visual Studio.

In all honesty, words cannot really express how amazed I am with this release. If you don't believe that Eclipse CDT is slick and fast, go to the website and try it out, especially if you are using Visual Studio Express. If you are a C++ DirectX guy, make sure you use Microsoft's compiler and you will have no issues.

For someone like me to be impressed by an IDE this much goes to show it is good. I can't afford to go out and buy VS Pro, so this is a phenomenal piece of software that solves my dilemma; it is free and extensible as well. Go try it, seriously.




Tool chains for the stubborn

Hello GDNet. It has been a while since I last posted, partially because I have been really busy and just really did not have too much to post about. Today I am going to talk about tool chains for the stubborn people among us. Every so often I see a thread come by in For Beginners asking about command line compilation and various out of the ordinary editors. This entry is for those people. Like them, I am stubborn and often prefer to do things the "more difficult way". In order to do this properly I am going to have to make a few assumptions, and here they are...

1. You, like me, are stubborn. This is defined as an individual who refuses to "see the light" and doesn't understand why everyone uses IDEs. You fail to see their benefits to your productivity, feel the features they provide often get in the way, and find them bloated, cumbersome, and restrictive to your workflow.

2. You want to develop cross platform, are completely clueless about the whole GCC/Make world, and don't want to restrict yourself to a pure Microsoft compiler, so that you can carry your newfound knowledge over to Linux, BSD, and Mac OS X.

Note: As of this posting I don't have a particular editor of choice to put in the tool chain list I am still trying to find one I am 100% comfortable with.

Attention: At the end of this post I will also provide a self contained zip file with the appropriate files for you to try this out yourself. I will be using an example from the OpenGL Superbible 5 for this as this is GDNet. In order to compile and run this example your video card must support a minimum of OpenGL 3.0.

What we will be working with:

1. MinGW GCC + Msys
2. Makefiles
3. Editor of choice

Ok that is really all you need.


First we need to install the MinGW GCC + Msys collection onto our Windows System.

1. First go to the MinGW Latest link and download the file labeled mingw-get-inst-20110530.exe.
2. Run this exe file to start the installation. Ensure you install this to C:\MinGW, and make sure you also check the boxes for C++ and the MSYS base system.
3. Next we need to set an environment variable so that we can use these tools from the Windows command line.
- Open My Computer, a.k.a. Computer (Windows 7), from the start menu and in the top bar select System Properties.
- Next select Advanced system settings from the sidebar (Windows 7; not sure about XP, it has been a while).
- Then click the Environment Variables button.
- Under user variables press New, name it PATH, and under value type C:\MinGW\bin;C:\MinGW\msys\1.0\bin; then press OK.
- Now press OK on the rest of the windows to close them out.
- We are now ready to go.
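Once the PATH is set, a quick sanity check is worth doing. This assumes the default install locations given above; open a NEW command prompt first, since the PATH change only applies to prompts opened after you set it:

```shell
# Both commands should print version banners rather than an error like
# "'g++' is not recognized as an internal or external command".
# The MSYS base system is what provides `make` itself.
g++ --version
make --version
```

If either command is not found, double-check the PATH value for typos before going any further.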

The Infamous Makefile:

Ok, now: in order to make larger projects, and projects with dependencies, easier to compile, this thing called the Makefile was invented. A Makefile is a small script that the make tool reads so it knows what steps it must take to compile the code.

A Makefile usually consists of two kinds of entries, known as variables and targets. Variables are designed to save you a lot of typing in lengthy Makefiles, and targets are basically labels for the things you want to compile. The command syntax is make followed by a target name; the target tells make where to enter the Makefile so it knows what to compile.

Here is the Makefile for the Triangle project; don't worry, I will break it down...

[source lang="make"]MAIN = Triangle
SRCPATH = ./src/
LIBDIRS = -L../libs/freeglut-2.6.0/lib -L../libs/GLTools/lib
INCDIRS = -I../libs/freeglut-2.6.0/include -I../libs/GLTools/include -I../libs/GLTools/include/GL

CXX = g++
CXXFLAGS = -c
LIBS = -lfreeglut32_static -lgltools -lglew -lwinmm -lgdi32 -lopengl32 -lglu32

all : $(MAIN)

$(MAIN).o : $(SRCPATH)$(MAIN).cpp
	$(CXX) $(CXXFLAGS) $(INCDIRS) -o $(MAIN).o $(SRCPATH)$(MAIN).cpp

$(MAIN) : $(MAIN).o
	$(CXX) -o $(MAIN) $(MAIN).o $(LIBDIRS) $(LIBS)

clean :
	rm -f *.o[/source]

Ok, so what does all this mean? At first glance it may seem overwhelming because of all the variables, but in all honesty it is quite simple.

First, lines 1-8 are variables. The syntax for a variable is simply: NAME = value
Very simple. In this Makefile I am using variables to handle the application name, the source code path, the library path, the include path, the compiler to use, the flags for the compiler, and the libraries that need to be linked into the binary. With GCC, -L adds a library search path, -I adds an include path, and -l links a library.

The rest of the file consists of targets, dependencies, and rules (commands).
The general syntax for a target is as follows:

target : dependencies
	rules

It is important to note that each rule line must start with a hard tab, not a tab that was converted to spaces.

The first target is "all". This is what executes when you type make (or, more explicitly, make all) at the command prompt. Its dependency is $(MAIN), the variable from above, which resolves to the $(MAIN) target later in the file.

The second target is $(MAIN).o, with a dependency on $(SRCPATH)$(MAIN).cpp. Remember, our variable resolves to Triangle, so this dictates how our program's object file gets built from its source file.

The next target is $(MAIN), with a dependency on $(MAIN).o. This target is what actually produces our application: its rule is the call to the g++ compiler that links our Triangle object file and the libraries into the final executable.

The last target is clean, with no dependencies. It has one rule, which deletes all of the object files in our program path so the next run of make gives us a clean compile. To execute this target, just like any target, type make clean.

That concludes our look at Makefiles, so how do you actually use this?

Basically, open your command prompt, navigate to the project's root directory (the one with the Makefile in it; the file is simply named Makefile or something with Makefile in the name), type make, and hit enter. It will compile the application, and the directory will then contain the exe file you can run.
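If you want to see the target/dependency mechanics without any compiler involved, here is a throwaway experiment (the directory name and targets are made up purely for illustration): a two-target Makefile whose rule just creates a text file. Note that printf writes \t because, as mentioned above, rule lines must begin with a hard tab.

```shell
# Create a scratch project containing a minimal Makefile.
mkdir -p /tmp/make-demo && cd /tmp/make-demo
printf 'hello.txt :\n\techo hello > hello.txt\n\nclean :\n\trm -f hello.txt\n' > Makefile

make            # runs the rule, because hello.txt does not exist yet
make            # reports hello.txt is up to date; the rule is skipped
cat hello.txt   # the file the first run produced
make clean      # the clean target deletes it again
```

Typing make with no arguments builds the first target in the file, which is why the all target traditionally comes first in real Makefiles.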


Ok, so what is there to conclude? The ultimate conclusion is this: Makefiles are a very flexible way to compile an application. They give you the freedom to structure your code paths logically within your version control system and to create a very organized compilation process. The make system can even call itself from within a Makefile, which is very nice: it leads to a smooth way to build and test individual components and can even aid in building yourself an agile development workflow. Most of all, this compilation process can be used on any OS that has a way to get the tools to build the project. If you write your Makefile properly, it will compile with a small modification, or even no modifications, on any OS that has access to make. In our case, we got access to make on Windows with MinGW + Msys.
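The "make calling itself" trick looks like this in its simplest form (the directory and messages here are invented for the sketch): a top-level Makefile that re-invokes make in a component directory through the built-in $(MAKE) variable and the -C flag.

```shell
# Top-level Makefile delegates to the Makefile inside ./engine.
mkdir -p /tmp/recursive-make/engine && cd /tmp/recursive-make
printf 'all :\n\t$(MAKE) -C engine\n' > Makefile
printf 'all :\n\t@echo building the engine component\n' > engine/Makefile

make   # descends into engine/ and runs its all target
```

In a real project, each component directory gets its own Makefile, and the top-level one simply loops over them, which is what makes per-component test builds so convenient.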

Now go and be proud of your stubborn behavior, because now you have no excuse not to be stubborn.

Remember: You can try this out yourself just install the tools as stated in the article and extract the archive I provide below to your hard disk.




Looking for two people for an e-mail Interview.

Hello everyone. I figured I would post this here in my blog since it gets a fair number of views and will not get lost in the forums. I am conducting two interviews to use some of the information as sources in my research paper for school. The topic of the paper is the escalating violence and obscene behavior in video games and its effect on gamers and children alike. I will be using some psychological research, personal knowledge, and hopefully these interview results as an argument and opposing view, as it is a very controversial topic currently affecting the industry.

The interview will consist of several carefully selected questions, to hopefully get a good perspective on what the issue looks like from inside the industry. I am looking for at least one industry professional, hopefully from a AAA-quality studio; I know there are a few on this site. I would also like the other individual to be a member of the independent games industry, preferably with at least one successfully shipped title on any platform, mobile devices included. It would also be helpful if you took some part in the development of a violent video game.

I really would appreciate your time in answering the few questions I have. If you are interested, send me a private message on this site, or shoot me an e-mail with your credentials and how I can get in contact with you, in case I get more than two individuals willing to participate, so I can make an informed decision.

Thank you in advance again.




Objective-C and Delegates

So I had a chance to sit down and do a challenge from my Objective-C book today. The challenge was to create a delegate to the Window object to control resizing so that the window is always twice as wide as it is tall.

For people who don't know: in Objective-C, a delegate is a way to reroute method calls from one object to another object for handling. This is more flexible than delegates in C#, because those are technically listeners, whereas delegates in Objective-C are protocols that must be conformed to. A protocol in Objective-C is similar to an interface in C# in that it is a strict set of guidelines that must be followed; however, Objective-C allows you to omit the implementation of certain methods, letting you decide what you want to handle. The nice thing about delegates is that they allow you to modify the behavior of an object without the delegate needing to know about the object whose behavior it is modifying. This basically removes the need for obsessive subclassing, which is a good thing.

So, on to the implementation of this challenge. In order to modify the sizing behavior of a Window object, we need to delegate the windowWillResize message to our handler class.

For this we will create a simple handler class that we will just call WindowDelegate.


#import <Cocoa/Cocoa.h>

@interface WindowDelegate : NSObject <NSWindowDelegate> {
}
@end

This code is very simple: basically, all we do is tell our WindowDelegate class to conform to the NSWindowDelegate protocol, which contains our windowWillResize message.

So now we need to implement the windowWillResize message so that it will keep our window twice as wide as it is tall.
To do this we need to know the signature of the windowWillResize message, which is...

- (NSSize)windowWillResize:(NSWindow *)sender toSize:(NSSize)frameSize;

Basically, this message brings in a pointer to the calling object and a proposed new size for it, and we return the size the window should actually take.

Now the implementation.

#import "WindowDelegate.h"

@implementation WindowDelegate

// Keep the window exactly twice as wide as it is tall.
- (NSSize)windowWillResize:(NSWindow *)sender toSize:(NSSize)frameSize {
    float newWidth = frameSize.width;
    float newHeight = frameSize.height;

    if (newWidth != newHeight * 2) {
        NSLog(@"Width is not twice the height; modifying width.");
        NSSize newSize;
        newSize.width = newHeight * 2;
        newSize.height = newHeight;
        NSLog(@"New window size: %f x %f", newSize.width, newSize.height);
        return newSize;
    }

    NSLog(@"New window size: %f x %f", frameSize.width, frameSize.height);
    return frameSize;
}

@end


NSSize is a C struct that contains the width and the height of the object, so we don't need to use it through a pointer.
We don't need to declare the method in the header because the protocol declares the method, and it is basically inserted into our class at compile time. We also don't even need to instantiate the class in code. This is one of my favorite things about Objective-C and Cocoa: to use this delegate, you open up Interface Builder, add the object to your application, and simply connect the window's delegate outlet to your object. When the nib file is deserialized into memory, the Objective-C runtime automatically instantiates the delegate class for us.

That is all for today. Again, I apologize for the screwed-up code tags; I am not sure if it is IPB or Safari that is mangling them at the moment. Either way, I hope it is fixed soon.




A little something

So lately I have been slamming away at papers for school and preparing for my final research paper. On the side I have been dabbling with World of Warcraft: Cataclysm, playing Zombie Farm on my iPhone, and dabbling with Assassin's Creed 2. During all this I spawned an idea for a game.

Originally I was learning Objective-C for the purpose of creating an RSS reader app for Mac OS X; now my brain is on a detour, per usual. As I sit here fleshing out some basic game design in the back of my head, I am thinking: should I write this in Ruby, minimizing the learning I have to do, or should I write it in Objective-C and have access to the Mac App Store, and potentially an iPhone release on its App Store as well? The major issue with the Objective-C route is that it involves lots of learning, from continuing my growth in the Objective-C language down to learning OpenGL for rendering. If anyone has any input, please comment below.

I will keep this post short; my mind's gears are a-brewing.




I <3 GDNet

Hello everyone, I have another version control post in the works. It will be similar to the last one, but slightly different and more academic in nature. I am currently taking an Advanced Composition class in school, and I need to write an expository (informative) essay. I chose to do it on why a software developer should use version control and the ramifications of not using it when developing software. I am sure you guys will really enjoy reading it. But that is not why I am blogging today.

The reason I am blogging today is to say how much I love gamedev.net. I first came to gamedev.net back in 2004 as a young aspiring game developer. After a hiatus I came back under a new user id, because I totally forgot what my last one was, my email address changed, and so on and so forth. I have had my ups and downs here, pissing some people off because of the sometimes sarcastic replies I have a tendency to make. They technically are not sarcastic, but they come off that way, because from time to time I get irritated with how lazy people can be when they don't do proper research before asking questions. We have all had those moments. Either way, I have learned so much about development from this site, and I have given back the same learning to others, so this site pretty much became a second home for me.

Throughout these years I learned so much about myself from this site, and it shaped me so much as a developer. Out of all the lessons I learned from GDNet, the largest was that game development is not for me. Sure, I can make games, but that does not interest me anymore; I guess I outgrew the childhood fantasy and found my real niche in the software community, developing software to make tasks easier and more intuitive. I also learned that I have a passion for Information System Security as well (my current degree). All of this is because GDNet helped shape my identity over the years.

At this point in time I have a burning desire for GDNet to be more than game development centric. I mean, I rarely ever talk about game development anymore; it is all about version control, my progress learning Obj-C and Cocoa, and other software-related topics. This is a great community; I just wish it was more software-centric, covering the bases of both game development and normal software development.

Either way, home is home, and I <3 GDNet.




Still Alive

Hello everyone, just popping in to say I am still alive. I have been pretty busy the last few weeks prepping for final exams and such. Now that exams are out of the way, I can get back to relaxing and working on school work, plus learning some more about Objective-C before my next final exam in 8 weeks. One thing about going to school online with a compressed program structure is that you really hammer through content fast in a class. I mean reading 2-3 chapters a week, plus discussions and assignments, and a quiz here and there. I must say, if anything can prepare you for tight deadlines, it is a compressed school course. For instance, it is now week 1 of the spring session: I have read 2 chapters, I have 2 discussions to take care of, and there is a 2-page paper due at the end of next week, plus next week's work on top of that paper. It can get kind of rough and really taxes the organizational skills.

On another note, I ended up getting another Cocoa book to get a second perspective on Objective-C + Cocoa, and I must say I like this book a lot better than the other one. The other one was good, but this one ups it in every way, shape, and form. This book is what people call the Hillegass book, also known as Cocoa Programming for Mac OS X, written by the world-famous Cocoa teacher Aaron Hillegass. I must say it really shows: his explanations and style put Cocoa and Objective-C: Up and Running to shame. My favorite feature of the book is that at the end of each chapter the author presents challenges for the reader to go out and write code on your own, which is important when learning something new. It is not a book that is all about copying down the example and seeing what happens; he actually gives challenge assignments that make use of the concepts you have learned thus far. Very good stuff. As I move through this book, everything is starting to make sense. Cocoa + Objective-C is a whole new perspective on development that goes against the particular trend of modern languages today, and as things click I can see why so many developers are in love with the system and why Mac OS X applications are so robust and solid compared to the applications on other operating systems. Not to say Windows and Linux don't have good software; they do, but the Mac seems to have more of it, and part of the reason for that is Cocoa.

I will definitely make sure I keep you updated as things mould together, and I really can't wait to start my first Cocoa application. I am hoping to actively talk about the project I will be starting in this journal as well. Even though the application is not a game, I still find it important to talk about it, because a lot of the issues and concepts I will be dealing with, from both a design and a programming standpoint, are great for everyone to learn from. This is a developer journal after all, and I don't see anything stating it has to be game-related in nature, so all gloves are off; heck, it beats having to go out and find another place for a blog and cross-link like I tried previously. The new journal system is so much nicer than the old one.




Book Review: Cocoa and Objective-C Up and Running

Well, I just finished my first book on Mac OS X software development. First and foremost, I should go into a little bit of why I am learning Objective-C and what drew me to this book in the first place. The first reason I decided to dive into Objective-C is that my new desktop/development platform is an iMac. Secondly, the new phone I will be getting once the tax return comes in is an iPhone 4, due to it hitting my carrier, Verizon, this month. Currently I have an Android phone and I am very, very disappointed; maybe it has to do with the fact that mine is a Samsung, not sure. I want to be able to develop applications for both my phone and my desktop computer. To have any potential to sell these applications I need to use the right tool for the job, and according to Apple that is Objective-C/Cocoa. Now that that is out of the way, on to what drew me to this book...

First, when I was looking at the selection of books out there, I saw a few very high-quality titles. I, on the other hand, have experience in developing both GUIs and games from a hobby perspective and have a solid background in programming concepts. So I did not really want a long, drawn-out book, because I know how to read an API doc; I just wanted a feel for the language. With a background in C and C++ already, this was not too much of a leap for me. This book is short and to the point, unlike some others, but that can be a flaw. Now, onto the review.

After reading this book I must say this is not a book for beginners. It says it was written for beginners; however, if you have never programmed a line of code in your life, this book moves way, way too fast. First, you need to understand that Objective-C is just a layer on top of C, so you are basically using C with some runtime extensions. With this in mind, this book covers C in 2 chapters. I learned C from the K&R book, so this was a nice, swift refresher for me, but for a newbie to programming it is just not going to cut it. Next, the basics of OOP are covered in just a single chapter. Uh... sorry, not for beginners again: it took me C + structs, C++, Java, C#, Python, and a few years of experimental throwaway practice projects to get this concept right. It took me a year just to understand why interfaces in C# were even useful in the first place, and another year before polymorphism slapped me in the face. To go even further, the basics of Objective-C are again taught in 2 chapters. NOT FOR BEGINNERS; can I say it enough? With no experience in programming, a newbie to software development won't understand a damn thing and will be confused as all hell after the first 4 chapters of the book.

Ok, with all this said, I still thought this was an amazing book to learn from. First and foremost, I was able to breeze through the first 4 chapters and get right into learning the syntax and the way Objective-C works. After 2 chapters on the new runtime and language extensions, it got me right into the Mac OS X Cocoa framework, kind of like the .NET Framework but for Objective-C. This is where the final 4 chapters of the book take place, so that you can write effective GUI apps for OS X. The last chapter gives some useful tips, plus pointers to further information so you can learn more. That is it: 11 chapters, compared to the typical 30-chapter books out there. Some people may think this is not a good thing; personally, I found it refreshing.

The biggest commendation I give the author is his effective use of the tools Apple provides, a la Xcode and Interface Builder. These tools are amazing, and I am so happy the author did not do what a lot of Java books do and force you to use Notepad and the command line. Face it, people: yes, Vi and Emacs are great for a quick and dirty file edit, but we are in the 21st century now and have better tools; use them, damn it. It is amazing how powerful these Mac tools are, and if you don't learn them or use them you are stupid, and the author even rightly states so. The author does not rely solely on Interface Builder, though: if you want to make your app truly fluid like Mac apps tend to be, you do have to write custom view boilerplate code, and the author definitely goes there. Another thing the author does that I personally really like: first he explains the concepts of the chapter, then he puts you into code working on a project. This is great; using what you have just learned helps you learn it.

Word of warning: there are lots of text walls in this book. The author expects you to understand previous concepts, so a lot of the projects are code, code, code with some explanations here and there. If you paid attention and learned the previous content of the book, you should be able to easily parse the code and understand what is going on. Personally, I like this style, because it really makes you think about what the code is doing rather than just telling you what it is doing. If you want to get anything out of this book, you have to do every example to completion.

The author also uses screenshots in a very effective manner for tool demonstrations, to make sure you got everything just right. This author really takes his tools to heart, and I love that. Like I said earlier, he states you are stupid if you don't use the tools Apple provides, because they make things so much easier and make you so much more productive. He goes into how to hook up actions, outlets, and bindings through Interface Builder, and even shows something that made my jaw drop: Xcode's amazing data modeler for the Core Data framework. Core Data is a persistence framework for Cocoa that allows you to save data across sessions, and it even gives you some juicy free stuff along the way, like undo and redo.

Overall, I feel this is an amazing book and I learned a lot from it. I definitely think that if you are NOT A BEGINNER developer and want to dive into OS X development, this book is a great way to hit the ground running. Now I just need to put this stuff to use and start a project. More on that next blog post...




Learning Objective-C

Well, now that I have an iMac, I have been working through an Objective-C/Cocoa book to wrap my head around the main language used for Apple products. My initial impression of the language was: WTF, why can't they just use C++ like other platforms? As I go through the book, however, my opinion has been changing. For those that don't know, Objective-C is basically a layer on top of the C compiler that introduces object-oriented programming. You say C++ does that. Yes, C++ does that, but not in the way Objective-C does. Objective-C is what I like to call a compiled static/dynamic language hybrid. One of its most powerful features is the capability to modify code on the fly with code, right down to the basic objects at Cocoa's core; Ruby programmers would know this as monkey patching. First, a word of warning: Objective-C is worthless on platforms other than Apple's, because the reason Objective-C is so powerful is the Cocoa framework that Apple provides. Without Cocoa, Objective-C is just another C-with-objects language.

Now, the most confusing thing for me with Objective-C at the moment is the syntax. When you have an object, you call its methods with the bracket syntax. For example...

OBJFoo* foo = [[OBJFoo alloc] init];
[foo sayHello];
[foo release];

The basic gist is that in Objective-C, the calls in [ ] are called messages. Go figure; us IT types love our fancy words.

So let's say foo has a property called greeting. We can set this text using the setGreeting: message, and we can get the text using the greeting message.

OBJFoo* foo = [[OBJFoo alloc] init];
[foo setGreeting:@"Hello Foo"];
NSLog(@"%@", [foo greeting]);
[foo release];

With the new version of Objective-C you can also use dot syntax to get at the accessor methods, by writing foo.greeting = @"Hello Foo"; to set it and foo.greeting to get it.

This just adds to the confusion, in my opinion. Just when I was understanding that everything in Objective-C is pretty much a message, and had gotten used to reading the bracket syntax, they throw this dot syntax at me. Yes, I understand dot syntax, but you can't use dot syntax for messages in general, only for accessors, a.k.a. properties. So I have personally started converting all the code in the book to use brackets, because for me it is easier to understand Objective-C code with the brackets. Take this for instance:
Say we are using a window object inside of a class, so we prefix with self to make more sense.

// Assume myWindow is an instance variable of our class with an accessor.
NSButton* myButton = [[NSButton alloc] init];
[[[self myWindow] contentView] addSubview:myButton]; // this is more understandable than
// this
[self.myWindow.contentView addSubview:myButton];

The reason I find it more readable is because
[[[self myWindow] contentView] addSubview:myButton] reads perfectly from left to right:
get my instance's myWindow object, then get its contentView, and now add myButton to its array of subviews.

With the dot syntax,
[self.myWindow.contentView addSubview:myButton];
it is harder to understand what is going on unless you understand 100% that the dot syntax is only used for properties and/or instance variables.
Yes, the dot syntax makes the code look a bit cleaner, but at the same time it is easy to glance at it and forget what is going on, especially if you are not familiar with the underpinnings of Objective-C. I think Apple should have left this one out for consistency's sake.

Despite my syntax hiccups, I am really starting to like the language as a whole. It is very well designed, especially because of the amazing Cocoa framework.
I understand I did not go into detail on a lot of stuff, because in all honesty I am just starting to get over the few hurdles the language throws at me. But rest assured, you will see more. My first project in the coming future is going to be an RSS reader. I understand that this is a game dev site, but who cares; it will be an interesting write-up.




Version Control.

First I must open with the simple fact that a lot of people just don't use version control. The main reason is that, in all honesty, a lot of people just don't understand it. Not to mention that the major version control war going on at the moment tends to confuse people even more. I am going to do my best to give some useful information on the different version control systems (VCS) out there, to try to make sense of the decision I need to make for my next project, and hopefully help others make the decision for their projects as well.

There are currently two types of VCSs out there: the CVCS (centralized) and the DVCS (distributed). CVCS systems like Subversion and CVS have a central server that every client must connect to. The client pulls revision information from the server into a working copy on your hard drive. These working copies tend to be very small, because the client pulls only the information it needs to be considered up to date: basically, the latest revision. The major gripe people seem to have with these systems at the moment is that they lack enough information to do a proper merge of a branch back into the main code base.

DVCS systems are what people call distributed. I hate that term, because I think it makes things harder to understand; currently a lot of people use DVCS systems just like a CVCS, but with benefits. Examples are git, Mercurial, and Bazaar. Typically the main code base is still stored in a central location so that people can stay properly up to date. A DVCS pulls the entire repository to your system, including all of the revision history, not just the latest revision. This allows you to do everything right from your machine, without even being connected to the internet. What I like about this is that everyone has a complete copy, so if something happens to the central repository, numerous people have a backup of the code and its revision history. The other nice thing about DVCSs, and why I think so many people are fanatical about them, is the easy branching and merging: because you have the total revision history, you can easily branch, merge, and cherry-pick changes at will without much risk of "pain". So when looking at the two types of systems, think CVCS (most recent revision only in the working copy) versus DVCS (total history in the working copy).
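To make the "everything is local" point concrete, here is a small scratch-repository session using git as the example DVCS (the directory, file names, and commit messages are invented for the sketch). Every command below works with no network connection at all:

```shell
# Create a fresh local repository -- no server anywhere.
rm -rf /tmp/dvcs-demo && mkdir /tmp/dvcs-demo && cd /tmp/dvcs-demo
git init -q
git config user.name "Demo" && git config user.email "demo@example.com"

echo "version one" > notes.txt
git add notes.txt && git commit -qm "first commit"

git checkout -qb experiment        # branching is an instant, purely local operation
echo "version two" > notes.txt
git commit -qam "risky change"

git checkout -q -                  # jump back to the original branch
git merge -q experiment            # merging uses the full history already on disk
git log --oneline                  # the entire revision history, no network needed
```

With a CVCS like Subversion, the commit, branch, and log steps would each have been a round trip to the central server.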

*Warning opinionated*
My main gripe with the current arguments out there has to do with branching. The DVCS group of people seem to like the idea of branching every time they make a code change or add a new feature, and they argue that this is insane and painful to do in SVN because of its horrible revision/merge tracking. Ok, I agree branching and merging can be painful in Subversion, but at the same time it is not as bad as people say, because they are taking it to an extreme and not using the system properly. I am not the kind of person who likes to make a branch for every feature I add to an application. I feel branching should only be used for major changes, refactors, and other changes that have a large chance of BREAKING the current working development tree. Maybe this mentality is why I never really had too many issues with Subversion when it came to branches and merges. Maybe it is because I was using Subversion as the developers INTENDED, according to the SVN red book. I don't know; maybe I am just a hard sell.

So which one should you use? It is hard to say; each system has its pros and cons, as you have seen. What I love about the DVCS camp is the speed at which you can get up and running, and the fact that sites like GitHub and Bitbucket are amazing places to host your code. I also like the raw speed of DVCS systems: because you are doing everything on your local machine and not over a network, a DVCS is blazing fast compared to a CVCS. Most of all, DVCSs are very flexible and let you use whatever workflow you desire: if you only want to branch when you feel something has a good chance of breaking the main line of development, you can do so, and if you want one repository to be considered central instead of a host like GitHub, you can do that too. The main cons I see with DVCS are the lack of repository permissions, poor large-file support (because every clone pulls the full repository history), and no locking for binary and other files to prevent major issues when both copies are modified at the same time.

What I like about CVCS is that it can handle very large files quite well. I also like that you can have repository permissions, making sure you know who can write to the repository, and that you can lock binary files (and other files, if you wish) to prevent other people from making changes while you make yours. That alone can save tons of time if you are working as a team and your art assets are in version control with your code. The major issues I see with CVCS are the required network connection, which can cause speed issues (*coffee anyone*), and not having the full revision history on your machine, which makes it difficult to cherry-pick or inject merges into different parts of the development line.

Keep in mind the pros and cons are as I see them; other people may weigh them differently. Yes, I listed fewer cons for Subversion; however, there are times when those cons can definitely outweigh the pros, and the cons of DVCS are hard to ignore and can outweigh its pros at times as well. With these things in mind I am sure you can better make the decision you need to make. As for me, I have used both and I like both a lot, which makes the decision extra hard. The one big pull factor for me is hosting, and DVCS has the huge win there. So for my next project I will be using a DVCS, because I feel its pros outweigh its cons under most circumstances. Not to mention I really like the speed and having a whole repository backup on my machine. Ultimately the decision is yours, but with this information hopefully you can weed through the Google fluff that is out there.

In the future I just might return with my experiences, and if anyone wants more detail on the inner workings of version control systems, let me know in the comments and I will go find the video that covers the internal differences in how revisions are stored.




Hello from my new desktop.

Hello new GDNet. First off I must say I really love the look and feel of the new website; just phenomenal. Despite the small hiccup where my password somehow got corrupted during the migration, the staff did a wonderful job getting me the information I needed to get up and running again on the site in record time. Excellent support, guys.

Now on to the real reason for the post. I am saying hello from my brand spanking new iMac. It takes some getting used to, but I am in love. The computer is fast, responsive, well integrated, and extremely powerful all in one. Not to mention it feels great to have the power of a *nix kernel + tools without the really nasty support and continuity/integration issues of the open source Linux world. Granted, I was able to do wonders with a Linux system, but I just got sick of the crappy support and things getting broken all the time with kernel updates and whatnot. So I decided it was time to make the switch to a Mac. Boy, am I glad my $1200 was well spent; I would have been pissed if I hated the system. So what is in this beastly computer? Keep in mind I did not buy the best iMac on the market, so the video card is not top of the line, but I could not justify spending the extra $300 for the kind of development and other purposes I use my desktop for. I am a console gamer, so my PS3 handles that for me.

So, the specs:
21.5" screen with speakers, mic, and webcam built in
Mac OS X 10.6.6
Intel Core i3 @ 3.06 GHz
ATI Radeon HD 4670 with 256 MB
4 GB of 1333 MHz DDR3
500 GB Serial ATA hard drive
8x SuperDrive + SD card reader
Bluetooth wireless keyboard
Magic Mouse (also Bluetooth wireless)

So, my first impression is that the integration with the hardware is slick. For the specs, the machine is a total speed demon. The construction quality is superb; I am really impressed with the care taken to use top-notch materials, aluminum and glass instead of plastic. The clarity and brightness of the monitor are top notch as well. The OS UI is easy to navigate and does so without taking away the powerhouse *nix features; it gives easy access to those as well.

My favorite so far is the Magic Mouse. It is one of the most well designed, innovative things I have ever used. First, it is very sleek but at the same time remains comfortable and reduces wrist stress. Second, it is constructed of aluminum with a glass top. Great responsiveness from the mouse, with great precision. My favorite feature is the lack of a scroll wheel and having only one button: the mouse is multitouch capable, which allows that one button to work as both right and left click by recognizing which finger is touching the mouse at click time. Sliding a finger down the mouse scrolls, and you can even flick to scroll quickly to the end or beginning of a list or web page. Flicking two fingers left or right goes back or forward in your web browser; just fantastic. You might think the mouse would go flying, but it does not move at all thanks to its nice weight: not too heavy, not too light, just right. Hold down Control and move your finger like you are scrolling and you zoom in and out of the desktop from the pointer origin; very nice for accessibility or for doing web video screencasts.

The keyboard is nice as well, very comfortable. I was using a Microsoft ergonomic keyboard; however, this keyboard is just as stress free. They got just the right angle, and the key placement is excellent. The keys don't take much effort to register input, so it is great for touch typing like I do. It also gives you access to all the features of the iMac/OS X, like ejecting a CD, Exposé, brightness, volume, the hidden widget Dock, and media keys.

Software installation is a breeze and initial setup is amazingly simple. Can't wait to give Time Machine a spin to back up all my documents and such as well. I will let you know how that works out.

Conclusion: if you are on the fence between buying a Mac and sticking with Windows, the answer I can give you is this. If you play a lot of games and like to constantly upgrade your hardware to stay cutting edge, use Windows and build your own PC; I have done that. If you want an amazingly stable computer experience without hiccups, and don't mind not having the best video card on the market, go with a Mac; you won't be disappointed, not to mention you get all this great and amazing software that you will never see on a PC. I will say the software for Macs is phenomenally well designed and works like you expect it to, with very few bugs. I hit a few bugs with the Safari web browser, so I am using Firefox at the moment, but it is not Safari's fault; I blame it on bad web coding from my online university's software. I am sure in time those bugs will go away, especially with the impressive work going into the Safari beta getting ready for HTML 5. The decision is yours; either way you will be happy. As for me, I think I am now officially addicted. I was skeptical at first, but now I see why they say once you go Mac you never go back.




Wow it has been a while

Hello everyone, it has been another long while since I posted; I have been so busy. Work and school seem to occupy my life, and what little spare time I have I use to play COD Black Ops on my PS3.

School is going great; I am acing my classes so far. I must say I never expected going to school online to be so much more difficult than going to a campus and classroom. The main reason I am going online is that I really have no choice, given my job and what schools offer around here. It takes a lot of time and dedication to stay on track when going to school online. There are no reminders; you need to make sure everything is planned and organized when it comes to your time, and that helps avoid the procrastination bug people tend to have.

On another note, as of late I have been learning the workings of the Android framework. I have never done any sort of embedded development before, so this is new territory for me when it comes to game development. Not only have I had to learn the layout of the phone's architecture, but I also have to brush up a bit on my Java. While doing this I have been watching some of Google's I/O videos about best practices for Android development, to avoid some common architecture and performance issues when I start my project. From the looks of it, to build a solid Android app you really need to watch what you do, because so many things we take for granted in desktop development are really expensive in performance cost when you toss them on a 1 GHz ARM processor.
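One example of the kind of thing those talks warn about (this is my own sketch in plain Java, with made-up class names, not code from any Google sample): allocating objects inside a per-frame loop triggers garbage collection pauses, which are painful on a slow ARM device, so the common pattern is to reuse one preallocated scratch object instead.

```java
// Sketch: reuse a preallocated object in the per-frame loop
// instead of allocating a new one each frame, to avoid GC pauses.
public class FrameLoop {
    static final class Vec2 {            // tiny mutable value holder
        float x, y;
        void set(float x, float y) { this.x = x; this.y = y; }
    }

    private final Vec2 scratch = new Vec2();  // allocated once, up front

    float update(int frames) {
        float total = 0;
        for (int i = 0; i < frames; i++) {
            // BAD on a phone: "new Vec2(i, i)" here, once per frame.
            scratch.set(i, i);               // GOOD: reuse the pool object
            total += scratch.x + scratch.y;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println((int) new FrameLoop().update(100));
    }
}
```

The same idea scales up to object pools of bullets, particles, and touch events; the goal is simply a steady frame rate with no allocation inside the hot loop.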

That is all for now. I will keep you posted on the project's progress as I move further into development.




It has been a while remix.

Hello GameDev.Net,

It has been a while since I have posted, yet again. I have been really busy as of late with the new job hours and getting set up for school. Yes, that is right, I am going back to school, woot woot. It really feels great to be working towards my degree again.

Currently I am attending DeVry University, pursuing a bachelor's degree in Computer Information Systems with a focus in Security. So far I really like the school. The faculty are phenomenally helpful and the teachers really are great. I just finished week 1 and currently have the maximum points I could earn, making my grade as of right now a 100%, woots. Good way to start out the semester, I think.

On another note, I ended up picking up a Sony VAIO laptop. Granted, it only has Intel HD graphics, but it is running an Intel i3 processor. This thing is blisteringly fast and stays very cool.

That is all for now, wish me luck :D.




Been a while what am I up to?

Well, it has been a while since I last posted. I recently purchased a PS3. What an amazing machine; I think it trounces the Xbox 360 in a lot of ways, from pure performance to graphical quality. Not to mention I have not purchased one game that was a disappointment. So I have been tied up playing catch-up on some of the latest rave games, like Call of Duty: Modern Warfare, Call of Duty: Modern Warfare 2, Assassin's Creed, Heavy Rain, Demon's Souls, Final Fantasy XIII, Resonance of Fate, and soon many, many more.

Out of all those, my favorite has to be Heavy Rain. Never in my life had a game brought out real moral choices and emotions in me, until now. If you want to know what kind of person you are, get a PS3 and play that damn game. I really, really hope we start to see this kind of innovation in the future. I think it takes games up to a new level, closer to what modern filmmaking is doing today.

On another note, I have also been hammering my computer's hard drive lately. Most of GameDev knows me as a *nix loving freak. I really, really do love my Linux, and I have been trying out all the latest distros to see where I will settle down. That is one main flaw of Linux in general: so many distros to choose from. Then I found this little gem, the one distro I have been looking for through all these years of hopping. It goes by the name of Arch Linux.

Arch is the kind of distro for people who have been using Linux for a long time and are not afraid of touching configuration files by hand, without all the GUI perks. Once you get a base install done you have nothing except the core of Linux and a flashing cursor. From there you can do your heart's desire.

Arch is not the first distro to do this; it is possible in Slackware, and in Gentoo as well. The main difference with Arch is its amazing package system, known as pacman: full dependency resolution, and when you need to compile something yourself, a very nice bash-based scripting system resolves dependencies when you run a build. Much more flexible and less time consuming than Gentoo, for sure.
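For the curious, that build-scripting system revolves around a PKGBUILD, which is just a bash script with some well-known variables and functions that makepkg reads. Here is a minimal, hypothetical example; the package name, file, and field values are made up for illustration, and real PKGBUILDs also carry checksums and more metadata:

```shell
# Minimal hypothetical PKGBUILD -- makepkg sources this bash file
# and uses these variables/functions to build and package.
pkgname=hello-example
pkgver=1.0
pkgrel=1
pkgdesc="Toy example package"
arch=('i686' 'x86_64')
depends=('glibc')            # pacman resolves these automatically
source=("hello.c")

build() {
  cd "$srcdir"
  gcc -O2 -o hello hello.c   # whatever build steps the source needs
}

package() {
  # Install into the fakeroot that becomes the package payload.
  install -Dm755 "$srcdir/hello" "$pkgdir/usr/bin/hello"
}
```

Running `makepkg` in the directory produces a package that `pacman -U` can install, which is what makes the AUR-style workflow so much lighter than a full Gentoo compile cycle.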

As for desktops, I have never been a huge fan of GNOME or KDE; usually I used Xfce, or GNOME if I absolutely had to. However, thanks to the Arch community, I finally found what I was looking for. I have been using a tiling window manager known as wmii (screenshots are on its site). I am a big fan of using the terminal and multitasking lots of apps at once, and wmii is perfect for this. Not to mention it is fully controlled via the keyboard; yes, you can still use the mouse if you have to. This window manager lets me easily manage and move around windows with a few key presses, making it excellent for programmers running Linux. This way you can have a web browser, terminals, Emacs, music, and your other favorite apps going all at once without a cluttered screen.

Last but not least, I have been looking for a programming language to dabble in. I am always looking for new languages that can give me new insights into problem solving. I couldn't care less whether I ever use the language full time, but every new language learned is new knowledge gained, if it is different enough from the languages you already know.

I found Lisp. Yes, I know GameDev has a notorious reputation for flames following any mention of Lisp. I don't care. I picked up a copy of Practical Common Lisp and I am hacking my way through it. I must say, at first the syntax scared the shit out of me, but I told myself this is good for me to learn and pushed through the first 3 chapters so far, and it has already grown on me. I like it.

That is all for now; it feels good to post again.




Finally a chance to update

As promised, I will cross-post here to my other blog. Come check it out, and don't be afraid to comment.





New Blog.

The time has come for me to move my blog. With a new project spinning up, it seems like the perfect time to make the move. I should be blogging considerably more often now that my shift at work is changing to a 4x4; that gives me 4 days off to get work done, meaning I should be posting more. I will do my best to cross-post, and hopefully the work I will be doing will be interesting to many of you. No word on the project just yet; however, I put up a nice little intro post.

New Blog Clicky!




Fullscreen ahoy.

I just added fullscreen capability to the D3DApp class, toggled by pressing the F key. The one issue I noticed with SlimDX is that it does not play nice with Alt + Enter, the default D3D10 fullscreen switch, probably due to .NET Forms. No biggie though; it is very easy to get your own fullscreen switch going. If anyone is interested in how, here is the code.

private void m_Window_KeyDown(object sender, KeyEventArgs e)
{
    // When the V key is pressed, toggle whether the SwapChain
    // presents at the vertical refresh rate of the monitor.
    // Under most circumstances this will be 60 FPS.
    if (e.KeyCode == Keys.V)
    {
        if (m_VSync == 0)
            m_VSync = 1;
        else
            m_VSync = 0;
    }
    else if (e.KeyCode == Keys.F)
    {
        // Toggle fullscreen through the DXGI swap chain.
        if (!fullscreen)
        {
            m_SwapChain.SetFullScreenState(true, null);
            fullscreen = true;
        }
        else
        {
            m_SwapChain.SetFullScreenState(false, null);
            fullscreen = false;
        }
    }
}



