
About this blog

Ramblings of programmyness, tech, and other crap.

Entries in this blog


The project from insanity

Wow, has it really been this long since I posted a journal entry? Time really flies right by; it is just insane. Over the last few months I have been going through the motions of designing a project. The project is over-ambitious for sure, and 99% of the world's population would probably call me insane. Even as I was laying out the design I realized how insane I really was, but it does not matter: I want to work on something long term, a huge, almost impossible endeavor, just because I can. I know I have the capability to complete said project, and at this point it is more about figuring out how to approach it effectively. So let's get into some of the decisions I have to make, after a brief layout of what it is I want to do.

First and foremost, my game is an RPG, but not the typical RPG. I don't want to create just another RPG or ARPG to add to the meat grinder. I want to create an RPG that can evolve and hold longevity without costing a player one penny. This project is not about making money or creating a business; it is about creating a community. The key goals of this project all revolve around that fact: community, and having friends be able to gather together and adventure.

A modular, scenario-based system (the ability to mod in your own custom adventures in an easy way)
A turn-based action system
The ability to customize the rule set
The ability to customize various actions in the game (spells, attacks, etc...)
The ability to use premade or custom assets for the scenarios
The ability to play solo or with friends
Open source/cross platform (this project is very ambitious and 100% free + I love open source)
As far as technologies to use, I have no clue at the moment. I ruled out Unity/UE4 simply because they do not fit the open source motto; even if they would be great to use, they just do not fit the project. I also need something very flexible that will allow me to create the tools needed for a good environment for building the custom scenario modules. Since I have a wide variety of applicable programming skills, I began evaluating some potential target technologies.

Currently I am evaluating JME3, which just so happens to be very nice to work with. Despite some quirks and a lack of direction in its tooling, the core engine itself is really well done and easy to pick up, and +1 for great documentation. The only thing I really do not like is the NetBeans-based SDK, which I find very off-putting for some reason or another. However, it may be possible to work outside of the SDK and develop some custom tooling to replace some of its features. The goal is to abstract creators away from needing to actually touch the programming language behind the game, and from having to install the whole engine + SDK to create scenarios.

I have also looked at SDL/SFML way back in the past, and the new versions are for sure very slick. However, I am not sure I want to go the route of a 2D game. It would work for sure, and it would quickly solve the issue of having to work around the JME3 SDK system. This approach could, however, discourage some people from contributing to the project due to the use of C/C++. Sure, there are other bindings, but they tend to be quirky and awkward to use, because they rarely follow the conventions the other languages are known for.

Any input on other tech that I did not mention would be much appreciated; just leave it in the comments, and if you want you can even just comment to call me insane. Can't think of anything else to type, so see you again soon.




Choosing a platform for software

One thing I have noticed over the years is that software development is becoming ever more fragmented. When I say fragmented, I mean the platform choices are expanding rather dramatically. Years ago, if you wanted to develop a piece of software, you mainly had one choice: the desktop. Whether it was a game or an application, you built it for the desktop, or in the case of a game you had the additional option of a console if you were part of a large company. Now our options are huge. We can choose between desktop, tablet, phone, console, and even web. The software landscape has changed so much. More and more options are becoming available for the average Joe who wants to get a foot in the door and get their own little startup going.

So now the real question is not really about which development technologies you want to use, but about which platform your application will benefit from most. Instead of just looking at what your target market needs, you now need to take into account which platforms that market uses most often. Once you solidify this, quite often you find that it inherently decides which development technologies you have to use. It is actually quite interesting, and it makes various decisions complicated and requires a fair bit of research.

Currently I am going through this very process with my latest crazy application idea. This is the main reason I decided to post this entry, as writing it out will really help me think about all these options more clearly. I find it a very complicated process, as this is the first real large project I have done in quite a long time. So let's see where the process takes us.

Target Audience:
The target audience for a piece of software is rather important, so let's get this out of the way. I find that every truly great software idea that spawns outside of a corporate environment is often a direct extension of a gap in the developer's own computing experience. In essence, the developer wants to do something but can't find a great way to do that task. Often the software to do the task is out there, but getting the required result means using multiple pieces of software.

This is the exact boat I am in currently. For those who do not know, I have many hobbies, ranging from software development to writing and much more. I like to be very active and busy. For the longest time I have wanted to write a novel. My real issue is that the various technologies to do such a thing the way I want are rather convoluted. Sure, you can write a novel directly in Microsoft Word, but you lose the fluidity required to write something beyond great without jumping through hoops to keep track of various divergent plot lines and characters; this often requires multiple documents or other workarounds. Then there is Emacs and org-mode, but despite what some think, I personally feel org-mode is not the right tool for the job and is a pain to use. Other software exists, but it is quite difficult to find, expensive, or very old and will not run on modern PC operating systems. Beyond that, those tools seem to have a slight idea of what I want but are not quite there.

So this software is targeted at individuals who want to write. The goal ultimately is to create a dynamic writing tool that is very fluid to use.

This is actually really hard for the kind of tool I want to make. In my research I have heard that authors love tablets, and they really wish there were great tools for writing their content on various tablet devices. It seems there are huge gaps they wish were filled, as it often goes only one way: you have a desktop application but no compatible tablet application, or you have a tablet application that is very limited and makes it difficult to get that content to the desktop. I really think the issue is developers not getting their scope quite right, and it is leading to these gaps.

Desktop Platform:
The desktop platform is known to work for these types of applications, as there is tons of flexibility. The real issue I find with writing on desktops or laptops is that they are not very portable, and when I write I like to be away from everything; it helps keep a clear, focused mind. That is difficult with a desktop-style system, even with ultra-portable options like an Ultrabook or MacBook Air. The screen densities are awful as well, and after looking at the screen for extended periods it really stresses the eyes. I think this is where tablets excel. The other issue with the desktop is distribution and getting the application noticed. Apple fixed this with the App Store; Windows is well behind here, and its system is a mess for this approach, requiring expensive certificates, redirection to application downloads, and such. Quite a shame.

Tablet Platform:
In all reality, the tablet has everything I would want: nice portability, solid screen densities, easy on the eyes. There are various nice attachments, and the new Samsung tablets are a nice size (10.1 inch) and have a stylus. There are keyboard attachments and docks for them as well. Battery life is solid, and distribution and discoverability are taken good care of in these environments. In my opinion, if done right, tablets will over time revolutionize computing even further as developers begin to really push what the platform can do. I think it will just take a clear mindset.

Web Platform:
Not much for me to say here. Cloud services and software as a service are becoming very common. I feel, however, that the development ecosystem is quite poor: JavaScript, CSS, HTML, plus backend service programming. It is really a mess and needs some consolidation if it is ever going to become the norm. The technology is very convoluted on the frontend side and could really use some love.

My conclusion is heavily skewed towards the tablet. For the longest time I just did not see the advantages, as I never owned one, nor did I care to have one. By chance I got my hands on a Samsung Note 10.1 32GB device, and I am hooked. I am already finding the device quite useful, and I can really see the potential these devices have. I think I found my platform for development. From what I have experienced thus far, the Android development ecosystem is quite nice and relatively easy to dive into with a little guidance. Let's see where this tablet can take me.




Java 8 very interesting

This is a rather short blog post. I have had some ideas for a project recently, among the various endeavors I have been contemplating.

One of these endeavors is either a desktop application or a web application. I am not sure which, but I think it makes more sense as a desktop application due to its purpose.

When I was thinking about the project, I knew I would want it cross-platform, so my real choices were either Java or C++. I had never made a GUI application in C++ before, so I said let me modernize my Java install and upgrade to IntelliJ IDEA 13.1. By the way, IntelliJ IDEA is worth every penny. If you develop in Java you should really spend the $200 and pick up a personal license, which can be used for commercial applications. A really great IDE, and I can't wait to see what they do with the C++ IDE they are working on. JetBrains makes amazing tools.

So I upgraded everything to Java 8 and decided to make a quick and simple GUI application using Java 8 features. I will say one thing: Java should have added lambdas a long time ago... With this in mind, the following Swing code turns from this...

[code=java:1]
import javax.swing.*;
import java.awt.event.*;
import java.awt.*;

public class TestGui extends JFrame {
    private JButton btnHello = new JButton("Hello");

    public TestGui() {
        super("Test GUI");
        getContentPane().setLayout(new FlowLayout());
        getContentPane().add(btnHello);
        btnHello.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                System.out.println("Hello World");
            }
        });
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        setSize(300, 100);
        setLocationRelativeTo(null);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                new TestGui().setVisible(true);
            }
        });
    }
}
to this...

[code=java:1]
import javax.swing.*;
import java.awt.*;

public class TestGui extends JFrame {
    private JButton btnHello = new JButton("Hello");

    public TestGui() {
        super("Test GUI");
        getContentPane().setLayout(new FlowLayout());
        getContentPane().add(btnHello);
        btnHello.addActionListener(e -> System.out.println("Hello World"));
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        setSize(300, 100);
        setLocationRelativeTo(null);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> new TestGui().setVisible(true));
    }
}
So much more elegant and readable. I think Oracle just hooked me back on Java with this one feature.




On IDE's and Editors

The development environment predicament has been an ongoing thing with developers for years: constant arguments over the smallest things, such as programming languages, version control tools, even the great editor wars. I find it quite intriguing how much developers like to argue over petty things, as it can be quite amusing to read many of the baseless arguments. For me personally, choosing most of these has never been difficult, except for one: the editing environment. That is what this entry is about, trying to make sense of it all.

When I develop code I want to be productive. I think this is the case for everyone. Through the years, the one thing I have noticed is that the IDE or editor you are using can have a huge effect on productivity: not in the sense of tasks being difficult, but in the sense of not interrupting the stream of thought you are trying to put into code. For me personally, one of the worst things ever is to be working on an algorithm, realize I made a mistake 10 lines up, and have to go back and fix it before I can start working again. Each environment out there, be it an IDE or an editor, has specific features to help combat this, for the most part I would think, but do not hold me to it.

The IDE is the modern editor of the day. It has quick ways to refactor large blocks of code, code completion through syntax analysis and parsing, integration of all the tools you need, and best of all, graphical debugging representations of what you are working on. There is more than just this, but it is a solid sampling of features. The key word here is Integrated: everything is there and often works with very little configuration. In my experience, however, the biggest downfall of the IDE is the editor. When you make that critical mistake, you need to stop typing, grab the mouse, fix your mistake, and then go back to working again. The other issue is that many of these features often lack a quick keybinding, forcing you to go through the menu system with the mouse yet again. Sure, you have keybindings for the most commonly used features, and I am sure you have them memorized, but it is the odd, less common things that you happen to use more often than others that hurt. One such example is the selection of text: you usually have two options, either the mouse to select the block of code, or shift+arrow keys. This is awkward.

On the editor front you have dumb editors and smart editors. Most people use old smart editors like Emacs or Vim. These lack many of the fancy IDE features, and if they do have a plugin for one, odds are it is not as good. The one place they excel, however, is editing text. When editing text, even a novice with very little experience can really reap benefits. For example, I have been experimenting with Emacs for a few days now, and man, do I feel productive editing text. Moving around by characters, words, lines, and sentences with rapid selection is just awesome. Want to select a line of code 10 lines up from the cursor? Easy: C-u 10 C-p, C-SPC C-e, then DEL, or if you want to cut it, C-u 10 C-p, C-k. I think one of the most powerful features here is setting the "mark". You can set a mark with C-SPC C-SPC, move and make your edit, then use C-u C-SPC to jump immediately back to where you previously were. The overall benefit of these features is to minimize the amount of thought interruption when you need to jump and make an edit: no need to grab the mouse and move the cursor.

I am not sure what I appreciate more when I am writing code: massive integration with some powerful features, or just a great editing environment that minimizes interruption. Could code completion and refactoring really make you productive enough to sacrifice the power you get from some of these smart text editors? I find myself making lots of small edits in code rather than massive refactors, so something like Emacs makes me personally feel really productive. So it comes down to this: is sacrificing the editor worth graphical debugging tools? I have no idea. Either way, with embedded development you are often looking at hex and binary values as well as assembly code all the time, so no GUI debugger really makes that look much better.

So my ultimate question is: why can't we have an IDE with an amazingly powerful editor? The best of both worlds, without it being a hacked plugin that does not really work like the editor it is trying to emulate in the IDE.

Even after writing this out, I still do not know which direction to go. I was hoping the post would clear my mind a bit and help me logically lay out what I appreciate in an editor. I guess the issue is that I appreciate the features both offer, and I want both, but nothing gives me both. I am not sure I have the time or energy to develop a new IDE from scratch that works how I want it to work. Eclipse is a huge mess, and I doubt I could write a new editor component for it that emulates, say, Emacs. Ultimately all I want is an environment that understands my code and has a really powerful editor to minimize my line-of-thought breakage, and nothing does exactly that.

What is your take on this? Leave it in the comments; I enjoy reading what other people think and what their experiences are like with odd topics like this. Oh, and no flame wars :D




Piecing together a development environment

It has been a while since my last post, for good reason: I have been mighty busy. Now that things have settled down, I have finally gotten the chance to start piecing together my embedded development environment. Embedded development is quite an interesting beast, in that many of the development concepts are well behind standard desktop development. Overall, I have come to believe it is this way because embedded development is incredibly low level. There are really no huge APIs in existence, because abstractions do not help much with portability: no matter how well abstracted, you still need heavy modifications for cross-target support, due to various CPU and peripheral features being located at different memory addresses and so on. So in this respect, I think there was never really a need to build massively robust software tools for developing on typical 8-bit and 32-bit micros.

My particular development platform of choice is my new MacBook Pro. This machine is amazing, quite a beast. The reason I chose a Mac over a PC with Windows is quite simple: despite Windows having quite a following in the IDE department for embedded development, Windows is still a very gimped platform. Every embedded toolchain, for instance, uses makefiles under the hood, and these are GNU makefiles running on GNU Make. The various IDE vendors ported Make over to Windows themselves and distribute it with the IDE. This actually makes the build process quite slow, because Make was really designed around POSIX. As I said previously, embedded development still uses quite a few old concepts, mainly because of the arcane architectures and the need to select where code goes in memory; it just so happens that GCC, Make, and linker scripts are still the best way to do this. So my main reason for choosing the Mac was the "it just works" experience with a strong UNIX core that provides POSIX features and a powerful terminal with bash. It really is a win-win, as you no longer have to worry about things breaking, not working at all, or the various hardware incompatibilities that come with Linux, which is getting better but still rough.

Now that the machine is out of the way, we need tools. The first obvious tool you need is a GCC cross compiler for ARM. For those that do not know, a cross compiler is a compiler that runs on one system type, say a PC, but instead of generating machine code for that machine, it generates machine code for a different architecture. This is what makes embedded development possible at all: without cross compilers you could never develop for these small chips, as you typically can't run a PC-like OS on the chip to compile your code. This is a simple task: all you need to do is download the toolchain, which includes everything you need (GDB, GCC, G++, linker, assembler, etc.), extract it to a directory, add the compiler to your path, and you are done.

The next task is a GDB server for GDB to connect to for remote debugging. In order to debug hardware-related code, it needs to run on the hardware. You also need to be able to get the binary burned into the chip's memory. Most ARM development boards come with a programming/debugging module on the board already. This module can typically burn the chip on the development half of the board, or burn an external chip via certain pin hookups. Still, to operate these features you need another piece of software. In my case, for maximum compatibility and to be able to use the same tool for possibly different chips, I chose OpenOCD. On Linux/Mac/Windows, OpenOCD needs to be compiled. There are sites that provide binaries for Windows, but this is often not needed, because the vendor usually has a tool ready for Windows. On Linux/Mac, OpenOCD, or a tool someone else wrote like stlink (made by an ST employee), is required. On Mac, OpenOCD can be taken care of quickly with the Homebrew package utility. This gives you not only a debugging server but also an interface to burn your code onto the chip.

Overall, that is all that is needed, besides driver code like CMSIS or vendor-supplied libraries. When I say driver code, it is not what people usually think of as a driver. Driver code is just various source files and headers which pre-map peripheral and CPU memory addresses for the chip in question. Think of it more like a very tiny and low-level API. Then you need the programming manuals, reference manuals, and datasheets.

As for IDEs, on Windows there are tons of choices. Many are quite expensive, but there are a few free ones that work relatively well. On any platform you can easily use Eclipse with CDT and maybe one or two embedded plugins to handle this. Then there is always the non-IDE route, using a text editor like Emacs or Vim. This is a decent option, considering you are not really working with large and confusing APIs like you would be in C++, Java, or C#. The APIs are very slim, so "IntelliSense" is not paramount. I have not chosen what I am going to use on this front quite yet. Like always, there are heated debates in this camp: some say Eclipse is the way to go, and others say Vim and Emacs, because you should know how your tools work for when stuff breaks.

I am not much for heated debates, so I will figure out what I want to do here. I will probably end up going with Eclipse, because quite honestly I hate having to configure every tiny piece of my editors.

That is all for now have fun and write awesome code.




Just got my new toy in the mail

Hey guys, it has been a while since my last post. So first I would like to give a few little updates on what I have been up to.

First and foremost, my attempt to get back into game development was a total fail. It just did not work out. I was getting started, then I lost interest quickly and proceeded to get slammed into the dirt by massive amounts of school work. On the bright side, I am only 3 1/2 classes from graduation, woo. After all these years of slugging away at a pointless job, it feels good to be almost at my goal of correcting my past mistake of dropping out of college.

Now onto more goodies. I have always loved electronics: such fun to make electricity do cool things, and it is even a very good way to become a much better developer. Having to deal with everything at such a low level really brings to light skills that can help developers create better software at the high level. It is amazing what high-level languages sacrifice, often for ease of use, and it is also amazing how universities hardly teach their students the low-level stuff anymore.

So I have been looking into building an interesting robotics project; well, not exactly robotics, more of a drone project. This is an aspect of engineering I really enjoy, because it is a tough project with lots of room to learn, and also a larger project that can grow over time. The issue with a lot of the simpler electronics projects is that they leave little room for growth. After some design work I realized I am going to need lots of power for this project, so it is time for me to leave the world of PIC and AVR and move to ARM Cortex-M. The overall reasoning is that you need some decent processing power to handle all the math needed by the flight controller, and the smaller chips have a very hard time with this.

The board I chose is quite powerful for a development board:
Cortex-M4 processor (with a hardware FPU)
Contains a multi-axis accelerometer
Contains a magnetometer for reading the Earth's magnetic field

These few features are awesome because both sensors are needed for accurate flight and maximum stability adjustments.

The board is made by ST, as is the chip, and it has a built-in programmer/debugger, making life a lot cheaper than buying external debugging hardware. A super powerful dev package for only $10; can't go wrong. Here is a link to the site for the board if you are interested...

Here is also a picture of the beast if you choose not to visit the link above...


Now that this is all said and done, I need to test various IDEs to see what I like. Right now I am testing out CooCox on Windows, which is free. It seems rather solid, despite being a really stripped-down version of Eclipse, as in missing the good features. Eclipse itself is another option, but it would have to run on Linux, due to the need for Make and some other Unix tools to function properly without jumping through massive Windows GNU loopholes. Commercial IDEs are not an option, because for some reason the embedded world thinks $4000 for an IDE is normal.

I will have some more updates on my learning in the future until then have fun coding.




A bit about my game and some slow progress

Hello Everyone,

I feel it is time for some updates on my game, as I really did not say much about it. So I would like to introduce you to the concept of a game I have been wanting to make for years. The game is called Orbis. The general idea behind the game is Asteroids with a twist. So ultimately I will be making an Asteroids clone with a few twists to spice up an old game I used to love playing at the arcades, or even on the Atari! I am not sure I am ready to detail out all the features quite yet, as I am not sure exactly what will make it into the game. So we will leave it at Asteroids with a twist for now, till I flesh out more of the concepts.

I also decided to make some tool changes for the game. I decided I would stay with C++, even though after my first foray back into C++ I wanted to scream back to C. Ultimately I ditched QtCreator and MinGW. For some reason I was having issues with MinGW on Windows 8, so I decided to install Visual Studio 2013 Express for Windows Desktop. I must say I am really impressed. I also decided to stick with SFML. To use SFML with VS2013 I needed to rebuild the library, and building SFML 2.1 did not work out too well, so I ended up going with the Git repo and building from there. So far so good. Here is what my new environment looks like:

Visual Studio 2013 Express Windows Desktop
SFML (master)
Git Version Control (on BitBucket)

I am still using CMake, because if I do decide to build the game on Linux for testing on my laptop, CMake will save my life. So right now I use CMake to generate the Visual Studio projects and work from there. Not pretty, but it saves tons of headaches. Visual Studio leaves me out of my comfort zone, as I am not a huge IDE fan, period, but we will see where this setup takes me.

Now a bit on the progress. Not much, honestly. Much of my time is taken up by school, and on top of it I am trying to get back into the groove of C++ after spending a few years in the world of C. So bear with me; we will get there.

The first task I really wanted to get done was to make sure SFML actually worked, and it did. From there I felt the most important thing to get out of the way was resource management, because this is something I really can't have a game without. Sadly, this was probably not the best place to start when trying to get my C++ groove back, but nonetheless I think I was successful. My goal here was to put together a cache for my resources. This will be the core of ensuring all resources are properly freed when no longer needed, and will also be the core of my TextureAtlas system, which I will be building next. I really needed this to be generic, because SFML has many types of resources. So this resource cache is built to handle sf::Image, sf::Texture, sf::Font, and sf::Shader. There may be a few more, but this is what I can think of off the top of my head. It will not handle music, because sf::Music handles everything very differently, so I will need to take a different approach for music.

I also wanted to ensure that the memory of the cache was handled automatically. Since I am not in the world of C and its fun void* generic programming, I figured I might as well try to use some C++11.

So my first foray into C++ after years and years of not touching it includes templates and some C++11. In other words: AHHHH MY EYES!!!!
Sorry for the lack of comments, but here is the code I came up with, using unique_ptr for the resource, which gets stored in a map. The actual key to the map will be implemented as an enum elsewhere, so I can index into the cache to get what is needed. There are four methods: two load_resource methods and two get_resource methods. There is no way to remove a resource at this point, as I am not sure I need it yet, for this game at least.
One load_resource takes care of the basic loadFromFile; sf::Shader takes an extra parameter, and so can sf::Texture, so the overloaded load_resource takes care of that. get_resource just returns the resource, and there is an overloaded version to be called in case the cache is const.

Again, I feel the code is simple enough to not need many comments.

[code=cpp:1]
#ifndef RESOURCECACHE_H
#define RESOURCECACHE_H

#include <map>
#include <memory>
#include <stdexcept>
#include <string>

template <typename Resource, typename ResourceID>
class ResourceCache
{
public:
    void load_resource(ResourceID id, const std::string& file);

    template <typename Parameter>
    void load_resource(ResourceID id, const std::string& file, const Parameter& parameter);

    Resource& get_resource(ResourceID id);
    const Resource& get_resource(ResourceID id) const;

private:
    std::map<ResourceID, std::unique_ptr<Resource>> resources;
};

template <typename Resource, typename ResourceID>
void ResourceCache<Resource, ResourceID>::load_resource(ResourceID id, const std::string& file)
{
    std::unique_ptr<Resource> resource(new Resource());
    if (!resource->loadFromFile(file))
        throw std::runtime_error("ResourceCache::load_resource: Failed to load (" + file + ")");
    resources.insert(std::make_pair(id, std::move(resource)));
}

template <typename Resource, typename ResourceID>
template <typename Parameter>
void ResourceCache<Resource, ResourceID>::load_resource(ResourceID id, const std::string& file, const Parameter& parameter)
{
    std::unique_ptr<Resource> resource(new Resource());
    if (!resource->loadFromFile(file, parameter))
        throw std::runtime_error("ResourceCache::load_resource: Failed to load (" + file + ")");
    resources.insert(std::make_pair(id, std::move(resource)));
}

template <typename Resource, typename ResourceID>
Resource& ResourceCache<Resource, ResourceID>::get_resource(ResourceID id)
{
    auto resource = resources.find(id);
    return *resource->second;
}

template <typename Resource, typename ResourceID>
const Resource& ResourceCache<Resource, ResourceID>::get_resource(ResourceID id) const
{
    auto resource = resources.find(id);
    return *resource->second;
}

#endif
Here is the main.cpp file, which I used for my functional test as well, so you can see it in use.

#include <SFML/Graphics.hpp>
#include "ResourceCache.h"

enum TextureID
{
    Background
};

int main()
{
    sf::RenderWindow window(sf::VideoMode(250, 187), "SFML Works!");

    ResourceCache<sf::Texture, TextureID> TextureCache;
    TextureCache.load_resource(TextureID::Background, "./Debug/background.png");

    sf::Texture& bkg = TextureCache.get_resource(TextureID::Background);
    sf::Sprite bkg_sprite(bkg);

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }

        window.clear();
        window.draw(bkg_sprite);
        window.display();
    }

    return 0;
}
As stated, this is my first foray back into C++, so feel free to let me know if you see anything obviously wrong with the ResourceCache class. Much appreciated in advance.

Until Next Time.




Solving the automated copy C++ Runtime issue

Hello Everyone,

In my last post I was using QTCreator with QMake to build some SFML sample code. If you remember correctly, to get the code to run from the IDE I needed to copy the SFML runtime DLL files to the build directory because we were dynamically linking. I did mention that I could not run the .exe from the build directory because it was missing some C++ runtime files the .exe is linked against. This post is about finding a solution to that problem.

Initially I thought I would be able to use QMake to copy the C++ runtime files for gcc, pthreads, and stdc++ to the build directory, so that I could run the code outside the IDE directly from the build directory. Everything was fine until I tried to copy the stdc++ DLL file. After some investigation I found that QMake uses the DOS xcopy command to do the copying, and for some reason it does not seem to like the ++ characters in the file name. This assumption was confirmed by renaming the DLL file and copying it over, which worked. The problem is the code can't find the DLL if you rename it, so on to another way.

For the second attempt I tried to use QMake to statically link the stdc++ library using the -static-libstdc++ linker option. This was a total failure; it might be an issue with MinGW, I am not sure. So I bailed on this idea quickly. Time to try something else...

QTCreator can also use CMake, which is another makefile generation system. CMake is awesome and I have dabbled with it in the past. I never used it to solve this particular problem before, so I decided to give it a shot since it is supported.

The really nice thing about CMake is its great support for multiple platforms, and it can generate project files for various IDEs. To solve my problem I need to copy over the DLL files as a post-build step of the project. It took me some time to figure this out, but I got it working. The key is that CMake provides cross-platform utilities built right into its executable, which means I can have CMake execute a cross-platform copy command to copy the DLL files post-build.

Here is the code that solves all of these problems, and it works flawlessly. By default QTCreator sets a bunch of CMake variables for us as an out-of-source build.

CMakeLists.txt

project(Ascended)
cmake_minimum_required(VERSION 2.8)
aux_source_directory(. SRC_LIST)

set(SFML_ROOT ../libs/SFML-2.1)
set(MINGW_ROOT c:/Qt/5.1.1/mingw48_32)

find_package(SFML COMPONENTS system graphics window REQUIRED)
include_directories(${SFML_INCLUDE_DIR})

# SFML Runtime DLL files
set(SFML_RUNTIME_FILES
    ${SFML_ROOT}/bin/sfml-system-2.dll
    ${SFML_ROOT}/bin/sfml-graphics-2.dll
    ${SFML_ROOT}/bin/sfml-window-2.dll)

# MinGW Runtime DLL files
set(MINGW_RUNTIME_FILES
    ${MINGW_ROOT}/bin/libgcc_s_dw2-1.dll
    ${MINGW_ROOT}/bin/libwinpthread-1.dll
    ${MINGW_ROOT}/bin/libstdc++-6.dll)

add_executable(${PROJECT_NAME} ${SRC_LIST})

# POST_BUILD notification and copy SFML Runtime DLL files
add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E echo "Copying SFML Runtime to Build directory.")
foreach(FILE ${SFML_RUNTIME_FILES})
    add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
        COMMAND ${CMAKE_COMMAND} -E copy ${FILE} ${CMAKE_BINARY_DIR})
endforeach(FILE)

# POST_BUILD notification and copy MinGW runtime DLL files
add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E echo "Copying MinGW Runtime to Build directory.")
foreach(FILE ${MINGW_RUNTIME_FILES})
    add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
        COMMAND ${CMAKE_COMMAND} -E copy ${FILE} ${CMAKE_BINARY_DIR})
endforeach(FILE)

target_link_libraries(${PROJECT_NAME} ${SFML_LIBRARIES})




Made a few decisions (Update)

Well so far I made a few decisions and was able to get some basic code up and running to test the environment.

The first decision I made was to use C++ over Python and the other languages I know. The main reason is library support. From a game development perspective, the Python libraries that exist are wrappers around C/C++ libs. That in itself is not an issue, but the way they are wrapped, in many cases using Cython, can be. I do not know much about Cython, but what I do know is you really need to pay attention to the compiler you use with Python and wrapped libraries, otherwise you get tons of crashes. I really did not want to deal with this hassle; I would much rather bolt Python or Lua on top of my own C++ code than go through all the setup required.

With that out of the way, I also decided to go with SFML 2.1. This was really a no-brainer given my decision to use C++. It is a super clean library and there is very little to complain about. It handles the parts you do not want to handle and stays simple enough not to step on your toes where you do not want it to.

As far as IDEs and compilers go, I really wanted to stay away from VC++. Do not get me wrong, it is a great compiler and IDE, but I do not spend all my time on Windows anymore. The main reason I use Windows is that it is required for school; after I graduate this summer that is no longer a deal breaker, and I am likely to go to Linux full time and just leave Windows around for cross-platform testing purposes. Because of this I decided to go with QTCreator with MinGW-w64 32-bit. The version you get with QTCreator 2.8.1 uses GCC 4.8.1. It is a very nice IDE; I have only used it a small amount so far, but it really is slick to use.

Also, from what I can see, qmake is powerful, well designed, and not too difficult to figure out. Qmake is similar to something like CMake in that it does not actually build the code; instead it is a makefile generator that creates the makefile for you.

So with all this set up I was able to put together what I would consider the SFML Hello World application which they use in the documentation to demonstrate how to setup the library.

The purpose of the code is to create a colored circle on the window rendering surface which is quite simple.
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(200, 200), "SFML Works!");
    sf::CircleShape shape(100.0f);
    shape.setFillColor(sf::Color::Red);

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }

        window.clear();
        window.draw(shape);
        window.display();
    }

    return 0;
}
Very simple. Create the window, add the circle of a particular radius, start the event handler, and exit when closed.

For this code to compile you obviously need to add the includes and libraries to the project. QTCreator does this differently than most IDEs: it is not done through a configuration UI but in the project file, the .pro, which also happens to be your QMake file. This is really cool :D

The other issue is that I am dynamically linking to SFML instead of using static libs, so the DLL files must be present. In my opinion the typical way of copying the DLL files into place by hand is annoying. Automating the process is platform dependent, and QMake again comes to the rescue by letting you create a unix or win32 block to cover platform differences. With this I can manipulate the directories and pipe the information into cmd.exe to copy the DLL files to the build directory. Really neat stuff.

Here is the .pro file for qmake, with comments. One thing to note from a Windows perspective: this can only be run from the IDE. The reason it cannot be run from the debug folder is that although I can copy over the appropriate runtime DLLs for dwarf2 and pthreads, I cannot copy over libstdc++'s DLL. I am not sure why; I think the way the file is named causes the terminal copy command to croak. If anyone knows a better way please let me know, as -static-libstdc++ does not seem to be statically linking like it should.

TEMPLATE = app
CONFIG += console
CONFIG -= app_bundle
CONFIG -= qt

INCLUDEPATH = c:/libs/SFML-2.1/include
LIBS += -Lc:/libs/SFML-2.1/lib -lsfml-graphics -lsfml-window -lsfml-system

SOURCES += main.cpp

win32:debug {
    EXTRA_BINFILES += \
        c:/libs/SFML-2.1/bin/sfml-graphics-2.dll \
        c:/libs/SFML-2.1/bin/sfml-window-2.dll \
        c:/libs/SFML-2.1/bin/sfml-system-2.dll
    EXTRA_BINFILES_WIN = $${EXTRA_BINFILES}
    EXTRA_BINFILES_WIN ~= s,/,\\,g    # regex replace '/' with '\' in the path list
    DESTDIR_WIN = ./debug
    DESTDIR_WIN ~= s,/,\\,g           # regex replace '/' with '\' in the path
    # iterate through the file list and copy each to the build directory
    for (FILE, EXTRA_BINFILES_WIN) {
        QMAKE_POST_LINK += $$quote(cmd /c copy /y $${FILE} $${DESTDIR_WIN} $$escape_expand(\n\t))
    }
}
That is all for now hope you enjoy.




Getting back into GameDev (for real this time) need some input

Hey everyone, I finally got all those W8 issues sorted out. I had to do lots of patching for VS2008 for school, and had to disable secure boot etc. to get the UEFI layer to allow the video card to function, but everything is set up and good to go.

Over the past few days I have been really pondering various aspects of my hobbies. Microelectronics is really cool, but it does not seem to give me the satisfaction I originally expected from it. My goal with microelectronics was to learn development at a very low level. Through various projects and experiments I realized it is really much more about the hardware design and circuits than it is about low-level development. The thing is, programming a microcontroller is not really programming per se; in most common applications it is mostly configuring the internal hardware of the chip to act on various sensory data. The most programming you do is setting some bits and possibly performing a few calculations, and that is about it. At least in the cases I have been capable of, as I am severely limited when it comes to circuitry knowledge. Even at the robotics level it really is nothing more than acting upon various sensory inputs to make decisions about motor speed and direction.

So through much thought and pondering, I think I am going to get back into game development for sure. I am not quite sure what I can gain from microelectronics, but I do know what I can gain from a knowledge perspective with games that could be useful in other applications, like data modeling.

Right now I already have two game ideas I want to work on that have been poking and prodding at my brain for the last few years. One should be simple to implement, and the other should be a good stepping stone from the first. Both games are 2D, as I feel I should start with 2D when getting back into this; once I have the two games under my belt, I can consider the move to 3D if it still feels feasible at the time.

As for the target platform, I am not sure at the moment, and this will directly affect various technology choices I will have to make. If I choose the PC my options are limitless; if I choose mobile I would have to target Windows Phone 8, as that is the device I currently own, which I think would be awesome, as it is really a great mobile platform. The issue with Windows Phone 8 is finding a target API: XNA does not work on Windows Phone 8 to my knowledge, so I would need to aim at an API that covers the platform, like MonoGame or DirectX.

Also, I think there is a fee to use anything other than the simulator when developing for the Windows Store, but the advantage of targeting it is that I can target both Windows 8 and the phone. So this may be the route I take; not sure yet.

Any input is appreciated here, as I am out of my zone on the Metro/Phone platform; I have always been mainly a C developer.

So the question is out: what do you guys think? Would it be a good choice to target Windows Phone 8? There is a huge open market on these devices for sure, and it may open options to publish on Xbox in the future, as well as Windows 8 itself. If so, what technologies should I be looking at?

If I do not see many comments here I will post in the forums; I have been out of the game dev scene for years.




Does W8 always have to get in my way

This is just a random rambling, I do apologize. In all honesty, it really does get under my skin all the time.

Overall I think the OS is great: it is very slick and much faster than previous versions of the Windows operating system. The speed increases really help with productivity. This, however, is where all the good parts stop.

The first major issue I have come across is compatibility, and it is just ridiculous. I have a class for school where we need to use Visual Studio 2008 to do ASP.NET 3.5 web development. Compatibility issues ensue, because it turns out SQL Server 2005 hates Windows 8, and the only workaround is to uninstall it and install SQL Server 2008 with SP3. Just one more thing I need to do to get this school work done. I wish they would just upgrade the course already; the tech is old now.

Next are the video card issues; UEFI/Secure Boot is the bane of all existence. It used to be that you plugged in the card, booted up, and installed your driver. Now I need to boot up, get into the UEFI setup, turn off secure boot, enable legacy mode, restart, shut down, put in the card, reboot, look at the distorted loading screen, install the driver, then reboot, and continue to look at the awful distorted loading screen every time I start the PC.
WHY?.... It is not like the video card is going to compromise my PC...

What a pain in my arse. All I know is I hope all this video card crap is worth the effort so I can get my HD 6850 up and running and hopefully regain my interest in game development. I really do miss the fun I had in the past.




The Mosin Nagant is here

As I promised, the Mosin Nagant has arrived. The one I received is a 1942 Izhevsk 91/30. I think it would be best to give some background before the pictures.

The Mosin Nagant was originally designed by the Russians in 1891. The approximate pronunciation is (Moseen Nahgahn), due to the Russians emphasizing vowels over consonants. Over the years they made some modifications to the rifle; the most obvious was the switch from a hex to a round receiver to improve accuracy. My particular year is a very interesting one for the Russians. In 1942 they were in some very heated and significant battles to protect their homeland from the seemingly unstoppable German war machine, one example being Stalingrad, which everyone here should know about. The Russians were in a tight bind and really needed to get more weaponry out to the Soviet soldiers, so the refurb process in the arsenals was often quick and half-assed, so to speak, in order to get the rifle out into the field. In 1942 the Mosin Nagant was still a mainstay weapon for the Russians due to their lack of an efficient assault rifle. This meant they suffered in medium-range combat, as their only other weapons were really the PPSh submachine gun and some shovels and grenades.

The Mosin Nagant was a top-notch rifle and very rugged. Accuracy was a key point in designing the Model 91/30 and other models, as they sport a whopping 28 3/4" barrel, or longer in some early models. They were designed and sighted in with the bayonet attached, as it was Soviet doctrine to never remove the bayonet. The most accurate 91/30s were hand-picked and retrofitted with a bent bolt and often a PU scope or some other model scope for the snipers. The 91/30 was used as the Russian sniper rifle all the way up to the Cold War, when they designed the Dragunov sniper rifle based off the AK-47. Even during the postwar era, up to and including the Cold War, Mosin Nagants were still in use and being manufactured, but in a carbine form known as the M44. Numerous other countries also used the Mosin, many of them part of the Soviet Bloc at some point or another, including Poland, Hungary, and Bulgaria; Finland used them as well. Other countries outside the Bloc used them too, including China and the North Vietnamese. Even today there have been reports of insurgent forces in Iraq and Afghanistan using Mosin Nagant rifles.

As stated above, the rifle was designed for accuracy, and the 7.62x54R was designed as a high-velocity cartridge. To give some perspective, consider ballistic tests of Russian surplus 148gr LPS ammunition, a light ball load with a steel core instead of lead. The muzzle velocity (measured as the bullet leaves the barrel, i.e. 0 yards) sits around 2800+ feet per second, which works out to roughly 2600 foot-pounds of muzzle energy. With the right load this rifle can push over 3000 feet per second. For those who do not know, velocity and twist rate largely decide the accuracy of a rifle from a ballistic perspective. These rifles can easily hit out to 1000 meters if needed.

OK, now more about my rifle. It was manufactured in 1942 by the Izhevsk arsenal in Soviet Russia. It is a wartime rifle in a wartime stock, meaning the stock was not replaced post-war. The rifle has been refinished by a Soviet arsenal even though the refinishing stamps appear to be missing; this is normal, they forgot this stuff all the time. The rifle is also what is known as all matching numbers, meaning the serial numbers on all the parts match, which is good. I am 99% sure the rifle was force matched, which is well known for military surplus, as the fonts look slightly different on the stamps. There are no line-outs on the old serial numbers; they were probably ground off entirely and then re-stamped. There is also a lot of black paint on the rifle, which was common to hide the rushed bluing jobs and light pitting. One thing you will also notice is an amazing stock repair job done by the Russians on the front of the stock. When it was done I do not know, but it really adds to the unique character and history of the rifle.

The best part of this rifle is the fact that it is one heck of a good shooter. I had her down at the range and she still functions great. The trigger does take some getting used to; I estimate the trigger pull at around 8-9 lbs, possibly 10. I would estimate the rifle weighs about 12-13 lbs.

As promised, here are some pictures. Since there are some 18 pictures or so, I will just post the link to the album so you can check out a piece of history. http://s752.photobucket.com/user/blewisjr86/media/DSC_0001_zpsfbd2b09e.jpg.html?sort=9&o=0




Some updates on what's going on

Hey Everyone,

Not many people read my journal as much as they did in the past when I was heavily into game dev, but it really does not matter. For one, I very seldom post anymore. The reason is that I have drifted away from game development and focus more on embedded stuff.

Right now I have been very busy actually finishing up my degree. WOOOOO!!!!! The stress is building as the workload ever increases, but I know all this hard work will pay off. For those who did not know, I am getting a BS in Information Systems Security, and so far, while working 48 hrs/week at my dead-end job, working on hobby electronics, enjoying my firearm shooting hobby, and going to school, I have been able to stay on the dean's list. *pat pat* I am really getting excited, as this is a huge step for me.

I really do not blog much, like I have stated. I have tried to run my own external blog, but I never seem to have the time; hopefully one day I can get one going regularly again, as I really do like writing.

In the name of my firearm hobby, I am adding a new weapon to my collection. Currently I have a Springfield 1911 Range Officer edition in .45 cal. Within the next few days (can't wait for it to arrive; longest 7 days of my life) I will be adding a WW2 Russian Mosin Nagant bolt-action rifle. This rifle shoots the 7.62x54R cartridge, a very powerful round that can easily punch right through cinder block. The bullet itself is a .30 cal, right up there with the .308 and .30-06. The R means it is a rimmed cartridge, and the 54 dictates the case length, if I remember correctly. One of these rounds packs more of a wallop than an AK-47 round, which is a 7.62x39. The 7.62x54R was designed as a long-range round optimized for velocity, which increases accuracy and distance. It is very accurate from 300-500 meters and can easily hit a human-sized target out toward 1000 meters. The Mosin was not just an infantry rifle; for the longest time it was also the Russian sniper rifle of choice, until the Dragunov was developed. So excited, can't wait. I will be sure to post some pics when I get it.

As for hobby electronics, I will hopefully be posting some more info on this project here as well. I am currently building what I call an audio trigger system. Essentially, the microcontroller waits for an audio pulse and uses that pulse to trigger an action. The first project using this small subsystem will be an audio-triggered stopwatch; after that, the trigger subsystem will also move into an audio visualizer project.
This project really stretches my electronics knowledge, as there were some interesting hiccups I had to design around. The code is simple; the circuits are the hard part for a guy like me. Because of this project I am learning and actually understanding what is going on. The design needs some preparation for a post, so it may be a little while and quite long. I hope to get that together soonish.

Feels good to write again cya guys around.




An update on where things stand currently.

Ah, Hello GDNet.

It has been a while. My external blog has been shut down because overall I found it rather difficult to write about what I was doing at the time, and it was killing my focus. I completed my very first microcontroller project, an alarm clock running on a PIC microcontroller. It was a lot of fun working at such a low level. I learned so much in the time it took to complete the project that I reached almost a burnout point, so right now microelectronics is on the back burner while I rest. There is so much you need to learn to complete something from the ground up: programming the controller at the hardware level, be it C or ASM (I used C for the alarm clock), then all the circuit theory and interfacing with the various components. It really is a lot of work, and I very much respect the people who do such tasks for a living. You would never think that writing the firmware for an alarm clock would teach you so much about the basics of an actual operating system kernel, but it really does.

So now I am coming back into the world of game dev. Even though it was not a game, it really felt great to finish an actual project from start to finish, and I feel the project taught me some important things about development that I was missing before. I now feel I am ready and have the discipline needed to tackle a game, thanks to having to design and plan such a beast of a project. An alarm clock may seem simple, but in reality there is quite a bit of work involved when dealing with such minute resources and the hardware restrictions that come from the way various modules are built into the silicon.

So right now I am looking for a route to take. I have always wanted to learn DirectX or OpenGL, but I really think it is not necessary, because writing a game engine is a lot of work, just like writing the firmware for an alarm clock: you really need to know the little details and nuances. Is it possible for me to learn, say, OpenGL and build an engine? Sure, but it probably would not be very good. So I am currently looking into UDK, as it might work very well for a game I have been thinking about.

It would be nice to hear what others think on the topic, so feel free to comment. Should I follow my dream of learning OpenGL, or should I tackle UDK and actually get something playable 20 times quicker?




New Blog!

Hey GDNet,

My new blog is up and running now and I have my first post up. Nothing really interesting, just a welcome post.
There is much more content currently in the pipeline, and I think it will really come into its own as a little side project for me.

I will be embarking on my first solo PIC uC project, which will be open source and, of course, documented at my new blog.

If anyone around here is truly interested in where I am going with my development goals feel free to stop by regularly and drop some comments.
It is always good to know if people are reading.


Hope to cya there and in the GDNet forums from time to time. Peace.




Small Update

Hey GDNet

This is just a quick update on where things stand.

First sorry for not posting more PIC journal entries. There are two main reasons for this.
The first reason is that after working my way through a majority of the tutorials, I feel PIC is not quite the right microcontroller for me. It is a great microcontroller, don't get me wrong, and I would not hesitate to use it in a personal project, but a few issues led me to this decision.

The first issue is the development tools; they are rather bad. The MPLAB X IDE is based on NetBeans. That is not a problem in itself, but their plugins are rather buggy. Getting the IDE to actually interface with the MCU without getting yelled at, like in my first HelloWorld post, is one problem. The next is the in-circuit debugger, ugh. When trying to debug the application, half the time the debugger just did not work!!!! There is also no option to power the device without programming it. This is rather icky, because if you want to run the application you already burned into the chip, you need to re-burn the program or actually use external power. I don't like this, because the nature of flash memory on MCUs is that you can only burn the chip so many times before it dies.

Next is the state of quality C compilers. Without a doubt I want to use C to program these chips after learning to understand the architecture through assembly. The issue with PIC is that the good compilers are not free. XC8, the 8-bit compiler, is $500, which is not bad by embedded compiler standards; however, it only covers 8-bit parts, and the 16-bit and 32-bit compilers are $500 each as well. Quite pricey. There are free versions of the compilers available, but the optimization is horrible, often generating hex files double the size of raw assembler. So if you want to fit a slightly more complex application into the 14 KB of flash you have, you need to either A: meticulously write your C to force the compiler to generate halfway decent assembly, inlining ASM to shave bytes, just to get the size to fit on the chip; or B: dump the hard cash and get a proper compiler that does its job.

So I decided to switch to AVR chips, and I picked up an Arduino pack today. The benefit is you get a fully optimized C compiler based on GCC for free, which can not only program the Arduino with its custom API but can also target raw AVR chips later down the road. You can also use these tools to write assembly for both Arduino dev boards and raw AVR chips. Secondly, you have two free IDEs: the Arduino IDE, and AVR Studio 6, which is built on the shell Microsoft provides for making your own Visual Studio-based IDEs. So you get the full benefits of Visual Studio 2010, plugins and all, for Atmel AVR and ARM chips. This is a win-win: solid development tools all around, with no restrictions on your capabilities.

The second reason I have not been posting is that I am in the process of setting up an external blog. I have not really been doing game development for quite a long time, and I feel out of place posting this microelectronics stuff here; I suspect many people won't read it or just don't have an interest in it. So I will be moving on and getting my own blog going for my new hobby of interest, and hopefully building a bit of a following.

That is all for now; quite busy, as I need to get in contact with my hosting provider for verification stuff. See you on the flip side.




The beginnings of PIC (Hello World)

Hello GDNet

First, keep in mind this is a rather long post. I also have images in an entry album for you.

So my PIC microcontroller starter kit arrived a few days ago and I started to tinker around with it. I really like this piece of hardware.
The circuit built onto the development board is very clean. It contains a 6-pin power connector, a 14-pin expansion header, a potentiometer (dial), a push button, and 4 LEDs. There are also 7 resistors and 2 capacitors on the board. By the looks of it there is 1 resistor for each LED so you don't overload them, 2 for the push button, 1 for the expansion header, 1 capacitor for the potentiometer, and 1 capacitor for the MCU socket. This is just from looking at the board; I am not sure it is accurate, as I would have to review the schematic, which I am not very good at reading yet.

The programmer (PICkit 3) has a button designed to quickly flash the microcontroller with a specified hex file. It also has 3 LEDs to indicate what is happening.

Before I get into HelloWorld, I would like to cover the pain-in-the-ass issues I found with the MPLAB X IDE.
First, I spent hours trying to figure out why the hell the IDE could not find the chip on my development board to program it. It turns out that by default the IDE assumes you are using a variable-range power supply to power the board, so I needed to change the project options to power the development board through the PICkit 3 programmer.
Then, the dreaded device ID not found error: next, the IDE could not find the device ID of my MCU. WTF!!!! Two hours later I stumbled upon an answer: THE MPLABX IDE MUST BE RUN IN ADMINISTRATOR MODE!!!!! WTF!!!!!!! The user's manual stated nothing of the sort. To get it working I needed to start the IDE in admin mode and, after it started, plug the programmer into the USB port. If it is not done in that order, you get errors when trying to connect to the programmer and the chip.


Here is a quick overview of the specific chip I used for this intro project; I find typing this stuff out helps me remember anyway.
There are 3 types of memory on the PIC16 enhanced mid-range: program memory (flash), data memory, and EEPROM memory.
Program memory stores the program, data memory handles all the components, and EEPROM is persistent memory.
Data memory is separated into 32 banks on the PIC16 enhanced mid-range.
Banks: you deal with these the most; they contain your registers and other cool stuff.
Every bank contains the core registers, the special function registers are spread out amongst the banks, every bank has general-purpose RAM for variables, and every bank has a section of shared RAM which is accessible from all banks.

The HelloWorld project uses 4 instructions, and 4 directives. Instructions instruct the MCU and directives instruct the assembler.
banksel: Tells the assembler to select a specific memory bank. This is better to use then the raw instruction because it allows you to select by register name instead of by memory bank number.
errorlevel: Used to suppress errors and warnings the assembler spits out.
org: Used to set where in program memory the following instructions will reside
Labels: used to modularize code. Not a directive per se, but a useful thing to use.
end: tells the assembler to stop assembling.

bsf: bit set; sets a bit in a register to 1 (turns it on).
bcf: bit clear; sets a bit in a register to 0 (turns it off).
clrf: initializes a register's bits to 0, so if you have 0001110 it will become 0000000.
goto: moves to a labeled spot in memory; not as efficient as alternative methods.

LATC: a data latch, in this case the latch for PORTC; it allows read-modify-write. We use this to write to the appropriate I/O pin for the LED.
PORTC: reads the pin values for PORTC. Always write through latches and read from ports, never the other way around.
TRISC: determines whether each pin is an input (1) or an output (0).
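Since bsf, bcf, and clrf are just bit operations, their effect is easy to sketch in a higher-level language. Here is a little Python sketch (illustrative only; the helper names mirror the mnemonics, and the register values are made up, this is not real PIC code):

```python
# Illustrative sketch of the PIC bit instructions using plain integers.
# On the real MCU these are single instructions acting on the selected bank.

def bsf(reg, bit):
    """bit set f: set the given bit to 1 (turn it on)."""
    return reg | (1 << bit)

def bcf(reg, bit):
    """bit clear f: set the given bit to 0 (turn it off)."""
    return reg & ~(1 << bit)

def clrf(_reg):
    """clrf: zero every bit of the register."""
    return 0

# TRISC: clearing bit 0 marks pin RC0 as an output (0 = output, 1 = input)
trisc = bcf(0b11111111, 0)
# LATC: clear the latch, then drive RC0 high to light DS1
latc = bsf(clrf(0b00011100), 0)
print(bin(trisc), bin(latc))  # 0b11111110 0b1
```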

Explanation of Project:

So generally speaking, assembly is very verbose, especially on the PIC16 enhanced, because you need to ensure you are in the proper bank before trying to manipulate a register. In order to light the LED, we need to ensure the I/O pin for the LED we want to light is set to an output. We should then initialize the data latch so that all bits are 0. Then we need to drive high (1) the appropriate I/O pin our LED sits on; in this case it is RC0, which is wired to LED DS1.

The code to do this follows. Forgive the formatting; the assembler is strict in that labels and include directives can only be in column 1, and everything else must be indented. There are also some configuration settings for the MCU at the beginning of the file. I am not sure what each one does yet, as I have not had a chance to read the specific details in the data sheet. These may mess up the formatting a bit because it seems they need to stay unwrapped on a single line, which makes them extend out very far. I will need to look into how to wrap these for readability.
Lastly the code is heavily commented to go with the above explanation.
; --Lesson 1 Hello World
; LED's on the demo board are connected to I/O pins RC0 - RC3.
; We must configure the I/O pin to be an output.
; When the pin is driven high (RC0 = 1) the LED will turn on.
; These two logic levels are derived from the PIC MCU power pins.
; The PIC MCU's power pin is VDD which is connected to 5V and the
; source VSS is ground 0V; a 1 is equivalent to 5V and 0 is equivalent to 0V.
; -----------------LATC------------------
; Bit#: -7---6---5---4---3---2---1---0---
; LED:  ---------------|DS4|DS3|DS2|DS1|-
; ---------------------------------------
#include            ; for PIC specific registers. This links registers to their respective addresses and banks.

 ; configuration flags for the PIC MCU
 __CONFIG _CONFIG1, (_FOSC_INTOSC & _WDTE_OFF & _PWRTE_OFF & _MCLRE_OFF & _CP_OFF & _CPD_OFF & _BOREN_ON & _CLKOUTEN_OFF & _IESO_OFF & _FCMEN_OFF);
 __CONFIG _CONFIG2, (_WRT_OFF & _PLLEN_OFF & _STVREN_OFF & _LVP_OFF);

 errorlevel -302    ; suppress the 'not in bank0' warning

 ORG 0              ; sets the program origin for all subsequent code

Start:
 banksel TRISC      ; select bank1 which contains TRISC
 bcf     TRISC,0    ; make IO Pin RC0 an output
 banksel LATC       ; select bank2 which contains LATC
 clrf    LATC       ; init the data LATCH by turning off all bits
 bsf     LATC,0     ; turn on LED RC0 (DS1)
 goto    $          ; sit here forever!
 end




Update on where things are heading

Hey GDNet,

I know I don't post often enough; a lot of this has to do with me being bogged down with school plus a full-time job. The other reason is that I don't really tinker around with game programming that much anymore either. I still want to learn OpenGL at some point, but this has been put on the back burner. Hopefully I can return to this goal at a later date when there are some better resources available, i.e. if the new Red Book turns out to be written right this time.

On another note, one thing I have wanted to get into for a long time is embedded development on microcontrollers (MCUs). The reasoning behind this is that it can make you a better developer overall. You have a very small amount of resources available that you need to use sparingly. Not to mention, more often than not you get to use assembly. I have always wanted to learn assembly, not to use for a project but to make myself a better developer. This holds true because in order to utilize assembly you need to understand the bare-metal architecture of the chip you are using. x86 and x86_64 are very complex architectures with huge amounts of instructions, which makes them difficult to learn. So one way is to instead start with an MCU and then gradually work your way up.

My end goal for this would be to make an 8-bit game I write, say Asteroids, run on an MCU. I asked on a forum for advice on what hardware I should look at to get to this goal, and I was told I should look into Atmel mega chips. Initially I was looking at the 8-bit PIC chips made by Microchip. On the Microchip forums I was told I am in for a big learning curve and that PIC is probably a bad choice for an 8-bit game because the call stack is small and the RAM/flash space is tiny. They also said the C compilers are bloated unless you buy a professional one. Uh, this is the point. The original Game Boy ran a modified Z80 chip made by Sharp, and the actual specs of that chip are easily matched by the 8-bit PIC MCUs. So I decided to go with PIC anyway because, from what I have read, they have the better dev tools, are more than capable of competing with an Atmel mega, are cheaper to get started with, and have tons of documentation.

So despite this advice I made my order. This is what I bought; there is a link to the store page on the description page if you are interested.
http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en559587 In the side bar there is a link to buy/sample options if you want to look at buying one yourself.

I think this will be a great chip to start with, as the kit has 12 tutorials in assembly and C, the IDE, the programmer, a demo board, and 2 MCU chips: a PIC16F and a PIC18F. The PIC16 is the mid-range PIC 8-bit MCU and the PIC18 is the high-end PIC 8-bit MCU. The tutorials cover both chips.

Wish me luck, this is going to be FUN!!!!! I will try to post my progress here if you are interested. I still may end up making an outside blog instead, not sure yet, but if I do I will for sure kick a linkback here.

That is all for now have fun and code well.




Current OpenGL Progress and other stuff

Well, my current progress on learning OpenGL is that I have gone nowhere. Essentially it is at a standstill. There are a few reasons for this.

First reason is procrastination.
Second reason is more or less the cause of the First.

Right now I am really tied up with another project that I am trying to get off the ground: a web development project using Java EE which really needs to get moving. Essentially this project is meant to make me money down the road, as a sort of corporate startup endeavor, not the typical hipster startup trend BS that is all over the place. A friend of mine and I have wanted to start a software company for a long time, and this is the project that could get it off its feet. It is a web tool for small businesses that handles order processing, inventory tracking, etc., run for them off-site in a cloud-like setting; they can use the web frontend or the thin client to work with the system. It is hard to really explain the software unless you have used some of the systems already out there that make you want to blow your brains out because of how dysfunctional they really are.

So right now I have been brushing up on my Java and crash-coursing some Java EE to give me a base to work off of. Believe it or not, everything people say is false: Java is actually quite an awesome language, and it is really fast. It can get a bit verbose at times, but it is very nice to work with, and I am coming to the point where I am growing quite fond of it again. I have not touched Java since version 1.5; Java was the second programming language I learned, after Visual Basic 5 and before C.

If and when I get a chance to spend some time learning OpenGL, I might try porting the Arcsynthesis tutorials to Java with LWJGL, because quite honestly the SuperBible 5th edition is a sad excuse for a book from what I have experienced so far.




Preparing to Learn OpenGL (Toolchain Setup)

Hello again everyone.

I am finally after a very long time going to be diving into 3D for my next project.
In order to do this I obviously need to learn a 3D api and after much evaluation I have
decided to learn OpenGL. The main reason for this is not because of its cross platform
support but because the style of the API melds with my brain much better than the COM-based
Direct3D API. This is probably due to my strong roots in and love for the C language, but either
way I have made my choice.

I am going to be learning the Modern OpenGL style obviously starting with OpenGL 3.3.
There really are not many books out there on modern OpenGL, so I will resort to using the
OpenGL SuperBible 5th edition to get my feet wet. Sure, it uses the GLTools wrapper library,
but from what I can tell they eventually teach you the stuff under that library, so I will
be using it as a stepping stone to get an understanding and then supplement it with the more
advanced Arcsynthesis tutorial and maybe the OpenGL 4.0 Shader Cookbook. I am hoping this will
give me a solid foundation to build off of.

With that in mind I need to configure the OpenGL Superbible to work with my Toolchain I have set up.
The SuperBible assumes use of Visual Studio, Xcode, or Linux makefiles. I currently don't use any of these.
I am not on Linux; even though I have strong roots with Linux (my server runs on it) and with developing on
it, my current laptop uses the Nvidia Optimus technology, which currently has very poor Linux support.
So instead I put together a toolchain on Windows 8, which I am somewhat comfortable with and may adapt
in the future.

The current toolchain consists of MinGW/MSYS, CMake, Subversion, and Sublime Text 2. MinGW is a GCC compiler port for Windows.
CMake is a cross-platform build generator, and Sublime Text 2 is a non-free cross-platform text editor that integrates
with TextMate bundles and is extensible through Python. Subversion is obviously a version control system. I could use Git
or Mercurial, but I am still having a hard time with the concept of DVCS, so this is subject to change as well.

To use the OpenGL SuperBible we need a few dependencies. The first is FreeGlut and the second is the
GLTools library. I got the code for the SuperBible from the Google Code SVN repo so I could get the source for GLTools.
I downloaded a newer version of FreeGlut (2.8) from the website; the repo came with 2.6. I needed to build these with my
compiler so that they can properly link, so I threw together 2 CMake files to do this. I made 4 directories under my
Documents folder: 1 for FreeGlut's source, 1 for GLTools' source, and 1 out-of-source build directory for each library.
The CMakeLists.txt file for each library went under the source directories. Then I ran CMake to generate MSYS makefiles
and ran make. The makefile places the libraries under a central C:\libs folder and also moves the headers there.
If you are interested here is the content of the CMakeLists.txt files. I used globbing for the source files which is bad
practice but in this case it does not matter because I will not be adding any more source files to the CMake projects.

GLTools CMakeLists.txt

cmake_minimum_required(VERSION 2.6)
set(SRC_DIR "src/")
set(INC_DIR "include/")
file(GLOB SRC_CPP ${SRC_DIR}*.cpp)
file(GLOB SRC_C ${SRC_DIR}*.c)
add_library(GLTools ${SRC_CPP} ${SRC_C})
set_target_properties(GLTools PROPERTIES
target_link_libraries(GLTools Winmm Gdi32 OpenGL32)

FreeGlut CMakeLists.txt

cmake_minimum_required(VERSION 2.6)
set(SRC_DIR "src/")
set(INC_DIR "include/")
set(BUILD_DIR ${PROJECT_BINARY_DIR}/libs/freeglut-2.8.0/libs)
file(GLOB SRC_C ${SRC_DIR}*.c)
add_library(freeglut32_static ${SRC_C})
set_target_properties(freeglut32_static PROPERTIES

I don't think the FreeGlut one is optimal because of the complexity of building the library,
but it has been tested and does work, so it should be fine. If I encounter any issues with the way
the library is built I will make sure to post an update.
So after running make I have the following structure under C:\libs:


This structure will allow me to easily create CMake builds for all of the chapters in the book as
I complete them. I know where the libraries are, so I can easily link them and bring in the headers.
Kind of hackish, but given that this is not a custom project it is the easiest way to ensure I can get
builds up and running quickly.
That is all for this post hopefully it was helpful cya next time.




Fun with ANSI C and Binary Files

Hello Everyone,

After that last rant post I felt obligated to actually post something useful. I feel horrible when I rant like that but sometimes it just feels necessary.
On a side note however, yes, I still hate VS 2012 Express. After all these years you would think Microsoft would update their damn C compiler, ugh.

Ok, so on to the meat of the post. Through my various browsings of the forums I have seen people with an interest in pure C programming. It really makes me feel good inside, because it really is a nice language. So many people say it is ugly, hackish, and very error prone. I tend to disagree; I actually feel it is much less error prone than C++. We will get into why in a few moments. First, before I get into code, let me explain a bit about why I love pure C despite its age.

The first thing I really like about C is the simplicity. It is a procedural language which makes you think in steps instead of abstractions and objects. In other words it causes you to think more like the actual computer thinks in a general perspective. I think this is great for beginners because it forces you to think in algorithms which are nothing but a series of steps.

The next part I like about it is the very tiny standard library. It is so small you can actually wrap your head around it without a reference manual. This does come with some downsides, as you don't get the robust containers and other things C++ comes with; essentially, in C you have to write your own (not as bad as it sounds).

Lastly, raw memory management. No worrying about whether or not you are using the right smart pointer, etc. Now I know what people are going to say: that C is more prone to memory leaks than C++ because of the lack of smart pointers. Sure you can leak memory, but it is a lot harder to do so in C, IMHO. The thing is, again, C is procedural without OOP. This means that when programming in a procedural way you are not going to be accidentally copying your raw pointers. So really the only way to leak is to forget to free the memory, which under standard C idiom is rather hard to do. In C the motto goes: what creates the memory frees the memory. What this means is that if you have a module, say a storage module, that dynamically allocates with malloc, that module must be responsible for cleaning up the memory it created. You will see this in action next.

As I said ANSI C allows you to think in the terms of algorithms without the sense of having to abstract everything.
To provide an example I created a very basic .tga image loader based off of nothing but the Specification.

Keep in mind this is simple, particularly since it is meant for use as a texture. Basically I skipped a bunch of unneeded header elements and extension elements; they are not needed because I am not saving a new copy of the file, so I just grab the useful bits.

So from a design perspective this is what we need.
A structure that will store our image data.
A function to load the data
Finally a Function to clean up our dynamically allocated memory (Due to the above best practice)

From this we get the following header file.

#ifndef TGAIMAGE_H
#define TGAIMAGE_H

/*
 * Useful data macros for the TGA image data.
 * The data format is laid out by the number of bytes
 * each entry takes up in memory where
 * 1 BYTE takes up 8 bits.
 */
#define BYTE unsigned char /* 1 BYTE 8 bits */
#define SHORT short int    /* 2 BYTES 16 bits */

/*
 * TGA image data structure
 * This structure contains the .tga file header
 * as well as the actual image data.
 * You can find out more about the data this contains
 * from the TGA 2.0 image specification at
 * http://www.ludorg.net/amnesia/TGA_File_Format_Spec.html
 */
typedef struct _tgadata {
    SHORT width;
    SHORT height;
    BYTE depth;
    BYTE *imgData;
} TGADATA;

/*
 * Load .tga data into structure
 * params: Location of TGA image to load
 * return: pointer to TGADATA structure
 */
TGADATA* load_tga_data(char *file);

/*
 * Free allocated TGADATA structure
 * return 0 on success return -1 on error
 */
int free_tga_data(TGADATA *tgadata);

#endif /* TGAIMAGE_H */

The above should be self-explanatory due to the comments provided.
I created 2 #define macros to make the typing easier to manage. The specification defines the size of the data at each offset, which all revolves around either 8 or 16 bits.

Now we have the implementation of our functions. Here is that file.

#include <stdio.h>
#include <stdlib.h>
#include "tgaimage.h"

TGADATA* load_tga_data(char *file)
{
    TGADATA *data = NULL;
    FILE *handle = NULL;
    int mode = 0;
    int size = 0;

    handle = fopen(file, "rb");

    if (handle == NULL) {
        fprintf(stderr, "Error: Cannot find file %s\n", file);
        return NULL;
    } else {
        data = malloc(sizeof(TGADATA));

        /* load header data */
        fseek(handle, 12, SEEK_SET);
        fread(&data->width, sizeof(SHORT), 1, handle);
        fread(&data->height, sizeof(SHORT), 1, handle);
        fread(&data->depth, sizeof(BYTE), 1, handle);

        /* set mode variable = components per pixel */
        mode = data->depth / 8;

        /* set size variable = total bytes */
        size = data->width * data->height * mode;

        /* allocate space for the image data */
        data->imgData = malloc(sizeof(BYTE) * size);

        /* load image data */
        fseek(handle, 18, SEEK_SET);
        fread(data->imgData, sizeof(BYTE), size, handle);

        /* done with the file; close it to free the resources fopen allocated */
        fclose(handle);

        /*
         * check mode 3 = RGB, 4 = RGBA
         * RGB and RGBA data is stored as BGR
         * or BGRA so the red and blue bytes need
         * to be flipped.
         */
        if (mode >= 3) {
            BYTE tmp = 0;
            int i;
            for (i = 0; i < size; i += mode) {
                tmp = data->imgData[i];
                data->imgData[i] = data->imgData[i + 2];
                data->imgData[i + 2] = tmp;
            }
        }
        return data;
    }
}

int free_tga_data(TGADATA *tgadata)
{
    if (tgadata == NULL) {
        return -1;
    } else {
        /* imgData was a separate malloc, so it must be freed first */
        free(tgadata->imgData);
        free(tgadata);
        return 0;
    }
}

Let's start at the top with the load_tga_data function.

In C the first thing we need to do is set up a few variables.
We have one for our structure, one for the file, and one each for the mode and size. More on the mode and size later.

We use fopen with "rb" to open up the file to read binary data.
If the file open was successful we can go ahead and start getting data.

The first thing we do here is use malloc to reserve memory for our structure and use sizeof so we know how much memory we need.

Now we load the header data. I use the fseek function to get into position for the first read.
fseek takes a pointer to our opened file as its first argument. The second argument is the offset we want to read from, and SEEK_SET says to count that offset from the beginning of the file (an offset is a number of bytes into a file). The specification for the TGA file tells us that the width of the image starts at offset 12. It is two bytes in size, so we ensure we only read 2 bytes from the file with sizeof(SHORT) and tell fread to do 1 read of that size. The file's internal position pointer is now at offset 14, which is where our height is. We do the same there, then finally read the depth, which is one byte in size, placing us at offset 17.
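To make the offset walk concrete, here is the same header read done in Python with the struct module against a fake in-memory header (the 64x32x24 values are made up for illustration; the '<' format prefix matches TGA's little-endian layout):

```python
import struct

# Build a fake 18-byte TGA header: width (2 bytes) at offset 12,
# height (2 bytes) at offset 14, depth (1 byte) at offset 16.
header = bytearray(18)
struct.pack_into('<HHB', header, 12, 64, 32, 24)

# The equivalent of fseek(handle, 12, SEEK_SET) followed by the three freads:
width, height, depth = struct.unpack_from('<HHB', header, 12)
print(width, height, depth)  # 64 32 24
```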

Now that the header data we need is read and stored, we need to handle the actual image data, which is tricky. This is where our mode and size variables come into play.

You find the mode of the image data by dividing the depth by 8. So if you have a 24-bit depth and divide it by 8, you get a mode of 3.
This mode is the number of components each pixel in the data has. The TGA spec defines a mode of 3 as BGR and a mode of 4 as BGRA: Blue Green Red and Blue Green Red Alpha respectively. Now, the actual size of the image data section varies depending on the image, so we need to calculate it so we don't read too far into the file and corrupt our data. To do this we need the width, height, and mode. Multiplying them together gives the size of the section: mode bytes per pixel for each pixel defined by width and height. Hope that makes sense.
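The arithmetic above is worth seeing with numbers plugged in; assuming a hypothetical 24-bit 64x64 image:

```python
# mode = components per pixel, size = total bytes of pixel data
depth = 24
width = height = 64

mode = depth // 8              # 24 / 8 -> 3, i.e. BGR
size = width * height * mode   # 64 * 64 * 3 -> 12288 bytes
print(mode, size)  # 3 12288
```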

Now that we have the size of this image data section we can dynamically allocate our imgData section of the structure to the appropriate memory size.

We then need to fseek to the appropriate section of the file which is offset 18 for this data and we read in the full section because it is defined as a run of bytes.

Now that we have the data, we close the file with fclose to free the resources allocated by fopen.

Ok, remember just above I said mode 3 and mode 4 are BGR and BGRA respectively. This is not good, because if we use this as a texture in, say, OpenGL, it needs to be in RGB or RGBA format. So we check the mode here, and if needed we flip the red and blue bytes around in the data.
To flip the bytes we do some very basic index math. Because the data behind the pointer is technically an array, we can hop the right number of indices, in this case 2, because the blue and red components of a pixel are always 2 indices apart, and we don't care about A or G because they are already in the proper locations. If you don't understand the pointer and array nomenclature, feel free to ask in the comments or read the K&R book if you can get hold of a copy.
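Here is the same swap written out in Python on a couple of hypothetical BGR pixels, which may make the index hopping easier to see:

```python
mode = 3  # BGR: blue at i, green at i + 1, red at i + 2

# Two made-up BGR pixels: (B=10, G=20, R=30) and (B=40, G=50, R=60)
img = bytearray([10, 20, 30, 40, 50, 60])

# Hop one pixel (mode bytes) at a time and swap blue with red;
# green (i + 1) is already in the right spot.
for i in range(0, len(img), mode):
    img[i], img[i + 2] = img[i + 2], img[i]

print(list(img))  # [30, 20, 10, 60, 50, 40]
```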
Finally we return our structure to the caller.

Our last function is free_tga_data. This one is important due to the rules above: the tgaimage module allocated the data, so it is its responsibility to provide the means to clean it up.

This one is really simple: we take in the structure as an argument and make sure it is not NULL, since following a NULL pointer to its imgData member would likely segfault the application. If all is good, we FIRST free the imgData portion of the structure; if we don't do this it will leak, as it was a separate malloc. Then we free the rest of the tgadata structure from memory.

Hopefully this post was helpful to some of the C programmers out there. It is a very nice example of how clean a solution in C can be, demonstrates how best to avoid memory leaks in C applications through a few best practices, and shows how to traverse a binary file using offsets from nothing more than the specification.

That is all for now have a great day.




Microsoft what are you doing?

Interesting state of affairs I have come across today. So let's just get into it; I will try to be short and sweet.

Today I have been doing some research on graphics APIs. For the longest time I have been wanting to move to the 3D end of computer graphics. As everyone knows, there are 2 APIs for this: D3D and OpenGL. I don't really want to get into flame wars over the two because it really does not matter; they both do the same thing in different ways.

So ultimately, the choice I made after my research was to use D3D. The reasoning behind this was the superior quality of Luna's books over the OpenGL SuperBible. Luna really gets into interesting stuff like water rendering and terrain rendering examples, where the SuperBible spends the entire book rendering teapots. That is not really the issue, though; the state of the book is rather lacking because so many pages are wasted on its pre-canned fixed-function wrapper API instead of just getting down to the nitty gritty. I am not a fan of the beat-around-the-bush style and prefer the jump-right-in mentality. I am a competent programmer; there is no need for the wrapper API, it is just extra dead trees. So the main reasoning behind the D3D choice was sheer quality of the available resources.

Then I came across the current Microsoft debacle. Not sure what they are thinking. First off, yes, I am running Windows 8 and I really love it. Nice and easy to use once you get used to it, and I like the clean style it presents. I think the new version of Visual Studio could use some UI work, but who cares. The real issue comes into play with the Express 2012 edition, because I don't have $500 to drop on an IDE. Actually I prefer no IDE, but again, that is another gripe. When Microsoft moved the D3D SDK into the Windows 8 SDK they removed some API functionality (not a big deal), but they also removed PIX. They rolled PIX and the shader debugger into Visual Studio and made them only available in the Pro+ versions. NOT COOL. NOT COOL AT ALL. On top of that they removed the command-line compilers,
so in order to get those you need to install Visual Studio first.

So basically they want me to use the IDE, or at least install it, and they removed the standalone debuggers, meaning I can't properly debug shaders as I am learning unless I shell out $500. Not cool, again, not at all.

So right now I am leaning towards using OpenGL and avoiding potential Windows 8 Store development, just so that I can properly adapt my workflow to the standalone tools available.

Not sure what Microsoft is thinking here, but it really feels like they are trying to alienate the indie style of development for the sake of a few bucks. I really wish they still had the $100 Standard Edition SKU; I would buy it in a heartbeat if it got me the tools they took away.

Sorry for the little rant not usually like me at all.

If anyone knows about any potential work arounds (NOT PIRACY I HATE PIRACY) feel free to clue me in.




Wow Long Time

Holy crap, has it been a long time since I posted here. I have been so tied up with school and work that I kind of just fell off the face of the earth,
totally swamped with no real time to do much of anything.

I just recently got back into doing some programming due to school, partially because of the nature of the class and me being as lazy as I could possibly be, just not wanting to go through all the repetitive steps.

Right now I am taking a statistics class, and calculating all of the probability stuff can get very, very long and repetitive when finding the various different answers. For instance, when finding the binomial probability of a range of numbers in a set, you might have to calculate 12 different binomial probabilities and then add them together so you can then calculate the complement of that probability to find the other side of the range. It is just way too repetitive for my liking.
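That kind of calculation can be scripted instead of done by hand. A quick sketch (the n = 12, p = 0.5 range here is a made-up example, not from my homework):

```python
from math import factorial

def binomial_prob(n, p, x):
    """P(X = x) for X ~ Binomial(n, p)."""
    ncx = factorial(n) // (factorial(n - x) * factorial(x))
    return ncx * p**x * (1 - p)**(n - x)

# P(X >= 8) for n = 12, p = 0.5: sum the individual probabilities
# instead of working each term out by hand...
n, p = 12, 0.5
upper = sum(binomial_prob(n, p, x) for x in range(8, n + 1))
# ...and the complement gives the other side of the range.
lower = 1 - upper
print(round(upper, 6))
```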

The advantage of all this is that it really rekindled my love of the Python language. I just wish the language were a bit more useful for game development, sadly. The performance hits are just way too high once you progress to 3D.

After I finished my homework I decided to do a comparison of the Python and C++ code required for calculating the binomial probability of a number in a set. This is the overall gist of the post, because it is really amazing to see the difference in the code of two examples of the same program, and it is simple enough to demonstrate both in a reasonable amount of time. The interesting thing is that, from an outside perspective, running both, they appear to run instantaneously with no performance difference at all. So here is the code; it is indeed a night-and-day difference in readability and understandability.

Python (2.7.3)

def factorial(n):
    if n < 1:
        n = 1
    return 1 if n == 1 else n * factorial(n - 1)

def computeBinomialProb(n, p, x):
    nCx = (factorial(n) / (factorial(n - x) * factorial(x)))
    px = p ** x
    q = float(1 - p)
    qnMinx = q ** (n - x)
    return nCx * px * qnMinx

if __name__ == '__main__':
    n = float(raw_input("Value of n?:"))
    p = float(raw_input("Value of p?:"))
    x = float(raw_input("Value of x?:"))
    print "result = ", computeBinomialProb(n, p, x)


C++

#include <iostream>
#include <cmath>

int factorial(int n)
{
    if (n < 1)
        n = 1;
    return (n == 1 ? 1 : n * factorial(n - 1));
}

float computeBinomialProb(float n, float p, float x)
{
    float nCx = (factorial(n) / (factorial(n - x) * factorial(x)));
    float px = std::pow(p, x);
    float q = (1 - p);
    float qnMinx = std::pow(q, (n - x));
    return nCx * px * qnMinx;
}

int main()
{
    float n = 0.0;
    float p = 0.0;
    float x = 0.0;
    float result = 0.0;
    std::cout << "Value of n?:";
    std::cin >> n;
    std::cout << "Value of p?:";
    std::cin >> p;
    std::cout << "Value of x?:";
    std::cin >> x;
    result = computeBinomialProb(n, p, x);
    std::cout << "result = " << result << std::endl;
    return 0;
}

Sorry for no syntax highlighting, I forget how to do it.
The biggest thing you notice is that in Python you don't need all the type information, which allows for really easy and quick variable declarations and slims the code down quite a bit. Another thing to notice is that you can prompt for and gather input in one go in Python, where in C++ you need to use two different streams to do so. I think the Python is much more readable, but the C++ is quite crisp as well.




Closures and Python 2.7

Wow, hello again GDNet, it has been quite a while since I last actually logged into the site. As for what I have been up to: well, I have been drowning in school work and honing my development skills. Lately I have mainly been using Python and experimenting with the Flask micro web framework. I will say it has really been a joy getting away from the world of C/C++ for a change. So why am I back here after I said my goodbye a while ago? The thing is, I actually miss this site; I lurk on it almost every day anyway, so why not. I am going to be starting up a new game-related project, so stay tuned.

Now for the reason for this blog entry. As I stated, I have really been honing my programming skills as of late and dealing with some odd languages, mostly in the functional paradigm. One thing about functional languages is that you don't have OOP, so you need to find alternate ways to create a somewhat similar effect, and it turns out closures are just that. Many people ask, why not just use OOP then? Well, the issue really arises because of the way most books typically teach OOP, which leads to very sloppy inheritance and can cause deep hierarchies. The issue with this is it makes your code a maintenance nightmare. The other problem with the way most books teach OOP is they create a notion of taxonomies, which leads to people creating classes that should never have been classes to begin with. Note I am not saying OOP is evil; I am saying the way OOP is often taught is evil. On another note, OOP can lead to issues with parallelism, where the state of the object becomes out of sync when multiple threads are involved, and closures solve this problem quite well.

[Edit Thank you TheUnbeliever]
Just recently I read a post in For Beginners about the very issue this post brings up. The OP was sent to StackOverflow, where there was a good explanation of the problem with Python's scope resolution and closures. The first solution I present in this article is the one used in the StackOverflow answer, and it is the one I used when learning how to implement closures in Python 2.7. Hopefully this post will be useful to help people understand closures and how to implement them in both Python 2.7 and Python 3.x.

So what is a closure? Most definitions use odd jargon like lexical structure and referencing environment, and they are not very clear unless you have a strong functional programming background, so here is a cleaner definition that I stumbled across.
A closure is a block of code that meets 3 particular criteria...
1. It can be passed around as a value
2. Can be executed on demand by anyone who has this value
3. It can refer to variables from the context in which it was created in (lexical scope, referencing environment)

So why are these useful? For one, they allow you to maintain state between calls, much like an object; they are very useful for creating callbacks; they can be used to hide utility functions inside the providing function for a cleaner API; and they can even be used to create many programming constructs such as loops. These constructs are very useful overall and can really simplify the code you write and need to maintain in large complex systems.
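A tiny sketch of the first criterion in action (make_adder is a hypothetical name; the closed-over variable is only read here, so this runs fine even on 2.7):

```python
def make_adder(n):
    """Return a function that remembers n from the scope it was created in."""
    def add(x):
        return x + n   # refers to n from the enclosing call (criterion 3)
    return add

add5 = make_adder(5)       # the closure is a value we can pass around (1)
add10 = make_adder(10)     # each closure carries its own n
print(add5(3), add10(3))   # 8 13 -- executed on demand by whoever holds it (2)
```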

Python has had support for closures since version 2.2; however, there are a few issues, and issue-free support only arrives in Python 3.x. The problem in Python 2.7 is a scoping rule that effectively forces the closed-over variable to be read only.

Here is an example of a closure implemented in Python that runs into exactly this scoping issue...

def counter(start_pt):
    def inc():
        start_pt += 1
        return start_pt
    return inc

if __name__ == '__main__':
    func = counter(1)
    for i in range(10):
        print(func())

So what does the code do? It's simple: it counts up 10 numbers from an arbitrary start point. So what is the issue? The issue is Python's scoping; this code actually generates an error, namely UnboundLocalError: local variable 'start_pt' referenced before assignment. This is the same issue the author of the post was having, and it is caused by the way Python determines a variable's scope. Basically, Python decides the scope from the innermost assignment it finds, so the moment we increment start_pt inside the inner function, Python treats start_pt as a local variable that has no initial value yet. In effect the closed-over variable is read only: we can read it, but any assignment to it triggers this error.
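To see that it really is only assignment that trips the rule, compare a closure that merely reads its captured variable with one that assigns to it. The `greeter` example here is hypothetical, just for the contrast; this behaves the same under Python 2.7 and 3.x (without nonlocal).

```python
def greeter(name):
    def greet():
        # only *reads* the closed-over variable, so no scope confusion
        return "Hello, " + name
    return greet

def counter(start_pt):
    def inc():
        start_pt += 1  # assignment makes start_pt local to inc
        return start_pt
    return inc

print(greeter("world")())  # works fine: Hello, world

try:
    counter(1)()
except UnboundLocalError:
    print("assigning to the closed-over variable raises UnboundLocalError")
```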

Python 3.x solves this scoping problem with a new statement called nonlocal. Here is how it looks...

def counter(start_pt):
    def inc():
        nonlocal start_pt
        start_pt += 1
        return start_pt
    return inc

if __name__ == '__main__':
    func = counter(1)
    for i in range(10):
        print(func())

The output of this code will be, as expected: 2 3 4 5 6 7 8 9 10 11
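One property worth noting, and it holds for all the variants in this post: each call to counter creates its own closed-over start_pt, so two counters never interfere, much like two separate object instances. A quick sketch (the names a and b are mine):

```python
def counter(start_pt):
    def inc():
        nonlocal start_pt  # Python 3.x only
        start_pt += 1
        return start_pt
    return inc

a = counter(1)
b = counter(100)
print(a(), a(), a())  # 2 3 4
print(b())            # 101 -- b has its own start_pt
print(a())            # 5   -- calling b did not disturb a's state
```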

But again, this only works in Python 3.x. What if we can't use Python 3.x because we need support for APIs that are not Python 3.x compatible yet? We still want to use closures while avoiding the scope issue. There are a few solutions to this problem; let's go through two of them, one which seems kind of hacky and another which is less hacky but is not a true closure, instead just mimicking the closure concept. Here is the first, a true closure that uses a mutable Python list to work around the scoping issue.

def counter(start_pt):
    c = [start_pt]
    def inc():
        c[0] += 1
        return c[0]
    return inc

if __name__ == '__main__':
    func = counter(1)
    for i in range(10):
        print(func())

Basically, what we do is create a one-element mutable list and place our start_pt in it. We can't use a plain variable c because, just as before, c would become read only inside the inner function and we would get the same error. With the list, we never rebind c itself; we only mutate the object it refers to, so the scoping rule is never triggered, and we get the expected output of 2 3 4 5 6 7 8 9 10 11. This is kind of sloppy looking, though, so let's look at the alternative that is not a true closure but mimics one. Note I am violating standard Python naming conventions here because we want the user to think they are using a lexical closure. Here is the code.

class counter(object):
    def __init__(self, start_pt):
        self.start_pt = start_pt

    def __call__(self):
        self.start_pt += 1
        return self.start_pt

if __name__ == '__main__':
    func = counter(1)
    for i in range(10):
        print(func())

So here we turn the closure into a mimicked closure by using a class to preserve the state instead of the mutable list.
We also make our instances invocable like functions by defining the __call__ magic method. So we have something that looks like a closure but is actually an object disguised as one.
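Because of __call__, callers genuinely cannot tell the instance apart from a plain function: it can be passed around, stored, and invoked like any callback. A small usage sketch (the name tick and the list comprehension are my own additions):

```python
class counter(object):
    def __init__(self, start_pt):
        self.start_pt = start_pt

    def __call__(self):
        self.start_pt += 1
        return self.start_pt

tick = counter(0)
print(callable(tick))              # True: __call__ makes it usable as a function
print([tick() for _ in range(3)])  # [1, 2, 3] -- the state lives on the instance
```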

So there you have it: two ways to work around the Python 2.7 scoping issue while still being able to use our non-3.x-compatible libraries. Both methods are valid; you can choose whichever you prefer. Sorry the example is so simplistic, but it really is an easy way to get the point across. Now you can go use this powerful feature of closures in your favorite language, or you can be a real man and jump on Python 3.x.


