
Ramblings of programmyness, tech, and other crap.

Yes I be a contemplator

OK, after much thought about what I want to do game-development-wise, my final decision is to use Java and JOGL. I made this decision for a few reasons.

1. Ease of cross-platform distribution via Web Start.
2. I am far more fluent in Java than in any other language, basically because it was one of my first languages and I studied it in college.
3. I already have books for reference on Java and OpenGL in case I need to look something up.

It is amazing what contemplation can do especially with a goal in mind.

Wow Long Time

Holy crap, has it been a long time since I posted here. I have been so tied up with school and work that I kind of just fell off the face of the earth, totally swamped with no real time to do much of anything.

I just recently got back into doing some programming, thanks to school. Partially it is because of the nature of the class, and partially because I am as lazy as I could possibly be and just don't want to go through all the repetitive steps by hand.

Right now I am taking a statistics class, and calculating all of the probability stuff can get very long and repetitive when finding the various answers. For instance, when finding the binomial probability of a range of numbers in a set, you might have to calculate 12 different binomial probabilities and then add them together, so you can then calculate the complement of that probability to find the other side of the range. It is just way too repetitive for my liking.
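This is exactly the sort of chore a small script removes. Here is a rough sketch of summing binomial probabilities over a range (the function names are my own for illustration, not from the class):

```python
from math import factorial

def binomial_prob(n, p, x):
    # P(X = x) for X ~ Binomial(n, p)
    ncx = factorial(n) // (factorial(n - x) * factorial(x))
    return ncx * (p ** x) * ((1 - p) ** (n - x))

def binomial_range_prob(n, p, a, b):
    # P(a <= X <= b): sum the individual probabilities instead of
    # punching each one into a calculator by hand
    return sum(binomial_prob(n, p, x) for x in range(a, b + 1))

def binomial_upper_tail(n, p, b):
    # the complement trick from the text: P(X > b) = 1 - P(0 <= X <= b)
    return 1 - binomial_range_prob(n, p, 0, b)
```

With something like this, finding a range probability or its complement is one call instead of a dozen calculator sessions.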

The upside is that this really rekindled my love of the Python language. I just wish the language were a bit more useful for game development; sadly, the performance hits are just way too high once you progress to 3D.

After I finished my homework I decided to do a comparison of the Python and C++ code required for calculating the binomial probability of a number in a set. This is the overall gist of the post, because it is really amazing to see the difference between two examples of the same program, and it is simple enough to demonstrate both in a reasonable amount of time. The interesting thing is that, from an outside perspective, both appear to run instantaneously, with no performance difference at all. So here is the code; it is indeed a night and day difference in readability and understandability.

Python (2.7.3)

def factorial(n):
    if n < 1:
        n = 1
    return 1 if n == 1 else n * factorial(n - 1)

def computeBinomialProb(n, p, x):
    nCx = (factorial(n) / (factorial(n - x) * factorial(x)))
    px = p ** x
    q = float(1 - p)
    qnMinx = q ** (n - x)
    return nCx * px * qnMinx

if __name__ == '__main__':
    n = int(raw_input("Value of n?: "))
    p = float(raw_input("Value of p?: "))
    x = int(raw_input("Value of x?: "))
    print "result = ", computeBinomialProb(n, p, x)

C++

#include <iostream>
#include <cmath>

int factorial(int n)
{
    if (n < 1)
        n = 1;
    return (n == 1 ? 1 : n * factorial(n - 1));
}

float computeBinomialProb(int n, float p, int x)
{
    float nCx = (float)(factorial(n) / (factorial(n - x) * factorial(x)));
    float px = pow(p, (float)x);
    float q = (1 - p);
    float qnMinx = pow(q, (float)(n - x));
    return nCx * px * qnMinx;
}

int main()
{
    int n = 0;
    float p = 0.0f;
    int x = 0;
    float result = 0.0f;

    std::cout << "Value of n?: ";
    std::cin >> n;
    std::cout << "Value of p?: ";
    std::cin >> p;
    std::cout << "Value of x?: ";
    std::cin >> x;

    result = computeBinomialProb(n, p, x);
    std::cout << "result = " << result << std::endl;
    return 0;
}

Sorry for no syntax highlighting; I forget how to do this.
The biggest thing you will notice is that in Python you don't need all the type information, which allows for really easy and quick variable declarations and slims the code down quite a bit. Another thing to notice is that you can prompt for and gather information in one go in Python, whereas in C++ you need to use two different streams to do so. I think the Python is much more readable, but the C++ is quite crisp as well.

Wow it has been a while

Hello everyone, it has been another long while since I posted; I have been so busy. Work and school seem to occupy my life, and what little spare time I have had I use to play COD Black Ops on my PS3.

School is going great; I am acing my classes so far. I must say I never expected going to school online to be so much more difficult than going to a campus and classroom. The main reason I am going online is that I really have no choice, given my job and what schools offer around here. It takes a lot of time and dedication to stay on track when going to school online. There are no reminders; you need to make sure everything is planned and organized when it comes to your time, which helps you avoid the procrastination bug people tend to have.

On another note, as of late I have been learning the workings of the Android framework. I have never done any sort of embedded development before, so this is new territory for me when it comes to game development. Not only have I had to learn the layout of the phone's architecture, but I also have to brush up a bit on my Java. While doing this I have been watching some of Google's I/O videos about best practices for embedded Android development, to avoid some common architectural performance issues when I start my project. From the looks of it, to build a solid Android app you really need to watch what you do, because so many things we take for granted in desktop development are really expensive in performance cost when you toss them on a 1 GHz ARM processor.

That is all for now. I will keep you posted on the project's progress as I move further into development.

Woot Woot D3 is announced and other stuff!!!!!!!

A little late on the jump; I had other stuff running through my mind, as per my last journal entry. I am so psyched about this game; I have been waiting for it for 8 years. The ideas that Blizzard can put into a game are just pure genius. From the looks of it this will be another genre statement in the action RPG realm. Once again Blizzard will set the standard. For one, the new multiplayer boss loot system is simple but very new: the fact that each person taking part in the kill gets his own loot generated is great. Also, the new health orbs, allowing the front line to heal the back line, are a great concept as well. Let's hope for the best and see it soon in action. Hurry, Blizzard, I need this game.

-----------------

On another note, my new book on XNA is much better than my last one. The previous book was Professional XNA Game Programming for XNA 2.0, published by Wrox. Horrible book, just plain horrid. The fact that the author changes a key part of the code and fails to tell you "hey, you have to modify this too" is just a horrible style of writing. My new book, Beginning XNA 2.0 Game Programming (From Novice to Professional), is just great. It really takes the time to explain everything before making your first 2D game and even a 3D game. It starts with game dev concepts that all game developers should know. Then it explains a bit about the way XNA works. Next it walks you through the steps of rendering a sprite, plus some simple collision detection, input, and sound. Once that is done you make your first 2D game, Rock Rain. I am at the part where I get to make the 2D game, and I already understand a lot more about game dev than I ever did. Once I get through that section I think I am going to take a break from the book and make some 2D games to make sure I understand the concepts better. Then I will move on to the 3D sections of the book.

Why couldn't there be a book written this well for C++ and OpenGL? If there were, I would have been making games a long time ago instead of stupid little tech demos rendering triangles and cubes.

If you are looking to learn XNA, get this book; it will really help out.

Web site mock up complete.

So I am in the process of starting up a small indie game company. It will start as a one-man operation, pretty much, and the plan is to release the games for free. Just a little something to occupy some time, really. Who knows, maybe it will take off and get bigger, but I am not worried about that. Through some research I realized that nowadays a lot more goes into a website than XHTML/CSS and PHP. So I broke out the rusty GIMP skills and put together an entire website mock-up. This is the result of many hours of work; I would estimate about 7 hours, research time included. I wanted to get this layout just right, so I was tweaking stuff constantly. The next step will be to slice the site up into small images and then use those with XHTML/CSS, via image replacement, to build the functional site. I will also be using PHP to deliver database-driven news.

I would like to know your opinions on the mock-up. I am a big fan of constructive criticism. Click the thumbnail to make the mock-up bigger.

Visual C++ 2010 Express Beta 2

I just installed Visual C++ 2010 Express beta 2. One thing I can say: it is very fast. In the coming days we will see how it works out. So far I was able to build SFML and Boost with it. I did stumble into some issues with SFML but resolved them; it turns out the /INCREMENTAL option was turned off in one of the build configs. Once I flipped the switch it seems to build all right. We will see if it actually works at a later date.

Version Control.

First I must open with the simple fact that a lot of people just don't use version control. The main reason is that, in all honesty, a lot of people just don't understand it. Not to mention the major version control war going on at the moment, which tends to confuse people altogether even more. I am going to do my best to give some useful information on the different version control systems (VCS) out there, to help make sense of the decision I need to make for my next project, and hopefully to help others make the decision for their projects.

There are currently two types of VCSs out there: the CVCS (centralized) and the DVCS (distributed). CVCS systems like Subversion and CVS have a central server that every client must connect to. The client pulls revision information from these systems into a working copy on your hard drive. These working copies tend to be very small, because the client pulls only the information it needs to be considered up to date, basically the latest revision. The major gripe people seem to have with these systems at the moment is the lack of enough information to do a proper merge of a branch back into the main code base.

DVCS systems are what people call distributed. I hate that term because I think it makes things harder to understand. Examples are Git, Mercurial, and Bazaar. The reason I hate the term distributed is that currently a lot of people use DVCS systems just like a CVCS system, but with benefits. Typically the main code base is still stored in a central location so that people can stay properly up to date with it. A DVCS pulls the entire repository to your system, including all of the revision history, not just the latest revision. This allows you to do everything right from your machine without an internet connection. What I like about this is that everyone has a complete copy, so if something happens to the central repository, numerous people have a backup of the code and revision history. The nice thing about DVCS, and why I think so many people are fanatic about it, is the easy branching and merging. Because you have the total revision history, you can easily branch, merge, and cherry-pick changes at will without a lot of risk of "pain".

So when looking at the two types of systems, think CVCS (most recent revision only in the working copy) versus DVCS (total history in the working copy).

*Warning opinionated*
My main gripes with the current arguments out there have to do with branching. The DVCS group of people seem to like the idea of branching every time they make a code change or add a new feature. They argue that this is insane and painful to do in SVN because of horrible revision/merge tracking. OK, I agree branching and merging can be painful in Subversion, but at the same time it is not as bad as people say, because they are taking it to an extreme and not using the system properly. I am not the kind of person that likes to make a branch for every feature I add to an application. I feel branching should only be used for major changes, refactors, and other changes that have a large chance to BREAK the current working development tree. Maybe this mentality is why I never really had too many issues with Subversion when it came to branches and merges. Maybe it is because I was using Subversion as the developers INTENDED, according to the SVN red book. I don't know; maybe I am just a hard sell.

So which one should you use? It is hard to say; each system has its pros and cons, as you have seen. One feature I love about the DVCS camp is the speed at which you can get up and running, plus the fact that sites like GitHub and Bitbucket are amazing places to host your code. I also like the speed of DVCS systems: because you are doing everything on your local machine and not over a network, a DVCS is blazing fast compared to a CVCS. Lastly, the thing I like most about DVCS is that it is very flexible, and you can use it with whatever workflow you desire. For example, if you only want to branch when you feel something has a good chance to break the entire main line of development, you can do so. If you want your machine to be considered the central repo instead of a host like GitHub, you can do that too. The main cons I have for DVCS are the lack of repository permissions; large file support, which is really not that good because of full repo history pulls; and no locking of binary files and other files to prevent major issues if two copies are modified at the same time.

What I like about CVCS is that it can handle very large files quite well. Another thing I like is that you can have repository permissions, making sure you know who can write to the repository. I also like that you can lock binary files and other files if you wish, to prevent other people from making changes to a file while you make yours. This alone can save tons of time if you are working as a team and your art assets are in version control with your code. The major issues I see with CVCS are the required network connection, which can cause speed issues (*coffee anyone*), and not having the full revision history present on your machine, making it difficult to cherry-pick or inject merges into different parts of the development line.

Keep in mind the pros and cons are as I see them; other people may weigh them differently. Yes, I listed fewer cons for Subversion; however, there are times when those cons can definitely outweigh the pros, and the cons of DVCS are hard to ignore and can outweigh its pros at times as well. With these things in mind, I am sure you can better make the decision you need to make. As for me, I have used both and I like both a lot, which makes the decision extra hard. The one big pull factor for me is hosting, and DVCS has the huge win there. So for my next project I will be using a DVCS, because I feel the pros outweigh the cons under most circumstances. Not to mention I really like the speed, and having a whole repository backup on my machine. Ultimately the decision is yours, but with this information hopefully you can weed through the Google fluff that is out there.

In the future I just might return with my experiences. If anyone wants more detail on the inner workings of version control systems, let me know in the comments and I will go find the video that covers the internal differences in how revisions are stored.

Update on where things are heading

Hey GDNet,

I know I don't post often enough; a lot of this has to do with me being bogged down with school plus a full-time job. The other reason is that I don't really tinker with game programming that much anymore either. I still want to learn OpenGL at some point, but this has been put on the back burner. Hopefully I can return to this goal at a later date, when there are some better resources available, i.e. if the new red book turns out to be written right this time.

On another note, one thing I have wanted to get into for a long time is embedded development with microcontrollers (MCUs). The reasoning is that it can make you a better developer overall. You have a very small amount of resources available that you need to use sparingly. Not to mention, more often than not you get to use assembly. I have always wanted to learn assembly, not to use for a project, but to make myself a better developer. This holds true because in order to utilize assembly you need to understand the bare-metal architecture of the chip you are using. x86 and x86_64 are very complex architectures with huge numbers of instructions, which makes them difficult to learn. So one approach is to instead start with an MCU and gradually work your way up.

My end goal for this would be to write an 8-bit game, say Asteroids, and run it on an MCU. I asked for advice on a forum about what hardware I should look at to reach this goal, and I was told I should look into Atmel mega chips. Initially I had been looking at the 8-bit PIC chips made by Microchip. On the Microchip forums I was told I am in for a big learning curve, and that PIC is probably a bad choice for an 8-bit game because the call stack is small and the RAM/flash space is tiny. They also said the C compilers are bloated unless you buy a professional one. Uh, that is the point. The original Game Boy ran a modified Z80 chip made by Sharp, and the actual specs of that chip are easily matched by the PIC 8-bit MCUs. So I decided to go with PIC anyway, because from what I have read they have the better dev tools, are more than capable of competing with an Atmel mega, are cheaper to get started with, and have tons of documentation.

So despite this advice I made my order. Here is what I bought; there is a link to the store page on this description page:
http://www.microchip.com/stellent/idcplg?IdcService=SS_GET_PAGE&nodeId=1406&dDocName=en559587 In the sidebar there is a link to buy/sample options if you want to look at buying one yourself.

I think this will be a great chip to start with, as it comes with 12 tutorials in assembly and C, the IDE, the programmer demo board, and two MCU chips: a PIC16F and a PIC18F. The PIC16 is the mid-range PIC 8-bit MCU and the PIC18 is the high-end PIC 8-bit MCU. The tutorials cover both chips.

Wish me luck; this is going to be FUN!!!!! I will try to post my progress here if you are interested. I may still end up making an outside blog instead; not sure yet, but if I do I will for sure kick a link back here.

That is all for now have fun and code well.

Update on learning XNA

In my last journal entry I mentioned I was going to give XNA a shot. So I did, and I started learning XNA. As I go through the book, XNA feels kind of clunky and overly simplified. Not sure if it is the author's doing or the framework's doing at the moment. But right now all I can say is that I was more comfortable in C++ and OpenGL a few years ago. C++ and OpenGL made more sense to me, I guess. This comes as a shock because I do like C# as a whole. Maybe it is because I have used C++ longer than C#, as it was one of the first languages I learned, though I still have not totally mastered it even after 7+ years of use. Not sure what to do at the moment.

Turns out it was the book

The Professional XNA Game Programming book by Wrox is just horrible. The author fails to mention to the reader when important changes to the source are necessary. I spent 20 minutes today hacking my way through his sample code, trying to find the piece of code that was missing to get the damn Pong menu to show up. Worst book I have bought in years. So it's down to the bookstore for me in a few hours to try and find a different book to learn from.

Tool chain's for the stubborn

Hello GDNet. It has been a while since I last posted, partially because I have been really busy and just did not have too much to post about. Today I am going to talk about toolchains for the stubborn people among us. Every so often I see a thread come by in For Beginners asking about command-line compilation and various out-of-the-ordinary editors. This entry is for those people. Like them I am stubborn and often prefer to do things the "more difficult way". So in order to do this properly I am going to have to make a few assumptions, and here they are...

1. You, like me, are stubborn. This is defined as an individual who refuses to "see the light" and doesn't understand why everyone uses IDEs. You fail to see the benefits to your productivity, feel the features they provide often get in the way, and also feel that they are bloated, cumbersome, and restrictive to your workflow.

2. You want to develop cross-platform and are completely clueless about the whole GCC/Make world, and you don't want to restrict yourself to a pure Microsoft compiler, so that you can carry your newfound knowledge over to Linux, BSD, and Mac OS X.

Note: As of this posting I don't have a particular editor of choice to put in the tool chain list I am still trying to find one I am 100% comfortable with.

Attention: At the end of this post I will also provide a self contained zip file with the appropriate files for you to try this out yourself. I will be using an example from the OpenGL Superbible 5 for this as this is GDNet. In order to compile and run this example your video card must support a minimum of OpenGL 3.0.

What we will be working with:

1. MinGW GCC + Msys
2. Makefiles
3. Editor of choice

Ok that is really all you need.

Installation:

First we need to install the MinGW GCC + Msys collection onto our Windows System.

1. First go to MinGW Latest Link and download the file labeled mingw-get-inst-20110530.exe
2. Run this exe file to start the installation. Ensure you install this to C:\MinGW and make sure you also click the check box for C++ and Msys base.
3. Next we need to set an environment variable so that we can use these tools from the Windows command line for simplicity of use.
- Open My Computer a.k.a. Computer (Windows 7) from the start menu and in the top bar select System Properties.
- Next select advanced system properties from the side bar. (Windows 7 not sure about XP it has been a while).
- Then click the Environment Variables button.
- Under user variables press new and call it PATH then under value type C:\MinGW\bin;C:\MinGW\msys\1.0\bin; and then press ok.
- Now press ok on the rest of the windows to close them out.
- We are now ready to go.

The Infamous Makefile:

OK, in order to make larger projects and projects with dependencies easier to compile, this thing called the Makefile was invented. The Makefile is a small script that the make tool reads so it knows what steps it must take to compile the code.

A Makefile consists of a few kinds of things, usually what are known as variables and targets. Variables are designed so you can save yourself a lot of typing in lengthy Makefiles, and targets are basically labels for what you want to compile. The syntax of the make command is make <target>; the target tells make where to enter the Makefile so it knows what to compile.

Here is the Makefile for the Triangle project; don't worry, I will break it down...

[source lang="make"]MAIN = Triangle
SRCPATH = ./src/
LIBDIRS = -L../libs/freeglut-2.6.0/lib -L../libs/GLTools/lib
INCDIRS = -I../libs/freeglut-2.6.0/include -I../libs/GLTools/include -I../libs/GLTools/include/GL

CXX = g++
CXXFLAGS = $(INCDIRS)
LIBS = -lfreeglut32_static -lgltools -lglew -lwinmm -lgdi32 -lopengl32 -lglu32

all : $(MAIN)

$(MAIN).o : $(SRCPATH)$(MAIN).cpp

$(MAIN) : $(MAIN).o
	$(CXX) $(CXXFLAGS) -o $(MAIN) $(LIBDIRS) $(SRCPATH)$(MAIN).cpp $(LIBS)

clean :
	rm -f *.o
[/source]

OK, so what does all this mean? Well, at first glance it may seem overwhelming due to the variables, but in all honesty it is quite simple.

First, lines 1-8 are variables. The syntax for a variable is as follows:

NAME = value

Very simple. In this Makefile I am using variables to hold the main application name, the source code path, the library path, the include path, the compiler to use, the flags for the compiler, and the libraries that need to be linked into the binary. In GCC, -L is used for a library path, -I is used for an include path, and -l is used for linking a library.

The rest of the file consists of targets, dependencies, and rules (commands). The general syntax for the structure of a target is as follows:

target : dependencies
	rules

It is important to note that the rules section must be indented with a hard tab, not a tab that was converted to spaces.

The first target is "all". This is executed when you type make, or more explicitly make all, at the command prompt. The dependency of this target is $(MAIN), which is the variable from above that points to our $(MAIN) target later in the file.

The second target is $(MAIN).o, with a dependency of $(SRCPATH)$(MAIN).cpp. Remember that our variables resolve to Triangle.o and ./src/Triangle.cpp, so this is the equivalent of dictating what the dependencies for our program's object file are.

The next target is $(MAIN), with a dependency of $(MAIN).o. This target is what actually compiles and links our application; it depends on Triangle.cpp's object file. This target is slightly different in that it actually contains a rule. The rule is pretty straightforward if you resolve the variables: it is the actual call to the g++ compiler to compile and then link our application.

The last target is clean, with no dependencies. It has one rule, which deletes all of the object files in our program path so we can get a clean compile the next time we run make. To execute this target, just like any target, type make clean.

That concludes our look at Makefiles. So how do I actually use this?

Basically, open your command prompt and navigate to the project's root directory (the one with the Makefile in it; Makefiles are simply named Makefile, or something with Makefile in the name). Then simply type make and hit enter. It will compile the application, and the directory will now contain the exe file you can run.

Conclusion:

So what is there to conclude? The ultimate conclusion is this: Makefiles are a very flexible way to compile an application. They allow you to logically structure your code paths within your version control system and create a very organized way to structure your compilation process. Make can actually call itself from within a Makefile, which is very nice; this can lead to a smooth way to test individual components and can even aid in building yourself an agile development workflow. Most of all, this compilation process can be used on any OS, as long as it has a way to get the tools to build the project. If you write your Makefile properly, you will be able to make a small modification, or even no modifications at all, and it will compile on any OS that has access to make. In our case we got access to make on Windows with MinGW + MSYS.

Now go and be proud of your stubborn behavior because now you have no excuse to not be stubborn.

Remember: You can try this out yourself just install the tools as stated in the article and extract the archive I provide below to your hard disk.

Time to take a small switch

Well, I have been learning DirectX, as my last entry stated. After 8 chapters of the Luna shader-approach book and a few days of self-experimentation, I can say I am comfortable with what I have learned.

Now I want to take a look back at OpenGL. Quite a few years back I started messing about with OpenGL, and I think at this point it is time to revisit it, for a few reasons. First and foremost, I am a much better developer than I was back when I first started experimenting with it. Secondly, I have a much better math background. When I first started dabbling with 2D/3D graphics programming I found a passion for math. I used to hate it, but now I have a practical reason for using it, which makes me find it much more interesting.

So I will start to screw around with OpenGL. I don't have an OpenGL book of the same quality as the Luna book, but I already understand how to set up Win32 and an OpenGL device. It will more or less be: let's convert one of the Luna book examples to OpenGL and see which API I like better.

I will make sure I post an update with my findings.

The Woes

A little off topic today. Turns out I really miss my Linux install. I have been using Linux for a very long time, and I learned a whole lot about computers in my time using it. I officially abandoned Linux a few months ago because all my hopes of seeing commercial-quality games come to it were very far-fetched. So I switched to Vista just to play games. Then I discovered XNA, and I kind of stuck with Vista for those very reasons. I am passionate about games, and come to think of it, since I started learning XNA I haven't really been playing games at all; my passion for gaming escalated when I was learning how to make them myself. I have always been a techie guy, since I used my first computer in second grade. Turns out I really developed a strong passion not only for games but for computers in general. So I have two techie loves in my life, games and Linux, and sadly the two don't meld together very well. I would really love the day I can see them both intertwine.

Yeah, I could possibly change the future by starting my own game company and putting games out there for Linux; however, it seems a lot harder than that. I know from experience the user base is there, but it would take more than one small dev company to change the face of gaming. So here I am yet again at a crossroads: use the OS I love to make games, or use a decent OS to make games and play games. Oh, the woes of choices.

--------------------

There are a few reasons why Linux is not really a target platform for game companies. One is simply the massive number of distros out there. Another is that there is no common package management system across distros. Not to mention some distros ship earlier versions of the libraries required for game dev than others. These are things that need to be solved to bring mainstream gaming to Linux.

I have thought of a possible solution, not only to help developers and save them money when they want to cross-release a game, but also to help Linux appeal to game devs.
However, would it be worth trying to implement such a solution and hope it takes hold?
Would all the time and effort be worth it just to see it flop?
Would it be worth stopping my progress in XNA or game dev all together to try and solve a long standing problem?

In all honesty I really do not know.

Maybe some of you have the answer.

The Results are out.

I am not going to bother posting the code I used, and I am going to keep this as short and sweet as possible, because we all know benchmarks are to be taken with a grain of salt. The main conclusion I came up with is: beware of for loops; they can really bog down code, so use them sparingly. Here are the results, which are not surprising in that Java is faster.

The first results are with a for loop that reruns the code 1000 times, resetting positional values each iteration. This shows that Java with a for loop is about 2.3x as fast as Python at the math.

Python average after 5 runs: 17.7692 seconds
Java average after 5 runs: 7.6814 seconds

The second results are without the for loop, just running the math through one iteration, showing that Java is about 5.1x as fast as Python without a for loop.

Python average without for loop after 5 runs: 0.041 seconds
Java average without for loop after 5 runs: 0.008 seconds
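For anyone wanting to try this, a timing harness for these kinds of averages might look like the following (a sketch only; run_workload is a placeholder for the benchmarked math, which this post does not show):

```python
import time

def run_workload():
    # placeholder for the benchmarked math; the real loop body
    # (the positional-value updates) is not shown in the post
    total = 0.0
    for i in range(1, 1001):
        total += (i * 0.5) ** 2
    return total

def average_runtime(runs=5):
    # average wall-clock time over several runs, as in the results above
    samples = []
    for _ in range(runs):
        start = time.time()
        run_workload()
        samples.append(time.time() - start)
    return sum(samples) / len(samples)
```

Swapping run_workload for the real code, and running the same harness in each language, gives numbers comparable to the averages listed above.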

Now these numbers may seem insignificant; however, from my experience, once you add in collision detection, physics, and rendering, these costs compound and can cause some bottlenecks in your game.

Based on experience, and the fact that I have many languages under my belt along with knowledge of core programming and game programming data structures and practices, I am going to look into going back to either C++ or Java for my game coding, because I know that somewhere in my newest Python game I would have to optimize a lot of code by moving it to C++ anyway.

The project from insanity

Wow, has it really been this long since I posted a journal entry? Man, time really flies; it is just insane. Over the last few months I have been going through the motions of designing a project. The project is over-ambitious for sure, and 99% of the world's population would probably call me insane. Even as I was laying out the design I realized how insane I really was, but it does not matter: I want to work on something long term, a huge, almost impossible endeavor, just because I can. I know I have the capability to complete said project, and at this point it is more about figuring out how to approach it effectively. So let's get into some of the decisions I have to make, after I give a brief layout of what it is I want to do. First and foremost, my game is an RPG, but not the typical RPG. I don't want to create just another RPG or ARPG to add to the meat grinder. I want to create an RPG that can evolve and hold longevity without costing the player one penny. This project is not about making money or creating a business; it is about creating a community. The key goals of this project all revolve around community: having friends be able to gather together and adventure.

A modular, scenario-based system (the ability to mod in your own custom adventures in an easy way)
A Turn based Action system
The ability to customize the rule set
The ability to customize various actions in the game (spells, attacks, etc...)
The ability to use premade or custom assets for the scenarios
The ability to play solo or with friends
Open Source/Cross Platform (This project is very ambitious and 100% free + I love open source)
As far as technologies to use, I have no clue at the moment. I ruled out Unity/UE4 simply because they do not fit the open source motto; even if they would be great to use, they just do not fit the project. I also need something very flexible that will allow me to create the tools needed to build a good environment for creating custom scenario modules. Since I have a wide variety of applicable programming skills, I began evaluating some potential technologies. Currently I am evaluating JME3, which happens to be very nice to work with. Despite some quirks and a lack of direction in its tooling, the core engine itself is really well done and easy to pick up. +1 for great documentation. The only thing I really do not like is the NetBeans-based SDK, which I find off-putting for some reason or another; however, it may be possible to work outside of the SDK and develop some custom tooling to replace some of its features. The goal is to abstract creators away from needing to touch the programming language behind the game, and from having to install the whole engine + SDK just to create scenarios. I have also looked at SDL/SFML way back in the past, and the new versions are very slick for sure; however, I am not sure I want to go the route of a 2D game. It would work for sure, and it would quickly solve the issue of having to work around the JME3 SDK system. This approach could, however, put some people off contributing to the project due to the use of C/C++. Sure, there are other bindings, but they tend to be quirky and awkward to use because they rarely follow the conventions their host languages are known for. Any input on other tech I did not mention would be much appreciated; just leave it in the comments, and if you want you can even just comment to call me insane. Can't think of anything else to type, so see you again soon.

The Mosin Nagant is here

As I promised, the Mosin Nagant has arrived. The one I received is a 1942 Izhevsk 91/30. I think it would be best to give some background before the pictures.

The Mosin Nagant was originally designed by the Russians in 1891. The approximate pronunciation is (Moseen Nahgahn), due to the Russians emphasizing vowels over consonants. Over the years they made some modifications to the rifle; the most obvious was the switch from a hex to a round receiver to improve accuracy. My particular year is a very interesting one for the Russians. In 1942 they were in some very heated and significant battles to protect their homeland from the seemingly unstoppable German war machine, one such example being Stalingrad, which everyone here should know about. This meant the Russians were in a tight bind and really needed to get more weaponry out to their soldiers, so the refurb process in the arsenals was often quick and half-assed, so to speak, in order to get the rifle out onto the field. In 1942 the Mosin Nagant was still a mainstay weapon for the Russians due to their lack of an efficient assault rifle. This meant they suffered in medium-range combat, as their only other weapons were really the PPSh submachine gun and some shovels and grenades.

The Mosin Nagant was a top-notch rifle and very rugged. Accuracy was a key point in designing the Model 91/30 and other models, as they sport a whopping 28 3/4" barrel, or larger in some early models. They were designed and sighted in with the bayonet attached, as it was Soviet doctrine to never remove the bayonet. The most accurate 91/30s were hand-picked and retrofitted with a bent bolt, and often a PU scope or some other model scope, for the snipers. The 91/30 was used as the Russian sniper rifle all the way up to the Cold War, when they designed the Dragunov sniper rifle based on the AK-47. Even during the postwar period, up to and including the Cold War, Mosin Nagants were still in use and manufactured, but in carbine form, known as the M44. Numerous other countries also used the Mosin, as many of them were part of the Soviet Bloc at some point or another, including Poland, Hungary, Finland, and Bulgaria. Many countries outside the Bloc used them as well, including China and the North Vietnamese. Even today there have been reports of terrorist forces in Iraq and Afghanistan using Mosin Nagant rifles.

As stated above, the rifle was designed for accuracy. The 7.62x54R was designed as a high-velocity cartridge. To give some perspective, consider a ballistic test of some Russian surplus 148gr LPS ammunition, a light ball ammo with a steel core instead of lead. The muzzle velocity (as the bullet leaves the barrel, i.e. 0 yards) sits around 2800+ feet per second. The impact force under 50 yards sits around 2800 foot-pounds. With the right load configuration this rifle can push over 3000 feet per second. For those who do not know, velocity and twist rate really decide the accuracy of a rifle from a ballistic perspective. These rifles can easily hit out to 1000 meters if needed.

Ok, now more about my rifle. It was manufactured in 1942 by the Izhevsk arsenal in Soviet Russia. This is a wartime rifle in a wartime stock, meaning the stock was not replaced post-war. The rifle has been refinished by a Soviet arsenal, even though the refinishing stamps appear to be missing; however, this is normal, they forgot this stuff all the time. The rifle is also what is known as all matching numbers: the serial numbers on all the parts match, which is good. I am 99% sure the rifle was force matched, which is well known for military surplus, as the fonts look slightly different on the stamps. There are no line-outs on the old serial numbers; they were probably ground off entirely and then re-stamped. There is also a lot of black paint on the rifle, which was common to hide the rushed bluing jobs and light pitting. One thing you will also notice is an amazing stock repair job done by the Russians on the front of the stock. When it was done I do not know, but it really adds to the unique character and history of the rifle.

The best part of this rifle is that it is one heck of a good shooter. Had her down the range and she still functions great. The trigger does take some getting used to; I estimate the trigger pull is around 8 - 9 lbs, possibly 10 lbs. I would estimate the rifle weighs in at about 12 - 13 lbs or so.

As promised, here are some pictures. Since there are some 18 pictures or so, I will just post the link to the album and you can check out a piece of history. http://s752.photobucket.com/user/blewisjr86/media/DSC_0001_zpsfbd2b09e.jpg.html?sort=9&o=0

The hunt for an alternative

I am on the hunt at http://partsaneprog.wordpress.com/

The beginnings of PIC (Hello World)

Hello GDNet

First, keep in mind this is a rather long post. I also have images in an entry album for you.

So my PIC microcontroller starter kit arrived a few days ago and I started to tinker around with it. I really like this piece of hardware.
The circuit built onto the development board is very clean. It contains a 6-pin power connector, a 14-pin expansion header, a potentiometer (dial), a push button, and 4 LEDs. There are also 7 resistors and 2 capacitors on the board. By the looks of it there is 1 resistor for each LED so you don't overload them, 2 for the push button, 1 for the expansion header, 1 capacitor for the potentiometer, and 1 capacitor for the MCU socket. This is just from looking at the board; I am not quite sure it is accurate and would have to review the schematic, which I am not quite good at yet.

The programmer (PICkit 3) has a button designed to quickly flash the microcontroller with a specified hex file. It also has 3 LEDs to indicate what is happening.

First, before I get into Hello World, I would like to cover the pain-in-the-ass issues I found with the MPLAB X IDE.
I spent hours trying to figure out why the hell the IDE could not find the chip on my development board to program it. It turns out that by default the IDE assumes you are using a variable-range power supply to power the board, so I needed to change the project options to power the development board through the PICkit 3 programmer.
Then there was the dreaded device ID not found error: the IDE could not find the device ID of my MCU. WTF!!!! Two hours later I stumbled upon the answer: THE MPLAB X IDE MUST BE RUN IN ADMINISTRATOR MODE!!!!! WTF!!!!!!! The user's manual stated nothing of the sort. So to get it working I needed to start the IDE in admin mode and, after it started, plug the programmer into the USB port. If it is not done in that order, you will get errors when trying to connect to the programmer and the chip.

Ok, now onto Hello World. WARNING: ASSEMBLY CODE INCLUDED!!!!!

Here is a quick overview of the specific chip I used for this intro project; I find typing this stuff out helps me remember anyway.
There are 3 types of memory on the PIC16 enhanced mid-range: program memory (Flash), data memory, and EEPROM memory.
Program memory stores the program, data memory handles all the components, and EEPROM is persistent memory.
Data memory is separated into 32 banks on the PIC16 enhanced mid-range.
Banks: you deal with these the most. They contain your registers and other cool stuff.
Every bank contains the core registers; the special function registers are spread out amongst the banks; every bank has general purpose RAM for variables; and every bank has a section of shared RAM which is accessible from all banks.

The HelloWorld project uses 4 instructions and 4 directives. Instructions instruct the MCU; directives instruct the assembler.
Directives:
banksel: Tells the assembler to select a specific memory bank. This is better to use than the raw instruction because it allows you to select by register name instead of by memory bank number.
errorlevel: Used to suppress errors and warnings the assembler spits out.
org: Used to set where in program memory the following instructions will reside.
Labels: Used to modularize code; not a directive per se, but a useful thing to use.
end: Tells the assembler to stop assembling.

Instructions:
bsf: Bit set; sets a register bit to 1 (turns it on).
bcf: Bit clear; sets a register bit to 0 (turns it off).
clrf: Clears a register's bits to 0, so if you have 0001110 it becomes 0000000.
goto: Jumps to a labeled spot in program memory; not as efficient as alternative methods.

Registers:
LATC: A data latch, in this case the latch for PORTC; it allows read-modify-write. We use this to write to the appropriate I/O pin for the LED. You always write through latches; it is better to read from the PORT.
PORTC: Reads the pin values for PORTC. Always write to latches, never to ports.
TRISC: Determines whether a pin is an input (1) or an output (0).

Explanation of Project:

So generally speaking, assembler is very verbose, especially on the PIC16 enhanced, because you need to ensure you are in the proper bank before trying to manipulate a register. So in order to light the LED we need to make sure the I/O pin for the LED we want to light is set as an output. We should then initialize the data latch register so that all its bits are 0. Then we need to drive high (1) the appropriate I/O pin our LED sits on; in this case it is RC0, which is wired to LED DS1.

The code to do this follows. Forgive the formatting: the assembler is strict in that labels and include directives can only be in column 1, and everything else must be indented. Also, there are some configuration settings for the MCU at the beginning of the file. I am not sure what each one does yet, as I did not get a chance to read the specific details in the data sheet. They seem to need to be on one unwrapped line, which makes them extend out very far; I will need to look into how to wrap them for readability.
Lastly the code is heavily commented to go with the above explanation.
[source]
; --Lesson 1 Hello World
; LED's on the demo board are connected to I/O pins RC0 - RC3.
; We must configure the I/O pin to be an output.
; When the pin is driven high (RC0 = 1) the LED will turn on.
; These two logic levels are derived from the PIC MCU power pins.
; The PIC MCU's power pin is VDD which is connected to 5V and the
; source VSS is ground 0V. A 1 is equivalent to 5V and 0 is equivalent to 0V.
; -----------------LATC------------------
; Bit#: -7---6---5---4---3---2---1---0---
; LED:  ---------------|DS4|DS3|DS2|DS1|-
; ---------------------------------------

#include                    ; for PIC specific registers. This links registers to their respective addresses and banks.

    ; configuration flags for the PIC MCU
    __CONFIG _CONFIG1, (_FOSC_INTOSC & _WDTE_OFF & _PWRTE_OFF & _MCLRE_OFF & _CP_OFF & _CPD_OFF & _BOREN_ON & _CLKOUTEN_OFF & _IESO_OFF & _FCMEN_OFF)
    __CONFIG _CONFIG2, (_WRT_OFF & _PLLEN_OFF & _STVREN_OFF & _LVP_OFF)

    errorlevel -302         ; suppress the 'not in bank0' warning

    ORG 0                   ; sets the program origin for all subsequent code

Start:
    banksel TRISC           ; select bank1 which contains TRISC
    bcf     TRISC,0         ; make IO pin RC0 an output
    banksel LATC            ; select bank2 which contains LATC
    clrf    LATC            ; init the data LATCH by turning off all bits
    bsf     LATC,0          ; turn on LED RC0 (DS1)
    goto    $               ; sit here forever!
    end
[/source]

Taking High Level Programming for Granted

Recently there have been some posts around about people considering using C over C++, for whatever reasons they have. As per usual, the forum crowd advises them to stay away from C and just learn C++. This is sound advice in general because, despite the insane complexity of C++, it is a much safer language to use than C. C is a very elegant language because of its simplicity. It is a tiny language that is extremely cross platform (more so than C++) and has a very straightforward, small standard library. This makes C very easy to learn, but at the same time very difficult to master, because you have to do a lot of things by hand: there is no standard library equivalent or language feature that covers all the bases. C++ is safer in a lot of situations because of type safety: C++ keeps track of the type of everything, where C effectively discards type information at compile time.

With that little blurb aside, I personally feel a lot of programmers should learn C. Not as a first language, but at some point. It allows you to understand how the high-level features of modern languages work, and people take these things for granted nowadays. Today there are not that many programmers who actually understand what templates are doing for them and what advantages/disadvantages they have. The same goes for objects: a lot of programmers fail to understand how objects work internally. This knowledge can make you a much better programmer overall.

Over the last few years I have spent a lot of time in C compared to C++ or other high-level languages. This is not only to understand the internals of the high-level features I have used in the past, but because I am preparing for an upcoming project I am designing. This project almost has to be done in C for portability, performance, and interoperability reasons; it will be targeted at the Linux desktop and possibly embedded devices as well. So today I am going to show something that C++ gives you that C does not, and how to get the same functionality in C anyway. Then I will explain why the C version is more efficient than the C++ version, but at the same time not as safe, because of the potential for programmer error. We will keep the example simple: instead of making a generic stack we are going to make a generic swap function, and for simplicity's sake I am going to keep the two examples as close as possible.

C++ gives us a feature known as templates. Templates are a powerful metaprogramming feature that generates code for us based on the type of data a template receives. They can do more than just this, but it is a very common use. The main downfall of this particular way of writing swap is that if you pass 50 different types to swap over the course of the application, the compiler actually generates 50 different functions. So with that said, here is a generic swap function in C++ using templates.

[source lang="cpp"]
template <typename T>
void swap(T &v1, T &v2)
{
    T temp;
    temp = v1;
    v1 = v2;
    v2 = temp;
}
[/source]

There are 2 specific C++ features in use here. First we are using templates to generalize the type we are swapping, and second we are using references so that we actually swap the variables passed in rather than copies. When this is compiled, C++ generates a type-specific function for each different type we call swap with.

Now we need to make the equivalent of this function in C. The first things to note are that C does not have templates, does not have references, and does not retain type information after compile time. So with some C trickery, and some clever assumptions the standard lets us make, we can achieve the same result. There are other ways to do this, but I am going to do it the 100% portable way; this is compliant with both the ANSI and POSIX standards. Here is the code; the explanation of why I can do what I am doing follows.

[source lang="cpp"]
#include <assert.h>
#include <stdlib.h>
#include <string.h>

void swap(void *vp1, void *vp2, int size)
{
    char *buffer = malloc(size);
    assert(buffer != NULL);
    memcpy(buffer, vp1, size);
    memcpy(vp1, vp2, size);
    memcpy(vp2, buffer, size);
    free(buffer);
}
[/source]

Ok, so there is a lot there. First, a void pointer is a generic pointer. We can do this because object pointers are all the same size on a given platform, so the compiler does not care what they point at; we are just pointing to a storage location. Since we don't know how big the pointed-to data is, we also need to pass in the size. Next we need a replacement for the temp variable we used in C++. We don't know what type is stored behind the void pointer; we don't care, either, we just want to hold its bit pattern. Because a char is 1 byte in C, we can allocate an array of size chars to hold that bit pattern. We do an assertion to make sure the allocation did not return NULL before we attempt to copy data into it; the assertion will bail if we have no space allocated. Then we use memcpy to copy the bit patterns around. Lastly, we make sure we free our temporary storage.
The main advantage of this is that the program does not generate a new function for each type we call it with: the same machine code runs no matter what we pass in. This efficiency comes with a price. If swap is not called properly, we don't know what we will get back. Because we are using void pointers, the compiler will not complain; it actually suppresses what compile-time checking we do have. Also keep in mind that if the 2 things being swapped are actually different types, say a double and an int, or an int and a char*, we enter the realm of undefined behavior and have no idea what will happen.

When calling swap with 2 ints you would call it as

swap(&val1, &val2, sizeof(int));

If you are swapping 2 character strings you need to call it as

swap(&val1, &val2, sizeof(char *));

With the character strings you still pass the addresses of the variables, but the size you pass is the size of a pointer. This is important because a character string, a char *, is itself a pointer to an array of characters, so you are swapping the pointers rather than the character data they point to.

With all that said, you can see how C++ makes things like this very easy, at the price of generating duplicate code. With C you see a very efficient way to do the same thing, with its own set of drawbacks on the caller's side. It is very similar to what C++ does internally behind the scenes; the difference is that C++ passes hidden type information through so it can generate exact casts, and you retain your type safety. This is a great and simple demonstration of what we take for granted when we use the various high-level features of different programming languages. So next time you use these features, stop and say thank you to the designers, because without their efforts those features would not exist and you would be doing pointer arithmetic on a daily basis.

One last note: if you read this and are still thinking of using C over C++, the decision is ultimately up to you. Personally I love C; it is a very elegant and clean language and I really enjoy using it. However, ask yourself if it is the right tool for the job, because in C you have to constantly reinvent the wheel to achieve the functionality newer languages give you almost for free.

SVN=Slow

Not really journal worthy or at least typical for my Journal.

So I am sitting here right now on my Linux dual boot, getting ready to set up some stuff for the OpenGL SuperBible 5th Edition, which I am working my way through. I wanted to get everything set up on the Linux development side of things because unless I am playing Eve or doing school work I am using Linux anyway.

My first step is to pull down the code, and I am ready to fall asleep. Basically all of the latest code for the book, with bug fixes, is located in a Google Code Subversion repository. As of late I have really been hammering home on Git because it is a really nice VCS once you get used to it. So I decided to pull down the code with git-svn. *SLEEP* It has now been pulling the code down for the last 10 minutes or so. Still not done. Before you say anything, it is not git-svn's fault. Believe it or not, git-svn is going a lot faster than my first attempt at pulling the code down with the svn client.

This is just ridiculous, and all of this is just so I can compile GLTools :S

Still Alive

Hello everyone, just popping in to say I am still alive. I've been pretty busy the last few weeks prepping for final exams and such. Now that exams are out of the way I can get back to relaxing and working on school work, plus learn some more Objective-C before my next final exam in 8 weeks. One thing about going to school online with a compressed program structure is that you really hammer through content fast: reading 2 - 3 chapters a week, plus discussions and assignments and a quiz here and there. I must say, if anything can prepare you for tight deadlines it is a compressed school course. For instance, it is now week 1 of the spring session and I have already read 2 chapters, have 2 discussions to take care of, and have a 2-page paper due at the end of next week, plus next week's work on top of that paper. It can get kind of rough and really taxes the organizational skills.

On another note, I ended up getting another Cocoa book to get a second perspective on Objective-C and Cocoa, and I must say I like this book a lot better than the other one. The other one was good, but this one ups it in every way, shape, and form. This is what people call the Hillegass book, also known as Cocoa Programming for Mac OS X, by the world-famous Cocoa teacher. His explanations and style put Cocoa and Objective-C: Up and Running to shame. My favorite feature is that at the end of each chapter the author presents challenges for the reader to go out and write code on their own, which is important when learning something new. It is not a book that is all about copying down the example and seeing what happens; he gives challenge assignments that make use of the concepts you have learned thus far. Very good stuff. As I move through this book everything is starting to make sense. Cocoa + Objective-C is a whole different perspective on development that goes against the trend of most modern languages, and as things click I can see why so many developers are in love with the system and why Mac OS X applications are so robust and solid compared to applications on other operating systems. Not to say Windows and Linux don't have good software, they do, but Mac seems to have more of it, and part of the reason for that is Cocoa.

I will definitely keep you updated as things mould together, and I really can't wait to start my first Cocoa application. I am hoping to actively talk about the project I will be starting in this journal as well. Even though the application is not a game, I still find it important to talk about, because a lot of the issues and concepts I will be dealing with, from both a design and a programming standpoint, are great for everyone to learn from. This is a developer journal after all, and I don't see anything stating it has to be game related, so all gloves are off; heck, it beats having to go out and find another place for a blog and cross-link like I tried previously. The new journal system is so much nicer than the old one.

Some Updates on What's Going On

Hey Everyone,

Not many people read my journal as much as they did when I was heavily into game dev, but that really does not matter. For one, I very seldom post anymore. The reason is that I have drifted away from game development and focus more on embedded stuff.

Right now I have been very busy actually finishing up my degree. WOOOOO!!!!! The stress is building as the workload ever increases, but I know all this hard work will pay off. For those who did not know, I am getting a BS in Information Systems Security, and so far, while working 48 hrs/week at my dead-end job, doing hobby electronics, enjoying my firearm shooting hobby, and going to school, I have been able to stay on the dean's list. *pat pat* I am really getting excited, as this is a huge step for me.

Like I stated, I really do not blog much. I have tried to run my own external blog, but I never seem to have the time. Hopefully one day I can get one going regularly again, as I really do like writing.

In the name of my firearm hobby I am adding a new weapon to my collection. Currently I have a Springfield 1911 Range Officer edition in .45 cal. Within the next few days (can't wait for it to arrive, longest 7 days of my life) I will be adding a WW2 Russian Mosin Nagant bolt-action rifle. This rifle shoots the 7.62x54R cartridge, a very powerful round which can easily punch right through cinder block. The bullet itself is a .30 cal, right with the 308 and 30-06. The R means it is a rimmed cartridge, and the 54 basically dictates the cartridge size, if I remember correctly. One of these rounds packs more of a wallop than an AK-47 round, which is a 7.62x39. The 7.62x54R was designed as a long-range round optimized for velocity, which increases accuracy and distance. The round is very accurate from 300 - 500 meters and can easily hit a human-sized target out toward 1000 meters. The Mosin was not just an infantry rifle; for the longest time it was also the Russian sniper rifle of choice, until the Dragunov was developed. So excited, can't wait. I will be sure to post some pics when I get it.

As for hobby electronics, I will hopefully be posting some more info on a project here as well. I am currently building what I call an audio trigger system. Essentially the microcontroller waits for an audio pulse and uses that pulse to trigger an action. The first project using this small subsystem will be an audio-triggered stopwatch; after that, the trigger subsystem will also move into an audio visualizer project.
This project really stretches my electronics knowledge, as there were some interesting hiccups I had to design around. The code is simple; the circuits are the hard part for a guy like me. Because of this project I am learning and actually understanding what is going on. The design of this project needs some preparation for a post, so it may be a little while and quite long. I hope to get that together soonish.

Feels good to write again. Cya guys around.

Some Orbis Work

Well, I did some work on Orbis today. Not much, as I am a bit short on time. I got the initial backdrop of the title screen done. To do this I used the dreaded GameComponent feature of XNA. GameComponent-derived objects can be quite handy and abstract code nicely for reuse if they are coded properly. I can, however, see these GameComponent objects becoming quite a sloppy mess that is hard to follow in larger projects. The book I am learning XNA from overuses game components, I think; they literally make everything a GameComponent. Not bad in principle, but as I said, as projects get large I think they can become a burden. But anyway, onto the code..... (Note: the main game code is missing; just the components are shown.)

Scene.cs (this is the main scene component all scenes are derived from this)

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.GamerServices;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using Microsoft.Xna.Framework.Content;

namespace Orbis
{
/// <summary>
/// This is a game component that implements IUpdateable.
/// </summary>
public class Scene : Microsoft.Xna.Framework.DrawableGameComponent
{
// components belonging to the scene
private readonly List<GameComponent> components;

public Scene(Game game)
: base(game)
{
components = new List<GameComponent>(); // initialize the component list

// set the state of the scene
Visible = false;
Enabled = false;
}

/// <summary>
/// Allows the game component to perform any initialization it needs to before starting
/// to run. This is where it can query for any required services and load content.
/// </summary>
public override void Initialize()
{
// TODO: Add your initialization code here

base.Initialize();
}

/// <summary>
/// Allows the game component to update itself.
/// </summary>
/// <param name="gameTime">Provides a snapshot of timing values.</param>
public override void Update(GameTime gameTime)
{
// makes sure the child components of this scene are updated when they are enabled(active)
for (int i = 0; i < components.Count; i++)
{
if (components[i].Enabled)
{
components[i].Update(gameTime);
}
}

base.Update(gameTime);
}

public override void Draw(GameTime gameTime)
{
// draw the drawable child components of this scene if they are visible
for (int i = 0; i < components.Count; i++)
{
GameComponent component = components[i];
if ((component is DrawableGameComponent) && ((DrawableGameComponent)component).Visible)
{
((DrawableGameComponent)component).Draw(gameTime);
}
}
base.Draw(gameTime);
}

/// <summary>
/// Returns the components list belonging to this scene.
/// </summary>
public List<GameComponent> Components
{
get { return components; }
}

/// <summary>
/// Set the state of the scene to display it
/// </summary>
public virtual void Show()
{
// set state of the scene
Visible = true;
Enabled = true;
}

/// <summary>
/// Set the state of the scene to not display it
/// </summary>
public virtual void Hide()
{
// set state of the scene
Visible = false;
Enabled = false;
}
}
}

StartupScene.cs (this is the main title scene of the game not fully complete)

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.GamerServices;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Storage;
using Microsoft.Xna.Framework.Content;

namespace Orbis
{
/// <summary>
/// This is a game component that implements IUpdateable.
/// </summary>
public class StartupScene : Scene
{
// sprite batch for drawing
private SpriteBatch spriteBatch = null;
// background texture
private Texture2D texture;
// background texture rect
private Rectangle backRect;
public StartupScene(Game game, Texture2D texture)
: base(game)
{
// initialize spriteBatch from a SpriteBatch in the game services
spriteBatch = (SpriteBatch)Game.Services.GetService(typeof(SpriteBatch));
// initialize background texture
this.texture = texture;

// initialize backRect
backRect = new Rectangle(0, 0, Game.Window.ClientBounds.Width, Game.Window.ClientBounds.Height);
}

/// <summary>
/// Allows the game component to perform any initialization it needs to before starting
/// to run. This is where it can query for any required services and load content.
/// </summary>
public override void Initialize()
{
// TODO: Add your initialization code here

base.Initialize();
}

/// <summary>
/// Allows the game component to update itself.
/// </summary>
/// <param name="gameTime">Provides a snapshot of timing values.</param>
public override void Update(GameTime gameTime)
{
// TODO: Add your update code here

base.Update(gameTime);
}

public override void Draw(GameTime gameTime)
{
// draw the background
spriteBatch.Begin();
spriteBatch.Draw(texture, backRect, Color.White);
spriteBatch.End();
base.Draw(gameTime);
}
}
}
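Since the main game code is not shown above, here is a hedged sketch of how the wiring might look; the class name OrbisGame and the asset path "Images/title" are my own placeholders, not code from the actual project. The one real ordering constraint it illustrates is that the SpriteBatch must be registered with Game.Services before a StartupScene is constructed, because the scene's constructor pulls it back out of the services container.

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

namespace Orbis
{
// Hypothetical main game class (my own sketch, not from the post).
public class OrbisGame : Game
{
private readonly GraphicsDeviceManager graphics;
private SpriteBatch spriteBatch;
private StartupScene startupScene;

public OrbisGame()
{
graphics = new GraphicsDeviceManager(this);
Content.RootDirectory = "Content";
}

protected override void LoadContent()
{
spriteBatch = new SpriteBatch(GraphicsDevice);
// Register the SpriteBatch so scene constructors can fetch it via Game.Services
Services.AddService(typeof(SpriteBatch), spriteBatch);

// "Images/title" is a placeholder asset name
startupScene = new StartupScene(this, Content.Load<Texture2D>("Images/title"));
Components.Add(startupScene);
startupScene.Show(); // scenes start hidden; only the active one is shown
}
}
}
```

With this shape, switching screens later is just a matter of calling Hide() on the current scene and Show() on the next one, while XNA's component loop drives Update/Draw automatically.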

Now for the great and almighty picture (note: thumbnail, click for full).

Some More Progress.

As I said in my last entry, it has been a long time since I touched C#. So as I was doing the port of the D3DApp framework from the DirectX 10 Luna book, I realized quite quickly that I was trying to write C++ in C#. Not going to work. So I took a dive into the SlimDX sample MiniTri to get a general refresher on how a C# programmer does things. This sample plus MSDN helped a lot in getting me back up to speed.

The code is not fully finished yet. I still need to get the font display and the GameTimer coded, but nonetheless I got a fully functional Direct3D 10 window running. The window is cleared to blue, and I tacked in the functionality to turn VSync on and off. I hate when the video card transistors scream at high frame rates.

There are two things I learned from sitting down to do this. One, it is really an excellent way to reinforce the learning curve of the massive DirectX API. You can't just sit there and copy the code, or look at the code and port it. The best way I found to go about things is to read the chapter and then implement it on my own in a C# way. The advantage is that you actually learn the API, not how to copy code, and you end up with a nice clean framework to use compared to what the author provides.
The second thing I learned is probably the biggest one: why was I torturing myself with C++ all these years? C# is a very clean and powerful language, and just looking at the code you can tell what it does. I will say the code I put together is not perfect or finalized in any way, but it is indeed a lot cleaner than the author's C++ code. The main reason I can see for this is the removal of macros and preprocessor directives, not to mention the exclusion of header files.

Now again, keep in mind this is not finalized yet; it still needs some tweaking, plus some more implementation, but here is what I have so far.

using System;
using System.Drawing;
using System.Windows.Forms;
using SlimDX;
using D3D10 = SlimDX.Direct3D10;
using DXGI = SlimDX.DXGI;
using SlimDX.Windows;

namespace InitDirect3D
{
class D3DApp
{
private RenderForm m_Window;
private D3D10.Device m_D3DDevice;
private DXGI.SwapChain m_SwapChain;
private D3D10.RenderTargetView m_RenderTargetView;
private D3D10.DepthStencilView m_DepthStencilView;
private D3D10.Texture2D m_DepthStencilBuffer;
private int m_VSync;

public D3DApp()
{
m_Window = null;
m_D3DDevice = null;
m_SwapChain = null;
m_RenderTargetView = null;
m_DepthStencilView = null;
m_DepthStencilBuffer = null;
m_VSync = 0;
}

~D3DApp()
{
m_RenderTargetView.Dispose();
m_DepthStencilView.Dispose();
m_DepthStencilBuffer.Dispose();
m_D3DDevice.Dispose();
m_SwapChain.Dispose();
}

public void Initialize(string windowCaption)
{
m_Window = new RenderForm(windowCaption);

// Setup the SwapChain and Device
DXGI.SwapChainDescription swapDesc = new DXGI.SwapChainDescription()
{
BufferCount = 1,
ModeDescription = new DXGI.ModeDescription(m_Window.ClientSize.Width, m_Window.ClientSize.Height,
new Rational(60, 1), DXGI.Format.R8G8B8A8_UNorm),
IsWindowed = true,
OutputHandle = m_Window.Handle,
SampleDescription = new DXGI.SampleDescription(1, 0),
Usage = DXGI.Usage.RenderTargetOutput,
Flags = DXGI.SwapChainFlags.None,
};

D3D10.Device.CreateWithSwapChain(null, D3D10.DriverType.Hardware, D3D10.DeviceCreationFlags.Debug,
swapDesc, out m_D3DDevice, out m_SwapChain);

this.OnResize();

// Disable Alt+Enter FullScreen
DXGI.Factory factory = m_SwapChain.GetParent<DXGI.Factory>();
factory.SetWindowAssociation(m_Window.Handle, DXGI.WindowAssociationFlags.IgnoreAll);

m_Window.Resize += new EventHandler(m_Window_Resize);
m_Window.KeyDown += new KeyEventHandler(m_Window_KeyDown);
}

void m_Window_KeyDown(object sender, KeyEventArgs e)
{
if (e.KeyCode == Keys.V)
{
if (m_VSync == 0)
{
m_VSync = 1;
}
else
{
m_VSync = 0;
}
}
}

void m_Window_Resize(object sender, EventArgs e)
{
this.OnResize();
}

public void Run()
{
MessagePump.Run(m_Window, () =>
{
DrawScene();
m_SwapChain.Present(m_VSync, DXGI.PresentFlags.None);
});
}

private void OnResize()
{
if (m_RenderTargetView != null)
{
m_RenderTargetView.Dispose();
}

if (m_DepthStencilView != null)
{
m_DepthStencilView.Dispose();
}

if (m_DepthStencilBuffer != null)
{
m_DepthStencilBuffer.Dispose();
}

// Setup the RenderTargetView and DepthStencilBuffer/View
D3D10.Texture2D backbuffer = D3D10.Texture2D.FromSwapChain<D3D10.Texture2D>(m_SwapChain, 0);
m_RenderTargetView = new D3D10.RenderTargetView(m_D3DDevice, backbuffer);
backbuffer.Dispose();

D3D10.Texture2DDescription depthStencilDesc = new D3D10.Texture2DDescription()
{
Width = m_Window.ClientSize.Width,
Height = m_Window.ClientSize.Height,
MipLevels = 1,
ArraySize = 1,
Format = DXGI.Format.D24_UNorm_S8_UInt,
SampleDescription = new DXGI.SampleDescription(1, 0),
Usage = D3D10.ResourceUsage.Default,
BindFlags = D3D10.BindFlags.DepthStencil,
CpuAccessFlags = D3D10.CpuAccessFlags.None,
OptionFlags = D3D10.ResourceOptionFlags.None
};

m_DepthStencilBuffer = new D3D10.Texture2D(m_D3DDevice, depthStencilDesc);
m_DepthStencilView = new D3D10.DepthStencilView(m_D3DDevice, m_DepthStencilBuffer);

// Bind views to pipeline
m_D3DDevice.OutputMerger.SetTargets(m_DepthStencilView, m_RenderTargetView);

// Set the viewport transform.
D3D10.Viewport vp = new D3D10.Viewport(0, 0, m_Window.ClientSize.Width, m_Window.ClientSize.Height, 0.0f, 1.0f);
m_D3DDevice.Rasterizer.SetViewports(vp);
}

public virtual void UpdateScene(float dt)
{
}

public virtual void DrawScene()
{
// SlimDX's four-float Color4 constructor is (alpha, red, green, blue), so this clears to blue
m_D3DDevice.ClearRenderTargetView(m_RenderTargetView, new Color4(1.0f, 0.0f, 0.0f, 1.0f));
m_D3DDevice.ClearDepthStencilView(m_DepthStencilView, D3D10.DepthStencilClearFlags.Depth | D3D10.DepthStencilClearFlags.Stencil, 1.0f, 0);
}
}
}
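To round out the picture, here is a rough sketch of how this framework might be driven once it is finished. The InitDemo class, the window caption, and the Main entry point are my own placeholders, not finalized code from this port; the sketch assumes only the D3DApp members shown above (Initialize, Run, and the virtual DrawScene).

```csharp
using System;

namespace InitDirect3D
{
// Hypothetical demo app deriving from D3DApp (my own sketch).
class InitDemo : D3DApp
{
public override void DrawScene()
{
// Base clears the render target and depth/stencil views;
// per-frame drawing for later chapters would follow here.
base.DrawScene();
}

[STAThread]
static void Main()
{
InitDemo app = new InitDemo();
app.Initialize("Init Direct3D 10");
app.Run(); // pumps messages, drawing and presenting each frame
}
}
}
```

Because D3DApp.Run calls the virtual DrawScene inside the MessagePump loop, each sample from the book should only need to subclass D3DApp and override UpdateScene/DrawScene, which mirrors how the author's C++ framework is used.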

Some Minor Progress

I recently picked up the newest Luna book, on DirectX 10. Just when I thought DX9 was very clean, DX10 takes it to a much higher level. The logical flow of the API is much better.

It is going to take me some time to get into this, however. I have made the choice to step back from C++ somewhat. I love C++, don't get me wrong, but I can't help but be intrigued by SlimDX. If I did not at least check it out, I would not be content with myself. I am always enthusiastic about new ways to do things. For this I must shift to C#.

I have not used C# in quite a few years, so it is going to be a decent shift in mindset. Going from a native way of thinking to a managed way of thinking has so far been awkward. I am going to be porting the code to C# as I go, so I will make sure to post up lots of code as I move along.

This will not be a port per se, because I don't think a direct port is going to be pretty. After all, the code should look and feel like C#. So it will be more of a redesign. SharpDevelop is up and running, SlimDX is installed, and the Luna book is in hand. Let's see what I can do.