
About this blog

Ramblings of programmyness, tech, and other crap.

Entries in this blog

 

Fun with ANSI C and Binary Files

Hello Everyone,

After that last rant post I felt obligated to actually post something useful. I feel horrible when I rant like that, but sometimes it just feels necessary.
On a side note, yes, I still hate VS 2012 Express. After all these years you would think Microsoft would update their damn C compiler. Ugh.

Ok, so on to the meat of the post. Through my various browsings of the forums I have seen people with an interest in pure C programming. It really makes me feel good inside, because it really is a nice language. So many people say it is ugly, hackish, and very error prone. I tend to disagree; I actually feel it is much less error prone than C++. We will get into why in a few moments. First, before I get into code, let me explain a bit about why I love pure C despite its age.

The first thing I really like about C is the simplicity. It is a procedural language, which makes you think in steps instead of abstractions and objects. In other words, it makes you think more like the computer itself works, generally speaking. I think this is great for beginners because it forces you to think in algorithms, which are nothing but a series of steps.

The next part I like about it is the very tiny standard library. It is so small you can actually wrap your head around it without a reference manual. This does come with some downsides: you don't get the robust containers and other things C++ comes with, so in C you essentially have to write your own (not as bad as it sounds).

Lastly, raw memory management. No worrying about whether or not you are using the right smart pointer, etc. Now I know what people are going to say: that C is more prone to memory leaks than C++ because of the lack of smart pointers. Sure, you can leak memory, but it is a lot harder to do so in C, IMHO. The thing is, again, C is procedural without OOP. This means that when programming in a procedural way you are not going to be accidentally copying your raw pointers. So really the only way to leak is to forget to free the memory, which under standard C idiom is rather hard to do. In C the motto goes: what creates the memory frees the memory. What this means is that if you have a module, say a storage module, that dynamically allocates with malloc, that module must be responsible for cleaning up the memory it created. You will see this in action next.

As I said ANSI C allows you to think in the terms of algorithms without the sense of having to abstract everything.
To provide an example I created a very basic .tga image loader based off of nothing but the Specification.

Keep in mind this is simple, particularly because it is meant for use as a texture. I skipped a bunch of unneeded header elements and extension elements; they are not needed because I am not saving a new copy of the file, so I just grab the useful bits.

So from a design perspective this is what we need.
A structure that will store our image data.
A function to load the data.
Finally, a function to clean up our dynamically allocated memory (due to the best practice above).

From this we get the following header file.
tgaimage.h

#ifndef TGAIMAGE_H
#define TGAIMAGE_H

/*
* Useful data macros for the TGA image data.
* The data format is layed out by the number of bytes
* each entry takes up in memory where
* 1 BYTE takes up 8 bits.
*/
#define BYTE unsigned char /* 1 BYTE 8 bits */
#define SHORT short int /* 2 BYTES 16 bits */

/*
* TGA image data structure
* This structure contains the .tga file header
* as well as the actual image data.
* You can find out more about the data this contains
* from the TGA 2.0 image specification at
* http://www.ludorg.net/amnesia/TGA_File_Format_Spec.html
*/
typedef struct _tgadata {
    SHORT width;
    SHORT height;
    BYTE depth;
    BYTE *imgData;
} TGADATA;

/*
* Load .tga data into structure
* params: Location of TGA image to load
* return: pointer to TGADATA structure
*/
TGADATA* load_tga_data(char *file);

/*
* Free allocated TGADATA structure
* return 0 on success return -1 on error
*/
int free_tga_data(TGADATA *tgadata);

#endif

The above should be self-explanatory thanks to the comments provided.
I created two #define macros to make the typing easier to manage. The specification defines the size of the data at each offset, and everything revolves around either 8 or 16 bits.
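As a quick aside, the same sizes could also be expressed with typedefs instead of macros, which keeps the names inside the type system. This is just a sketch of that alternative, not the header actually used in this post:

typedef unsigned char BYTE;  /* 1 BYTE, 8 bits   */
typedef short int SHORT;     /* 2 BYTES, 16 bits */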

Now we have the implementation of our functions. Here is that file.
tgaimage.c

#include <stdio.h>
#include <stdlib.h>
#include "tgaimage.h"

TGADATA* load_tga_data(char *file)
{
    TGADATA *data = NULL;
    FILE *handle = NULL;
    int mode = 0;
    int size = 0;

    handle = fopen(file, "rb");

    if (handle == NULL) {
        fprintf(stderr, "Error: Cannot find file %s\n", file);
        return NULL;
    } else {
        data = malloc(sizeof(TGADATA));

        /* load header data */
        fseek(handle, 12, SEEK_SET);
        fread(&data->width, sizeof(SHORT), 1, handle);
        fread(&data->height, sizeof(SHORT), 1, handle);
        fread(&data->depth, sizeof(BYTE), 1, handle);

        /* set mode variable = components per pixel */
        mode = data->depth / 8;

        /* set size variable = total bytes */
        size = data->width * data->height * mode;

        /* allocate space for the image data */
        data->imgData = malloc(sizeof(BYTE) * size);

        /* load image data */
        fseek(handle, 18, SEEK_SET);
        fread(data->imgData, sizeof(BYTE), size, handle);
        fclose(handle);

        /*
         * check mode: 3 = RGB, 4 = RGBA
         * RGB and RGBA data is stored as BGR
         * or BGRA, so the red and blue bytes need
         * to be swapped.
         */
        if (mode >= 3) {
            BYTE tmp = 0;
            int i;
            for (i = 0; i < size; i += mode) {
                tmp = data->imgData[i];
                data->imgData[i] = data->imgData[i + 2];
                data->imgData[i + 2] = tmp;
            }
        }
    }
    return data;
}

int free_tga_data(TGADATA *tgadata)
{
    if (tgadata == NULL) {
        return -1;
    } else {
        free(tgadata->imgData);
        free(tgadata);
        return 0;
    }
}


Let's start at the top with the load_tga_data function.

In C the first thing we need to do is set up a few variables.
We have one for our structure, the file, the mode and the size. More on the mode and size later.

We use fopen with "rb" to open up the file to read binary data.
If the file open was successful we can go ahead and start getting data.

The first thing we do here is use malloc to reserve memory for our structure and use sizeof so we know how much memory we need.

Now we load the header data. I use the fseek function to get into position for the first read.
fseek takes a pointer to our opened file as its first argument. The second argument is the offset we want to read from, and SEEK_SET says to count that offset from the beginning of the file. An offset is simply the number of bytes into the file. The TGA specification tells us that the width of the image starts at offset 12. It is two bytes in size, so we ensure we only read 2 bytes from the file with sizeof(SHORT) and tell fread to do 1 read of that size. The internal file position pointer is now at offset 14, which is where our height is. We do the same there, then finally read the depth, which is one byte in size, placing us at offset 17.

Now that the header data we need is read and stored, we need to handle the actual image data, which is the tricky part. This is where our mode and size variables come into play.

You find the mode of the image data by dividing the depth by 8. So if you have a 24-bit depth and divide it by 8 you get a mode of 3.
The mode is the number of components each pixel in the data has. The TGA spec defines a mode of 3 as BGR and a mode of 4 as BGRA: Blue Green Red and Blue Green Red Alpha respectively. Now, the actual size of the image data section varies from image to image, so we need to calculate it so we don't read too far into the file and corrupt our data. To do this we need the width, height, and mode. Multiplying them together gives the size of the section: mode bytes per pixel for each pixel defined by width and height. For example, a 256 x 256 image with a 24-bit depth gives mode = 24 / 8 = 3 and size = 256 * 256 * 3 = 196608 bytes. Hope that makes sense.

Now that we have the size of this image data section we can dynamically allocate our imgData section of the structure to the appropriate memory size.

We then fseek to the appropriate section of the file, which is offset 18 for this data, and read in the full section because it is defined as a run of bytes.

Now that we have the data, we close the file to release the resources allocated by fopen.

Ok, remember just above I said mode 3 and 4 are BGR and BGRA respectively. This is not good, because if we use this as a texture in, say, OpenGL, it needs to be in RGB or RGBA format. So we check the mode here, and if the data has color components we flip the red and blue bytes around.
To flip the bytes we do some very basic index math. Because the data behind the pointer is effectively an array, we step through it one pixel (mode bytes) at a time and swap the byte at the start of each pixel with the byte two indices ahead of it; blue and red are always two apart, and we don't care about G or A because they are already in the proper location (see the short illustration just after this paragraph). If you don't understand the pointer and array nomenclature, feel free to ask in the comments or read the K&R book if you can get hold of a copy.
Finally we return our structure to the caller.
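If the index math is still fuzzy, here is a tiny standalone illustration of the swap on a single BGR triplet, and of the fact that indexing a pointer or array is just pointer arithmetic underneath. This is only an illustration with made-up values, not extra code from the loader:

#include <stdio.h>

int main(void)
{
    unsigned char pixel[3] = { 10, 20, 30 };   /* pretend this is one BGR pixel */
    unsigned char tmp;

    /* pixel[0] and *(pixel + 0) name the same byte */
    tmp = pixel[0];
    pixel[0] = pixel[2];        /* same as *(pixel + 2) */
    pixel[2] = tmp;

    printf("%u %u %u\n", pixel[0], pixel[1], pixel[2]);   /* prints 30 20 10 */
    return 0;
}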

Our last function is free_tga_data. This one is important due to the rules above: the tgaimage module allocated the data, so it is its responsibility to provide the means to clean it up.

This one is really simple. We take the structure in as an argument and make sure it is not NULL, because dereferencing a NULL pointer to get at imgData would be undefined behavior and would likely segfault the application (free itself treats a NULL pointer as a no-op, but tgadata->imgData is another matter). If all is good we FIRST clean up the imgData portion of the structure; if we don't do this it will leak, as it was a separate malloc. Then we free the rest of the TGADATA structure.
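To tie the two functions together, here is a minimal sketch of how a caller might use the module. The file name and the way the data is consumed are placeholders for illustration, not part of the original post:

#include <stdio.h>
#include "tgaimage.h"

int main(void)
{
    /* "texture.tga" is a hypothetical file name used only for this example */
    TGADATA *img = load_tga_data("texture.tga");

    if (img == NULL) {
        return 1;
    }

    printf("Loaded %d x %d image at %d bits per pixel\n",
           img->width, img->height, img->depth);

    /* the module that allocated the memory is also the one that frees it */
    free_tga_data(img);
    return 0;
}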

Hopefully this post was helpful to some of the C programmers out there. It is a nice example of how clean a solution in C can be, as well as a nice demonstration of how best to avoid memory leaks in C applications by following a few best practices. Not only that, it also demonstrates how to traverse a binary file using the file's offsets from nothing more than the specification.

That is all for now have a great day.

blewisjr

 

Closures and Python 2.7

Wow, hello again GDNet, it has been quite a while since I last actually logged into the site. As for what I have been up to: I have been drowning in school work and honing my development skills. Lately I have mainly been using Python and experimenting with the Flask micro web framework. I will say it has really been a joy getting away from the world of C/C++ for a change. So why am I back here after I said my goodbyes a while ago? Here is the thing: I actually miss this site, and I lurk on it almost every day anyway, so why not. I am going to be starting up a new game-related project, so stay tuned.

Now for the reason for this blog entry. As I stated, I have really been honing my programming skills as of late and dealing with some odd languages, mostly in the functional paradigm. One thing about functional languages is that you don't have OOP, so you need to find alternate ways to create a similar effect, and it turns out closures are just that. Many people ask why not just use OOP then. Well, the issue really arises from the way most books typically teach OOP, which leads to very sloppy inheritance and deep hierarchies. The problem with this is it makes your code a maintenance nightmare. The other problem with the way most books teach OOP is that they create a notion of taxonomies, which leads to people creating classes that should never have been classes to begin with. Note I am not saying OOP is evil; I am saying the way OOP is often taught is evil. On another note, OOP can lead to issues with parallelism, where the state of the object becomes out of sync when multiple threads are involved, and closures solve this problem quite well.

[Edit: Thank you TheUnbeliever]
Just recently I read a post in For Beginners about the very issue this post brings up. The OP was sent to StackOverflow, where there was a good explanation of the problem with Python's scope resolution and closures. One of the solutions I present in this article is the same one used in that StackOverflow answer, and it is the one I used when learning how to implement closures in Python 2.7. Hopefully this post will be useful in helping people understand closures and how to implement them in both Python 2.7 and Python 3.x.

So what is a closure? Most definitions use odd jargon like lexical structure and referencing environment, and they are not very clear unless you have a strong functional programming background, so here is a cleaner definition that I stumbled across.
A closure is a block of code that meets 3 particular criteria...
1. It can be passed around as a value.
2. It can be executed on demand by anyone who has that value.
3. It can refer to variables from the context in which it was created (lexical scope, referencing environment).

So why are these useful? For one, they allow you to maintain state between calls, much like an object. They are very useful for creating callbacks, they can be used to hide utility functions inside a function to provide a cleaner API, and they can even be used to build many programming constructs such as loops. These constructs are very useful overall and can really simplify the code you write and need to maintain in large complex systems.

Python has had support for closures since version 2.2; however, there are a few issues. Issue-free support only comes in Python 3.x. The problem in Python 2.7 is a scoping issue that effectively forces the closed-over variable to be read-only.

Here is an example of a closure implemented in Python that runs into exactly this scoping issue...

def counter(start_pt):
    def inc():
        start_pt += 1
        return start_pt
    return inc

if __name__ == '__main__':
    func = counter(1)
    for i in range(10):
        print(func())


So what does the code do? It is simple: it counts up 10 numbers from an arbitrary start point. So what is the issue? Well, the issue is with Python's scoping; this actually raises an error, specifically UnboundLocalError: local variable 'start_pt' referenced before assignment. This is the same issue the author of that post was having, and it comes from the way Python determines the scope of a variable. Because start_pt is assigned to inside inc (the += counts as an assignment), Python treats it as a local variable of inc, so when we try to increment it we are reading a local that has not been given a value yet. In effect the outer variable is read-only from inside the inner function.

Python 3.x solves this scope problem with a new statement called nonlocal. Here is how it looks...


def counter(start_pt):
    def inc():
        nonlocal start_pt
        start_pt += 1
        return start_pt
    return inc

if __name__ == '__main__':
    func = counter(1)
    for i in range(10):
        print(func())


The output of this code will be, as expected: 2 3 4 5 6 7 8 9 10 11

But again, this is only in Python 3.x. What if we can't use Python 3.x because we need support for APIs that are not Python 3.x compatible yet? We still want to use closures, but we also want to avoid the scope issue. There are a few solutions to this problem. Let's go through two of them: one which seems kind of hacky, and another which is less hacky but is not a true closure and instead just mimics the closure concept. Here is the first, a true closure that uses a Python mutable list to work around the scope issue.



def counter(start_pt):
    c = [0]
    c[0] = start_pt
    def inc():
        c[0] += 1
        return c[0]
    return inc

if __name__ == '__main__':
    func = counter(1)
    for i in range(10):
        print(func())


Basically what we do is create a new variable that is a one-element mutable list and place our start_pt in it. We can't use an ordinary variable c here, because assigning to c inside the inner function would make it local and we would hit the same error as before; mutating a list does not rebind the name, so it works around the issue and we get the expected output of 2 3 4 5 6 7 8 9 10 11. This is kind of sloppy looking, so let's look at the alternative that is not a true closure but mimics one. Note I am violating standard Python naming conventions here because we want the user to think they are using a lexical closure. Here is the code.



class counter(object):
    def __init__(self, start_pt):
        self.start_pt = start_pt

    def __call__(self):
        self.start_pt += 1
        return self.start_pt

if __name__ == '__main__':
    func = counter(1)
    for i in range(10):
        print(func())


So here we turn the closure into a mimicked closure by using a class to preserve the state instead of the mutable list.
We also make our instances callable like a function by implementing the __call__ magic method. So we have something that behaves like a closure, but it is not a true closure; it is actually an object disguised as one.

So there you have it: two ways to work around the Python 2.7 scoping issue while still being able to use our non-3.x-compatible libraries. Both methods are valid; you can choose which one you want to use. Sorry the example is so simplistic, but it really is an easy way to get the point across. Now you can use your favorite language and take advantage of the powerful feature of closures; the method you choose is up to you, or you can be a real man and jump on Python 3.x.

blewisjr

 

The beginnings of PIC (Hello World)

Hello GDNet

First, keep in mind this is a rather long post. I also have images in an entry album for you.

So my PIC microcontroller starter kit arrived a few days ago and I started to tinker around with it. I really like this piece of hardware.
The circuit built onto the development board is very clean. It contains a 6-pin power connector, a 14-pin expansion header, a potentiometer (dial), a push button, and 4 LEDs. There are also 7 resistors and 2 capacitors on the board. By the looks of it there is 1 resistor for each LED so you don't overload them, 2 for the push button, 1 for the expansion header, 1 capacitor for the potentiometer and 1 capacitor for the MCU socket. This is just from looking at the board; I'm not quite sure it's accurate, as I would have to review the schematic, which I am not quite good at yet.

The programmer (PICkit 3) has a button designed to quickly wipe and reprogram the microcontroller with a specified hex file. It also has 3 LEDs to indicate what is happening.

First, before I get into HelloWorld, I would like to cover the pain-in-the-ass issues I found with the MPLABX IDE.
First I spent hours trying to figure out why the hell the IDE could not find the chip on my development board to program it. Turns out that by default the IDE assumes you are using a variable-range power supply to power the board, so I needed to change the project options to power the development board through the PICkit 3 programmer.
The dreaded device ID not found error. Next, the IDE could not find the device ID of my MCU, wtf!!!! 2 hours later I stumbled upon an answer. THE MPLABX IDE MUST BE RUN IN ADMINISTRATOR MODE!!!!! WTF!!!!!!! The user's manual stated nothing of the sort. So to get it working I needed to start the IDE in admin mode, and after it has started I need to plug the programmer into the USB port. If it is not done in that order you will get errors when trying to connect to the programmer and the chip.

Ok now onto HelloWorld WARNING ASSEMBLY CODE INCLUDED!!!!!

Here is a quick overview of the specific chip I used for this intro project; I find typing this stuff out helps me remember anyway.
There are 3 types of memory on the PIC16 enhanced mid-range: program memory (Flash), data memory, and EEPROM memory.
Program memory stores the program, data memory handles all the components, and EEPROM is persistent memory.
Data memory is separated into 32 banks on the PIC16 enhanced mid range.
Banks: You deal with these the most. They contain your registers and other cool stuff.
Every bank contains the core registers, the special function registers are spread out amongst all the banks, every bank has general purpose RAM for variables, and every bank has a section of shared RAM which is accessible from all banks.

The HelloWorld project uses 4 instructions, and 4 directives. Instructions instruct the MCU and directives instruct the assembler.
Directives:
banksel: Tells the assembler to select a specific memory bank. This is better to use than the raw instruction because it allows you to select by register name instead of by memory bank number.
errorlevel: Used to suppress errors and warnings the assembler spits out.
org: Used to set where in program memory the following instructions will reside.
Labels: Used to modularize code; not a directive per se, but a useful thing to use.
end: Tells the assembler to stop assembling.

Instructions:
bsf: bit set in a register (turns it on); sets the bit's value to 1.
bcf: bit clear in a register (turns it off); sets the bit's value to 0.
clrf: initializes a register's bits to 0, so if you have 0001110 it becomes 0000000.
goto: moves to a labeled spot in memory; not as efficient as alternative methods.

Registers:
LATC: A data LATCH, in this case the latch for PORTC; it allows read-modify-write. We use this to write to the appropriate I/O pin for the LED. You always write through LATCHes; it is better to read from the PORT.
PORTC: Reads the pin values for PORTC. Always write to LATCHes, never to PORTs.
TRISC: Determines whether the pin is an input (1) or an output (0).

Explanation of Project:

So generally speaking, assembler is very verbose, especially on the PIC16 enhanced, because you need to ensure you are in the proper bank before trying to manipulate the appropriate register. So in order to light the LED we need to ensure the I/O pin for the LED we want to light is set to an output. We should then initialize the data LATCH so that all bits are 0. Then we need to turn on (high, 1) the appropriate I/O pin that our LED sits on; in this case it is RC0, which is wired to LED 1 (DS1).

The code to do this follows. Forgive the formatting; the assembler is very strict in that labels and include directives can only be in column 1, while everything else must be indented. Also there are some configuration settings for the MCU at the beginning of the file. I am not sure what each one does yet, as I did not get a chance to read the specific details in the data sheet. These may mess up the formatting a bit because it seems they need to stay on one line, unwrapped, which makes them extend out very far. I will need to look into how to wrap these for readability.
Lastly the code is heavily commented to go with the above explanation.
; --Lesson 1 Hello World
; LED's on the demo board are connected to I/O pins RC0 - RC3.
; We must configure the I/O pin to be an output.
; When the pin is driven high (RC0 = 1) the LED will turn on.
; These two logic levels are derived from the PIC MCU power pins.
; The PIC MCU's power pin is VDD which is connected to 5V and the
; source VSS is ground 0V a 1 is equivalent to 5V and 0 is equivalent to 0V.
; -----------------LATC------------------
; Bit#: -7---6---5---4---3---2---1---0---
; LED:  ---------------|DS4|DS3|DS2|DS1|-
; ---------------------------------------

#include                ; for PIC specific registers. This links registers to their respective addresses and banks.

    ; configuration flags for the PIC MCU
    __CONFIG _CONFIG1, (_FOSC_INTOSC & _WDTE_OFF & _PWRTE_OFF & _MCLRE_OFF & _CP_OFF & _CPD_OFF & _BOREN_ON & _CLKOUTEN_OFF & _IESO_OFF & _FCMEN_OFF)
    __CONFIG _CONFIG2, (_WRT_OFF & _PLLEN_OFF & _STVREN_OFF & _LVP_OFF)

    errorlevel -302     ; suppress the 'not in bank0' warning

    ORG 0               ; sets the program origin for all subsequent code

Start:
    banksel TRISC       ; select bank1 which contains TRISC
    bcf     TRISC,0     ; make IO Pin RC0 an output
    banksel LATC        ; select bank2 which contains LATC
    clrf    LATC        ; init the data LATCH by turning off all bits
    bsf     LATC,0      ; turn on LED RC0 (DS1)
    goto    $           ; sit here forever!
    end

blewisjr

 

Preparing to Learn OpenGL (Toolchain Setup)

Hello again everyone.

I am finally after a very long time going to be diving into 3D for my next project.
In order to do this I obviously need to learn a 3D API, and after much evaluation I have
decided to learn OpenGL. The main reason for this is not its cross-platform
support but that the style of the API melds with my brain much better than the COM-based
Direct3D API. This is probably due to my strong roots in and love for the C language, but either
way I have made my choice.

I am going to be learning the modern OpenGL style, obviously starting with OpenGL 3.3.
There really are not many books out there on modern OpenGL, so I will resort to using the
OpenGL Superbible 5th edition to get my feet wet. Sure, it uses the GLTools wrapper library,
but from what I can tell they eventually teach you the stuff under that library, so I will
be using it as a stepping stone to get an understanding and then supplement it with the more
advanced arcsynthesis tutorial and maybe the OpenGL 4.0 Shader Cookbook. I am hoping this will
give me a solid foundation to build off of.

With that in mind, I need to configure the OpenGL Superbible to work with the toolchain I have set up.
The Superbible assumes use of Visual Studio, XCode, or Linux Makefiles. I currently don't use any of these.
First, I am not on Linux; even though I have strong roots with Linux (my server runs on it) and Linux development,
my current laptop uses Nvidia Optimus technology, which currently has very poor Linux support.
So instead I put together a toolchain on Windows 8 that I am somewhat comfortable with and which I may adapt
in the future.

The current toolchain consists of MinGW/MSYS, CMake, Subversion and Sublime Text 2. MinGW is a GCC compiler for Windows.
CMake is a cross-platform build generator, and Sublime Text 2 is a non-free cross-platform text editor that integrates
with TextMate bundles and is extensible through Python. Subversion is obviously a version control system. I could use Git
or Mercurial, but I am still having a hard time with the concept of DVCS, so this is subject to change as well.

To use the OpenGL Superbible we have a few dependencies. The first is FreeGlut and the second is the
GLTools library. I got the code for the Superbible from the Google Code SVN repo so I could get the source for GLTools.
I downloaded a newer version of FreeGlut (2.8) from its website; the repo came with 2.6. I needed to build these with my
compiler so that they link properly, so I threw together 2 CMake files to do this. I made 4 directories under my
Documents folder: 1 for FreeGlut's source, 1 for GLTools' source, and 1 out-of-source build directory for each library.
The CMakeLists.txt file for each library went under the source directories. Then I ran CMake to generate MSYS Makefiles
and ran make. The makefiles place the libraries under a central C:\libs folder and move the headers there as well.
If you are interested, here is the content of the CMakeLists.txt files. I used globbing for the source files, which is bad
practice, but in this case it does not matter because I will not be adding any more source files to these CMake projects.

GLTools CMakeLists.txt

cmake_minimum_required(VERSION 2.6)
project(GLTools)
set(SRC_DIR "src/")
set(INC_DIR "include/")
set(BUILD_DIR ${PROJECT_BINARY_DIRECTORY}/libs/GLTools/libs)
file(COPY ${INC_DIR} DESTINATION ${BUILD_DIR}/../include)
file(GLOB SRC_CPP ${SRC_DIR}*.cpp)
file(GLOB SRC_C ${SRC_DIR}*.c)
include_directories(${INC_DIR})
add_library(GLTools ${SRC_CPP} ${SRC_C})
set_target_properties(GLTools PROPERTIES
ARCHIVE_OUTPUT_DIRECTORY ${BUILD_DIR})
target_link_libraries(GLTools Winmm Gdi32 OpenGL32)


FreeGlut CMakeLists.txt

cmake_minimum_required(VERSION 2.6)
project(freeglut32_static)
set(SRC_DIR "src/")
set(INC_DIR "include/")
set(BUILD_DIR ${PROJECT_BINARY_DIRECTORY}/libs/freeglut-2.8.0/libs)
set(CMAKE_C_FLAGS "-O2 -c -DFREEGLUT_STATIC")
file(COPY ${INC_DIR} DESTINATION ${BUILD_DIR}/../include)
file(GLOB SRC_C ${SRC_DIR}*.c)
include_directories(${INC_DIR})
add_library(freeglut32_static ${SRC_C})
set_target_properties(freeglut32_static PROPERTIES
ARCHIVE_OUTPUT_DIRECTORY ${BUILD_DIR})
target_link_libraries(freeglut32_static)


I don't think the FreeGlut one is optimal because of the complexity of building that library,
but it has been tested and does work, so it should be fine. If I encounter any issues with the way
the library is built I will make sure to post an update.
So after running make under C:\libs I have the following structure

C:\libs
    GLTools
        include
            GL
        libs
    freeglut-2.8.0
        include
            GL
        libs


This structure will allow me to easily create CMake builds for all of the chapters in the book as
I complete them. I know where the libraries are, so I can easily link them and bring in the headers.
It's kind of hackish, but since this is not a custom project it is the easiest way to ensure I can get
builds up and running quickly.
That is all for this post; hopefully it was helpful. Cya next time.

blewisjr

 

Version Control.

First I must open with the simple fact that a lot of people just don't use version control. The main reason is that, in all honesty, a lot of people just don't understand it. Not to mention the major version control war going on at the moment tends to confuse people even more. I am going to do my best to give some useful information on the different version control systems (VCS) out there, to help make sense of the decision I need to make for my next project as well as hopefully help others make the decision for theirs.

There are currently two types of VCSs out there: the CVCS and the DVCS. CVCS systems like Subversion and CVS have a central server that every client must connect to. The client basically pulls revision information from the server into a working copy on your hard drive. These working copies tend to be very small because they pull only the information needed to be considered up to date, basically the latest revision. The major gripe people seem to have with these systems at the moment is that they lack enough information to do a proper merge of a branch back into the main code base.

DVCS systems are what people call distributed. I hate that term because I think it makes things harder to understand. Examples are Git, Mercurial, and Bazaar. The reason I hate the term distributed is that currently a lot of people use DVCS systems just like a CVCS but with benefits. Typically the main code base is stored in a central location so that people can stay properly up to date with it. A DVCS pulls the entire repository to your system, including all of the revision history, not just the latest revision. This lets you work disconnected from the server and do everything right from your machine without an internet connection. What I like about this is that everyone has a complete copy, so if something happens to the central repository numerous people have a backup of the code and revision history. The other nice thing about DVCS, and why I think so many people are fanatic about it, is the easy branching and merging. Because you have the total revision history you can easily branch, merge and cherry-pick changes at will without a lot of risk of "pain". So when looking at the two different types of systems, think CVCS (most recent revision only in the working copy) versus DVCS (total history in the working copy).

*Warning opinionated*
My main gripe with the current arguments out there has to do with branching. The DVCS group of people seem to like the idea of branching every time they make a code change or add a new feature. They argue that this is insane and painful to do in SVN because of horrible revision/merge tracking. Ok, I agree branching and merging can be painful in Subversion, but at the same time it is not as bad as people say, because they are taking it to an extreme and not using the system properly. I am not the kind of person who likes to make a branch for every feature I add to an application. I feel branching should only be used for major changes, refactors, and other changes that have a large chance to BREAK the current working development tree. Maybe this mentality is why I never really had too many issues with Subversion to begin with when it came to branches and merges. Maybe it is because I was using Subversion as the developers INTENDED according to the SVN red book. I don't know, maybe I am just a hard sell.

So which one should you use? It is hard to say; each system has its pros and cons, as you have seen. The one feature I love about the DVCS camp is the speed at which you can get up and running, and the fact that sites like GitHub and Bitbucket are amazing places to host your code. I also like the speed of DVCS systems; because you are doing everything on your local machine and not over a network, a DVCS is blazing fast compared to a CVCS. Lastly, the thing I like most about DVCS is that it is very flexible and you can use it with whatever workflow you desire. For example, if you only want to branch when you feel something has a good chance to break the entire main line of development, you can do so. If you want your own copy to be considered the central repo instead of a host like GitHub, you can do that too. The main cons I have for DVCS are the lack of repository permissions, poor large file support because of full repo history pulls, and no locking for binary files and other files to prevent major issues if both copies are modified at the same time.

What I like about CVCS is that it can handle very large files quite well. Another thing I like is that you can have repository permissions, making sure you know who can write to the repository. I also like that you can lock binary files and other files if you wish, to prevent other people from making changes to a file while you make your changes. This alone can save tons of time if you are working as a team and your art assets are in version control with your code. The major issues I see with CVCS are the required network connection, which can cause speed issues *coffee anyone*, and not having the full revision history present on the machine, making it difficult to cherry-pick or inject merges into different parts of the development line.

Keep in mind the pros and cons are as I see them; other people may see different ones. Yes, I listed fewer cons for Subversion; however, there are times when those cons can definitely outweigh the pros, and the cons of DVCS are hard to ignore and can outweigh its pros at times as well. So with these things in mind I am sure you can better make the decision you need to make. As for me, I have used both and I like both a lot, which makes the decision extra hard. The one big pull factor for me is hosting, and DVCS has the huge win there. So for my next project I will be using a DVCS, because I feel the pros outweigh the cons under most circumstances. Not to mention I really like the speed and having a whole repository backup on my machine. Ultimately the decision is yours, but with this information hopefully you can weed through the Google fluff that is out there.

In the future I just might return with my experiences. If anyone wants more detail on the inner workings of version control systems, let me know in the comments and I will go find the video that covers the internal differences in how revisions are stored.

blewisjr

 

Just got my new toy in the mail

Hey guys, it has been a while since my last post, so I would first like to give a few little updates on what I have been up to.

First and foremost, my attempt to get back into game development was a total fail. It just did not work out. I was getting started, then I lost interest quickly and proceeded to get slammed into the dirt by massive amounts of school work. On the bright side, I am only 3 1/2 classes from graduation, woo. After all these years of slugging away at a pointless job it feels good to be almost at my goal of correcting my past mistake of dropping out of college.

Now onto more goodies. I have always loved electronics; it is such fun to make electricity do cool things, and it is also a very good way to become a much better developer. Having to deal with everything at such a low level really brings to light skills that can help developers create better software at the high level. It is amazing what high-level languages sacrifice, often for ease of use, and it is also amazing how universities hardly teach their students the low-level stuff anymore.

So I have been looking into building an interesting robotics project; well, not exactly robotics, but more of a drone project. This is an aspect of engineering I really enjoy because it is a tough project with lots of room to learn, and also a larger project that can grow over time. The issue with a lot of the simpler electronics projects is that they leave little room for growth. After some design work I realized I am going to need lots of power for this project, so it is time for me to leave the world of PIC and AVR and move to ARM Cortex-M. The overall reasoning is that you need decent processing power to handle all the math needed for the flight controller, and the smaller chips have a very hard time with this.

The board I chose is quite powerful for a development board:
Cortex-M4 processor (has a hardware FPU)
Contains a multi-axis accelerometer
Contains a magnetometer for reading the earth's magnetic field

These few features are awesome because both sensors are needed for accurate flight and maximum stability adjustments.

The board and the chip are made by STM, and it has a built-in programmer/debugger, making life a lot cheaper than buying external debugging hardware. A super powerful dev package for only $10; can't go wrong. Here is a link to the site for the board if you are interested...
http://www.st.com/web/en/catalog/tools/FM116/SC959/SS1532/PF254044

Here is also a picture of the beast if you choose not to visit the link above...

[sharedmedia=gallery:images:4957]

Now that this is all said and done, I need to test various IDEs to see what I like. Right now I am testing out CooCox on Windows, which is free. It seems rather solid despite being a really stripped-down version of Eclipse, as in missing the good features. Eclipse itself is another option, but it would have to run on Linux due to the need for make and some other Unix tools to function properly, without having to jump through massive Windows GNU hoops to get it working. Commercial IDEs are not an option because for some reason the embedded world thinks $4000 for an IDE is normal.

I will have some more updates on my learning in the future. Until then, have fun coding.

blewisjr

 

Book Review: Cocoa and Objective-C Up and Running

Well, I just finished my first book on Mac OS X software development. First and foremost I should go into a little bit of why I am learning Objective-C and what drew me to this book in the first place. The first reason I decided to dive into Objective-C is that my new desktop/development platform is an iMac. Secondly, the new phone I will be getting once the tax return comes in is an iPhone 4, due to it hitting my carrier Verizon this month. Currently I have an Android phone and I am very, very disappointed; maybe it has to do with the fact that mine is a Samsung, not sure. I want to be able to develop applications for both my phone and my desktop computer. To have any potential to sell these applications I need to use the right tool for the job, and according to Apple that is Objective-C/Cocoa. Now that that is out of the way, what drew me to this book.....

When I was looking at the selection of books out there I saw a few very high quality titles. I, on the other hand, have hobby experience developing both GUI applications and games and have a solid background in programming concepts. So I did not really want a long drawn-out book; I know how to read an API doc, I just wanted a feel for the language. With a background in C and C++ already, this was not too much of a leap for me. This book is short and to the point, unlike some others, but that can be a flaw. Now onto the review.

After reading this book I must say this is not a book for beginners. It says it was written for beginners; however, if you have never programmed a line of code in your life this book moves way, way too fast. First you need to understand that Objective-C is just a layer on top of C, so you are basically using C with some runtime extensions. With this in mind, this book covers C in 2 chapters. I learned C from the K&R book, so this was a nice swift refresher for me, but for a newbie to programming it is just not going to cut it. Next, the basics of OOP are covered in just a single chapter. UH..... sorry, not for beginners again; it took me C + structs, C++, Java, C#, Python and a few years of experimental throwaway practice projects to get this concept right. It took me a year just to understand why interfaces in C# were even useful in the first place, and another year before polymorphism slapped me in the face. To go even further, the basics of Objective-C are again taught in 2 chapters. NOT FOR BEGINNERS, can I say it enough? With no experience in programming, a newbie to software development won't understand a damn thing and will be confused as all hell after the first 4 chapters of the book.

Ok, with all this said, I still thought this was an amazing book to learn from. First and foremost, I was able to breeze through the first 4 chapters and get right into learning the syntax and the way Objective-C works. After 2 chapters of learning the new runtime/language extensions it got me right into the Mac OS X Cocoa API framework, kind of like the .NET framework but for Objective-C. This is where the final 4 chapters of the book take place, so you can write effective GUI apps for OS X. The last chapter gives some useful pointers and some further resources to learn more. That is it: 11 chapters, compared to the typical 30-chapter books out there. Some people may think this is not a good thing; personally I found it refreshing.

The biggest commendation I give the author is his effective use of the tools Apple provides, a la Xcode and Interface Builder. These tools are amazing, and I am so happy the author did not do what a lot of Java books do and force you to use notepad and the command line. Face it people, yeah, Vi and Emacs are great for a quick and dirty file edit, but we are in the 21st century now and have better tools; use them damn it. It is amazing how powerful these Mac tools are, and if you don't learn them or use them you are stupid, as the author even rightly states. The author does not rely solely on Interface Builder; if you want to make your app truly fluid like Mac apps tend to be, you do have to write custom view boilerplate code, and the author definitely goes there. Another thing the author does that I personally really like: first he explains the concepts of the chapter, then he puts you into code working on a project. This is great; using what you learned helps you learn it.

Word of warning: there are lots of text walls in this book. The author expects you to understand previous concepts, so a lot of the projects are code, code, code with some explanations here and there. If you paid attention and learned the previous content of the book you should be able to easily parse the code and understand what is going on. Personally I like this style because it really makes you think about what the code is doing instead of just telling you. If you want to get anything out of this book you have to do every example to completion.

The author also uses screenshots very effectively for tool demonstrations, to make sure you got things just right. This author really takes his tools to heart and I love that. Like I said earlier, he states you are stupid if you don't use the tools Apple provides, because they make things so much easier and make you so much more productive. He goes into how to hook up Actions, Outlets, and Bindings through Interface Builder, and even shows something that made my jaw drop: Xcode's amazing data modeler for the Core Data framework. Core Data is a persistence framework for Cocoa that allows you to save data across sessions and even gives you some juicy free stuff along the way, like undo and redo.

Overall I feel this is an amazing book and I learned a lot from it. If you are NOT A BEGINNER developer and want to dive into OS X development, this book is a great way to hit the ground running. Now I just need to put this stuff to use and start a project. More on that next blog post....

blewisjr

 

Solving the automated copy C++ Runtime issue

Hello Everyone,

In my last post I was using Qt Creator with qmake to build some SFML sample code. If you remember correctly, in order to get the code to run from the IDE I needed to copy the SFML runtime DLL files to the build directory because we were dynamically linking. I did mention that I could not run the .exe from the build directory because it was missing some C++ runtime files the .exe is linked against. This post is about finding a solution to that problem.

Initially I thought I would be able to use qmake to copy the C++ runtime files for gcc, pthreads, and stdc++ to the build directory. I wanted to do this so that I could run the code outside the IDE, directly from the build directory. Everything was fine until I tried to copy the stdc++ DLL. After some investigation I found that qmake is using the DOS xcopy to do the copying, and for some reason it does not seem to like the ++ characters in the file name. This assumption was confirmed by renaming the DLL and copying it over, which worked. The issue is the code can't find the DLL if you rename it, so on to another way.

On the second attempt I tried to use qmake to statically link the stdc++ library using the -static-libstdc++ linker option. This was a total fail. It might be an issue with MinGW, I am not sure, so I bailed on this idea quickly. Time to try something else...

Qt Creator can also use CMake, which is another makefile generation system. CMake is awesome and I have dabbled with it in the past. I never used it to solve this particular problem before, but I decided to give it a shot since it is supported.

The really nice thing about CMake is its great support on multiple platforms, and it can generate project files for various IDEs. To solve the problem I am having I need to copy the DLL files over as a post-build step of the project. It took me some time to figure out, but I got there. The key is that CMake provides cross-platform utilities built right into its executable, which means I can use cmake itself to execute a cross-platform copy command and copy the DLL files post-build.

Here is the code that solves all the problems, and it works flawlessly. By default Qt Creator sets a bunch of CMake variables for us as an out-of-source build.

CMakeLists.txt

project(Ascended)
cmake_minimum_required(VERSION 2.8)

aux_source_directory(. SRC_LIST)

set(SFML_ROOT ../libs/SFML-2.1)
set(MINGW_ROOT c:/Qt/5.1.1/mingw48_32)

find_package(SFML COMPONENTS system graphics window REQUIRED)
include_directories(${SFML_INCLUDE_DIR})

# SFML Runtime DLL files
set(SFML_RUNTIME_FILES
    ${SFML_ROOT}/bin/sfml-system-2.dll
    ${SFML_ROOT}/bin/sfml-graphics-2.dll
    ${SFML_ROOT}/bin/sfml-window-2.dll)

# MINGW Runtime DLL files
set(MINGW_RUNTIME_FILES
    ${MINGW_ROOT}/bin/libgcc_s_dw2-1.dll
    ${MINGW_ROOT}/bin/libwinpthread-1.dll
    ${MINGW_ROOT}/bin/libstdc++-6.dll)

add_executable(${PROJECT_NAME} ${SRC_LIST})

# POST_BUILD notification and copy SFML Runtime DLL files
add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E echo "Copying SFML Runtime to Build directory.")
foreach(FILE ${SFML_RUNTIME_FILES})
    add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
        COMMAND ${CMAKE_COMMAND} -E copy ${FILE} ${CMAKE_BINARY_DIR})
endforeach(FILE)

# POST_BUILD notification and copy MinGW runtime DLL files
add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
    COMMAND ${CMAKE_COMMAND} -E echo "Copying MinGW Runtime to Build directory.")
foreach(FILE ${MINGW_RUNTIME_FILES})
    add_custom_command(TARGET ${PROJECT_NAME} POST_BUILD
        COMMAND ${CMAKE_COMMAND} -E copy ${FILE} ${CMAKE_BINARY_DIR})
endforeach(FILE)

target_link_libraries(${PROJECT_NAME} ${SFML_LIBRARIES})

blewisjr

 

Eclipse CDT 8.0!

This is kind of hilarious. After I went through all that effort and made that post about toolchains for the stubborn, I am no longer stubborn. Let me explain why.

First and foremost, that whole post was pretty much how I have been working for the last year or so. This is mainly because of how much I feel Visual Studio gets in my way when I am coding. The features it has are nice, but I don't like the way certain things feel with it, mainly because I had to use Visual Studio Express. When you are working with a toolchain, the less interruption you have the better. With Visual Studio Express it is very easy to get interrupted, because every time you need to do something the IDE does not support because it is "Express", you need to break concentration and go to another tool. When I worked with vim and makefiles from the CLI my flow was never interrupted, thanks to various scripts and other things I had set up to do my work. But like a typical Linux junkie I am constantly looking for better ways to do things. When I heard that Eclipse Indigo launched I just had to go and try it out, because I like Eclipse a lot and used to use it all the time for my Java development. When it came to C++ though, Eclipse was kind of stale. UNTIL NOW.

I introduce you to Eclipse CDT 8.0 the wonderful piece of software that it is. So lets go over some of the new features this beast has.

1. Reworked new project wizard.
- This is really nice. As you go through the wizard there is an option to click Advanced and set up your extra includes, libs, and linker settings. What I like about this
the most is that once it is done in the wizard, as soon as you create your first file you are ready to go.
2. Reworked build settings.
- Long gone are the convoluted build settings; everything was streamlined and placed where it makes sense. This work is not finished; they have more plans to make it even
better in the next updates.

Now for the Big Ones that I love.

3. Full static code analysis.
- This is amazing. They did some really sweet work on their C++ parser, and the near-instantaneous feedback makes coding a breeze, to the point where it even
knows how it probably should fix your screw-up. It even gives logical suggestions for all sorts of things if you wish to look at them. All this helps prevent
a lot of commonly made errors. They also have more plans for making this even better.

4. Actual refactoring.
- Yes, the parser is that good. It can actually do full refactorings. Right now it is limited in what kinds of refactoring can be done, because they did not have time to add
more before release. They have a lot more of these planned, and because the grunt work is done they can add them quickly.

5. Git/Mylyn GitHub Integration
- Git integration is finally here and functional. It even comes with a nice Mylyn plugin that will hook right up into the GitHub bug tracking system.

All these features are great, and the nice thing is that the work on their parser made the code completion phenomenal and ridiculously fast. The best part about the code completion is that it does not get in your way unless you invoke it with Ctrl+Space. Once you have your header files included, all you need to do is save and you are good to go; the headers you included can now be seen by the code completion system. It also builds code completion from your file as you go, and it does this without the notoriously sluggish behavior people have come to expect.

To give you an idea about how good this code completion is on its filtering here is an example. If you type GL_COLOR_B and hit ctrl+Space it will give you the choice of GL_COLOR_BUFFER and GL_COLOR_BUFFER_BIT. Another nice thing is this filtering is slick on the fly in case there are a lot of functions with similar names.

This is a great IDE now. You don't even have to use MinGW; if you want, you can install the Windows SDK and use the Microsoft compilers with Eclipse instead of Visual Studio.

In all honesty, words cannot really express how amazed I am with this release. If you don't believe that Eclipse CDT is slick and fast, go to the website and try it out, especially if you are using Visual Studio Express. If you are a C++ DirectX guy, make sure you use Microsoft's compiler and you will have no issues.

For someone like me to be impressed by an IDE this much goes to show it is good. I can't afford to go out and buy VS Pro, so this is a phenomenal piece of software that solves my dilemma; it is free and extensible as well. Go try it, seriously.

blewisjr

 

Objective-C and Delegates

So I had a chance to sit down and do a challenge from my Objective-C book today. The challenge was to create a delegate to the Window object to control resizing so that the window is always twice as wide as it is tall.

For people who don't know, in Objective-C a delegate is a way to reroute method calls from one object to another object for handling. This is more flexible than delegates in C#, because those are essentially listeners, whereas delegates in Objective-C are protocols that must be conformed to. A protocol in Objective-C is similar to an interface in C# in that it is a strict set of guidelines that must be followed; however, Objective-C allows you to omit the implementation of certain methods, letting you decide what you want to handle. The nice thing about delegates is that they allow you to modify the behavior of an object without your delegate needing to know about the object whose behavior it is modifying. This basically removes the need for obsessive subclassing, which is a good thing.

So onto the implementation of this challenge. In order to modify the sizing control of a Window object we need to delegate the windowWillResize message to our handler class.

For this we will create a simple handler class we will just call WindowDelegate for simplicity.




WindowDelegate.h:

#import <Cocoa/Cocoa.h>

@interface WindowDelegate : NSObject <NSWindowDelegate> {

}

@end


This code is very simple; basically all we do is tell our WindowDelegate class to conform to the NSWindowDelegate protocol, which contains our windowWillResize message.

So now we need to implement the windowWillResize message so that it will keep our window twice as wide as it is tall. To do this we need to know the signature of the windowWillResize message, which is:

- (NSSize)windowWillResize:(NSWindow *)sender toSize:(NSSize)frameSize;

Basically this message brings in a pointer to the calling object and the proposed new size, and we return the size we want back to the calling object.

Now the implementation.

WindowDelegate.m:

#import "WindowDelegate.h"

@implementation WindowDelegate

- (NSSize)windowWillResize:(NSWindow *)sender toSize:(NSSize)frameSize {
    float newWidth = frameSize.width;
    float newHeight = frameSize.height;

    /* keep the window twice as wide as it is tall */
    if (newWidth != newHeight * 2) {
        NSLog(@"Width is not equal to twice height, modifying width.");
        NSSize newSize;
        newSize.width = newHeight * 2;
        newSize.height = newHeight;
        NSLog(@"New window size: %f x %f", newSize.width, newSize.height);
        return newSize;
    }

    NSLog(@"New window size: %f x %f", frameSize.width, frameSize.height);
    return frameSize;
}

@end
NSSize is a plain C struct that contains the width and the height, so we don't need to handle it through a pointer. We don't need to declare the method in the header because the protocol already declares it; we just provide the implementation in our class. We also don't even need to instantiate the class in code. This is one of my favorite things about Objective-C and Cocoa: to use this delegate you open up Interface Builder, add the object to your application, and simply connect the window's delegate outlet to your object. When the nib file is deserialized into memory the Objective-C runtime will automatically instantiate the delegate class for us.

That is all for today.

blewisjr


Java 8 very interesting

This is a rather short blog post. I have had some ideas for a project recently with some of the various endeavors I have been contemplating.

One of these endeavors is either a desktop application or a web application; I am not sure which, but I think it makes more sense as a desktop application due to its purpose.

When I was thinking about the project I knew I would want it cross platform, so my real choices would be either Java or C++. I have never made a GUI application in C++ before, so I said let me modernize my Java install and upgrade to IntelliJ IDEA 13.1. Oh, by the way, IntelliJ IDEA is worth every penny. If you develop in Java you should really spend the $200 and pick up a personal license, which can be used for commercial applications. Really great IDE, and I can't wait to see what they do with the C++ IDE they are working on. JetBrains makes amazing tools.

So I upgraded everything to Java 8 and decided to make a quick and simple GUI application using Java 8 features. I will say one thing: Java should have added lambdas a long time ago... With this in mind, the following Swing code turns from this...

[code=java:1]
import javax.swing.*;
import java.awt.event.*;
import java.awt.*;

public class TestGui extends JFrame {
    private JButton btnHello = new JButton("Hello");

    public TestGui() {
        super("Test GUI");
        getContentPane().setLayout(new FlowLayout());
        getContentPane().add(btnHello);
        btnHello.addActionListener(new ActionListener() {
            @Override
            public void actionPerformed(ActionEvent e) {
                System.out.println("Hello World");
            }
        });
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        setSize(300, 100);
        setLocationRelativeTo(null);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(new Runnable() {
            public void run() {
                new TestGui().setVisible(true);
            }
        });
    }
}
[/code]
to this...

[code=java:1]
import javax.swing.*;
import java.awt.*;

public class TestGui extends JFrame {
    private JButton btnHello = new JButton("Hello");

    public TestGui() {
        super("Test GUI");
        getContentPane().setLayout(new FlowLayout());
        getContentPane().add(btnHello);
        btnHello.addActionListener(e -> System.out.println("Hello World"));
        setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        setSize(300, 100);
        setLocationRelativeTo(null);
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> new TestGui().setVisible(true));
    }
}
[/code]
So much more elegant and readable. I think Oracle just really hooked me back on Java with this one feature.

blewisjr


Choosing a platform for software

One thing I have noticed over the years is that software development is becoming ever more fragmented. When I say fragmented I mean the platform choices are expanding rather dramatically. Years ago, if you wanted to develop a piece of software you mainly had one choice: the desktop. Whether it was a game or a software application you built it for the desktop, or in the case of a game you had the additional option of a console if you were part of a large company. Now, and not too far into the future, our options are huge. We can choose between desktop, tablet, phone, console, and even web. The software landscape has changed so much. More and more options are becoming available for the average Joe who wants to get their foot in the door and get their own little startup going.

So now the real question is not really about what development technologies you want to use, but about what platform your application will benefit from the most. Instead of just looking at what your target market needs, you now also need to take into account what platforms that market uses most often. After you solidify this you quite often find that it inherently decides what software development technologies you have to use. It is actually quite interesting, and it makes various decisions quite complicated and requires quite a bit of extensive research.

Currently I am going through this very process with my latest crazy application idea. This is the main reason I have decided to post this entry, as it will really help me think about all these various options more clearly. I find this a very complicated process as this is the first real large project I have done in quite a long time. So let's see where this process can take us.

Target Audience:
The target audience for a piece of software is rather important so let's get this out of the way. I find that every truly great software idea which spawns outside of a corporate environment often is a direct extension of a gap the developer has in their computing experiences. In essence this means the developer wants to do something but for some reason they can't find a great way to do that task. The software to do these tasks is often out there, but to get the required result they often need to use multiple pieces of software.

This is the exact boat I am in currently. For those who do not know, I have many hobbies ranging from software development to writing and much more. I like to be very active and busy. For the longest time I have wanted to write a novel. My real issue is that the various technologies to do such a thing the way I want become rather convoluted. Sure, you can write a novel directly in Microsoft Word, but you really lose the fluidity required to write something beyond great without having to jump through hoops to keep track of various divergent plot lines and characters. This could often require multiple documents or other methods. Then there is Emacs and org mode, but despite what some think I personally feel org mode is not the right tool for the job and is a pain to use. Other software out there exists, but it is actually quite difficult to find, expensive, or very old and will not run on modern PC operating systems. Beyond this, they seem to have roughly the idea of what I want but are not quite there.

So this software is targeted at individuals who want to write. The goal ultimately is to create a dynamic writing tool that is very fluid to use.

Platform:
This is actually really hard for the kind of tool I want to make. In my research I have heard that authors love tablets and they really wish there were great tools to write their content with on various tablet devices. It seems that there are huge gaps that they really wish were filled, as often it seems to be one way. You have a desktop application but no compatible tablet application, or you have a tablet application that is very limited and it is difficult to get that content to the desktop. For me I really think the issue is the developers not having their scope quite right, and it is leading to these issues.

Desktop Platform:
The desktop platform is known to work with these types of applications as there is tons of flexibility. The real issue I find with writing on desktops or laptops is the fact that they are not very portable, and when I write I like to be away from everything. It helps keep a clear mind and stay focused. This is difficult with a desktop PC style system, even with ultra portable platforms out there like the Ultrabook or MacBook Air. The screen densities are awful as well, and after looking at the screen for extended periods of time it really places a lot of stress on the eyes. I think this is really where tablets excel at fixing things. The other issue with the desktop is distribution and getting the application noticed. Apple fixed this with the App Store; Windows is well behind on this, and their system is a mess for this approach, requiring expensive certificates and redirection to application downloads and such. Quite a shame.

Tablet:
In all reality the tablet has everything I would want: nice portability with solid screen densities, nice and easy on the eyes. There are various nice attachments, and the new Samsung tablets are a nice size at 10.1 inches and have a stylus. There are keyboard attachments and docks for them as well. Battery life is solid, and distribution and noticeability are taken quite good care of in these environments. In my opinion, if done right, I think tablets will over time revolutionize computing even further as developers begin to really push what the platforms can do. I think it will just take a clear mindset.

Web:
Not much for me to say here. Cloud services and software as a service are beginning to become very common. I however feel the development ecosystem is quite poor. JavaScript, CSS, HTML, backend service programming. It is really a mess and needs some consolidation if it is ever going to become the norm. The technology is just very convoluted on the frontend side and could really use some love.

Conclusions:
My conclusion is heavily skewed towards the tablet. For the longest time I just did not really see their advantages, as I never owned one nor did I care to have one. By chance I ended up getting my hands on a Samsung Note 10.1 32GB device and I am hooked. I am already finding this device quite useful and I can really see the potential these devices can have. I think I found my platform for development. From what I have experienced thus far the Android development ecosystem is quite nice and relatively easy to dive into with a little guidance. Let's see where this tablet device can take me.

blewisjr


Current OpenGL Progress and other stuff

Well, my current progress on learning OpenGL is that I have gone nowhere. Essentially it is at a standstill. There are a few reasons for this.

First reason is procrastination.
Second reason is more or less the cause of the First.

Right now I am really tied up with another project that I am trying to get off the ground. The project is a web development project using Java EE which really needs to get moving. Essentially this project is meant to make me money down the road when it is finished, as a sort of corporate startup endeavor. This is not the typical hipster startup trend BS that is all over the place. A friend of mine and I have wanted to start a software company for a long time and this is the project that could get it off its feet. Essentially it is a web tool for small businesses that allows for order processing, inventory tracking, etc., which is run for them off site in a cloud-like setting, and they can use the web frontend or the thin client to work with the system. It is hard to really explain the software unless you have used some of the systems already out there that make you want to blow your brains out because of how dysfunctional they really are.

So right now I have been brushing up on my Java and crash coursing some Java EE to give me a base to work off of. Believe it or not, everything people say is false. Java is actually quite an awesome language and it is really fast. It can get a bit verbose at times, but it is very nice to work with and I am coming to the point where I am growing quite fond of it again. I have not touched Java since version 1.5, and Java was the second programming language I learned after Visual Basic 5 and before C.

If and when I get a chance to spend some time learning OpenGL I might try porting the Arcsynthesis tutorials to Java with LWJGL because quite honestly the SuperBible 5th edition is a sad excuse for a book from what I experienced so far.

blewisjr


Microsoft what are you doing?

Interesting state of affairs I have come across today. So lets just get into it and try to be short and sweet.

Today I have been doing some research on graphics APIs. For the longest time I have been wanting to move to the 3D end of computer graphics. As everyone knows there are 2 APIs for this: D3D and OpenGL. I don't really want to get into flame wars over the two APIs because it really does not matter; they both do the same thing in different ways.

So ultimately the choice I made after my research was to use D3D. The reasoning behind this was the superior quality of Luna's books over the OpenGL SuperBible. Luna really gets into interesting stuff like water rendering examples and terrain rendering examples, where the SuperBible spends the entire book rendering teapots. This is not really an issue, but the state of the book is rather lacking due to the fact that so many pages are wasted on his pre-canned fixed-function API instead of just getting down to the nitty gritty. I am not a fan of the beat-around-the-bush style and prefer the jump-right-in mentality. I am a competent programmer; there is no need for the wrapper API, it is just extra dead trees. So this is the main reasoning behind the D3D choice: just the sheer quality of resources available.

Then I came across the current Microsoft debacle. Not sure what they are thinking. First off, yes I am running Windows 8 and I really love it. Nice and easy to use once you get used to it, and I like the clean style it presents. I think the new version of Visual Studio could use some UI work but who cares. The real issue comes into play with the Express 2012 edition, because I don't have $500 to drop on an IDE. Actually I prefer no IDE, but again that is another gripe. When Microsoft moved the D3D SDK into the Windows 8 SDK they removed some API functionality (not a big deal), but they also removed PIX. They rolled PIX and the shader debugger into Visual Studio and made it only available in the Pro and up versions. NOT COOL. NOT COOL AT ALL. Not only this but on top of it they removed the command line compilers.
So in order to get those you need to install Visual Studio first.

So basically they want me to use the IDE, or at least install it, and they removed the standalone debuggers, meaning I can't properly debug shaders as I am learning unless I shell out $500. Not cool, again not at all.

So right now I am leaning towards having to use OpenGL and avoiding potential Windows 8 store development just so that I can properly adapt my work flow to the standalone tools they provide.

Not sure what Microsoft is thinking here, but it really feels like they are trying to alienate the indie style of development for the sake of a few bucks. I really wish they still had the $100 standard edition SKU; I would buy it in a heartbeat if it got me the tools they took away.

Sorry for the little rant not usually like me at all.

If anyone knows about any potential work arounds (NOT PIRACY I HATE PIRACY) feel free to clue me in.

blewisjr


On IDE's and Editors

The development environment predicament has been an ongoing thing with developers for years. Constant arguments over the smallest things such as programming languages, version control tools, even the great editor wars. I find it quite intriguing how much developers really like to argue over petty things, as it can be quite amusing to read many of the baseless arguments. For me personally, choosing many of these items has never been difficult except for one: the editing environment. That is what this entry is about, trying to make sense of it all.

When I develop code I want to be productive. I think this is the case for everyone. Through the years the one thing I noticed is that the IDE or editor you are using can have a huge effect on productivity. Not in the sense of tasks being difficult, but in the sense of not interrupting the stream of thought you are trying to put into code. For me personally one of the worst things ever is to be working on an algorithm and realize you made a mistake 10 lines up, having to go back and fix it, then go back and start working again. Each environment out there, be it an IDE or an editor, has specific features to help combat this for the most part I would think, but do not hold me to it.

The IDE is the modern editor of the day. It contains quick ways to refactor large blocks of code, code completion through syntax analysis and parsing, integrates all the tools you need, and best of all, graphical debugging representations of what you are working on. There is more than just this, but it is a solid sampling of features. The key word here is integrated: everything is there and often works with very little configuration. In my experience however the biggest downfall of the IDE is the editor. When you make that critical mistake you need to stop typing, grab the mouse, fix your mistake, and then go back to working again. The other issue is the fact that many of these features often may not have some sort of quick keybinding, causing you to have to go through the menu systems with the mouse yet again. Sure, the most commonly used features have keybindings and I am sure you have them memorized, but it is the odd things that are less common, which you happen to use more often than others, that hurt. One such example could be the selection of text. You usually have 2 options: either the mouse to select the block of code or shift+arrow key. This is awkward.

On the editor front you have dumb editors and smart editors. Most use old smart editors like Emacs or Vim. These lack many of the fancy IDE features, and if they do have a plugin for one, odds are it is not as good. The one place they excel however is editing text. When editing text even a novice with very little experience can really reap benefits. For example, I have been experimenting with Emacs for a few days now and man, I feel productive editing text. Moving around by characters, words, lines, sentences and rapid selection is just awesome. Want to select a line of code 10 lines up from the cursor? Easy... C-u 10 C-p, C-Space C-e then DEL, or if you want to cut it, C-u 10 C-p, C-k. I think one of the most powerful features here is setting the "mark". You can set a mark with C-Space C-Space, then move and make your edit, then use C-u C-Space to jump immediately back to where you previously were. I think the overall benefit of these features is to minimize the amount of thought interruption you have when you need to jump and make an edit. No need to grab the mouse and move the cursor.

I am not sure what I appreciate more when I am writing code: massive integration with some powerful features, or just a great editing environment that minimizes interruption. Could code completion and refactoring really make you productive enough to sacrifice the power you get from some of these smart text editors? I find myself making lots of small edits in code rather than massive refactors, so something like Emacs makes me personally feel really productive. So it comes down to: is sacrificing the editor worth graphical debugging tools? I have no idea either way; with embedded development you are often looking at hex and binary values as well as assembly code all the time, so no GUI debugger really makes that look much better.

So my ultimate question is why can't we have an IDE with an amazingly powerful editor? The best of both worlds, without it being a hacked plugin that does not really work like the editor it is trying to emulate in the IDE.

Even after writing this out I still do not know what direction to go. I was hoping the post would clear my mind a bit and help me logically lay out what I appreciate in an editor. I guess the issue is I appreciate the features both offer and I want both, but nothing gives me both. I am not sure I have the time or energy to develop a new IDE from scratch that works how I want it to work. Eclipse is a huge mess and I doubt I can write a new editor component for it to emulate, say, Emacs. Ultimately all I want is an environment that understands my code and has a really powerful editor to minimize my line of thought breakage, and nothing does exactly that.

What is your take on this leave it in the comments I enjoy reading what other people think and what their experiences are like with odd topics like this. Oh and no flame wars :D

blewisjr


Getting back into GameDev (for real this time) need some input

Hey everyone, finally got all those W8 issues sorted out. Had to do lots of patching for VS2008 for school and had to disable secure boot etc. to get the UEFI layer to allow the video card to function, but all is set up and good to go.

Over the past few days I have been really pondering various aspects of my hobbies. Micro electronics is really cool, but it does not seem to give me the satisfaction I originally intended from it. My goal with the micro electronics was to learn development at a very low level. Through various projects and experiments I realized it is really much more about the hardware design and circuits than it is about the low level development. The thing is, programming a micro controller really is not programming per se; in most common applications it is mostly configuring the internal hardware of the chip to act on various sensory data. The most programming you do is setting some bits and possibly performing a few calculations, and that is about it. At least for the cases of what I have been capable of, as I am severely limited when it comes to circuitry knowledge. Even from a robotics level it really is nothing more than acting upon the various sensory inputs to make various decisions about motor speed and direction.

So through much thought and pondering I think for sure I am going to be getting back into game development as I am not quite sure what I can obtain from micro electronics but I do know what I can obtain from a knowledge perspective with games that could be useful in other applications like data modeling.

Right now I already have 2 game ideas I want to work on that have really been poking and prodding at my brain for the last few years now. One such game should be simple to implement and the other should be a good stepping stone from the first. Both games are 2D as I feel when getting back into this I should start from 2D and once I have the 2 games under my belt then I can consider the move to 3D if it still feels feasible at the time.

In the case of target platform I am not sure at the moment, and this will directly affect various technology choices I will have to make. If I choose the PC my options are limitless; however, if I choose mobile I would have to target Windows Phone 8 as that is the device I own currently, which I think would be awesome as it is really a great mobile platform. The issue with Windows Phone 8 is I would probably need to find a target API, because XNA does not work on Windows Phone 8 to my knowledge, so I would need to aim at an API which covers the platform like MonoGame or DirectX.

Also I think there is a fee in order to utilize anything other than the simulator to develop for the Windows store, but the advantage with targeting this is I can target both Windows 8 and the phone. So this may be the route I take, not sure yet.

Any input is appreciated here as I am out of my zone on the Metro/Phone platform as I have always been mainly a C Developer.

So the question is out: what do you guys think? Would it be a good choice to target Windows Phone 8? There is a huge open market on these devices for sure, and it may open options to publish on Xbox in the future as well as Windows 8 itself. If so, what technologies should I be looking at?

If I do not see too many comments here I will post up in the forums I have been out of the game dev scene for years.

blewisjr


Piecing together a development environment

It has been a while since my last post, for good reason: I have been mighty busy. Now that things have settled down I have finally gotten the chance to start to piece together my embedded development environment. Embedded development is quite an interesting beast in that many of the development concepts are quite behind standard desktop development. Overall I have come to believe it is this way because, quite honestly, embedded development is incredibly low level. There are really no huge APIs in existence because abstractions really do not help with portability; no matter how well abstracted you are, you still need heavy modifications for cross target support due to various CPU and peripheral features being located at different memory addresses and so on. So in this respect I think there was never really a need to build massively robust software tools to develop on typical 8 bit and 32 bit micros.

So my particular development platform of choice is my new MacBook Pro. This machine is amazing, quite a beast. The reason I chose a Mac over a PC with Windows is quite simple. Despite Windows having quite a following in the IDE department for embedded development, Windows is still a very gimped platform. Every embedded toolchain for instance uses makefiles under the hood, and these are GNU makefiles running on GNU make. The various IDE vendors ported make over to Windows themselves and distribute it with the IDE. This actually makes the build process quite slow because make was really designed around POSIX. As I said previously, embedded development still uses quite a few old concepts, and the main reasoning behind this is the arcane architectures and the need to be able to select where code goes in memory; it just so happens that GCC, make, and linker files are still the best way to do this. So my main reason for choosing the Mac was the "It Just Works" system with the strong UNIX core that provides POSIX features and a powerful terminal with bash. It really is a win-win, as you no longer have to worry about crap breaking, not working at all, or the various hardware incompatibilities that come with Linux, which is getting better but still horrible.

So now that the machine is out of the way we need tools to use. The first obvious tool you need is a GCC cross compiler for ARM. For those that do not know, a cross compiler is a compiler that runs on one system type, say a PC, but instead of generating machine code for that machine it generates machine code for a different architecture, which is what allows you to do embedded development at all. Without cross compilers you would never really be able to develop for these small chips, as you typically can't run a PC-like OS on the chip to compile your code. Setting it up is a simple task: the compiler set includes everything you need like GDB, GCC, G++, the linker, the assembler, etc. All you do is download it, extract it to a directory, add the compiler to your path, and you are done.
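Just to give a feel for what the toolchain looks like in use once it is on your path, here is a rough sketch of compiling and linking a small bare-metal program by hand. The CPU flags, the stm32f4.ld linker script name, and the file names are illustrative assumptions and will differ per chip and per project:

[source]
# Assumes the arm-none-eabi toolchain was extracted to ~/toolchains and added to PATH
export PATH=$HOME/toolchains/gcc-arm-none-eabi/bin:$PATH

# Compile for a Cortex-M4 part (the -mcpu value and linker script depend on the chip)
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -O2 -c main.c -o main.o
arm-none-eabi-gcc -mcpu=cortex-m4 -mthumb -nostartfiles -T stm32f4.ld main.o -o firmware.elf

# Produce a raw binary image suitable for burning into flash
arm-none-eabi-objcopy -O binary firmware.elf firmware.bin
[/source]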

The next task is a GDB server for GDB to connect to for remote debugging. In order to debug hardware related code it needs to run on the hardware. You also need to be able to get the binary burned into the chip's memory. Most ARM development boards come with a programming/debugging module on them already. This module can typically burn the chip on the development half of the board or also burn an external chip via certain pin hookups. Still, to operate these features you need another piece of software. In my case, for maximum compatibility and to be able to use the same tool for possibly different chips, I chose OpenOCD. On Linux/Mac/Windows OpenOCD needs to be compiled. There are sites that provide binaries for Windows, but this often is not needed because the vendor usually has a tool ready for Windows. On Linux/Mac, OpenOCD or a tool someone else wrote, like stlink made by an ST employee, is required. On the Mac, OpenOCD can be taken care of quickly with the Homebrew package utility. This provides not only a debugging server but also an interface to burn your code to the chip.
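For what it is worth, the typical round trip once OpenOCD is installed looks roughly like the sketch below. The board config file and the firmware.elf name are assumptions; the config you pass depends entirely on which debug adapter and chip you have:

[source]
# Install OpenOCD through Homebrew on the Mac
brew install openocd

# Start the GDB server / flashing interface (pick the config for your board)
openocd -f board/stm32f4discovery.cfg

# In a second terminal: connect GDB to the server, burn the program, and run it
arm-none-eabi-gdb firmware.elf
(gdb) target extended-remote localhost:3333
(gdb) monitor reset halt
(gdb) load
(gdb) continue
[/source]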

Overall, that is all that is needed besides driver code like CMSIS or vendor supplied libraries. When I say driver code it is not what people usually think of as a driver. All the driver code really is, is a set of source files and headers which pre-map the peripheral and CPU memory addresses for the chip in question. Think of it more like a very tiny and low level API. Then you need the programming manuals, reference manuals, and datasheets.

As for IDEs, on Windows there are tons of choices. Many are quite expensive but there are a few free ones that work relatively well. On any platform you can easily use Eclipse with CDT and maybe 1 or 2 embedded plugins to handle this. Then there is always the non-IDE route using a text editor like Emacs or Vim. This is a decent option considering you are not really working with large and confusing APIs like you would be in C++, Java, or C#. The APIs are very slim so "intellisense" is not paramount. I have not chosen what I am going to use on this front quite yet. Like always there are heated debates in this camp, some saying Eclipse is the way to go and others saying Vim and Emacs are the way to go because you should know how your tools work for when stuff breaks.

I am not much for heated debates so I will figure out what I want to do here I will probably end up going with Eclipse because quite honestly I hate having to configure every little tiny piece of my editors.

That is all for now have fun and write awesome code.

blewisjr


Ship done!!!

Here is the ship sprite I will be using in Orbis. Took about 2 mins to toss it together, but it still looks rather nifty; hopefully it will work as intended :P


blewisjr


New Blog!

Hey GDNet,

My new blog is up and running now and I have gotten my first post up. Nothing really interesting just a Welcome post.
There is much more content currently in the pipeline and I think it will really come into being as its own little side project for me.

I will be embarking on my first solo PIC uC project which will be open source and of course it will be documented at my new blog.

If anyone around here is truly interested in where I am going with my development goals feel free to stop by regularly and drop some comments.
It is always good to know if people are reading.

http://www.partsaneprog.com

Hope to cya there and in the GDNet forums from time to time. Peace.

blewisjr


A bit about my game and some slow progress

Hello Everyone,

I feel it is time for some updates on my game as I really did not say much about it. So I would like to introduce you to the concept of a game I have been wanting to make for years. The game is called Orbis. The general idea behind the game is Asteroids with a twist.
So ultimately I will be making an Asteroids clone with a few twists to spice up an old game I used to love to play at the arcades or even on the Atari!!!!
I am not sure if I am ready to really detail out all the features quite yet as I am not sure exactly what will make it into the game just yet. So we will leave it at Asteroids with a twist for now till I flesh out more of the concepts.

I also decided to make some tool changes for the game. I decided I would stay with C++, even though after my first foray back into C++ I wanted to scream back to C. Ultimately I ditched Qt Creator and MinGW. For some reason I was having issues with MinGW on Windows 8, so I decided to install Visual Studio 2013 Express for Windows Desktop. I must say I am really impressed. I also decided to stick with SFML. To use SFML with VS2013 I needed to rebuild the library, and building SFML 2.1 did not work out too well, so I ended up going with the Git repo and building from there. So far so good. So here is what my new environment looks like.

Visual Studio 2013 Express Windows Desktop
CMake 2.8.12.1
SFML (master)
Git Version Control (on BitBucket)

I am still using CMake because if I do decide to build the game on Linux for testing on my laptop CMake will save my life. So right now I use CMake to generate the Visual Studio projects and work from there. Not pretty but saves tons of headaches. Visual Studio leaves me out of my comfort zone as I am not a huge IDE fan period but we will see where this setup takes me.
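For reference, the generate step is nothing fancy; it is roughly the following from an out-of-source build directory. The generator name here matches CMake 2.8.12's naming for VS 2013, so treat it as an assumption if your version differs:

[source]
# From the project root, keep the generated files out of the source tree
mkdir build
cd build

# Generate a Visual Studio 2013 solution ("Visual Studio 12" is the 2.8.12-era generator name)
cmake -G "Visual Studio 12" ..

# On the Linux laptop the same tree can produce plain Makefiles instead
cmake -G "Unix Makefiles" ..
[/source]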

Now a bit on the progress. Not much honestly. Much of my time is taken up by school and on top of it I am trying to get back into the groove of C++ after spending a few years in the world of C. So bear with me we will get there.

The first task I really wanted to get done was to make sure SFML actually worked, and it did. From there I felt the most important thing I should get out of the way is resource management, because this is something I really can't have a game without. Sadly this was probably not the best place to start when I am trying to get my C++ groove back, but nonetheless I think I was successful. My goal here was to put together a cache for my resources. This will be the core of ensuring all resources are properly freed up when no longer needed, and will also be the core of my TextureAtlas system, which is what I will be building next. I really needed this to be generic because SFML has many types of resources. So this resource cache is built to handle sf::Image, sf::Texture, sf::Font, and sf::Shader. There may be a few more, but this is what I can think of off the top of my head. It will not handle music because sf::Music handles everything very differently, so I will need to take a different approach for music.

I also wanted to ensure that the memory of the cache was handled automatically. Since I am not in the world of C and the fun void* generic programming world I figured I might as well try to use some C++11.

So my first foray into C++ after years and years of not touching it includes Templates, and some C++11. In other words AHHHH MY EYES!!!!
Sorry for no comments, but here is the code I came up with, using unique_ptr for the resource, which gets stored in a map. The actual key to the map will be implemented as an enum elsewhere so I can index into the cache to get what is needed. There are 4 methods: 2 load_resource methods and 2 get_resource methods. There is no way to remove a resource at this point, as I am not sure I need it yet for this game at least.
One load_resource takes care of the basic loadFromFile. sf::Shader takes an extra param and so can sf::Texture, so the overloaded load_resource takes care of that. get_resource just returns the resource, and there is an overloaded version to be called in case the cache is const.

Again sorry for no comments; I feel the code is simple enough to not need any.

#ifndef RESOURCECACHE_H
#define RESOURCECACHE_H

#include <map>
#include <memory>
#include <string>
#include <stdexcept>

template <typename Resource, typename ResourceID>
class ResourceCache
{
public:
    void load_resource(ResourceID id, const std::string& file);

    template <typename Parameter>
    void load_resource(ResourceID id, const std::string& file, const Parameter& parameter);

    Resource& get_resource(ResourceID);
    const Resource& get_resource(ResourceID) const;

private:
    std::map<ResourceID, std::unique_ptr<Resource>> resources;
};

template <typename Resource, typename ResourceID>
void ResourceCache<Resource, ResourceID>::load_resource(ResourceID id, const std::string& file)
{
    std::unique_ptr<Resource> resource(new Resource());
    if (!resource->loadFromFile(file))
        throw std::runtime_error("ResourceCache::load_resource: Failed to load (" + file + ")");

    resources.insert(std::make_pair(id, std::move(resource)));
}

template <typename Resource, typename ResourceID>
template <typename Parameter>
void ResourceCache<Resource, ResourceID>::load_resource(ResourceID id, const std::string& file, const Parameter& parameter)
{
    std::unique_ptr<Resource> resource(new Resource());
    if (!resource->loadFromFile(file, parameter))
        throw std::runtime_error("ResourceCache::load_resource: Failed to load (" + file + ")");

    resources.insert(std::make_pair(id, std::move(resource)));
}

template <typename Resource, typename ResourceID>
Resource& ResourceCache<Resource, ResourceID>::get_resource(ResourceID id)
{
    auto resource = resources.find(id);
    return *resource->second;
}

template <typename Resource, typename ResourceID>
const Resource& ResourceCache<Resource, ResourceID>::get_resource(ResourceID id) const
{
    auto resource = resources.find(id);
    return *resource->second;
}

#endif
Here is the main.cpp file which I used as my functional test, so you can see it in use.

#include <SFML/Graphics.hpp>
#include "ResourceCache.h"

enum TextureID
{
    Background
};

int main()
{
    sf::RenderWindow window(sf::VideoMode(250, 187), "SFML Works!");

    ResourceCache<sf::Texture, TextureID> TextureCache;
    TextureCache.load_resource(TextureID::Background, "./Debug/background.png");

    sf::Texture bkg = TextureCache.get_resource(TextureID::Background);
    sf::Sprite bkg_sprite(bkg);

    while (window.isOpen())
    {
        sf::Event event;
        while (window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }

        window.clear();
        window.draw(bkg_sprite);
        window.display();
    }

    return 0;
}
Like stated this is my first foray back into C++ so feel free to let me know if you see anything obviously wrong with the ResourceCache class. Much appreciated in advance.

Until Next Time.

blewisjr


Wow Long Time

Holy crap, has it been a long time since I posted here. I have been so tied up with school and work that I kind of just fell off the face of the earth, being totally swamped with no real time to do much of anything.

I just recently got back into doing some programming due to school. Partially because of the nature of the class, and partially because of me being as lazy as I could possibly be and not wanting to go through all the repetitive steps.

Right now I am taking a statistics class, and calculating all of the probability stuff can get very, very long and repetitive when finding the various different answers. For instance, when finding the binomial probability of a range of numbers in a set you might have to calculate 12 different binomial probabilities and then add them together, so you can then calculate the complement of that probability to find the other side of the range of numbers. It is just way too repetitive for my liking.

The advantage of this is it really re-kindled my love of the Python language. I just wish the language was a bit more useful for game development sadly. The performance hits are just way too high when you progress onto 3D.

After I finished my homework I decided to do a comparison of the Python and C++ code required for calculating the binomial probability of a number in a set. This is the overall gist of the post, because it is really amazing to see the difference in the code of two examples of the same program, and it is simple enough to really demonstrate both in a reasonable amount of time. The interesting thing here is that from an outside perspective, running both, they appear to run instantaneously with no performance difference at all. So here is the code; by the way, it is indeed a night and day difference in readability and understandability.

Python (2.7.3)

def factorial(n):
    if n < 1:
        n = 1
    return 1 if n == 1 else n * factorial(n - 1)

def computeBinomialProb(n, p, x):
    nCx = (factorial(n) / (factorial(n - x) * factorial(x)))
    px = p ** x
    q = float(1 - p)
    qnMinx = q ** (n - x)
    return nCx * px * qnMinx

if __name__ == '__main__':
    n = float(raw_input("Value of n?:"))
    p = float(raw_input("Value of p?:"))
    x = float(raw_input("Value of x?:"))
    print "result = ", computeBinomialProb(n, p, x)


C++

#include <iostream>
#include <cmath>

int factorial(int n)
{
    if (n < 1) n = 1;
    return (n == 1 ? 1 : n * factorial(n - 1));
}

float computeBinomialProb(float n, float p, float x)
{
    float nCx = (factorial(n) / (factorial(n - x) * factorial(x)));
    float px = std::pow(p, x);
    float q = (1 - p);
    float qnMinx = std::pow(q, (n - x));
    return nCx * px * qnMinx;
}

int main()
{
    float n = 0.0;
    float p = 0.0;
    float x = 0.0;
    float result = 0.0;

    std::cout << "Value of n?:";
    std::cin >> n;
    std::cout << "Value of p?:";
    std::cin >> p;
    std::cout << "Value of x?:";
    std::cin >> x;

    result = computeBinomialProb(float(n), float(p), float(x));
    std::cout << "result = " << result << std::endl;

    return 0;
}


Sorry for no syntax highlighting I forget how to do this.
The biggest thing you can notice is that in Python you don't need all the type information, which allows for really easy and quick variable declarations and actually slims the code down quite a bit. Another thing to notice is you can prompt and gather information in one go with the Python, where in C++ you need to use two different streams to do so. I think the Python is much more readable, but the C++ is quite crisp as well.

blewisjr


Farewell GDNet.

I figured before I bow out of this excellent community I would give an appropriate farewell. I have been a member of this site since 2006 and enjoyed a lot of moments here. But it is time to move on. The first thing bringing me to this conclusion is the lack of passion I have for game development anymore. I have fallen into a world where I am more enthused learning the inner workings of various data structures and languages, as well as security and computer architecture. These are things that will always give me something to strive for that does not involve needing art or music or even game play. The next thing bringing me to this conclusion is that, although I truly love the new site design, the overall community has taken quite an awkward turn from what I grew up in. This site shaped me as an effective programmer and problem solver. This was because I knew when to ask the proper questions to receive the right answer. Those kinds of questions seem to have long expired on this site. Despite the few great questions and discussions, most questions anymore fall into the answer of Learn to Google or RTFM. This has become the norm because a lot of questions out there are horribly thought out and downright inept.

I miss the days of the old GDNet where we would break threads because of truly great intellectual discussions. I also miss the days of the great news posts written by our own, which are now nothing more than ad blurbs. But most of all I miss the debates about implementation and algorithms. These days are gone, and it is because there is a new generation of wannabe game developers that have less prior experience and less enthusiasm for self research.

I have a huge passion for learning. I force myself to try and tackle the same problem in different ways even if it means diving into the inner workings and re inventing the wheel because of this passion. For every answer I receive I have to know WHY it is that way and WHY it works that way and then HOW that conclusion is reached. This is the very essence that has left the community.

Thank you for all of the help and great discussions maybe I will poke in here and there but I am moving on to a place where I can dig a hole of learning that will never end.

If anyone out there has enjoyed my twisted musings of doing things the hard way on purpose and other general really geek oriented thoughts and questions. Stop by my new blog at http://partsaneprog.blogspot.com/ There is nothing up yet but keep checking in I am working on something totally crazy atm.

blewisjr


The Mosin Nagant is here

As I promised, the Mosin Nagant has arrived. The Mosin I have received is a 1942 Izhevsk 91/30. I think it would be best to give some background before the pictures.

The Mosin Nagant was originally designed by the Russians in 1891. The approximate pronunciation of Mosin Nagant is (Moseen Nahgahn) due to the Russians emphasizing vowels over consonants. Over the years they made some modifications to the rifle; the most obvious modification was the switch from a hex to a round receiver to produce more accuracy. My particular year is a very interesting year for the Russians. In 1942 the Russians were in some very heated and significant battles to protect their homeland from the seemingly unstoppable German war machine. One such example was Stalingrad, which everyone here should know about. This meant the Russians were in a tight bind and really needed to get more weaponry out to the Soviet soldiers, so often the refurb process in the arsenals was quick and half-assed so to speak in order to get the rifle out in the field. In 1942 the Mosin Nagant was still a mainstay weapon for the Russians due to their lack of an efficient assault rifle. This meant they suffered in medium range combat, as their only other weapons were really the PPSh submachine gun and some shovels and grenades.

The Mosin Nagant was a top notch rifle and very rugged. Accuracy was a key point in designing the Model 91/30 and other models, as they sport a whopping 28 3/4" barrel, or larger in some early models. They were designed and sighted in to use the bayonet all the time, as it was Soviet doctrine to never remove the bayonet. The most accurate 91/30s were hand picked and retrofitted with a bent bolt and often a PU scope or some other model scope for the snipers. The 91/30 was used as the Russian sniper rifle all the way up to the Cold War, when they designed the Dragunov sniper rifle based off the AK-47. Even during the post war period, up to and including the Cold War, Mosin Nagants were still in use and manufactured, but in a carbine form known as the M44. Numerous other countries also used the Mosin, as many of them were part of the Soviet Bloc at some point or another, including Poland, Hungary, Finland, and Bulgaria. Many other countries outside the Bloc used them as well, including China and the Viet Cong. Even today there have been reports of insurgent forces in Iraq and Afghanistan using Mosin Nagant rifles.

As stated above, the rifle was designed for accuracy. The 7.62x54R was designed as a high velocity cartridge. To give some perspective, consider ballistic tests of Russian surplus 148gr LPS ammunition, which is a light ball ammo with a steel core instead of lead. The muzzle velocity (this is as the bullet leaves the barrel, aka 0 yards) sits around 2800+ feet per second. The impact energy under 50 yards sits around 2800 foot-pounds. With the right load configuration this rifle can push over 3000 feet per second. For those who do not know, velocity and twist rate really decide the accuracy of the rifle from a ballistic perspective. These rifles can easily hit out to 1000 meters if needed.

Ok, now more about my rifle. My rifle was manufactured in 1942 by the Izhevsk arsenal in Soviet Russia. This is a wartime rifle in a wartime stock, meaning the stock was not replaced post war. The rifle has been refinished by a Soviet arsenal even though it appears that the refinishing stamps are missing; however, this is normal, they forgot this stuff all the time. The rifle is also what is known as all matching numbers. This means the serial numbers on all the parts match, which is good. I am 99% sure the rifle was force matched, which is well known for military surplus, as the fonts look slightly different on the stamps. There are no lineouts on the old serial numbers; they were probably totally ground off and then re-stamped. There is lots of black paint on the rifle as well, which was common to hide the rushed bluing jobs and light pitting. One thing you will also notice is an amazing stock repair job done by the Russians on the front of the stock. When it was done I do not know, but it really adds to the unique character and history of the rifle.

The best part of this rifle is the fact that it is one heck of a good shooter. Had her down the range and it still functions great. The trigger does take some getting used to; I estimate the trigger pull is around 8 - 9 lbs, possibly 10 lbs. I would estimate the rifle weighs in at about 12 - 13 lbs or so.

As promised here are some pictures. Due to there being some 18 pictures or so I will just post the link to the album and you can check out a piece of history. http://s752.photobucket.com/user/blewisjr86/media/DSC_0001_zpsfbd2b09e.jpg.html?sort=9&o=0

blewisjr


Small Update

Hey GDNet

This is just a quick update on where things stand.

First sorry for not posting more PIC journal entries. There are two main reasons for this.
The first reason is that after working my way through a majority of the tutorials, I feel PIC is not quite the right micro controller for me. It is a great micro controller, don't get me wrong, and I would not hesitate to use it in a personal project, but there are a few issues that led me to this decision. The first issue is the development tools. They are rather bad. The MPLABX IDE is based off NetBeans. This is not an issue in itself, but their plugins are rather buggy. The first issue with the IDE is getting it to actually interface with the MCU without getting yelled at, like in my first HelloWorld post. The next issue is the in circuit debugger, ugh. When having issues and trying to debug the application, half the time the debugger just did not work!!!! There are also no options or functions to power the device without programming it. This is rather icky because if you want to run the application you already burned into the chip you need to reburn the program or actually use external power. I don't like this because the nature of flash memory on MCUs is that you can only burn the chip so many times before it dies.

Next is the state of quality C compilers. Without a doubt I want to use C to program these after learning to understand the architecture through assembly. The issue with PIC is the compilers are not free. XC8, which is the 8 bit compiler, is $500, which is not bad by embedded compiler standards; however, it is only for 8 bit, and if you want 16 bit and 32 bit they are $500 each as well. Quite pricey. There are free versions of the compiler available, but the optimization is horrible, often generating hex files double the size of just using raw assembler. So this means if you want to fit a slightly more complex application written for PIC in the 14 kb of flash you have, you need to either A. meticulously code your C to try and force the compiler to generate halfway decent assembler and then inline ASM code to shave bytes just to get the size reasonable enough to fit on the chip; or B. dump the hard cash and get a proper compiler that does its job.

So I decided to switch to AVR chips. I picked up an Arduino pack today. The benefit of this is you get a fully optimized C compiler based off of GCC for free, which can not only program the Arduino with its custom API but can also be used to code for raw AVR chips later down the road. You can also use these tools to code assembly for both Arduino dev boards and raw AVR chips. Secondly, you have 2 IDEs, both free: the first is the Arduino IDE, but there is also AVR Studio 6, which is also free and built using Microsoft's Visual Studio shell for making your own IDEs. So you get the full benefits of Visual Studio 2010, plugins and all, for Atmel AVR and ARM chips. This is a win-win. Solid development tools all around with no restrictions on your capabilities.
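To give a rough idea of what the raw AVR side of that toolchain looks like from the command line, here is a sketch; the chip, serial port, and programmer options are assumptions and will vary with the board:

[source]
# Compile for an ATmega328P (the chip on most Arduino Uno style boards)
avr-gcc -mmcu=atmega328p -Os -o blink.elf blink.c
avr-objcopy -O ihex blink.elf blink.hex

# Burn the hex file through the Arduino bootloader with avrdude
avrdude -c arduino -p atmega328p -P /dev/ttyACM0 -b 115200 -U flash:w:blink.hex:i
[/source]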

The second reason I have not been posting is that I am in the process of setting up an external blog. I have not really been doing game development for quite a long time. I feel really out of place posting this micro electronics stuff here, and I feel many people won't read it or just don't have an interest in it. So I will be moving on and getting my own blog going for my new hobby of interest, and hopefully build a little bit of a following.

That is all for now quite busy I need to get in contact with my hosting provider for verification stuff. See you on the flip side.

blewisjr


Linux saves my day again

Why hello there GDNet. Once again the odd ball me gets to share something that not many GDNet people get to experience all that often. The topic of today has to do with how Linux has saved the day for me. I am sure many people here already know I am a very avid Linux user. I don't have anything against Windows, I have that too after all. I like to play AAA games, and to do that effectively I just dual boot. Despite this I still do a majority of my work under Linux. This is because I find it very, very productive. I find the POSIX interface to be a life saver in many circumstances, like the one I will explain today.

First, as you may know if you read my blog, I am in the process of learning OpenGL. This is a huge step for me because I have been working with 2D for way too long. I feel this is the next logical step for me. To do this I am using the OpenGL SuperBible 5th edition, which covers the GL 3 core profile. During the book the authors ease you into OpenGL by introducing concepts through the library they developed called GLTools. As you progress through the book they start to strip away GLTools so they can introduce you to each concept a little at a time.

Now the problem. I need to set up this library on Linux. The first issue was getting the code. As mentioned in my last blurb on the blog, I was using git-svn to pull down the SVN repo. This took forever, about 20 min or so. For such a small amount of code this was shocking. I realized later the slowness was compounded because, even though SVN is slow on its own, git also had to rebuild the entire repo. Oh well, task one done.

Now task two: I need to build the GLTools library and the Glew library. So I navigate into the repo and stop dead. Wait a second, there is no Makefile. So I shoot back to the Linux directory and look there. Wait, no Makefile. They had Makefiles for every project but none for GLTools/Glew. Then I saw it. They were building GLTools and Glew for every project and storing the libs local to the project. Ew. So now I need to write a new Makefile for this stuff.

Step 3: ok, so I fire up Emacs and hack up a Makefile. Once it is done I type make all, it starts, and KABOOM. Cannot find header glew.h. WTF. So at this point nothing built, because GLTools uses Glew to fire up the extensions required for the OGL 3 core profile. So I navigate up and see that the glew.h file is present, so I go and look at my Makefile and see if I made a mistake. I did not. Turns out in the ifdef preprocessor for Linux they are including <glew.h> instead of <GL/glew.h>, which is where they had the file stored. So I moved the header and tried again. KABOOM, can't find glew.h. Turns out Glew looks for glew.h in the GL directory. Oh bugger. Now how are we going to fix this? Before we get into that, here is the Makefile if anyone else actually needs to go through this.

[source]
# Compiles the OpenGL SuperBible 5th Edition GLTools Library including
# the Glew library.

# Below are project specific variables to setup the proper directories
# to pass to the compiler and linker.
GLTOOLS = libGLTools
GLEW = libglew
SRCPATH = ./src/
INCPATH = -I./include

# Below are variables for the compiler, linker, and also the flags
# pertaining to the compiler and linker.
CXX = g++
CXXFLAGS = $(INCPATH)
AR = ar
ARFLAGS = -rcs

# The actual compilation and linking process goes on down here.

# Compile and link everything
all : $(GLTOOLS) $(GLEW) $(GLTOOLS).a $(GLEW).a

# Basic setup of object file dependencies
GLBatch.o : $(SRCPATH)GLBatch.cpp
GLShaderManager.o : $(SRCPATH)GLShaderManager.cpp
GLTriangleBatch.o : $(SRCPATH)GLTriangleBatch.cpp
GLTools.o : $(SRCPATH)GLTools.cpp
math3d.o : $(SRCPATH)math3d.cpp
glew.o : $(SRCPATH)glew.c

# Compile GLTools
$(GLTOOLS) :
	$(CXX) $(CXXFLAGS) -c $(SRCPATH)*.cpp

# Archive GLTools
$(GLTOOLS).a : GLBatch.o GLShaderManager.o GLTriangleBatch.o GLTools.o math3d.o
	$(AR) $(ARFLAGS) $(GLTOOLS).a *.o

# Compile Glew
$(GLEW) :
	$(CXX) $(CXXFLAGS) -c $(SRCPATH)glew.c

# Archive Glew
$(GLEW).a : glew.o
	$(AR) $(ARFLAGS) $(GLEW).a glew.o

# Cleanup
clean :
	rm -rf *.o
[/source]

Ok, now that this is out of the way, how do we fix it? Well, POSIX + Linux to the rescue. So here is the problem. We have a directory of 11 header files. We do not know which of those header files reference glew.h, because the Makefile is bailing on us before it tries the others, due to the dependencies needed to continue the build. We don't want to open all 11 files in an editor and manually change all of them. For one, we are programmers and programmers are lazy. This is a total waste of time, so let's use the power of our POSIX based command line. BOOYAH. So here is what we need to do. We need to first find all the header files, then we need to search each header file for <glew.h> and replace it with <GL/glew.h>. I know you are asking, how are you going to do that? Well, let me explain. On POSIX based systems each command you use at the terminal has 3 different streams: stdin, stdout, and stderr. The nice thing is, since every command has a proper in, out and err, we can actually by definition in the POSIX standard "pipe" together different commands to transfer the data onto another process. So to do this task there are 2 commands we need. The first is find, which basically reads the specified directory structure and outputs a list of that structure. Then we need a command called sed, which is actually a data stream manipulation command. It basically allows you to hack and modify the data streams to bits. So what we need to do is find the headers and modify them with sed, so that we can make the correction in one swoop without needing to open all the files and type the fixes by hand. Here is how this is done.




find . \( -name GL -prune \) , \( -name '*.h' -type f \) -exec sed -i 's/<glew\.h>/<GL\/glew.h>/g' '{}' +




Basically what is going on here is we are telling find we want all of the header files in the current directory structure, minus the GL directory, and for every file in the list that find provides we hand it to sed, which uses a regex search to find <glew.h> and change it to <GL/glew.h>.

Cool stuff: one line fixes all the appropriate files and boom, make all compiles everything I need. Go Go POSIX and Linux.

blewisjr
