The benefit of hashing is being able to use a practically infinite grid with an arbitrarily small cell size, at negligible performance cost (you compute hashes of positions instead of quantizing them, and discard the rare aliased objects).
With a directly addressed grid you would have to compromise because you cannot afford the memory for enough grid cells, making your universe smaller and/or your grid cells larger than they should be.
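A minimal sketch of the idea, with invented cell size, table size and multiplier constants (the specific numbers are arbitrary choices, not canonical values):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <cstdint>

constexpr float CELL_SIZE = 0.5f;        // arbitrarily small cells are affordable
constexpr std::size_t TABLE_SIZE = 4096; // fixed memory, independent of world size

std::size_t spatialHash(float x, float y, float z) {
    // Quantize to integer cell coordinates (the grid is unbounded)...
    auto cx = static_cast<std::int64_t>(std::floor(x / CELL_SIZE));
    auto cy = static_cast<std::int64_t>(std::floor(y / CELL_SIZE));
    auto cz = static_cast<std::int64_t>(std::floor(z / CELL_SIZE));
    // ...then hash them instead of indexing a dense array. Distant cells
    // may alias to the same bucket; queries simply filter out the rare
    // false positives with an exact position check.
    std::uint64_t h = static_cast<std::uint64_t>(cx) * 73856093ull
                    ^ static_cast<std::uint64_t>(cy) * 19349663ull
                    ^ static_cast<std::uint64_t>(cz) * 83492791ull;
    return static_cast<std::size_t>(h % TABLE_SIZE);
}
```

Positions anywhere in the universe map to one of TABLE_SIZE buckets, so memory cost is decoupled from both world extent and cell size.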
I recommend Bruce Eckel's free-to-download "Thinking in C++" books. Great content, good explanations - deep enough to clarify but not enough to lose focus - and good exercises. Sure, a bit old, but all code works (I ran most examples and solved most exercises with gcc and clang) and it won't stop you from learning C++11 after you've mastered the basics - actually, it will provide you with a solid base for C++11. Read Vol. 1 and do all exercises. Worth every second you spend on it.
It WAS a good book, but C++ has progressed, and now it is a good book about an obsolete language that shouldn't be used. There are, of course, common roots, but why learn (and unlearn) old and unpleasant techniques along with the already complex current state of C++?
This is a tricky legal area. You may not be doing anything illegal if all you do is strictly hack the ROM to make it do what you want, but if you copy its contents (which, BTW, is required to hack the ROM), you're breaking the law.
One could instead distribute a patch that contains no data from the original ROM and requires the original ROM to build the hacked one.
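A minimal sketch of that idea: the distributed patch holds only (offset, replacement byte) pairs and none of the original ROM's bytes; the user supplies their own dump to reconstruct the hacked ROM. Real patch formats (IPS, BPS, xdelta) are more elaborate, but the principle is the same. The types and names here are invented for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// One replacement: "at this offset, write this byte".
// The patch contains no data copied from the original ROM.
struct PatchEntry {
    std::size_t offset;
    std::uint8_t newByte;
};

std::vector<std::uint8_t> applyPatch(std::vector<std::uint8_t> rom,
                                     const std::vector<PatchEntry>& patch) {
    for (const auto& e : patch)
        if (e.offset < rom.size())
            rom[e.offset] = e.newByte;
    return rom;   // the hacked ROM, built from the user's own dump
}
```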
Bleeding cannot depend on damage type only; target type (e.g. a person vs. a robot with electric actuators) and wound location (e.g. flesh vs major blood vessels vs bones) are two equally important factors. Bleeding could also depend non-linearly on damage amount (in the case of people, small and superficial cuts stop bleeding much faster).
Likewise, pain makes serious assumptions about target type and hit location.
Do you really need formal damage types like Concussive, Bludgeon, Puncture, and Incisive if you already have stats which are both more meaningful and more general? Some attacks could use the same standard formulas with different numbers; for example, an almost "incisive/piercing" blade with vicious barbs could be rated 4x pain rather than 2x pain. Vague type names are likely to be of no use to players.
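A hypothetical sketch of bleeding that depends on more than a damage type: target type and hit location scale the effect, and small wounds clot quickly (the non-linear response mentioned above). All names and numbers here are invented for illustration:

```cpp
#include <algorithm>
#include <cassert>

enum class TargetType { Flesh, Robot };
enum class HitLocation { Skin, MajorVessel, Bone };

float bleedRate(float damage, TargetType target, HitLocation loc) {
    if (target == TargetType::Robot)
        return 0.0f;                      // electric actuators don't bleed
    float locFactor = (loc == HitLocation::MajorVessel) ? 3.0f
                    : (loc == HitLocation::Bone)        ? 0.5f
                                                        : 1.0f;
    // Non-linear in damage: wounds below a threshold clot fast and
    // bleed negligibly; above it, bleeding grows quadratically.
    float severity = std::max(0.0f, damage - 5.0f);
    return severity * severity * 0.1f * locFactor;
}
```

With per-attack multipliers replacing named damage types, the barbed blade above is just the same formula with a larger pain coefficient.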
A short and educational corporation sim, with a focus on human resources, might have the aim of bringing the company to its knees as fast as possible. Hire incompetents and put them in the most harmful places, spread discord and dissatisfaction with compensation policies, give the best employees reasons to quit, reorganize effective teams into oblivion, and so on; all the time without losing management and shareholder support. A pure bureaucracy company, like a bank or insurance, would probably work well: organization is more arbitrary and there are no industrial aspects or meaningful marketing and R&D projects distracting the player.
As a music-making tool they aren't obsolete at all. It depends on what kind of interface you prefer. I know composers who can only use staff notation, some who can only use trackers, some who can only use piano rolls... In the end the final music is all that matters. In the specific case of trackers, they make it easy to see all instruments at once (which is very handy) and to see when special effects are applied, although polyphony can be a mess and all notes look the same at first glance (as they're just text).
Trackers also provide a good layered structure: raw samples -> instruments -> end-to-end effect chains -> main track editor -> sequence and repetition of song pieces.
Unlike most other music authoring software, all these layers are integrated with each other (unlike e.g. a VST sampler requiring concerted usage of an audio editor for samples, a text editor for sfz instrument definitions and its own GUI for playback options) and accessible at all times (e.g. no juggling of rendered and partially mixed audio parts).
I don't value the kind of story that can be found in Starcraft very highly. It's nice to get some background on who you are, why you are fighting, and what's happening outside the battlefield; but it isn't the focus of the game, because it remains a game about using your army to defeat the enemy army, and plot or characters have no bearing on that.
The most meaningful type of player-driven narrative in a strategy game is probably the sort that can arise in Civilization or a lot of 4X games: choosing alliances, choosing whom to attack or betray or provoke, the diplomacy involved in deploying military units, and all the other player and AI actions that make strategy and tactics the backbone of a plausible simulated history.
Another point of view is accounting for wasted vertex processing. If every vertex, degenerate or important, gets the same moderate amount of processing (transformation, a few interpolations and texture lookups, etc.), adding x% useless vertices to the real geometry is an x% load increase, which up to a certain point is free.
Anything you do to avoid processing degenerate geometry needs to cost less than x% of clean geometry processing to have a chance of being useful; testing every triangle for degeneracy, even if the cost of rebuilding buffers could be avoided, appears quite out of the question.
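A back-of-the-envelope version of that argument, with invented parameter names: culling degenerate geometry only pays off when the amortized cost of the test is below the processing it saves.

```cpp
#include <cassert>

// wastedFraction:    x/100, e.g. 0.05 for 5% degenerate vertices
// vertexCost:        cost to fully process one vertex
// testCostPerVertex: amortized per-vertex cost of detecting/skipping waste
// Returns true if the test is cheaper than the processing it would save.
bool cullingPaysOff(double wastedFraction,
                    double vertexCost,
                    double testCostPerVertex) {
    double saved = wastedFraction * vertexCost;
    return testCostPerVertex < saved;
}
```

For example, at 5% waste the test must cost under 5% of full vertex processing; a per-triangle degeneracy check plus buffer rebuilding is far above that budget.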
I suggest simple prerequisites, combining nodes from any number of skill trees as prerequisites of "advanced" skills.
For example, some advanced combat maneuver could be available if you have any tier 3 or above skill in any two weapon types (i.e. two different skill trees), while a fancy magical attack might require the "all-in attack" skill with any weapon type, any other skill with the same weapon type, and a rather common spell. Egregious examples of this style include third edition D&D feats and other features, and GURPS spells (which commonly have prerequisites like "N spells of the same school as the desired spell", "any spell from any N different schools", or "any N of this list of N+M similar or relevant spells").
From the point of view of the player, the skills form a directed acyclic graph (I can learn a certain set of skills because I already know another, disjoint set of skills), even if the potential skill dependencies could have cycles (e.g. if knowing A or C allows learning B, and knowing B or D allows learning A, players can learn C, then B, then A, and D at any time; or D, then A, then B, and C at any time).
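A sketch of a cross-tree prerequisite as a predicate over the set of already-known skills; the skill names are invented for illustration. Expressing "tier 3 in any two weapon trees" is no harder than a single fixed parent node:

```cpp
#include <cassert>
#include <set>
#include <string>

using Known = std::set<std::string>;

// "Any tier 3 or above skill in any two weapon types" as a predicate:
// count how many weapon trees have reached tier 3, require at least two.
bool canLearnAdvancedManeuver(const Known& known) {
    int treesWithTier3 = 0;
    treesWithTier3 += known.count("sword_t3") ? 1 : 0;
    treesWithTier3 += known.count("axe_t3") ? 1 : 0;
    treesWithTier3 += known.count("spear_t3") ? 1 : 0;
    return treesWithTier3 >= 2;
}
```

Since each rule only reads the player's known set, the potential dependency graph can contain cycles while the player's actual learning order remains acyclic.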
What if the inputs of the neural network were not just an enemy character in the game, but inputs for how the screen scrolls and zooms, how enemies are spawned, how they react and die, what happens to the player when he touches different things... basically every single activity that happens during the game.
I was thinking you could train it all with backpropagation: you hand-feed the neural network the motions of the game as if it were playing, then after it is trained it should be able to run the game for you.
I don't want to add to the previous heartfelt advice against neural network misapplication, because stubbornly defying contrary opinions might still be a useful learning experience; this description of what you want to do is much more worrying than the risk of wasting time by trying an inadequate technique, because you don't want to try a technique, you want a magic wand. Wishful thinking and learning rarely mix.
While trying to develop a bot to play your game is a fine objective, you don't seem to approach it on a sound problem-solving basis, hoping instead that applying a neural network is going to be easy and effective: this sort of a priori preference for a certain solution is the opposite of good engineering and design, and it would be equally bad in the case of a good technique.
You don't even state clearly what sort of game you are thinking of, neglecting the analysis of what an AI for your game needs to be able to do and what are the difficulties and non-difficulties in such tasks, which is the first step in choosing appropriate AI architectures and algorithms and/or modifying game rules to make the AI perform better (for example, simplifying game state to reduce the amount of training needed and facilitate unsupervised trial and error learning). Do you expect this work to disappear?
In a normal data-driven design, what a Unit instance needs at runtime is not the name of its type, but a pointer or something equivalent to a UnitType instance, containing read-only data and function pointers. The use of string or numerical identifiers in the engine can stop at level loaders, script interpreters or the like.
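A minimal sketch of that pattern, with invented names: the UnitType holds the shared read-only data (and behavior as a function pointer), each Unit keeps a pointer to it, and a type-name string would only be looked up once, in the level loader.

```cpp
#include <cassert>
#include <string>

struct UnitType {
    std::string name;                // used by loaders/UI, not by game logic
    int maxHealth;                   // shared read-only data
    int (*attackDamage)(int level);  // behavior as a function pointer
};

struct Unit {
    const UnitType* type;  // resolved once at load time, no string lookups
    int health;
    int level;
};

int gruntDamage(int level) { return 5 + 2 * level; }

const UnitType GRUNT{"grunt", 50, &gruntDamage};

Unit spawn(const UnitType& t) { return Unit{&t, t.maxHealth, 1}; }
```

At runtime every per-unit operation is a pointer dereference; only a loader or script interpreter ever needs to map "grunt" to &GRUNT.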
Exterminate the black clusters and the black outlines, except the few that can be recycled as deep shadow: Blooman will thank you. He'd also like 1 or 2 pixels less space between his legs and equal-sized hands, preferably with thumbs.
On top of that, after I load the image data into the texture object, I can get the same data back out with glGetTexImage(). So the texture is loading properly, and the vertices are rendering correctly, but it still isn't textured right.
Success with glGetTexImage() doesn't prove that you are actually using that correctly loaded texture, nor that you are using it correctly.
For instance, where does texCoord in the fragment shader come from? Where is the vertex shader, and where are you actually loading and binding the shaders?