What if the inputs to the neural network were not just an enemy character in the game, but covered how the screen scrolls and zooms, how enemies are spawned, how they react and die, what happens to the player when he touches different things... basically every single activity that happens during the game?
I was thinking you could train it all with backpropagation: you hand-feed the neural network the motions of the game as if it were playing, and after it is trained it should be able to run the game for you.
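For concreteness, here is a minimal sketch of what the question describes: a tiny network trained by plain backpropagation on recorded (state, input) → next-state transitions, which can then "run" the dynamics step by step. The toy "game" (a clipped 2D position update), the network shape, and all names here are illustrative assumptions, not taken from any real game.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recorded gameplay: state is an (x, y) position, input is a small
# velocity; the "true" dynamics are next = clip(state + input, 0, 1).
states = rng.random((500, 2))
inputs = rng.random((500, 2)) * 0.1
targets = np.clip(states + inputs, 0.0, 1.0)

X = np.hstack([states, inputs])  # network input: state + player input
Y = targets                      # supervised target: next state

# One-hidden-layer MLP trained with hand-written backprop (MSE loss).
W1 = rng.normal(0, 0.5, (4, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2))
b2 = np.zeros(2)
lr = 0.5

for epoch in range(2000):
    h = np.tanh(X @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - Y                    # dLoss/dpred (MSE, up to a constant)
    gW2 = h.T @ err / len(X)          # backward pass
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)    # tanh derivative
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

def step(state, player_input):
    """After training, the network itself advances the toy dynamics."""
    h = np.tanh(np.hstack([state, player_input]) @ W1 + b1)
    return h @ W2 + b2

mse = float(np.mean((step(states, inputs) - targets) ** 2))
print(f"final MSE: {mse:.5f}")
```

Even this toy shows the scale of the problem: two numbers of state and a trivial update rule already need hundreds of labelled transitions, so "every single activity in the game" would need correspondingly vast, correctly labelled training data.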
I don't want to pile onto the earlier heartfelt advice against misapplying neural networks, because stubbornly defying contrary opinions can still be a useful learning experience. But this description of what you want to do is far more worrying than the risk of wasting time on an inadequate technique: you don't want to try a technique, you want a magic wand. Wishful thinking and learning rarely mix.
While developing a bot to play your game is a fine objective, you aren't approaching it on a sound problem-solving basis; instead you are hoping that applying a neural network will be easy and effective. This sort of a priori preference for a particular solution is the opposite of good engineering and design, and it would be equally bad even if the chosen technique were a good one.
You don't even state clearly what sort of game you have in mind, neglecting to analyze what an AI for your game needs to be able to do and where the difficulties (and non-difficulties) in those tasks lie. That analysis is the first step in choosing appropriate AI architectures and algorithms, and/or in modifying the game rules to make the AI perform better (for example, simplifying the game state to reduce the amount of training needed and to facilitate unsupervised trial-and-error learning). Do you expect this work to disappear?