sion5

Neural Network - Discussion

103 posts in this topic

Did anyone post this link already?
http://www.gamedev.net/reference/articles/article1988.asp

However, I think you are focusing too much on FPS-like games, which is not exactly the genre where you want extremely intelligent opponents; rather, you want opponents that behave "realistically" but are easy to kill.
Quote:
Original post by InnocuousFox
Quote:
Original post by sion5
Truth is academia is there to encourage innovation. I'm sorry but anyone can work in a factory pushing out the same product one after the other, but it takes academics to say "Hey wait, surely this can be done better?".

This is such a load of arrogant crap it is bordering on invalidating the usefulness of this entire thread and disqualifying you from further consideration on any relevant subject matter. 99% of the innovation in the modern world has come from outside academia. In the games industry, much the same can be said.


InnocuousFox, since this post started you have been rude and flippant. I don't consider you helpful and quite frankly I haven't taken any positives from anything you have said.

Why is it that every other poster can structure their reply in such a way that it's informative and friendly (even if they disagree with my views), yet you feel the need to have a dig at MY topic of research? If this gets me excluded from further discussion then I apologise to everyone for lowering the tone. There's only so much I can take. For me to get a job in the industry I need to have qualifications and show an ability to learn; the fact that you're president of a company could very well mean this is not the approach you took, but it doesn't work for everyone!
Quote:
Original post by sion5
InnocuousFox, since this post started you have been rude and flippant.

Methinks you need to reread my first few responses... or the first two pages.
Quote:
I don't consider you helpful and quite frankly I haven't taken any positives from anything you have said.

That is often the reaction when people don't get the answers they want to hear.

Of course, then you dropped this bomb.
Quote:
Original post by sion5
Truth is academia is there to encourage innovation. I'm sorry but anyone can work in a factory pushing out the same product one after the other, but it takes academics to say "Hey wait, surely this can be done better?".

And that is where I severely lost respect for you. You dropped into the groove that has many of us exasperated with academia in the first place. There is an arrogance, condescension and Messiah complex all rolled into those two sentences of yours... and yet, the industry continually rejects (most of) what (many of) the academics that you laud so highly have to offer.

I think that your approach to this entire conversation has illustrated why, in microcosm... your attitude has been "obviously you wrench-turners are missing the point!" You even challenged us with "prove that it doesn't work", which is a ridiculous statement. Which brings me to the next tidbit...

Quote:
Why is it that every other poster can structure their reply in such a way that it's informative and friendly (even if they disagree with my views), yet you feel the need to have a dig at MY topic of research?


Allow me to return volley... why is it you feel the need to disregard anything that anyone has to say on the subject? In my first post, I warned you "be careful what you wish for." You got feedback and steadfastly refused to hear it, much less accept it. (e.g. "show me proof")

Quote:
If this gets me excluded from further discussion then I apologise to everyone for lowering the tone. There's only so much I can take.


And there's only so many times we can tell you "been there - done that." *shrug*

Quote:
For me to get a job in the industry I need to have qualifications and show an ability to learn,


First, after your arrogant statement above, why would you want to descend into the uninspired trenches of the industry when all the thinking happens in the ivy-covered walls of academia? Putting that aside, no one has discouraged you from acquiring qualifications and knowledge, nor has anyone argued that the ability to learn is not a prime qualification in and of itself. (In fact, read Damian's post I linked to on page 1 and he says much the same thing.) However, I believe that your approach in this whole conversation has been to insist that the industry needs to learn the technology that you want to pursue, rather than you learning what the industry as a whole has found to work.

Seriously... reread the thread in its entirety. Read it aloud. Read everyone's posts... including your own. And then, like a good academic researcher should, back off a step or two into the realm of objectivity and see what is really being said by everyone... and what went wrong.
Let's state the most basic of facts in regard to various AI technologies:

All the technologies discussed are function(s) that take some number of inputs and produce some number of output(s), in an attempt to maximize or minimize some score or cost (even if that score is purely subjective such as 'it does what I want')

In this regard, an NN is no different than an FSM or any other methodology.

A key advantage to leveraging NN+GA as an approach to a problem is that YOU do not have to decide which inputs are important and which ones are not. Use all the inputs available and burn some CPU. The point of such an exercise is often that you do not trust your own judgement and instead relegate the problem to 'Machine Learning', with an aim not to evolve the best 'thing' but instead to determine a measure of the relative worth of each input with regard to maximizing and minimizing.
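As a minimal sketch of that idea (everything below is illustrative, not anyone's production code): evolve the weights of the simplest possible "network", a single sigmoid neuron fed every available input, with a GA, then inspect the surviving weight magnitudes as a rough measure of each input's worth.

```python
import math
import random

def query(weights, inputs):
    # Simplest possible "network": one sigmoid neuron; last weight is the bias.
    s = sum(w * x for w, x in zip(weights, inputs)) + weights[-1]
    return 1.0 / (1.0 + math.exp(-s))

def fitness(weights, samples):
    # Negative squared error over (inputs, target) pairs: higher is better.
    return -sum((query(weights, x) - y) ** 2 for x, y in samples)

def evolve(samples, n_inputs, pop_size=30, generations=60):
    # Population of weight vectors (one weight per input, plus a bias).
    pop = [[random.uniform(-1, 1) for _ in range(n_inputs + 1)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, samples), reverse=True)
        parents = pop[:pop_size // 2]                # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # uniform crossover
            child[random.randrange(len(child))] += random.gauss(0, 0.3)  # mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda w: fitness(w, samples))
```

After evolution, an input whose weight magnitude stays near zero contributed little to the score, which is exactly the "relative worth of each input" information described above.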

This does not mean that at the production point of your product you are using NNs trained by GAs. It could mean that you then took the information you garnered from ML and produced a hand-crafted FSM out of it.

This isn't an either/or proposition. All these technologies are tools, and if the tool works to solve the problem, then all other arguments are completely moot. The problem isn't always focused on the end-result product, but rather a middle step on the way to that product.

I think many game developers have disregarded many of these tools in their race to get their product out the door. This isn't necessarily a bad thing. The company's score metric is almost always profit per year (always so if they are publicly traded), which is often at odds with overall product quality. It is precisely at the academic level that quality and innovation are primary on the human cost metric.
*moderator hat on*
Everyone take a chill pill please. It's quite okay to have a strong opinion, particularly in the academia vs industry debate, but please try to keep the discussion polite... or at least avoid making directed, personal attacks at each other.

Thanks.

*moderator hat off*

On the original question of researching the use of NNs in games: that's quite valid. Go for it. Just don't expect anyone to actually use ANNs just because you might find a valid application for them. As has been pointed out several times in this thread, there exists, almost always, an alternative solution to a problem that an ANN can solve (and usually *how* it solves it is more easily and more widely understood). Doing research for research's sake is not a good use of your time. You should be looking for quantifiably useful results. That is, research must have significance AND importance. Thus, you should be looking at problems and asking "can an ANN solve this better than the existing methods?" Significant quantities of previous research, though, have shown that, generally, the answer is no.

Some comments on the parallel, off-topic discussions of game education and industry vs academic innovation...

In Australia over the past 5 years many universities have jumped onto the game dev/design education bandwagon... 10 years ago there were only 2 places in Oz you could go to study games. Now it's more like 20. This is a recognition of two things: 1) that there is strong demand for 'cool' courses (and those perceived as vocational) amongst high school graduates; and, 2) there is a demand from industry for graduates who have some basic understanding of the problems that must be overcome in the production of games software.

Traditionally though, the role of universities has been to develop scholarship and equip graduates with the skills for life-long learning. These skills can of course be learned outside of the university environment. The role of universities is not (or at least, should not be) to teach people how to do a specific job. Those skills should be learned through practice, while on the job. Unfortunately, in this modern, economically focused age, universities have been forced to sacrifice scholarship for 'graduate outcomes' (meaning employment prospects) because industry does not want to bear the expense of training workers. The result is that universities now try to cater to what industry wants and what students want, rather than to what society needs. Hence the rise in games dev programs. (There is also another driver: market growth in the games industry due to the 'leisure lifestyle' of Gen Y... but that's a discussion for another day.) We should not, though, expect universities to churn out people who are job-ready on day one. It simply isn't possible. Graduates have a lot to learn and it's up to industry to choose those most capable of learning and employ them, when they have the need.

Having said that, there ARE very good degree programs teaching game development in a computer science/software engineering framework, where students learn fundamental skills applicable across a broad spectrum of IT roles, but also focus heavily on game development. One would expect that graduates from these programs are useful to industry. Sure, they're wet behind the ears and need to learn a lot... but at least they have some basic foundations from which to grow.

As for innovation...

I cannot recall the source of the data, nor the exact figures (so please, take this with a grain of salt), but I remember reading that around 95% of innovation in IT was achieved by industry, rather than academia and that this was simply because it was industry trying to solve the day to day problems in software development. In other words, they needed a solution so they went out and developed one. That doesn't mean though that academia is a waste of space and money. The role of academia is NOT to produce commercial applications of knowledge, nor to produce knowledge with immediate commercial value (although this does happen from time to time). Indeed, because there is not an inherent, immediate commercial value in what academics do, many people denounce them as useless.

On the contrary though, academics are afforded the luxury of the time and money to investigate problems that *may* have a commercial value in the future (or may lead to an advancement of knowledge). In Australia we have two government funded research streams, provided by the Australian Research Council, to support this: Discovery grants and Linkage grants. These are aimed, respectively, at developing fundamental knowledge (discovery) and developing commercially viable applications of fundamental knowledge (linkage). The latter is always done in partnership with industry. Thus, at least in Australia, the role of academics is to solve the problems, or develop the knowledge, that industry has neither the time nor money to investigate, simply because they cannot guarantee a benefit to their bottom line. We get to look at the big picture, or the fuzzy, distorted picture that no one else can afford to look at, to find new solutions to old (or new) problems.

Principally, our aim is to inform industry of what is possible and to provide them with a strong foundation from which they can develop the solutions that they need. Both groups are necessary. Without industry, academia has no funding support (no one paying taxes that fund the research) and without academia, industry has to bear the cost of the research upon which their innovations are often based (and it's been shown time and again that industry cannot afford to do this). In the end, we all need to get along with each other, which, if I recall correctly, was the original comment in this post! ;)

Cheers,

Timkin
Oh, just a quick response to kirkd... if it weren't for Kolmogorov, who built on Markov's work, we wouldn't have our modern information age (including computers, electronic and photonic communications systems, the internet, etc)! So, it's probably a little unreasonable to suggest that Markov's work sat idle for 100 years until applications of Markov chains arose in speech recognition! ;)

Cheers,

Timkin
AI for most games is relatively simple. Move there, shoot at this guy, take cover, etc... Training a NN for such simple tasks is pointless overkill.

NNs are for complex tasks, where the solution is either complex or not feasible to hand-code. Thus most applications of NNs are in solving complex engineering, robotics, and computer vision tasks.

If you want to (usefully) apply NNs to games, think of a situation where coming up with a formula or hand-coded state machine to solve the problem would be extremely hard. For example, if you read about the NN implementation in the Colin McRae 2.0 racing game, they had very complex car and track physics (mud track racing) and were unsuccessful at hand-coding a good general-purpose AI. Thus training NN drivers was a good solution.

Another example would be AI for a complex game like "Go" (each move leading to many times more decisions than Chess). So far nobody has successfully come up with a competent Go AI. Some type of Machine Learning will solve it ... eventually.
Quote:
Original post by Timkin
Oh, just a quick response to kirkd... if it weren't for Kolmogorov, who built on Markov's work, we wouldn't have our modern information age (including computers, electronic and photonic communications systems, the internet, etc)! So, it's probably a little unreasonable to suggest that Markov's work sat idle for 100 years until applications of Markov chains arose in speech recognition! ;)

Cheers,

Timkin


Would be nice to be able to read what kirkd said. I do not agree with that. Perhaps there would have been stalls, and things might have ended up with slightly different notations, like in, say, complexity theory, perhaps a different way of doing probabilities, but I am certain that no one man has had so much impact since Aristotle. There were simply too many people approaching the notion of computing from many angles, including, off the top of my head: Russell, Church, Haskell Curry, McCarthy, Turing, von Neumann, John Backus...

Interestingly, what does owe a lot to AI is programming languages; lots of things considered powerful today were already being used back in the day to make AI programs easier to tackle: functional programming, relational/logic programming and constraint solving, object-oriented programming, the module concept and dynamic dispatch, to name a few.
Daerax,

I apologize; however, I deleted my original response rather than leave what was considered to be misinformation in place. What I had originally said was that Markov developed the basis of Hidden Markov Models in the 1880s, but that they didn't find much practical application until the 1990s, with speech recognition and bioinformatics. It was not my intent to suggest the work found no usage, but rather that a technology that found limited practical application for 100 years could be applied to a modern problem.

-Kirk

Quote:
Original post by InnocuousFox
NNs can pretty much handle only a static time slice and don't handle looking ahead (or behind) very well. That makes any sort of planning algorithm a little muddy.


I am not sure I understand your reasoning for this statement.

NNs are nothing more than function approximators, and as such have no extra limits attached to them. The reason for NNs is machine-learning the functions to begin with, when you don't have the information necessary to simply map input(s) to output(s) using more traditional methodologies.

If you want to evolve an NN to look ahead or behind, give it some feedback nodes (outputs that are used specifically as inputs on the next query). This does put a constraint on training methodology as far as I am aware (you can't backprop-train a node without an error metric for it). A GA training approach is well suited to this kind of setup.
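A sketch of what such feedback nodes might look like (the class and its layout are hypothetical, and the weights would normally come from a GA rather than be set by hand): the network's last few outputs are not returned to the caller but fed back in as extra inputs on the next query.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

class RecurrentNet:
    """Single-layer net whose last n_feedback outputs loop back in as inputs."""

    def __init__(self, n_inputs, n_outputs, n_feedback, weights):
        self.n_in = n_inputs + n_feedback       # real inputs + fed-back state
        self.n_out = n_outputs + n_feedback     # real outputs + state to carry
        assert len(weights) == self.n_out * (self.n_in + 1)  # +1 for bias
        self.w = weights
        self.n_feedback = n_feedback
        self.state = [0.0] * n_feedback         # feedback starts at zero

    def step(self, inputs):
        x = list(inputs) + self.state + [1.0]   # bias input appended
        outs = []
        for j in range(self.n_out):
            row = self.w[j * len(x):(j + 1) * len(x)]
            outs.append(sigmoid(sum(w * xi for w, xi in zip(row, x))))
        self.state = outs[-self.n_feedback:]    # carried to the next query
        return outs[:self.n_out - self.n_feedback]
```

Because a GA scores the whole behaviour rather than individual node errors, it never needs the per-node error metric that backprop would require for those feedback nodes.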

Once an NN is suitable (or any function-approximation methodology), it can usually be easily converted to lookup tables + interpolation, a simple and efficient function approximator that is also easily tweaked by hand.
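The lookup-table-plus-interpolation replacement might be sketched like this (function names are illustrative): sample the trained approximator once, then answer queries by linear interpolation into the table.

```python
def make_lut(fn, lo, hi, n):
    # Sample fn at n evenly spaced points; fn is never called again after this.
    step = (hi - lo) / (n - 1)
    return [fn(lo + i * step) for i in range(n)], lo, step

def lut_eval(table, lo, step, x):
    # Piecewise-linear interpolation between the two nearest samples.
    t = (x - lo) / step
    i = max(0, min(len(table) - 2, int(t)))
    frac = t - i
    return table[i] * (1.0 - frac) + table[i + 1] * frac
```

The table entries are plain numbers, so a designer can tweak them by hand, which is the point made above.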
First, in order to read non-current data (e.g. for detecting trends), you would have to assign an input to "what came before". In a sequential action, this is fine because you can look at whatever data structure you are using to keep track of "what came before" (e.g. a list) and say (current - n) or whatever. However, if you are looking at time-sensitive things, e.g. "things within the past 5 minutes", you have to get a little more clever. For example, in doing stock market analysis (a common example) you have to have inputs for "yesterday's closing price", "last week's closing price", "last month's closing price", or whatever time scales you find necessary. The more of those you throw in there, the more inputs you need to account for. Each of those is subject to glitches and spiky local minima/maxima.
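Building those lagged inputs might look like the following (a hypothetical helper; the lag offsets stand in for "yesterday", "last week" and "last month" on daily data):

```python
def lagged_inputs(prices, lags):
    # One input vector per time step, holding the price at each lookback offset.
    # Steps without enough history are skipped rather than padded.
    start = max(lags)
    return [[prices[t - k] for k in lags] for t in range(start, len(prices))]
```

Every extra lag widens the input vector, which is exactly the input-count growth complained about above.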

More to the point, however, NNs are good for creating an output based on the current state of the inputs. You can't as easily start playing with things such as a Minimax-style problem or something as complex as a plan of actions in a sequence.

More later... I'm at a client site.
Some responses to recent comments...

Quote:
Original post by Daerax

I do not agree with that. Perhaps there would have been stalls and things might have ended up with slightly different notations like in say complexity theory, perhaps a different way of doing probabilities but I am certain that no one man has had so much impact since Aristotle. There were simply too many people approaching the notion of computing from many angles including off the top of my head: Russell, Church, Haskell Curry, McCarthy, Turing, von neuman, John backus...


Most of what arose in western engineering (particularly telecommunications and control) and subsequently computing from the 40s onward was based directly on the understanding of stochastic processes developed by the Russian-Germanic alliance of the late 19th and early 20th century. Generally speaking, western scientists and mathematicians were simply nowhere near the level needed to create this understanding. There is countless evidence of advances in western engineering and computing being directly based on Russian publications, or on Western scientists having spent time with their foreign counterparts, bringing back the knowledge with them.

During the latter half of the 19th century and into the 20th, there is a single strong thread of Russian mathematicians, predominantly coming from the same school at Moscow State University. The mathematics group there was pivotal to the developments of the time. Everything that came later in this area can be shown to have grown from the knowledge developed by this one group. Kolmogorov was one of those who stood out from the crowd, hence my selection of him.

I could provide examples of the direct links and the basis of my opinion if anyone is particularly interested, but I'd end up waffling on for ages, hence the omission from this post! ;)

On the issue of handling time in ANNs...

Feed-forward networks are very poor at handling time, even when you provide inputs covering information at previous times, which is essentially an attempt to model the autocorrelation of the process. However, there ARE network architectures that handle time very well... they're just harder to train, because you now have the problem of ensuring that you're seeing all processes passing through a given point at a given time.

Recurrent networks can be designed to model the underlying time-space differential of the process. You can even ensure properties such as stable (non-divergent) learning. I've made some particular contributions in this area in the application of recurrent architectures to learning control problems (where you know nothing of the system you are trying to control, only the performance requirements). Having said that, I certainly wouldn't advise anyone to apply these architectures to control problems in games.

Cheers,

Timkin
Quote:
Original post by InnocuousFox
First, in order to read non-current data (e.g. for detecting trends), you would have to assign an input to "what came before".


I think you are missing the point of machine learning.

You really shouldn't assign any sort of historic input. Instead, you can let the machine figure out what's important to "remember", when to "forget", and so forth. You can simply continue to give it current state information, and access to the feedback nodes. The more feedback nodes you give it, the more historic state it has.

The historic state, the feedback, has no meaning assigned by you. That's for the GA to optimize.

Quote:
Original post by InnocuousFox
More to the point, however, NNs are good for creating an output based on the current state of the inputs.


True.

Quote:
Original post by InnocuousFox
You can't as easily start playing with things such as a Minimax-style problem or something as complex as a plan of actions in a sequence.


NNs are just numeric functions; they do what functions do. Of course they aren't suitable for searching a game tree. That's what search algorithms are for.
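For contrast, here is the kind of job a search algorithm does that a bare function cannot: a minimal, unpruned minimax sketch over a generic game tree (all parameter names are illustrative).

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    # Plain minimax, no alpha-beta pruning, for clarity.
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal)
    return max(scores) if maximizing else min(scores)
```

An NN (or any other function approximator) can serve as the evaluate() leaf here; sequencing over future moves is the search's job, not the function's.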
When I hear "AI" I immediately think "expected utility maximization", not "artificial neural networks".

Not all of AI is expected utility maximization, but a huge chunk of it is. It takes on many different shapes (e.g., game tree search) and it takes a lot of effort to get this general idea to solve real problems, so in that sense it's not a silver bullet either. But it is probably the first thing you should think of when approaching any decision-making problem.

Unfortunately, ANNs have a much catchier name and most people think of them first.

Quote:
Original post by InnocuousFox
... which is why NN's are more "silver spoon" than "silver bullet". It's a big ol' mathematical function - not a problem-solving algorithm.


Problem-solving is another name for the perhaps overly generalized term, Search Algorithms (be it AB, MTD(f), A*, etc..)

Chess is a fine example where, while dominated by the traditional searches, Machine Learning has also played an important role.

What is the value of a Knight or Bishop sitting on D6?

AB-driven engines have an eval() that needs to know the values as they relate to its own run-time search capabilities. Many engines use piece-square tables, a set of many hundreds of coefficients which are tweaked not by humans, but instead by machine-learning algorithms. Not only are these coefficients too big a problem for a human to manually define due to the sheer number of them, the human also isn't well suited for the task because he or she is not truly capable of grokking the intricacies of how these values relate to an N-ply AB search, nor the horizon effects associated with that.
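For concreteness, a piece-square table is just a 64-entry array of centipawn bonuses indexed by square; answering "what is a knight worth on d6?" becomes a single lookup. The values below are illustrative placeholders, not the machine-tuned coefficients described above.

```python
# Illustrative knight piece-square table, rank 8 stored first, down to rank 1.
KNIGHT_TABLE = [
    -50, -40, -30, -30, -30, -30, -40, -50,
    -40, -20,   0,   5,   5,   0, -20, -40,
    -30,   5,  10,  15,  15,  10,   5, -30,
    -30,   0,  15,  20,  20,  15,   0, -30,
    -30,   5,  15,  20,  20,  15,   5, -30,
    -30,   0,  10,  15,  15,  10,   0, -30,
    -40, -20,   0,   0,   0,   0, -20, -40,
    -50, -40, -30, -30, -30, -30, -40, -50,
]

def square_index(file, rank):
    # file is 'a'..'h', rank is 1..8; rank 8 occupies indices 0..7.
    return (8 - rank) * 8 + (ord(file) - ord('a'))

def piece_square_bonus(table, file, rank):
    return table[square_index(file, rank)]
```

A tuning pass would adjust these hundreds of numbers (one table per piece type) against the engine's own search, which is why hand-editing them is hopeless.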

The ML algorithms are very powerful tools which are typically not very useful as part of the end product, but can be very useful on your way to creating that end product.
Quote:
Original post by Timkin
Some responses to recent comments...

Quote:
Original post by Daerax

I do not agree with that. Perhaps there would have been stalls and things might have ended up with slightly different notations like in say complexity theory, perhaps a different way of doing probabilities but I am certain that no one man has had so much impact since Aristotle. There were simply too many people approaching the notion of computing from many angles including off the top of my head: Russell, Church, Haskell Curry, McCarthy, Turing, von neuman, John backus...


Most of what arose in western engineering (particularly telecommunications and control) and subsequently computing from the 40s onward was based directly on the understanding of stochastic processes developed by the Russian-Germanic alliance of the late 19th and early 20th century. Generally speaking, western scientists and mathematicians were simply nowhere near the level needed to create this understanding. There is countless evidence of advances in western engineering and computing being directly based on Russian publications, or on Western scientists having spent time with their foreign counterparts, bringing back the knowledge with them.

During the latter half of the 19th century and into the 20th, there is a single strong thread of Russian mathematicians, predominantly coming from the same school at Moscow State University. The mathematics group there was pivotal to the developments of the time. Everything that came later in this area can be shown to have grown from the knowledge developed by this one group. Kolmogorov was one of those who stood out from the crowd, hence my selection of him.

I could provide examples of the direct links and the basis of my opinion if anyone is particularly interested, but I'd end up waffling on for ages, hence the omission from this post! ;)

Cheers,

Timkin


I do not want to pollute this thread, though it seems to have run its course. I do quite enjoy the history of mathematics.

I am not disagreeing that Kolmogorov played an important role, mainly that his role was pivotal. Things would have happened anyway, since there was so much going on. In the area of control and telecommunications there is no doubt his role was greater, but not as directly in computing. With the exception of his contributions to intuitionistic logic (a branch of constructivism that people are just now noticing computer science is basically a branch of), I do not see much direct influence. Current computer science is a child of the early 20th century obsession with formalism and rigour propelled by Weierstrass, an overreaction to the poor proofs of the time. Type theory, so important now (including for verification of hardware and software, and for automated provers), is older than the Turing machine.

In terms of giants, Turing (British), Shannon (American) and von Neumann (Hungarian) have most influenced modern computer hardware and design. In software the legacy starts with Peano (Italy) and Frege (Germany). The names and location attributions are off the top of my head, so I cannot guarantee spellings and exact locations.

I would be interested in the sources which state that much of modern computer technology is based in any one place.
Quote:
Original post by Daerax
I am not disagreeing that Kolmogorov played an important role, mainly that his role was pivotal.

Perhaps I over-stated my original point. I was not trying to suggest Kolmogorov was the only person responsible... but that he was a key link in a chain that, had it not developed and had it not been applied, we would not be where we are today. I was trying to make the original point that Markov's work did not sit idle for 100 years.

A particular example of the importance of Kolmogorov in the chain...

Shannon (whom we would probably all agree has played a key role in the development of modern information theory and subsequently the modern computer and telecommunications industry) directly indicates that his mathematical theory of communication (published in his pivotal paper of the same name) is directly inspired by the work of Norbert Wiener (check out the acknowledgements in his pivotal 1948 paper). Wiener, of course, is famous for many things, but many will know his name from the Wiener filter and Wiener process (also known as Brownian motion). While a Wiener process is an example of a Levy process, it is actually a simple case of the more general differential Chapman-Kolmogorov equation, which describes the general class of drift-diffusion stochastic processes. The Wiener filter is the generalised filter for Wiener processes. Many would know the specialised solution for linear, discrete time Wiener processes, being the Kalman filter.

While it is acknowledged that both Chapman and Kolmogorov developed their solutions independently, both build on Kolmogorov's work on reversible Markov processes, which are recognised as pivotal to the whole field of Markov processes, particularly in our modern usage. Without Kolmogorov's extensions to Markov's original work, we would not be where we are today.

I also do not agree that 'someone else' would have developed this understanding simply because there were others working in the field. It is definitely the case that more than one person is responsible for where we are today... but as with many fields of endeavour, they can be traced back to a handful of key discoveries/breakthroughs that were necessary before we could move forward.

Cheers,

Timkin
Quote:
1) single-player "campaign" type games are very scripted environments. A very small subset of the rules design wants for the game I'm working on now:
- when the player destroys this Item, the AI will counter-attack.
- When the AI squad gets to 20% of its original units -> retreat.
- AI should only throw grenades when no other AI is throwing a grenade and has not for at least 10 seconds.
- Only 1-3 AI should ever rush the player at any given time
- AI should only spend 30% of their time hidden behind cover; further no single AI should remain hidden in cover for more than 3 seconds
- When the player's units get to a Point of Interest they should stop and play a canned set of animations at a specific point to develop the plot.

Personally I see this as one of the great failures of the state of game design today.

Players are tired of scripting, it holds up a sign which reads "BORING AND PREDICTABLE" and they can spot it a mile away.
Does anyone have a small game with AI built the "conventional" way who would be willing to let me write my own AI controller using NNs? Obviously you will take all credit for the game; all I'm looking to achieve is a set of results to compare and evaluate for my project.
I don't know if it is helpful or not but I have a PacMan clone on SourceForge that you're welcome to use. The AI is based on NEAT (Ken Stanley's Neuro-Evolution through Augmenting Topologies) so essentially it exists as a GA-NN already. 8^( You're welcome to use it as a platform if you like, though.

http://sourceforge.net/projects/aipac/

-Kirk

PS - If you use it, please keep me updated on the results. You can send me a PM here or we can swap e-mail.


