
genetic programming or evolutionary algo for game AI

Old topic!

Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

38 replies to this topic

#21 Álvaro  Members


Posted 23 July 2010 - 07:34 AM

Quote:
Original post by Daerax
Quote:
Original post by alvaro
Actually, no. Imagine you are trying to optimize the strength of a poker bot. The only measure of strength you have is matching one bot against another for a number of hands and seeing who makes money off of whom. I don't see what finite differences have to do with anything.

off topic but are you making a poker bot?

No. I thought about it for a while and I will probably get to it some day. But I have too many other projects right now and too little time to finish them. My job is getting in the way of my hobbies once again!

#22 willh  Members


Posted 23 July 2010 - 01:02 PM

Quote:
Original post by Kylotan
Quote:
Original post by willh
Quote:
Original post by Kylotan
Firstly, you missed crossover - without that, it's not a GA.

Crossover is just another form of mutation.

No, it's not.

Crossover and mutation both change the solutions to explore the search space, but they are completely distinct in their purpose, which is why you can't leave one out and talk as if you're still using a GA.

Crossover IS a mutation operation. This is a mathematical certainty.

Quote:
Original post by Kylotan
That member of the population is chosen using some routine that favours members with higher fitnesses. Therefore the parts of the solutions that are better persist unchanged into future generations.

The logical conclusion of this is that all members will eventually become nearly identical. At this stage crossover will have little effect, as it becomes no different from a 'simple' mutation operation.

Quote:
Original post by Kylotan
This is critical as it speeds up convergence, which is its primary purpose - it takes 2 solutions which have often optimised different dimensions of the problem and combines them into 1 solution. Mutation alone cannot do this.

That is not necessarily true for two reasons.

1. It is very possible for a mutation operation to introduce the exact same effect as would have happened with crossover.

2. If you look at convergence as TIME spent (i.e. milliseconds) and not generations required, then having an effective crossover scheme (which may work very well for the first few generations of a population) could actually result in an increase in convergence time. This is due to the number of poor candidate fitness evaluations required for each generation, resulting in wasted cycles. It's like exploring obviously dumb moves in a game tree - wasted time with no return.

In other words, a population of 2,000,000 members that converged in 10 generations sounds a lot better than one that took 10,000 generations to reach the same goal - but if the 10,000-generation run took 5 minutes and the 10-generation run took 5 hours, then the generation counts don't stack up so well.

Quote:
Original post by Kylotan
Genetic algorithms require crossover, mutation, and selection. Without implementing crossover properly you've pretty much crippled selection too and all you're left with is a mostly random form of beam search, which of course is going to be worse than pretty much any other algorithm you might throw at it. If that is how people use GAs then it is not at all surprising that they get poor results.

We'll have to agree to disagree that crossover is required for a GA; it's not a requirement in nature so I don't see why it should be a requirement when talking of algorithms.

In practical terms I do not think it is important to always include crossover in a framework. You could go a step further and include multiple different crossover strategies (one-point, two-point, etc.), but in many cases I think the effort would be better spent on implementing more intelligent mutation operators (or any number of things).
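For concreteness, here is a minimal sketch (my own illustration, not code from the thread) of the two operators being debated - one-point crossover, which recombines material from two parents, and point mutation, which perturbs a single genome:

```python
import random

def one_point_crossover(a, b, point=None):
    """Recombine two parent genomes at a single cut point."""
    if point is None:
        point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def point_mutation(genome, rate=0.1, alphabet=(0, 1)):
    """Independently reset each gene to a random symbol with probability `rate`."""
    return [random.choice(alphabet) if random.random() < rate else g
            for g in genome]
```

A sufficiently lucky mutation could reproduce any crossover result (willh's point), but crossover draws its changes from another member of the population while mutation draws them from the alphabet at random - which is the distinction Kylotan is defending.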

#23 Daerax  Members


Posted 23 July 2010 - 01:49 PM

Quote:
Original post by willh
Quote:
Original post by Kylotan
This is critical as it speeds up convergence, which is its primary purpose - it takes 2 solutions which have often optimised different dimensions of the problem and combines them into 1 solution. Mutation alone cannot do this.

That is not necessarily true for two reasons.

1. It is very possible for a mutation operation to introduce the exact same effect as would have happened with crossover.

2. If you look at convergence as TIME spent (i.e. milliseconds) and not generations required, then having an effective crossover scheme (which may work very well for the first few generations of a population) could actually result in an increase in convergence time. This is due to the number of poor candidate fitness evaluations required for each generation, resulting in wasted cycles. It's like exploring obviously dumb moves in a game tree - wasted time with no return.

mathematical certainty is a term I have always found redundant.

A genetic algorithm with no crossover is a waste of time; there is no point maintaining multiple solutions. At this point you may as well be using another algorithm - SA, say.

Certainly you can look at crossover as a type of mutation, just as you can look at them both as a probing blind search, randomly looking for serendipitous solutions near a point. GAs are not related to nature, so what you find in nature has no impact here; saying that cost minimization plus random combination of list elements is genetics is like saying that drawing a square and building a house are the same because both involve a four-sided polygon.

Your point 2 is interesting, because studies show that GAs perform better on certain problems over long runs than SA (no crossover), which tends to arrive at and settle near its optimal solution more quickly. A cleverly coded crossover could be the difference here, for certain problems. Although SA also sometimes outperforms GAs for the reasons you mention - as always, there is no clear winner.

If you look at my earlier post you will see that I do not disagree with you in essence, just in presentation. I agree that crossover is not necessary for this type of stochastic search (again, à la SA), but if your *Genetic* Algorithm doesn't do this then you have no business calling it such.

#24 Kylotan  Moderators


Posted 24 July 2010 - 02:55 AM

Quote:
Original post by willh
Crossover IS a mutation operation. This is a mathematical certainty.

The two concepts are different in the algorithmic sense. Of course it would be possible, if you wrote your mutation operation in a certain way, to alter a solution such that it could change in a similar way to applying a crossover operation. But the chance of a typical mutation creating the same change as a typical crossover would make would be tiny. The point of crossover is to make use of the information elsewhere in the population, and the point of mutation is to introduce new information. The fact that these two can possibly overlap in their result is not terribly relevant.

Quote:
Quote:
Original post by Kylotan
That member of the population is chosen using some routine that favours members with higher fitnesses. Therefore the parts of the solutions that are better persist unchanged into future generations.

The logical conclusion of this is that all members will eventually become nearly identical. At this stage cross-over will have little effect as it becomes no different than a 'simple' mutation operation.

Yes, they typically become closer together. That's convergence, and is exactly what you're looking for. Compare to other optimisation algorithms which typically just follow a single solution from start to end - they only had 1 possible option all the time, not just near the end. And of course convergence slows down as the algorithm progresses - this too is quite common in optimisation problems. Once you home in on the optimum there is often less room to improve.

But it doesn't become the same as a simple mutation operation. Quite the opposite in fact - as the solutions converge, mutation will produce quite different results to crossover.

Quote:
2. If you look at convergence as TIME spent (i.e. milliseconds) and not generations required, then having an effective crossover scheme (which may work very well for the first few generations of a population) could actually result in an increase in convergence time. This is due to the number of poor candidate fitness evaluations required for each generation, resulting in wasted cycles. It's like exploring obviously dumb moves in a game tree - wasted time with no return.

Compare this with many other optimisation algorithms, which don't even have a game tree - they just explore one promising looking route and often need some sort of added stochastic element or restarting procedure to get them out of local extrema.

Fitness evaluations in GAs are not particularly different from heuristics in other search or optimisation algorithms. Their usefulness is proportional to the quality of the heuristic.

Also, selection is only as expensive as you make it - tournament selection, for example, is incredibly cheap.
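To illustrate how cheap tournament selection is, here is a minimal sketch (mine, not Kylotan's):

```python
import random

def tournament_select(population, fitness, k=2):
    """Sample k individuals uniformly at random and return the fittest.
    Cost per selection is O(k), independent of population size."""
    contenders = random.sample(population, k)
    return max(contenders, key=fitness)
```

Larger k raises selection pressure; k=2 is a common default, and no global sort or fitness normalisation is ever needed.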

Quote:
 We'll have to agree to disagree that crossover is required for a GA; it's not a requirement in nature so I don't see why it should be a requirement when talking of algorithms.

Genetic algorithms use a biological metaphor to explain their constituent parts, that's all. It doesn't mean that anything that may exist in biology makes up a valid GA.

Quote:
In practical terms I do not think it is important to always include crossover in a framework. You could go a step further and include multiple different crossover strategies (one-point, two-point, etc.), but in many cases I think the effort would be better spent on implementing more intelligent mutation operators (or any number of things).

Without crossover you're not sharing useful information across the population, so there is little point maintaining that population. If you have a problem where there is no useful information to share, that's fine, but I can't think of one myself. If such a problem did exist then it would probably be better to run many instances of a cheaper single-member optimisation algorithm serially rather than dressing it up as a GA.

#25 willh  Members


Posted 24 July 2010 - 09:15 AM

While I don't agree with everything in the article, I thought it might be useful to share this bit from the Wikipedia entry on genetic algorithms:
Quote:
Wikipedia, "Genetic algorithm", Reproduction
Main articles: Crossover (genetic algorithm) and Mutation (genetic algorithm)
The next step is to generate a second generation population of solutions from those selected through genetic operators: crossover (also called recombination), *and/or* mutation.

(Bold added for emphasis)

Daerax: I have a hard time accepting that the 'Genetic' in Genetic Algorithm has no relation to the biological term 'genetic'. Having read (one of) John Holland's descriptions first-hand, and seen the amount of space devoted to discussing biological genes, I'm pretty sure he would agree.

Quote:
Original post by Kylotan
But it doesn't become the same as a simple mutation operation. Quite the opposite in fact - as the solutions converge, mutation will produce quite different results to crossover.

For a problem where population convergence and high fitness are both achieved at approximately the same time, I agree with you. If population convergence happens before high fitness, then I still hold that crossover essentially becomes a simple mutation operation. There has been a lot of research put into strategies to maintain population variance for precisely this reason.

Quote:
 Selection is only as expensive as you make it, also - tournament selection is incredibly cheap for example.

Some problems are expensive to evaluate and there is no way around it - complex simulations, large data-crunching exercises, etc. I think we agree at this point; we're just looking at it from different angles.

Quote:
 If you have a problem where there is no useful information to share, that's fine, but I can't think of one myself. If such a problem did exist then it would probably be better to run many instances of a cheaper single-member optimisation algorithm serially rather than dressing it up as a GA.

I can think of quite a few, actually. As a complex problem space is 'explored' there are times when having fewer members is more efficient. There are also times when it is necessary to branch out and 'explore' different variations on the same theme.

In fairness, I have always included crossover in my own frameworks. I also track how often each mutation operator causes an improvement in fitness; crossover is (almost) always the least effective.

#26 Daerax  Members


Posted 24 July 2010 - 08:33 PM

willh, I am not sure what you are saying. Your arguments are correct but your conclusion doesn't fit. You say you always implement crossover, and yet you are trying to argue that crossover and mutation are the same thing, when that is clearly not the case and certainly *should* not be the case - otherwise you are doing it wrong. The only way they could be the same is if you took a broad, sweeping view in which both mean creating new lists, say. But that is missing the forest for the trees; the point of multiple solutions is crossover (recombining elements across lists vs. selecting elements near a point in a list), which increases the chance of stumbling near a global optimum.

As for the original paper: original papers are good to read because they are much less dry and provide more insight and motivation than later distillations. But they are often much more bloated, as they contain the raw, unpolished inspirations. As Kylotan said, the biology is just a metaphor that guided the original work, just as immune systems inspire the negative selection algorithm. I could just as easily call crossover 'cross-list recombination' and mutation 'rearrangement via random neighbourhood point sampling'. This doesn't make GAs biological any more than playing Operation prepares you for heart surgery.

Quote:
Original post by Kylotan
Without crossover you're not sharing useful information across the population, so there is little point maintaining that population. If you have a problem where there is no useful information to share, that's fine, but I can't think of one myself. If such a problem did exist then it would probably be better to run many instances of a cheaper single-member optimisation algorithm serially rather than dressing it up as a GA.

It's a matter of runtime. Clearly both would eventually arrive at the same result given enough time, but there are some problems where maintaining multiple solutions is a distraction. One example from personal experience is tuning a model where there is only one 'answer' and taking even a few steps away results in wildly divergent answers. Here, keeping multiple solutions is detrimental and increases runtime drastically, whereas something like optimizing a schedule is better suited to the GA approach.

Nonetheless, to get around SA's weakness of not maintaining multiple solutions, I retain a second-best solution, where the probability of restarting from that point increases as the time without a better find increases. Attenuating the range of search as the temperature cools is also useful.
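That scheme might look something like the following sketch - my own reconstruction of the idea, not Daerax's code: simulated annealing that remembers the best point seen, restarts from it after a stretch without improvement, and shrinks the proposal range as the temperature cools.

```python
import math
import random

def anneal(f, x0, steps=10_000, t0=1.0, restart_after=500):
    """Minimize f over the reals with simulated annealing.
    Keeps the best point seen; if no improvement occurs for
    `restart_after` steps, the walk restarts from that best point.
    The proposal range shrinks with the temperature."""
    x = best = x0
    fx = fbest = f(x0)
    stale = 0
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9          # linear cooling schedule
        cand = x + random.uniform(-1, 1) * t     # step size attenuates with t
        fc = f(cand)
        # Metropolis acceptance: always take improvements, sometimes worse moves
        if fc < fx or random.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        if fx < fbest:
            best, fbest = x, fx
            stale = 0
        else:
            stale += 1
            if stale >= restart_after:           # restart from best-so-far
                x, fx, stale = best, fbest, 0
    return best, fbest
```

This is only a 1-D toy; the restart probability here is a hard threshold rather than the gradually increasing probability Daerax describes, which would be a small variation on the `stale` counter.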

#27 willh  Members


Posted 25 July 2010 - 11:05 AM

Quote:
Original post by Daerax
willh I am not sure what you are saying. Your arguments are correct but your conclusion doesn't fit.

If we agree that:
a. Mutation can give the same results as crossover.

b. Omitting crossover does not necessarily invalidate some of the other positive (and fruitful) aspects of GA/EP/'Dice rolling' for a specific problem.

then I don't think we have any disagreement. I didn't say 'never use crossover'. I just said that it's not always necessary. (I implement it in my own frameworks out of habit BTW).

I think the argument originated with the claim that 'it's not a GA if there is no crossover'. I disagree with that claim for reasons stated previously, but will elaborate:

Bob is using a GA to solve some problem.
1. Bob has implemented crossover and 5 types of mutation.
2. By generation 100 an acceptable fitness has been achieved for Bob's problem.
3. In all 100 generations, crossover did not once produce any fruitful result and did not actually contribute to the solution.

Given 1, 2, and 3, did Bob use a GA to reach his solution? Would it have been any less a GA if Bob had simply omitted crossover, given that the results would be identical?

You could argue that the crossover/genome was poorly implemented; for reasons I've already provided, this is not always the case.

#28 Daerax  Members


Posted 25 July 2010 - 11:02 PM

Again, no argument - you make sense. This is a matter of semantics. If you strip away the metaphors of mutation and crossover and instead look at it at a purely algorithmic level, then you see that what you have is either

a) something similar but not equivalent to a genetic algorithm, with cycles and memory wasted on a superfluous function (crossover) and on maintaining multiple solutions, or

b) a badly implemented crossover function.

Thus, in conclusion, it appears your definition of GA is broader than that of everyone who disagrees with you (and of most literature I have run into); you use it more as a covering term rather than focusing on the specific algorithmic properties.

#29 willh  Members


Posted 26 July 2010 - 05:34 AM

Quote:
Original post by Daerax
If you strip away the metaphors of mutation and crossover and instead look at it at a pure algorithmic level then you see that what you have is either
a) something similar but not equivalent to a genetic algorithm with cycles and memory wasted on a superfluous function (crossover) and maintaining multiple solutions
b) a badly implemented crossover function.

I haven't said anything to that effect.

Quote:
Original post by Daerax
Thus in conclusion it appears your definition of GA is broader than everyone who disagrees with you (and most literature I have run into), you use it more as a covering term rather than focusing on the specific algorithmic properties.

I can assure you that GA w/o crossover is very much in the literature; it has been a topic of debate for some time. Maybe you're not reading the right material? Pretty much everything I've written as part of this thread has already been talked about in 'the literature'.

You're insisting on a rigid definition that necessarily includes crossover. I reject this definition while maintaining that the baby is still safely in the proverbial tub. You're also insisting that this view is somehow concrete and the only legitimate one; while that may be true within the GD community, it is certainly not the case elsewhere!

In your view, what type of crossover is required for something to qualify as a GA? How flexible are you willing to be with your crossover operator while still having, what you would consider, a GA?

In terms of your own GA implementation, what is the goal of crossover?

[Edited by - willh on July 26, 2010 1:34:15 PM]

#30 Steadtler  Members

Posted 27 July 2010 - 03:11 AM

See kids, this is where GA/DP/ANN leads: semantic arguments over ill-defined methods that give poor results and take so much time and so many hacks and tweaks just to make them work that you could have coded a straightforward solution a hundred times over instead.

#31 Emergent  Members


Posted 27 July 2010 - 02:39 PM

Quote:
Originally posted by Daerax:
excellent post! one qualm, GA without crossover is not really crippled and is really not so different from Simulated Annealing - the latter of which I have used to tune an SVM with good results

@Daerax:

Off topic, but... why couldn't you just solve a quadratic program? I mean, from my point of view, that's the whole advantage to SVMs; they're built using convex optimization, so you don't need stochastic methods like SA/GA/etc...

#32 Kylotan  Moderators


Posted 28 July 2010 - 02:50 AM

Quote:
Original post by Steadtler
See kids, this is where GA/DP/ANN leads: semantic arguments over ill-defined methods that give poor results and take so much time and so many hacks and tweaks just to make them work that you could have coded a straightforward solution a hundred times over instead.

If you don't know what you're doing, sure.

If you know what you're doing, you can get awesome results with a GA on the right problems. Personally, I think GAs are perfectly straightforward.

We don't quibble over the need for or the worth of a decent heuristic for A*, or tell people to stick with DFS because it's more straightforward. So why do we do so over the need for decent operators for GAs? Perhaps it's because we don't have so many concrete applications with concrete heuristics, unlike A* where we have the pathfinding scenario with geographical estimates of distance.

#33 Daerax  Members


Posted 28 July 2010 - 03:52 AM

Quote:
Original post by willh
I can assure you that GA w/o crossover is very much in the literature; it has been a topic of debate for some time. Maybe you're not reading the right material? Pretty much everything I've written as part of this thread has already been talked about in 'the literature'.
You're insisting on a rigid definition that necessarily includes crossover. I reject this definition while maintaining that the baby is still safely in the proverbial tub. You're also insisting that this view is somehow concrete and the only legitimate one; while that may be true within the GD community, it is certainly not the case!
In your view, what type of crossover is required for something to qualify as a GA? How flexible are you willing to be with your crossover operator while still having, what you would consider, a GA?
In terms of your own GA implementation, what is the goal of crossover?

Originally, I believe, the argument was that crossover is equivalent to mutation. This is correct enough for a sufficiently general definition of the terms as to offer little information; any meaningful definition of the terms must partition them. E.g. when considering the mathematical aspects of trying to minimize the probability of destroying schemata, we want to reduce the length of the 'chromosomes' to reduce the probability p_c(S) that crossover will destroy some schema S, and to reduce the size of the 'gene' with regard to mutation affecting S.

Now you are stating that a GA does not require crossover. My contention is that, while this may make sense, a GA without crossover loses the main benefit and core idea of what makes a GA: unlike gradient descent (which often takes a long time to converge and gets stuck in valleys) and the like, it benefits when you can get good improvements to the search by combining good results from different locations.

Without this the GA loses its bite, enough so that it reduces to a poor imitation of an algorithm such as simulated annealing. You may as well use a method such as best-first search. This is not to say that this method (no crossover) won't work - it will, especially if combining from differing locations is detrimental - but it will converge more slowly than, say, momentum-based methods or SA, and may well get stuck at a local extremum.

I think I have exhausted what I can say on the matter. I will leave with this: if you consider the biology metaphor that originated the GA, then the benefit of crossover vs. mutation is the benefit of sexual vs. asexual reproduction. And then this argument reduces to a subject that is older even than computers.

#34 Daerax  Members


Posted 28 July 2010 - 04:29 AM

Quote:
Original post by Emergent
Quote:
Originally posted by Daerax:
excellent post! one qualm, GA without crossover is not really crippled and is really not so different from Simulated Annealing - the latter of which I have used to tune an SVM with good results

@Daerax:

Off topic, but... why couldn't you just solve a quadratic program? I mean, from my point of view, that's the whole advantage to SVMs; they're built using convex optimization, so you don't need stochastic methods like SA/GA/etc...

Excellent question - maybe I was not clear. We are talking about two different things. Using a stochastic method would be very dumb of me when trying to optimize the margin around the dividing hyperplane, but I wasn't talking about this.

SVMs are extremely robust: if you define the problem well and have selected and transformed the features appropriately, they are very, very hard to beat. They can classify even more complex spaces than an MLP neural net. But their main weakness is that they don't generalize so well without correct selection of parameters - they live and die by correct tuning. Selecting a kernel function can be done in terms of understanding the problem domain, but after that you have to search for the parameters (e.g. cost C and gamma for an RBF kernel) that lead to the best results. This is what I used SA for.

See http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1333843 for more
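The common baseline for this kind of tuning is a log-spaced grid search over (C, gamma). A minimal sketch of that baseline follows; the `cv_score` callback is hypothetical and would wrap whatever k-fold evaluation your SVM library provides (SA, as in Daerax's setup, would simply replace the exhaustive loop with a stochastic one).

```python
import itertools
import math

def tune_rbf_params(cv_score, c_grid=None, gamma_grid=None):
    """Search a log-spaced (C, gamma) grid and return the pair with the
    best cross-validated score. `cv_score(C, gamma)` is assumed to be
    supplied by the training pipeline (e.g. k-fold accuracy of an
    RBF-kernel SVM at those settings)."""
    c_grid = c_grid or [2.0 ** k for k in range(-5, 16, 2)]
    gamma_grid = gamma_grid or [2.0 ** k for k in range(-15, 4, 2)]
    return max(itertools.product(c_grid, gamma_grid),
               key=lambda cg: cv_score(*cg))
```

Exhaustive search is fine when each evaluation is cheap; when a single cross-validation run is expensive, a stochastic search such as SA over the same log-scaled space trades coverage for fewer evaluations.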

#35 WeirdoFu  Members


Posted 11 August 2010 - 07:01 PM

I just can't resist the urge... I have to say something.

There really is no argument when it comes to defining terms like crossover, mutation, selection, etc. because they have all been defined in literature and they all fall under the umbrella of evolutionary computation. There is also an agreed upon definition of an evolutionary algorithm (EA) out there (dating back to 1993).

Just to point out a few things. A GA without crossover is not a GA - it is actually an evolution strategy (ES). And the cool thing about ES is that it varies the amount of mutation based on the successes of previously applied mutations. However, crossover was later added into ES in one form or another to increase its robustness. Other additions to ES include complex covariance matrices to properly guide the search, which at that point I find pretty much overkill.
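The self-adaptation described here can be sketched as a (1+1)-ES with the classic 1/5th-success rule - a textbook construction, not code from any poster: the mutation strength grows when mutations succeed often and shrinks when they rarely do.

```python
import random

def one_plus_one_es(f, x0, sigma=1.0, steps=2000):
    """(1+1)-ES minimizing f, with the 1/5th-success rule adapting the
    mutation strength sigma from the recent success rate."""
    x, fx = x0, f(x0)
    successes = 0
    for i in range(1, steps + 1):
        child = x + random.gauss(0.0, sigma)     # single Gaussian mutation
        fc = f(child)
        if fc < fx:                              # child replaces parent
            x, fx = child, fc
            successes += 1
        if i % 20 == 0:                          # adapt sigma every 20 mutations
            rate = successes / 20
            sigma *= 1.5 if rate > 0.2 else 0.82
            successes = 0
    return x, fx
```

Note there is no crossover anywhere: the population has size one, and all exploration comes from the adaptive mutation strength - which is exactly why "GA without crossover" lands closer to ES than to a GA.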

Now as to the merits of crossover: that is arguable, but for all EAs to work properly, it boils down to exploitation vs. exploration. The point of crossover is to promote exploration of the search space by incorporating partial solutions from other candidate solutions. Mutation is simply a form of exploitation of the current solution and the local solution space. However, depending on the convergence of the population, the roles of the two can flip around. So, going back to a mutation-only EA: to make it work well, you must somehow counterbalance the exploitative nature of mutation with some other mechanism that promotes exploration. In the case of the original ES, varying the amount of mutation tried to promote exploration, but it seems that, comparably, crossover created more exploration, which is why few stuck with the original ES structure.

Back to the original poster's question about genetic programming. As cool as it would be to have GP in games, chances are slim that it would become practical, though there may be some use for it somewhere in the development pipeline. Also, note that GP does not have to evolve actual pieces of code. Most practical applications of GP involve using it to do curve fitting by evolving the actual mathematical equation for the curve or data in question. More useful for game AI would probably be evolutionary programming (EP), which, if memory serves me correctly, was originally designed to evolve finite state machines - a much more relevant concept for practical game AI. So, for example, evolving the state machine that defines an AI agent's behavior on the fly through gameplay feedback.

In the past decade or so, people have been trying to improve on EAs by pretty much mixing and matching everything they can come across, so no one really works exclusively with any of the 4 originals anymore (GA, ES, EP, GP). At this point, the lines are so blurred that it is quite pointless in arguing whether something is a GA or not. The fact is that they're all evolutionary algorithms in one shape or form.

Oh yeah, and for people who are obsessed about gradient ascent/descent, there is the Society of Hill-Climbers and the evolutionary society of hill-climbers that is definitely worth checking out.

#36 AngleWyrm  Members


Posted 12 August 2010 - 10:21 AM

Quote:
Original post by noppanit
I'm just curious - has anybody used genetic programming for game AI before? I mean, I'm doing my project and I just need some experience.

The pen-and-paper dice game Traveller has ship combat where the players design their own ships, with each ship's stats encoded as a sequence of numbers. Someone programmed a genetic algorithm to generate the ship design sequences and pit them against each other.

The winning combination was a swarm of medium fighters with heavy armor and no capital ships. The game designer was pissed that his game had been 'solved' by computer simulation, and the next year he released the updated rule books just three days before the annual competition, to try to prevent computerized solutions.

#37 Emergent  Members


Posted 13 August 2010 - 08:08 AM

Quote:
Original post by Daerax
[...]search for what parameters (e.g. Cost and gamma for a RBF kernel) lead to the best results. This is what I used SA for.
See http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1333843 for more

Oh, ok; that makes sense. Have you seen the work that directly optimizes the kernel matrix subject to the constraint that it remain positive definite (e.g. this)? That keeps the problem convex, and is more general than tuning, say, RBF parameters. (Of course, I appreciate that "more general" is a double-edged sword in learning, so I understand why tuning as in your case might be preferred.)

#38 Alrecenk  Members


Posted 15 August 2010 - 06:04 PM

Quote:
Original post by noppanit
I'm just curious - has anybody used genetic programming for game AI before? I mean, I'm doing my project and I just need some experience.

I'm not going to get into the genetic programming vs genetic algorithms vs gradient descent argument, but I will say that as I am typing this I am running genetic algorithms to learn strategies for the base set of the card game Dominion: http://en.wikipedia.org/wiki/Dominion_%28card_game%29 . The game has a lot of cards with varying effects and relatively simple effective combos. It would be quite a pain to write intelligent AI for every card, and doing it this way lets me generate multiple AI players that all play differently while still playing fairly well.

In my experience genetic algorithms are not good at finding the "best" answer to a problem, but they are good at finding okay answers. What makes them fun is that they are good at finding unexpected answers.

#39 Daerax  Members


Posted 15 August 2010 - 06:17 PM

Quote:
Original post by Emergent
Quote:
Original post by Daerax
[...]search for what parameters (e.g. Cost and gamma for a RBF kernel) lead to the best results. This is what I used SA for.
See http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=1333843 for more

Oh, ok; that makes sense. Have you seen the work that directly optimizes the kernel matrix subject to the constraint that it remain positive definite (e.g. this)? That keeps the problem convex, and is more general than tuning, say, RBF parameters. (Of course, I appreciate that "more general" is a double-edged sword in learning, so I understand why tuning as in your case might be preferred.)

No I haven't, but it looks really interesting. The ability to learn model classes without local minima is great! And the fact that they "have also shown how optimizing a linear combination of kernel matrices provides a novel method for fusing heterogeneous data sources" - plus another paper from MSR I just scanned that tells how to account for missing data in sources - is excellent.

But I can't really justify implementing it - it looks like it would take weeks to a month or more of full-time work for an unquantifiable gain. I mean, the technique itself is more general, but there is nothing in the paper that leads me to believe that it learns a more general space. In particular, they say its error rate is generally lower than typical, but how do I know it is not general only for the specific data they evaluated it on? And IMHO the improvement is marginal enough not to be worth the effort. Do you have any advice?

See, the problem I've faced in implementing a bunch of these advanced methods is that when you are not looking at the iris data set, or whether it will rain, or amino acid data, the methods tend to crumble and barely outperform the simpler methods (out of the box). The methods then aren't so performant (in time, space and generalization rate) when dealing with fairly large datasets *on a single typical computer* - leading to a lot of time spent "improving by compromising" the method to get better results on your problem. Machine learning is currently part art, isn't it?

I've been considering moving a bunch of these matrix-based methods to a GPU, but my target audience doesn't have the latest in GPGPUs (I already parallelize, so when 8-16 CPUs become the norm I expect to reap those benefits, but with 2 as the average the speed gains are not very nice).

Anyway, you know a lot about this stuff - do you have any specific advice on learning kernels via SDP versus typical methods?

EDIT: To clarify, I am not saying that the more advanced methods aren't worth it. Far from it: they tend to outperform the simpler methods, but only after a lot of work is done to optimize them for your problem. Whereas naive Bayes, a decision tree, or say kNN just work, Bayesian nets, SVMs or NMF require a significant amount of time 'adjusting' - I find myself reading the proofs of the methods to gain a better understanding for tweaking, whereas the other stuff is intuitive - so they require a lot more work but then blow the simpler ones out of the water, most of the time. So I always find myself questioning whether the improvement will justify the extra work.

[Edited by - Daerax on August 16, 2010 12:17:41 AM]
