• Content count

  • Joined

  • Last visited

Community Reputation

250 Neutral

About Stoffel

  • Rank
  1. Cogent confabulation

    Thanks for the comments, mnansgar. I agree with your assessment of his presentation style--it was off-putting to me during class as well. As for research, he has been mostly in industry; he's an adjunct at UCSD, not a full-time academic. He founded a company (HNC) that was acquired by Fair Isaac, where he employs neurocomputing to do all sorts of secret things (probably to figure out your credit risk). He is generally credited with creating the first neurocomputer. So though he's been tangentially involved in academia, he's not a pure researcher. I didn't realize he was so poorly published. Thanks for your comments--nice to hear from someone who tried it out.
  2. Cogent confabulation

    Quote:Original post by dmvieau Perhaps a corresponding article pertaining to IEI technology could be presented citing specific cases with very pragmatic details as to how these "confabulation-like" methods are already working, would be of great interest to your group. (I know the "ultra-pragmatic" tend to "lurk" here). Not just vague theories but actual "case histories" of real world application of these techniques! By all means, please present your article--right here would be fine, or just post a new topic. If you'd like to post your methods (they're patented so there's no IP involved, correct?), please do so.
  3. I think the best thing to do to fix this would be to create a synchronization object for the sleeping thread to wait on, instead of an infinite loop. Events serve this purpose. Create a "timeUpdate" event, have the thread that's updating time set the event, and replace your while loop with: while (game->local_time - last_time < minimum_delta) ::WaitForSingleObject(/*..your timeUpdate event..*/, INFINITE); If you guarantee that the event is set at intervals of at least minimum_delta, you can change that while to an if, because a single wait will then guarantee at least that much time has elapsed.
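The same wait-on-a-signal idea in portable C++, as a minimal sketch (the Game struct, field names, and tick values are illustrative stand-ins for the poster's actual game state; a std::condition_variable plays the role of the Win32 event):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// Illustrative stand-in for the poster's game state.
struct Game {
    std::mutex m;
    std::condition_variable timeUpdate; // signalled whenever local_time advances
    long local_time = 0;
};

// Block until at least minimum_delta ticks have elapsed since last_time,
// without spinning: each time the updater signals, re-check the predicate.
void wait_for_delta(Game& game, long last_time, long minimum_delta) {
    std::unique_lock<std::mutex> lock(game.m);
    game.timeUpdate.wait(lock, [&] {
        return game.local_time - last_time >= minimum_delta;
    });
}

// The time-updating thread does the equivalent of SetEvent:
void advance_time(Game& game, long ticks) {
    {
        std::lock_guard<std::mutex> lock(game.m);
        game.local_time += ticks;
    }
    game.timeUpdate.notify_all();
}
```

The predicate overload of wait() re-checks the condition after every wakeup, so a signal that arrives before the waiter starts waiting is not lost.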
  4. Seems to me the ideal real-world platform for hardware GAs is the FPGA. FPGAs are chunks of silicon containing oodles of gates. Their connections are programmed dynamically, so they can do exactly what you say--randomly switch among any number of boolean operations, randomly connect outputs, etc. Googling for "FPGA genetic algorithm" pulls up lots of hits, so this looks like a common area of interest.
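As a software sketch of what such a hardware GA evolves (everything here is invented for illustration): represent a candidate circuit as a list of gates wired to earlier signals, score it against a target truth table (XOR below), and mutate by randomly rewiring gates--the same loop an FPGA-based GA runs in silicon.

```cpp
#include <cstdlib>
#include <vector>

// A gate reads two earlier signals and applies one of four boolean ops.
struct Gate { int in_a, in_b, op; };

// Evaluate a circuit on two primary inputs; the last gate is the output.
bool evaluate(const std::vector<Gate>& circuit, bool x, bool y) {
    std::vector<bool> sig = { x, y };
    for (const Gate& g : circuit) {
        bool a = sig[g.in_a], b = sig[g.in_b], out = false;
        switch (g.op) {
            case 0: out = a && b;    break; // AND
            case 1: out = a || b;    break; // OR
            case 2: out = a != b;    break; // XOR
            case 3: out = !(a && b); break; // NAND
        }
        sig.push_back(out);
    }
    return sig.back();
}

// Fitness: how many rows of the XOR truth table the circuit matches.
int fitness(const std::vector<Gate>& circuit) {
    int score = 0;
    for (int x = 0; x < 2; ++x)
        for (int y = 0; y < 2; ++y)
            if (evaluate(circuit, x, y) == ((x ^ y) != 0)) ++score;
    return score;
}

// Mutation: randomly rewire one gate--the "randomly switch operations,
// randomly connect outputs" step. Gate i may only read signals 0..i+1.
void mutate(std::vector<Gate>& circuit) {
    int i = std::rand() % static_cast<int>(circuit.size());
    circuit[i] = { std::rand() % (2 + i), std::rand() % (2 + i),
                   std::rand() % 4 };
}

// Simple (1+1) evolutionary loop: keep a mutant only if it's no worse.
std::vector<Gate> evolve(std::vector<Gate> best, int generations) {
    for (int g = 0; g < generations; ++g) {
        std::vector<Gate> trial = best;
        mutate(trial);
        if (fitness(trial) >= fitness(best)) best = trial;
    }
    return best;
}
```

On an FPGA the evaluate step is free--the fabric computes it in parallel--which is exactly why GAs and FPGAs pair so well.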
  5. Cogent confabulation

    Quote:Original post by Timkin Quote:Original post by Stoffel His main complaint is that "people who do AI don't pay any attention to neurobiology". I'm curious as to his source of information on this matter. I for one am a person trained in AI and who has worked in neuroscience research and I can tell you that there are hundreds, nay thousands of researchers out there who regularly broach this division of the sciences on a daily basis. Personal experience, it sounded like. I didn't subscribe to his "me against the world"-isms--I chalked it up to his eccentricity. Quote: What he apparently fails to recognise - or refuses to admit - is that most people who work on AI aren't trying to mimic brain function, but are rather trying to produce rational and/or logical behaviour by artificial means. It sounds to me like he's doing a bit of a beat-up of the AI community to promote the importance/novelty of his own work. Could very well be. Quote: Personally, I've seen many 'models' of cognition proposed. As yet, none has solidified itself as an accepted theory by more than a handful of researchers. I guess we'll wait and see what the jury verdict on this one is. Agreed. This is the very first glimpse of this theory, so it's very early in the game. His approach seems to be, "It sounds crazy because it's different than anything you've ever seen, but it works and here's how you do it." I know of a number of people who are already digging into it, so I think we'll see if the proof is in the pudding within the next 5 years.
  6. For pitch exactness, the FFT probably isn't what you want. The poor man's way to do it would be to use a DCT at your desired pitch frequencies; if they don't get over a threshold, they missed the note. The tougher way would be to look into high-resolution frequency analysis (which the FFT is not). For voice coders, we model voice as an ARMA (auto-regressive moving average) process, though if you're only looking for pitch, I believe that's done with autocorrelation. Hopefully that gives you a starting point.
  7. Cogent confabulation

    Quote:Original post by Palidine I guess I'll preface this by saying that I am always completely skeptical (and thereby mildy argumentative) whenever anyone says "this is how <insert brain behavior here> works". Very healthy. I too remain skeptical. Quote: I think that reads as a pretty cool theory. It seems to amount to: "brains think by pattern recognition", right? I mean essentially the article describes his theory as saying that the brain thinks through pattern completion and not through an algorithmic processing of the possible solutions. Is this really new? The novelty of the theory, from what I understand, is that he presents a way for complicated pattern recognition to be broken down into a massive set of co-occurrences. Only two things are checked against each other at a time. Neurons can't do much--they fire or don't fire, and fire with a certain strength. From what he said, most of the current AI field believes the brain works to solve the problem p(e|abcd). That is, if I know the current facts a, b, c and d, what's the most probable event e? And this is decomposed into smaller problems using Bayesian analysis. His approach is the opposite. What's the probability of observing facts a, b, c and d given event e? p(abcd|e). He calls this the cogency. The event e that maximizes this for known facts abcd is the one our brain "picks". His justification for this is that p(abcd|e) is sort of related to (not even proportional to--he goes into great depth in his paper): p(a|e)p(b|e)p(c|e)p(d|e). In other words, the product of the co-occurrences of each fact and event. A machine that would do this would--in parallel--look up all the pairwise co-occurrences and formulate the product (or sum the logs). This is a process he calls confabulation. His argument is that the brain is such a machine--there's evidence that neurons do this sort of thing, and they can't do much else, so this must be the way it works.
Quote: Fundimentally that's how fully connected neural networks operate, and I thought that was the general concensus on how high level memory and other cortical processes work. Essentially certain brain patterns trigger other brain patterns to complete, ad infinitum. There is no "algorithmic processing" of the possible results, but rather the network settles into the local holding pattern that is closest to the inputs received. Yes, I think that's right, although I don't know about neural networks. His main complaint is that "people who do AI don't pay any attention to neurobiology". So even though that's the consensus on how the brain works, that's not how neural nets work. His words, not mine; I'm not knowledgeable enough on the subject. Remember, this isn't some neural-net know-nothing. He was a little modest in class about his accomplishments, but his company (HNC) was acquired by Fair Isaac, he's currently a VP of R&D there, and he's credited with creating the first neurocomputer. He's a pioneer in the field. That said, he makes extraordinary claims that require extraordinary proof. I was personally impressed with the results, but I am new to the field. Quote: I guess I just missed the part where people thought that the brain operated by looking at a possible solution set and running some kind of probablility test on the results. Or have I missed the point entirely and is this article about a revolution in the software simulation of cognition and not about a revolution in the theory of how the brain works? -me Right--his complaint isn't about neuroscience, but that neurocomputing pays no attention to neuroscience. Apparently most traditional AI works by some means of Bayesian application of probabilities to the problem set, and he thinks this is absolutely backward, but nobody's come up with a way to do it "right" until now. He claims that cogent confabulation is the right way.
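A minimal sketch of the confabulation step as described above (the model structure, counts, and names are invented for illustration--this is not Hecht-Nielsen's implementation): pick the conclusion e that maximizes the product p(a|e)p(b|e)p(c|e)p(d|e), summing logs to avoid underflow.

```cpp
#include <cmath>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Co-occurrence model: count(fact, event) and count(event), learned by
// tallying observations; here meant to be filled in by hand or training.
struct CooccurrenceModel {
    std::map<std::pair<std::string, std::string>, double> pair_count;
    std::map<std::string, double> event_count;

    // p(fact | event) from counts, with a tiny floor so an unseen pair
    // doesn't send the log to minus infinity.
    double conditional(const std::string& fact, const std::string& event) const {
        auto it = pair_count.find({fact, event});
        double c = (it == pair_count.end()) ? 0.0 : it->second;
        return (c + 1e-6) / (event_count.at(event) + 1e-6);
    }
};

// Confabulation: return the event maximizing sum_i log p(fact_i | event),
// i.e. the product of the pairwise co-occurrence conditionals (cogency).
std::string confabulate(const CooccurrenceModel& model,
                        const std::vector<std::string>& facts,
                        const std::vector<std::string>& events) {
    std::string best;
    double best_score = -1e300;
    for (const std::string& e : events) {
        double score = 0.0;
        for (const std::string& f : facts)
            score += std::log(model.conditional(f, e));
        if (score > best_score) { best_score = score; best = e; }
    }
    return best;
}
```

Note that only pairwise lookups are needed--each (fact, event) score is independent, which is what makes the "massively parallel" reading plausible.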
  8. I just completed two quarters of Dr. Hecht-Nielsen's course, and he recently went public with his new theory of cognition. You can see the release here. The technical paper abstract is here. It doesn't look like the paper is freely available. The first link has a link to the video of his lecture where he introduced the paper. Using his methods, I was able to build a pretty simple machine that would reasonably fill in the fifth word of a sentence given the first four words. This machine had no knowledge of grammar and did not do a search of learned text--some of the sentences it filled in were completely novel.
  9. jflanglois, your code is doing something completely different from Motorherp's. Someday someone will make a great multi-dimensional array class. Motorherp's solution is in the ballpark and is pretty ingenious, but it has some weird out-of-bounds behavior*. The link earlier to the C++ FAQ had a very good templated Matrix implementation which used a single memory allocation (which is how static multi-dimensional arrays work as well). But I am dismayed to find that in their latest "upgrade" they've changed the class's internal storage to a vector of vectors, which is exactly the same as jflanglois's solution--just with automatic memory cleanup. I believe it's a Bad Thing to individually allocate each row in an array. Not only is there the extra heap overhead--it can also be nasty to your cache, as Motorherp said. I realize I have the minority opinion here, but shouldn't the goal be to dynamically allocate an object that behaves exactly like a statically-allocated array? Hm, I should take this on as a pet project. *Using Motorherp's solution, if the row length is N, accessing obj(0)(N) will give you obj(1)(0). Using jflanglois's solution will just give you a memory violation (maybe).
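A sketch of the kind of class described (a single contiguous allocation behaving like a static 2D array; the class and method names are my own, and the bounds check is what prevents the obj(0, N) wraparound noted above):

```cpp
#include <cstddef>
#include <stdexcept>
#include <vector>

// Dynamically sized 2D array backed by ONE contiguous allocation,
// indexed row-major exactly like a static T[rows][cols].
template <typename T>
class Array2D {
public:
    Array2D(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols) {}

    // Bounds-checked element access: obj(0, cols) throws instead of
    // silently aliasing obj(1, 0).
    T& operator()(std::size_t r, std::size_t c) {
        if (r >= rows_ || c >= cols_)
            throw std::out_of_range("Array2D index out of range");
        return data_[r * cols_ + c];
    }
    const T& operator()(std::size_t r, std::size_t c) const {
        if (r >= rows_ || c >= cols_)
            throw std::out_of_range("Array2D index out of range");
        return data_[r * cols_ + c];
    }

    std::size_t rows() const { return rows_; }
    std::size_t cols() const { return cols_; }

private:
    std::size_t rows_, cols_;
    std::vector<T> data_; // one allocation, cache-friendly row-major layout
};
```

Because the storage is one block, iterating row by row touches memory sequentially, avoiding both the per-row heap overhead and the cache misses of a vector-of-vectors layout.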
  10. Nerd and proud

    My favorite system is still the old World of Darkness d10 system. It's the simplest way for GMs to just make up rolls on the fly.
  11. Back from the dead

    Just a quick post here. I miss GameDev; I wish I had time to check in more often. So, I just started what will hopefully be my final quarter at graduate school. I've finally picked a project and am waiting to talk to my advisor so he can approve it, but I'm sure it's a done deal. I'm going to develop a dynamic steadying algorithm for handheld cameras. It lets me put together my classwork on computer vision (structure from motion), robotic vision, parameter estimation, and image processing. Feels like a winner. I already have some data collected of different subjects filming a test pattern, and I plan to capture some video from The Blair Witch Project to run the algorithm through as a "real world" scenario. 9 weeks left in the quarter, meaning maybe 7 in which to do this project. Crunch time.
  12. Busy busy busy

    It's been at least two weeks since I've checked in here. I've been slammed at work, but the project I'm on finally slipped its deadline, so I have a little breathing room now. This happens the day before I start back to class. Here's a link to my LJ post describing the class--AI fun!
  13. Just wanted to applaud this discussion, this thread is a classic. As far as the golden rule goes, my version of Exceptional C++ by Sutter lists as a guideline: "Prefer cohesion. Always endeavor to give each piece of code--each module, each class, each function--a single, well-defined responsibility." This is in Item 23, where he looks at the Template pattern. Anyway, carry on. I'm a little sad I don't get to do C++ anymore.
  14. Quote:Original post by NightMarez locate all map's (with a illiterator) that holds key 3308 then erase each one of them (multimap.erase(key) ) multimap::erase(key) will erase all entries equal to that key--no need to use iterators. If you want to use iterators, you can use multimap::equal_range to find the range, then use multimap::erase(firstIt, lastIt). As for your second question, it looks like you're encapsulating the map. Clients of the class will have less control over the map itself, leaving the class in charge of how things get added, deleted or changed. That's fine.
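Both approaches side by side, as a small sketch (the wrapper function names and contents are made up for illustration):

```cpp
#include <cstddef>
#include <iterator>
#include <map>
#include <string>

// Erase every entry with a given key: the simple way.
// multimap::erase(key) removes ALL entries equal to key and
// returns how many it removed.
std::size_t erase_by_key(std::multimap<int, std::string>& mm, int key) {
    return mm.erase(key);
}

// The iterator way: equal_range yields [first, last) over the key,
// and the range overload of erase removes them in one call.
std::size_t erase_by_range(std::multimap<int, std::string>& mm, int key) {
    auto range = mm.equal_range(key);
    std::size_t n = std::distance(range.first, range.second);
    mm.erase(range.first, range.second);
    return n;
}
```

The iterator version is only worth the extra code when you need to inspect or partially process the matching entries before removing them.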