
Markov chain and world representation



#1 Neoshaman   Members   -  Reputation: 170


Posted 26 September 2003 - 04:53 PM

Hello. I'd like to know how to use a Markov chain to build a world representation (with continuous learning). I'm not a mathematician but an artist, so the theoretical documents I have found on the net are not fully understandable to me. The fact is I want to use a Markov chain rather than a GA or NN because I want to allow the NPC to build a world representation and appraise scenes from it.

Here is what I plan: there is a scene Sn which returns a vector of properties X to the agent, Snt = {X1, ..., Xn} (there is a finite number of properties). I want the agent to be able to recognize recurrent elements in a scene so it can build an abstract representation of the scene. I also want the agent to learn about sequences of scenes, to recognize the probable content of a new scene, to guess what the next scene in the sequence will be, and even sequences of sequences, etc. The agent would also react to the percentage of difference between the appraisal of the actual scene and its internal model.

And how can I output a schematic graphic of the current internal state of the representation, to see in real time how the agent represents the world?

Actually it will be used together with other systems, like an emotion engine and a decision engine, but that's another matter; don't mind it for now.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>
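For illustration, here is a minimal sketch in Python of the two pieces above: a scene as a vector of property values, and the percentage of difference between a new scene and a continuously updated internal model. The property names, numbers and update rate are invented for the example.

# A scene is a fixed-length vector of property values in [0, 1].
# The agent keeps a running average as its internal model and measures
# how far each new scene is from that model (its "surprise").

PROPERTIES = ["brightness", "noise", "crowding", "danger"]

def difference(scene, model):
    """Mean absolute difference between a scene and the internal model (0..1)."""
    return sum(abs(s - m) for s, m in zip(scene, model)) / len(scene)

def update_model(model, scene, rate=0.1):
    """Continuous learning: move the model a little toward the observed scene."""
    return [m + rate * (s - m) for m, s in zip(model, scene)]

model = [0.5] * len(PROPERTIES)        # neutral starting model
observations = [
    [0.9, 0.2, 0.1, 0.0],              # bright, quiet, empty, safe
    [0.8, 0.3, 0.2, 0.1],
    [0.1, 0.9, 0.7, 0.8],              # suddenly dark, loud and dangerous
]

for scene in observations:
    surprise = difference(scene, model)
    print("scene =", scene, " surprise = %.2f" % surprise)
    model = update_model(model, scene)

The agent's reaction to novelty can then be driven directly by that surprise value.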


#2 Timkin   Members   -  Reputation: 864


Posted 26 September 2003 - 08:11 PM

quote:
Original post by Neoshaman
I'd like to know how to use a Markov chain to build a world representation (with continuous learning)



Then you need to start by reading heaps of literature on Dynamic Bayesian (Belief) Networks.

quote:
Original post by Neoshaman
well, I'm not a mathematician but an artist, so the theoretical documents I have found on the net are not fully understandable



Then you're starting behind the 8-ball. This topic requires a good understanding of probability and computational learning theory.

quote:
Original post by Neoshaman
the fact is I want to use a Markov chain rather than a GA or NN because I want to allow the NPC to build a world representation and appraise scenes from it



Like NNs, there are two ways to train a Markov model: batch and online. Batch training would involve computing the probability density functions for a training set and then using that model to classify scenes (input sensor vectors). Online training means you need to learn the parameters of these probability distributions based on all (or a subset) of the observations made so far, as well as estimate the state of the world given these model parameters and the latest observation. This is a far more complex problem and, computationally, an exact solution for a reasonably complex domain with nonlinear relationships in time (such as a game scene) is still intractable. Approximation methods abound though, and you might want to consider using one.
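To make the batch/online distinction concrete, here is a minimal Python sketch of an online first-order Markov chain over discrete scene labels (the labels and history are invented). Each observed transition updates a table of counts; the normalised counts are the transition probabilities, and the largest one gives the predicted next scene.

from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))   # counts[a][b] = times b followed a

def observe(prev_scene, next_scene):
    """Online update: fold in one observed transition at a time."""
    counts[prev_scene][next_scene] += 1

def predict(scene):
    """Most probable next scene given everything seen so far."""
    following = counts[scene]
    if not following:
        return None
    total = sum(following.values())
    return max(following, key=lambda s: following[s] / total)

history = ["forest", "clearing", "forest", "clearing", "village", "forest", "clearing"]
for prev, nxt in zip(history, history[1:]):
    observe(prev, nxt)

print(predict("forest"))    # -> "clearing", its most frequent successor so far

Batch training would fill the same table from a whole training set in one pass before use; online training simply keeps calling observe() as the game runs.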

quote:
Original post by Neoshaman
I want the agent to be able to recognize recurrent elements in a scene to build an abstract representation of the scene,
and I want the agent to learn about sequences of scenes too, to recognize the probable content of a new scene, and to guess what the next sequence of scenes will be, and even sequences of sequences, etc.


I'm still trying to get my head around exactly what you want to do here. It sounds like you want to utilise a probabilistic model of the domain (scene), including temporal extension, to infer future scene(s). If this is the case, there has been some limited research in this area for 2D video images. It's exceptionally advanced though, so be warned that most of it will be mathematics and advanced algorithmic concepts. I can track down the reference for you if you like...

quote:
Original post by Neoshaman
and how can I output a schematic graphic of the current internal state of the representation, to see in real time how the agent represents the world



There are many tools for visualising probabilistic belief networks out there. Check Google. You might also want to look at Kevin Murphy's research page at Berkeley. I think he still has a list of software available for dealing with Bayesian networks and causal graphs.
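For a quick home-grown alternative, whatever internal model the agent ends up with can be dumped periodically as a Graphviz DOT description and rendered with any DOT viewer to get the schematic graphic asked about above. A minimal Python sketch, reusing the invented transition-count table from the earlier example:

def to_dot(counts):
    """Turn a table of transition counts into a Graphviz DOT string."""
    lines = ["digraph world_model {"]
    for src, following in counts.items():
        total = sum(following.values())
        for dst, n in following.items():
            lines.append('    "%s" -> "%s" [label="%.2f"];' % (src, dst, n / total))
    lines.append("}")
    return "\n".join(lines)

counts = {"forest": {"clearing": 3}, "clearing": {"forest": 2, "village": 1}}
print(to_dot(counts))    # paste the output into any Graphviz/DOT viewer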


This is an advanced project that you're undertaking, but don't let that put you off. Any advances that you make in this field will probably be publishable, so the benefits to your C.V. are great!

Cheers,

Timkin

#3 Neoshaman   Members   -  Reputation: 170


Posted 27 September 2003 - 03:15 AM

Thanks for replying.

The fact is not that I'm uncomfortable with math, but I dropped math for a more artistic path. I lack some advanced notions, but I'm used to picking them up if someone explains them or if I find good, clear documentation.
For Markov chains there are some notions which bother me, and I didn't know where to learn them.

So if you can guide me, that's fine.

I'm new to the AI field too. I came to it without references, only trying to achieve my goal (making a social game) without thinking about problems but about solutions, taking things from every field possible, even the most exotic (the artist's working mode, both rational and irrational).

I came to the Markov chain through a single little article about speech recognition, and it seemed to me that it was what I needed, that's all.

Now I have to study all the clues you gave me. Thanks.

EDIT:
quote:
This is a far more complex problem and computationally, an exact solution for a reasonably complex domain with nonlinear relationships in time (such as a game scene) is still intractable


Well, I think it can be done with a meta-representation which gives meaning to the model.

I forgot to mention that the scene appraisal works by first appraising objects, with an object defined as a vector of properties and a scene as a vector of objects; the higher layer is then the sequence of scenes.
The meta-representation appraises the world representation and gives it a value through emotional appraisal (emotion can be defined as retroactive regulation in a complex system, which can be positive or negative); the emotion then inhibits or reinforces the decision tree (something like this).
The meta-representation is the meaning layer.

What I also want to know is how to extract something like patterns and give them a label (like words, the kind of thing that creates a sort of language), and how to display them (the world representation), like a UML diagram (classes and a hierarchy between labelled objects, scenes and sequences).
Oh, I forgot: a scene is divided into two sets, events (actions and changes in the scene) and environment (what I mentioned previously, the vector of properties).

Oh! I forgot, I chose Markov because I don't know how to use a NN to create objects; by reversing the Markov model you can predict/create an object from the probabilities of its properties (like in speech recognition).
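A minimal Python sketch of that 'reversing' idea, with invented categories, properties and observations: learn how often each property value was seen for a category, then sample from those frequencies to create a new, plausible object of that category.

import random
from collections import defaultdict

# stats[category][property][value] = how many times that value was observed
stats = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))

def observe(category, obj):
    """Record one observed object, given as a dict of property -> value."""
    for prop, value in obj.items():
        stats[category][prop][value] += 1

def generate(category):
    """Reverse the model: sample a new object from the learned frequencies."""
    obj = {}
    for prop, value_counts in stats[category].items():
        values = list(value_counts)
        weights = list(value_counts.values())
        obj[prop] = random.choices(values, weights=weights)[0]
    return obj

observe("flower", {"colour": "red",    "petals": "many", "size": "small"})
observe("flower", {"colour": "yellow", "petals": "many", "size": "small"})
observe("flower", {"colour": "red",    "petals": "many", "size": "medium"})

print(generate("flower"))   # e.g. {'colour': 'red', 'petals': 'many', 'size': 'small'}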

Well, does this make sense? Any comments?

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

[edited by - Neoshaman on September 27, 2003 12:04:19 PM]

#4 Anonymous Poster   Guests   -  Reputation:


Posted 28 September 2003 - 12:06 PM

Do you actually want to use Markov chains to predict the future from the past, or are you trying to make the agent infer some form of causality in the system it observes?

The latter I would think was more rewarding, useful and (possibly) easier.

e.g. Being able to predict that the user is statistically likely to jump left is not as useful as knowing that he tends to jump left if you shoot at him.

I suppose that these are just different sides of the same coin - one looking at discrete events, the other a continuous history.

@Timkin:
I'd be interested if you could find those references on probabilistic inference in video games. Don't go to too much trouble though... I'll try Google myself.

#5 Neoshaman   Members   -  Reputation: 170


Posted 28 September 2003 - 12:41 PM

I'm not trying to build a model of the player but of the entire world,
something like having an abstract idea of a flower, or objects like that, through recurring properties. Just as the Little Prince from Saint-Exupéry knows only the rose as a flower and doesn't know that an iris is a flower because he has never seen one, yet after having seen a lot of flowers he can say that a tulip is a flower even if he has never seen one. Creating abstract representations is what I want.
Then this representation needs meaning, for example trustful, appealing, dangerous, etc. These would affect decisions and the relation between the NPC and an object or a particular environment (scene). If you have seen a murder in a forest, you will often be afraid, or at least uncomfortable, when you are in a forest; something like that.

I have mixed the current behaviourist method with the gestalt approach.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

#6 Timkin   Members   -  Reputation: 864


Posted 28 September 2003 - 03:29 PM

quote:
Original post by Anonymous Poster
I'd be interested if you could find those references on probabilistic inference in video games. Don't go to too much trouble though... I'll try Google myself.


It wasn't in the context of a video game, but rather video images. The reference came up in a discussion with a colleague at Monash about a year ago. The person doing the research was well known to my colleague, so I should be able to obtain the reference by talking to him again. If you're particularly interested, I'll chase it down for you.

Neoshaman, I'm starting to get a much better feel for what you want to do. Just to make sure I understand, I'm going to talk philosophy for a moment!

Plato introduced the idea (at least for the western world) of the 'form' of an object as being those essential factors that define a thing as separate from other things, although a 'form' was never an actual thing, but rather an idea, or a pure concept. For example, the form of a chair might have 4 legs and something to sit on. The form of a cat has 4 legs, a tail, a head and a body, etc. Instantiations of these 'forms' are the things we see in the world.

This idea appeared again in the Bible as the 'templates of God'... the concept that the essence of something's design pattern was an idea in the mind of God, which embodied the perfect form of the thing.

Now, it sounds to me like you want to be able to learn this sort of concept of 'form' by observation of an environment. So, from seeing lots of chairs, you want to be able to classify another object that you see as either being a chair or not a chair, based on the essential characteristics of the chairs you have seen so far. You'd also like your agents to be able to classify a stool as being chair-like, based on their 'model' of what chairs are like. Is this correct?

If it is, you have a difficult task ahead, as this latter sort of classification requires an understanding of the functional relationships between objects. The former sort of classification can be achieved with many different tools, one of which is the artificial neural network.

There are two approaches that come to mind. One is Fuzzy Logic and the other is Partial Probabilistic Assignment. Fuzzy Logic determines the degree to which something belongs to a specific set, given a set of input attributes and a model of how attribute values map to sets. Partial Probabilistic Assignment is the notion of describing a probability distribution that represents the probability that the thing belongs to any particular set spanned by the distribution. The latter can be achieved using an information theoretic measure for classification, such as MML/MDL.
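To show what the Fuzzy Logic route might look like in practice, here is a minimal Python sketch; the attribute names, membership shapes and numbers are all invented for the example.

def ramp(x, low, high):
    """Piecewise-linear membership: 0 below low, 1 above high."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

def chair_membership(obj):
    """Degree (0..1) to which an object belongs to the fuzzy set 'chair'."""
    leg_score  = ramp(obj["legs"], 2, 4)              # more legs -> more chair-like
    seat_score = ramp(obj["seat_area"], 0.05, 0.15)   # something to sit on (m^2)
    back_score = ramp(obj["back_height"], 0.0, 0.3)   # a back helps but is optional
    # The legs and seat are essential (min); the back only boosts the score.
    return min(leg_score, seat_score) * (0.7 + 0.3 * back_score)

stool = {"legs": 3, "seat_area": 0.10, "back_height": 0.0}
chair = {"legs": 4, "seat_area": 0.16, "back_height": 0.4}
print(chair_membership(stool))   # 0.35 -> somewhat chair-like
print(chair_membership(chair))   # 1.0  -> fully chair-like

The same attribute vector could instead feed a classifier that outputs a probability for each candidate set, which is roughly the Partial Probabilistic Assignment view.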

Of course, if I've completely misunderstood what you want to achieve, then please ignore this post!

Cheers,

Timkin

#7 Neoshaman   Members   -  Reputation: 170


Posted 28 September 2003 - 05:09 PM

Well, if you're saying I would want to train the agent to recognize chairs or objects that already exist, that's not exactly what I'm aiming for; it would create its own classification and give its own names to objects and situations (a kind of creating its own culture).

For the philosophical background I use the Hindu concept of maya, which is not very different from Plato's (and some ideas from quantum physics too).

Actually it's an extension of the model of the game AI I have made, pure curiosity in fact. When I designed that AI I used premade vectors (objects) and no learning of the world (the world is already known, designed by the author), but it seems that I can go beyond that with a simple article I have read.

Actually I use some other thoughts about classification: it's about the value we give to objects, classifying them through the relation we have with them. A chair does not have the same meaning while you are sitting on it as when you get it in the face because someone threw it at you.

I have tried to achieve this through several layers.
The first layer is a rational knowledge of the world, and it's the main subject of this topic, kind of like building just a pure representation.
We need to give meaning to the objects of this representation, so we appraise them from properties (big, small, loud, etc.), and this is where we may need fuzzy logic, but it's already implemented because these values come with a strength.
The third layer is the emotional appraisal, which gives the last layer of meaning and builds the relation of the object towards the agent by giving it a subjective value with primitive emotions (compounded with representation building and appraisal, it will give more complexity).

For example, vectors are appraised:
by properties, which build objects;
by emotions of preference (pleasure/displeasure), which give a preference value (like/dislike);
by emotions of principle (standards), which give a value like praise/blame to actions and changes in the world, related to beliefs;
and beliefs are made from interaction with the environment through emotions.

Emotions build the self as an emergence,
and the agent can appraise actions toward other objects (the fortune of others) by identifying its self with the object (other agents are seen as objects, and reactions build more identification with the self the closer they are to possible reactions of the agent; an alter-ego feeling emerges from this rule).

Emotional appraisals are retroactive regulation: they change the representation of an object (with an object defined as any vector of property primitives/action primitives), and then the beliefs and emotions attached to it.

Properties can be defined as sensor outputs,
but I have no clue yet how to clearly define actions and changes as primitives.

Conclusion: world representations are built both from external values and from the internal values of the agent.

I would say that I'm not a programmer, and I have only programmed on a Casio Graph 80, which has only 64 KB;
a lot of this was designed as if I had to make it on a 64 KB machine.
But I'm still at an early stage of thought; I'm designing this in less than a month for a social game (even if that model will be greatly simplified).

I have used some references from memetics too, as well as systemics and the science of complexity.

EDIT:
I would say that if I have failed to build primitives of action, it's because I have not thought about it; I was more interested in objects. But it's kind of like building a base of rules by appraisal; when I think of it, it's kind of complex, no? The scene would then be the base of facts...
Surely the decision engine will have to take the rational properties of objects and the emotional properties to make a choice (properties lead to possibilities, while emotion raises the better choice by checking the difference between the satisfaction (reward) of an action and the pain of that same action; this should also affect whether an object is blamed (blocks a goal/desire) or praised, and build beliefs).

Does that make any sense? Any criticism?

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

[edited by - Neoshaman on September 28, 2003 12:27:25 AM]

#8 Timkin   Members   -  Reputation: 864


Posted 29 September 2003 - 01:09 PM

Wow, there's so much in your last post that I need to get my head around. I'm going to print it out and have a really good read of it at lunch time. I'll see if there's anything more to add to the conversation after that.

Cheers,

Timkin

#9 Neoshaman   Members   -  Reputation: 170


Posted 11 October 2003 - 01:13 PM

Sorry, I lost my net connection. So, what's the matter? You scare me.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

#10 Timkin   Members   -  Reputation: 864


Posted 12 October 2003 - 03:02 PM

Sorry Neoshaman,

I've been so hectic at work that I haven't gotten around to going over your post again. I'll try and find some time this week, but I'm moving offices, so it may be a little hard.

Timkin

#11 Neoshaman   Members   -  Reputation: 170


Posted 12 October 2003 - 03:07 PM

OK, don't worry.
I'm still doing some research too.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

#12 Neoshaman   Members   -  Reputation: 170


Posted 10 November 2003 - 11:59 AM

You have a set of objects (a context or scene) where actions can be taken by objects toward other objects and produce events (time is also an object and produces actions).
Events are consequences of a ruleset and are created by actions when a condition is met, while actions are created by objects, and objects are changed by events and react by producing an action, etc.

Is this correct? How are actions and events normally represented in semantics?
And do any papers exist about AIs which create their own language and dialect from experience? And rather than labelling each pattern, how does semantics deal with things like adjectives in its representations?

I have some leads of my own, but I wanted to know what has already been done here, if this makes sense too...

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

#13 Timkin   Members   -  Reputation: 864


Posted 10 November 2003 - 12:14 PM

quote:
Original post by Neoshaman
you have a set of objects (a context or scene) where actions can be taken by objects toward other objects and produce events (time is also an object and produces actions)



So, in other words, you're saying that anything that can act (an agent, for instance) can act on any other thing. That's fair enough.

quote:
Original post by Neoshaman
events are consequences of a ruleset and are created by actions when a condition is met



Do you mean that events occur (i.e., actions are executed, which produce events) any time that their precondition is met?

quote:
Original post by Neoshaman
while actions are created by objects, and objects are changed by events and react by producing an action, etc.



Yes... so you get a continual activity occurring... this is the premise behind agent functions. For any possible state that the agent is in, it knows an action to perform that takes it to another state that it could be in... and so on...

quote:
Original post by Neoshaman
how are actions and events normally represented in semantics?


There are many different representations for such agents. Subsumption architecture is one, policies are another. This is a very broad area of research, so you'd be best served by looking at one area - like subsumption architecture - and then expanding your reading from there.
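To give a feel for the subsumption idea, here is a minimal Python sketch; the layer names, sensor fields and actions are invented. Behaviours are stacked in priority order, and the first layer that wants to act suppresses the layers below it.

# Layers in priority order; each returns an action or None ("not my concern").
def avoid_danger(state):
    if state["danger"] > 0.7:
        return "flee"
    return None

def satisfy_hunger(state):
    if state["hunger"] > 0.5:
        return "seek_food"
    return None

def wander(state):
    return "wander"            # the lowest layer always has something to do

LAYERS = [avoid_danger, satisfy_hunger, wander]

def act(state):
    """Subsumption: the highest-priority layer with an opinion wins."""
    for layer in LAYERS:
        action = layer(state)
        if action is not None:
            return action

print(act({"danger": 0.9, "hunger": 0.8}))   # -> "flee" (danger suppresses hunger)
print(act({"danger": 0.1, "hunger": 0.8}))   # -> "seek_food"
print(act({"danger": 0.1, "hunger": 0.2}))   # -> "wander"

A policy, by contrast, would just be a table or function mapping every possible state directly to an action.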

quote:
Original post by Neoshaman
and do any papers exist about AIs which create their own language and dialect from experience? And rather than labelling each pattern, how does semantics deal with things like adjectives in its representations?



There was a very interesting research project a few years back regarding computer agents that migrated around to different computers and spoke to other computer agents. They developed language, dialect and even primitive grammar, which of course shocked many linguists out there who believe that grammar is hard-wired into our brain! I only saw the reference in New Scientist, but you might be able to search their archives or back issues for a reference.


Good luck,

Timkin

(By the way Neoshaman, your English is getting better! Keep at it! I for one appreciate how difficult it must be to try and discuss these concepts in a language that is not your first. Heck, I couldn't even begin to discuss this in French, and I learned that in high school!)

#14 Neoshaman   Members   -  Reputation: 170


Posted 11 November 2003 - 05:12 PM

Thanks for replying, and thanks for the encouragement :D

I have posted questions that go quite far, even if I won't use the answers yet because I have no need to go that far for my game, but I will definitely toy with this for a while.

About events and actions: these are an attempt to find basic notions for explaining dynamics (supposed to be change over time), but I was not sure about the difference between actions and events (this bothers me a little).

quote:
Do you mean that events occur (i.e., actions are executed, which produce events) any time that their precondition is met?


Well, events are supposed to be (in my mind) a change of state (any change in the context, not only in agents), and it can only be a change which is possible, so rules and conditions are required to produce an event.
But I'm not sure how to define an action; the difference seems to lie in notions like 'passive' (events) versus 'active' (actions), but it's still blurry (can an action be seen as an event? Are active and passive only a matter of point of view? Does an action require something which takes the action and something which undergoes it, like the basic language triple subject/verb/complement? And would this basic triple be the definition of an event?)
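One way to make the distinction concrete, as a minimal Python sketch (the world state, the rule and all the names are invented): an action is an attempt by an actor on a target (subject/verb/complement), and an event is the state change it produces, which only happens when the rule's condition is met.

# World state: just a dictionary of facts.
world = {"door_open": False}

# A rule maps a verb to a condition and to the event (state change) it produces.
RULES = {
    "open": {
        "condition": lambda w, actor, target: not w[target + "_open"],
        "event":     lambda w, actor, target: {target + "_open": True},
    },
}

def attempt(world, actor, verb, target):
    """Apply an action; return the produced event (state change) or None."""
    rule = RULES.get(verb)
    if rule and rule["condition"](world, actor, target):
        event = rule["event"](world, actor, target)
        world.update(event)        # the event is the change itself
        return event
    return None

print(attempt(world, "npc", "open", "door"))   # -> {'door_open': True}
print(attempt(world, "npc", "open", "door"))   # -> None (condition no longer met)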

Does this make some sense?

Well, I have not yet begun researching documents, because I always start by defining the basics myself, so that I have something to compare other people's thoughts against.

------------
Well, for the language, I'm just training by toying around in forums, even in stupid threads, copying successful patterns of language, testing them by imitation and observing the effect. We have a great NN, and I don't like batch training (I'm bad at school, that's why I turned artist). This is useful for the work I'm doing, and most of the basics discussed here are directly connected to my experience with this game (language, knowledge and interactions), since it's a social game...
------------


>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

[edited by - neoshaman on November 11, 2003 12:14:42 AM]

#15 baylor   Members   -  Reputation: 122


Posted 24 November 2003 - 09:57 AM

i haven't exactly followed this thread, so i'll use my standard answer: it really, really, really helps when people give concrete examples. Often, if you force yourself to fill in the details of a specific real example, you'll find a) you answer your own question, b) the problem isn't what you thought it was or c) there are small, little details that need to be solved that, when solved, solve the bigger problem

As for what NeoShaman wrote, there's the concept of modeling the world. The issue is, what does that mean?

There was also the issue of recognizing an object (an iris) one has never seen before by thinking of similar objects (a rose). This is the standard generalization/analogy/exemplar issue that cognitive science has been working on for, oh, a bazillion years. Lots and lots of people have done papers on this, although i don't know if any would help. Off the top of my head, you could look for "reasoning by analogy", "exemplar based learning" and "structure mapping". The latter is by Gentner at Northwestern and is 20 years old. In AI, you might look for Wettschereck's RIBL (relational instance based learner) work from i think 1996. But this is something i'm interested in and i haven't found an answer yet

Hmm, do you want to recognize an object by a list of attributes or from an image? If from an image, consider looking at "geons", which is dividing an image into a collection of geometric shapes (it's a common topic so a Google search should pull up plenty of cites). If from a feature list, you could look at decision trees. In fact, the iris example is one of the more common examples from the UC Irvine data mining test bed. You can find perhaps 80 algorithms that recognize irises in the free Java software Weka
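For the feature-list route, a minimal decision-tree-style sketch in Python (a depth-one tree only, and the numbers below are a tiny invented iris-like toy set, not the real UC Irvine data): try every attribute/threshold split, keep the one with the fewest mistakes, then use it to classify new feature vectors.

from collections import Counter

# Invented toy data: (petal_length, petal_width) -> class label.
DATA = [
    ((1.4, 0.2), "setosa"), ((1.6, 0.4), "setosa"), ((1.3, 0.3), "setosa"),
    ((4.7, 1.4), "other"),  ((4.1, 1.3), "other"),  ((5.0, 1.7), "other"),
]

def majority(labels):
    return Counter(labels).most_common(1)[0][0] if labels else None

def learn_stump(data):
    """Depth-1 decision tree: the (feature, threshold) split with fewest errors."""
    best, best_errors = None, len(data) + 1
    for f in range(len(data[0][0])):
        for feats, _ in data:
            t = feats[f]
            below = [lab for x, lab in data if x[f] < t]
            above = [lab for x, lab in data if x[f] >= t]
            below_lab, above_lab = majority(below), majority(above)
            errors = sum(lab != below_lab for lab in below) + \
                     sum(lab != above_lab for lab in above)
            if errors < best_errors:
                best, best_errors = (f, t, below_lab, above_lab), errors
    return best

def classify(stump, feats):
    f, t, below_lab, above_lab = stump
    return below_lab if feats[f] < t else above_lab

stump = learn_stump(DATA)
print(stump)                        # splits on petal length, around 4.1
print(classify(stump, (1.5, 0.3)))  # -> "setosa"
print(classify(stump, (4.5, 1.5)))  # -> "other"

A real decision-tree learner applies the same idea recursively to each side of the split and uses a better split criterion than the raw error count.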

Then there was something about actions. i can pick a rose so can i pick an iris? If the question is only "how do i know who can do what to what?", could you model that with normal logic? Say, situation calculus and STRIPS or PDDL

NeoShaman, i'm not sure what you're trying to do, but the basic way it works in real life is that you make the most efficient solution that does mostly what it needs to. The result is our brain, which is made up of lots and lots of special purpose tools to solve special problems

Consider perception. The code that makes hearing work can also make vision work, although not as well as a dedicated vision system. This is because both sound and vision have vertical elements - pixels in vision, pitch in hearing. Where you get into a problem is side-to-side vision. Hearing has no concept of a horizontal plane/movement, and so the hearing subsystem doesn't handle visual tasks that rely on side-to-side analysis well

So the hearing part of your brain could substitute for your vision center if need be, although with certain problems. It can't, however, handle your sense of smell. Smell is a mixture of chemicals that need to be broken apart. The smells do not naturally map to a vertical linear discrimination

What's that all mean? It means the hearing part of the brain is tied intimately to the data it processes. There is no generic intelligence or processing algorithm; it's tied to certain invariant parts of the environment. And it's cheap. It's a one dimensional matrix because that's all sound needs. Vision reuses the basic processing plans of hearing (because it's more efficient to store it in the DNA that way) but adds a second dimension because visual information is 2D. Neither data structure nor processing algorithm works for smell, so a completely different solution is used for it, one hard coded to the structure of scents

So you talk about being able to model the world. There is no one mental model for that. There are lots and lots of little models that get filtered as needed into different models that can work on higher-level systems, with the obvious loss of precision. Your mind has lots of special purpose data structures and lots and lots of data loss (which is the basis for "generalization"). Everything that can be hard coded is hard coded. Things that cannot be predicted (diet, parents, peers, weather, predators, etc.) are handled by systems that break the world into a few discrete options (to get food from someone, either beg, cry, show off or punch) and give you desires (hunger, horny, hurt, etc.) to help you figure out which ones are the best

So it's not as easy as saying "give me some Markov software and i'll build a world model". There are lots of little systems you'd need to model all actions, all events, all objects, etc.

Hope that helps

-baylor


#16 Neoshaman   Members   -  Reputation: 170


Posted 24 November 2003 - 11:47 AM

Well, actually it's not for a general AI, but a specific AI for a virtual world!

I am aware of what you are saying, but don't underestimate the power of emergence.

My system is an appraisal for a 'known, hard-coded world' where I have the agent appraise without knowing the world; I make the agent simulate the world in its "brain" (knowledge), with the addition of values which build 'meaning' (with 'meaning' as the relation between the subject and the world).

Actually it's not directly my current work, but an emergent property that I have pushed a little beyond my work on artificial emotion (specific to games). I think it could work in more advanced work on real robots, but the sensors would have to be very sophisticated and the processor strong enough to process multidimensional data from the sensors; current robots seem to have very poor input.

The brain is an emergent system. The hard-coded layer is in the "lower brain", while the high level (the neocortex, I think) is very flexible, takes the "lower brain" and the data from the 'sensors' as input, and tries to simulate the external world. Actually the "lower brain" is the part which adds depth of meaning through appraisal (emotion), which is important in the discrimination of data and in choices such as focus. Emotion has a main role in the inhibition and reinforcement of learning too; this is in conflict with knowledge (the simulated world, or the model of the world), which is the 'logic' part, because of goals and motivations which emerge from hard-coded instincts in the lower brain (Maslow's ladder is a good approximation of it).
'Consciousness and awareness' are emergent properties of the simulation of the person in the world, as a part of the world and yet distinct from it.

About what is hard-coded in the brain: it's not that simple. There is proof that when we lose sight, or for those who are born blind, the work of that neural area is reassigned to tasks from neighbouring areas. This is largely responsible for the sensation of synesthesia (the confusion of the senses, or why you can find a sound bold, just like an image), and it is a major influence on the creative process and imagination. The reason there are areas in the brain is that they emerge from the inputs they are linked to and from the basic wiring scheme (between areas; for example there is a long link between the sight area and the hearing area which goes beyond their neighbourhood). The fact is that the brain has a large number of sensors (for example, the eye appraises brightness, colour, form, contrast, movement, etc. with separate sensors, unlike most robotic sensors). The high-level brain simply finds patterns at many layers of depth (abstraction) and returns them (knowledge) to the lower brain, which gives them value and returns orders to the effectors (decision), with feedback from the effectors' sensors to be appraised as well and so improve the "simulation" (the real Matrix is not around us but in us).

In fact, a lot of this is not exclusive to the brain and could be applied to other complex systems. Emotion can be seen as the retroactive regulation in the brain, and the brain is one of the most 'complex' systems known to humans, after society.

If you want more insight into what I'm saying,
you may be interested in some new sciences which have emerged recently:
memetics, the science of complexity and the science of emergence.
Some books I have read:
for memetics, "The Selfish Gene" by Richard Dawkins;
for complexity science, "L'homme symbiotique" by Joël de Rosnay;
for the science of emergence, "A New Kind of Science" by Stephen Wolfram.

have fun

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

#17 Timkin   Members   -  Reputation: 864


Posted 24 November 2003 - 02:11 PM

Just as a sidebar...

quote:
Original post by baylor
Smell is a mixture of chemicals that need to be broken apart. The smells do not naturally map to a vertical linear discrimination



That's not strictly true. The array of sensory filaments in the olfactory system maps to an array of neurons in the cortex (actually a neuronal column). When a recognised smell irritates the filaments, a specific pattern of activation occurs in this cortex array. This behaviour corresponds to a reduction in the dimensionality of the attractor governing the dynamics of the activation process. That is, when the olfactory system is not being stimulated, the neuronal array displays chaotic activation patterns of high dimensionality (around 10 I think it was... but I can check that if anyone is particularly interested). When it is stimulated, the dimensionality of the dynamics falls to low single figures. (You can think of this dimensionality as the number of independent variables/differential equations needed to describe the dynamics uniquely.)

In terms of the 'vertical linear discrimination', smells do map to a 2-D linear discrimination (i.e., a fixed pattern). Hence my statement that baylor's statement wasn't completely true. In 1-D it is true; in 2-D it's not!


quote:

It's a one dimensional matrix because that's all sound needs.



Sound is more than just pitch and our brains certainly understand more about sound than just this one variable.

quote:

Vision reuses the basic processing plans of hearing (because it''s more efficient to store it in the DNA that way) but adds a second dimension because visual information is 2D.



Actually visual information is much higher than 2D. Spatial information is 2D, then you have lighting information, texture information, motion information and orientation information. In terms of neurons, we have neurons that are specifically sensitive to orientated movement, others that are sensitive to shape, others that are sensitive to lighting, etc. These combine in some very interesting ways (including phase-synchronised chaotic oscillation with time-varying synchronisation levels!) to generate a breakdown of a visual snapshot of the environment. I don't believe it is therefore appropriate to say that sound processing in the brain is 1D and vision is 2D.



quote:

There is no one mental model for that. There are lots and lots of little models that get filtered as needed into different models that can work on higher-level systems, with the obvious loss of precision.



Sounds like someone's been reading Dennett.

quote:

Your mind has lots of special purpose data structures and lots and lots of data loss (which is the basis for "generalization"). Everything that can be hard coded is hard coded. Things that cannot be predicted (diet, parents, peers, weather, predators, etc.) are handled by systems that break the world into a few discrete options (to get food from someone, either beg, cry, show off or punch) and give you desires (hunger, horny, hurt, etc.) to help you figure out which ones are the best



Definitely a psychologist's view... and perhaps one that should be reserved as a view of the mind and how it works, rather than a view of the brain and how that works!

Cheers,

Timkin

#18 Neoshaman   Members   -  Reputation: 170


Posted 17 December 2003 - 04:09 PM

quote:
Original post by Timkin
There was a very interesting research project a few years back regarding computer agents that migrated around to different computers and spoke to other computer agents. They developed language, dialect and even primitive grammar, which of course shocked many linguists out there who believe that grammar is hard-wired into our brain! I only saw the reference in New Scientist, but you might be able to search their archives or back issues for a reference.


It was TALKING HEADS, from the Computer Science Laboratory, by Luc Steels.

Well, I have not had enough time to get into the math decoding for the Markov docs (but I have found decent docs for the other things you gave me to learn). I have read some, but it's technically heavy (a lot of mathematical notation to learn, like a language); I could understand it, but I have reached some limits with the MASS of papers I have read on various subjects (art school work too).

Hmm, could I ask for some GENERAL explanation of how it works? I will fill it in with what I have read; it's just to make sure I have understood things well so far.

It was supposed to be for a game, so I don't have to go that deep, but I couldn't keep it that way; eventually there is no more gameplay, it would become a pure experimental simulation, and may end up at artificial emotion and "consciousness" of agents, at least at some low level. I was getting caught by the temptation; it's evil, that thing.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>



