Markov chain and world representation

Started by
16 comments, last by Neoshaman 20 years, 4 months ago
ok, don't worry
I'm still doing some research too

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>
You have a set of objects (a context or scene) where actions can be taken by objects toward other objects and produce events (time is also an object and produces actions).
Events are consequences of a ruleset and are created by actions when a condition is met, while actions are created by objects, and objects are changed by events and react by producing an action, etc...
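A minimal sketch of that object → action → event → object loop, just to make the idea concrete (all names here - `World`, `burn_rule`, the fire/wood objects - are invented for illustration, not from any library or from the original post):

```python
# Sketch of the loop: objects take actions, rules turn actions into events
# when their condition is met, and events change objects in turn.

class World:
    def __init__(self):
        self.objects = {"fire": {"lit": True}, "wood": {"burnt": False}}
        self.rules = [self.burn_rule]

    def burn_rule(self, action):
        # A rule produces an event only when its condition is met.
        actor, verb, target = action
        if verb == "touch" and self.objects[actor].get("lit"):
            return ("burnt", target)          # the event
        return None

    def step(self, action):
        events = [e for rule in self.rules if (e := rule(action))]
        for name, target in events:           # events change objects...
            self.objects[target][name] = True
        return events                         # ...which may react with new actions

world = World()
print(world.step(("fire", "touch", "wood")))  # -> [('burnt', 'wood')]
print(world.objects["wood"])                  # -> {'burnt': True}
```

Actions that fail a rule's condition simply produce no event, which matches the "only a change which is possible" idea discussed later in the thread.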

Is this correct? How are these (actions and events) normally represented in semantics?
And do any papers exist about AIs which create their own language and dialect from experience? And rather than labelling each pattern, how do semantic representations deal with things like adjectives?

I have some leads of my own, but I wanted to know what has already been done here, and whether this makes sense at all...

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>
quote:Original post by Neoshaman
you have a set of objects (context or scene) where actions can be taken by objects toward other objects and produce events (time is also an object and produces actions)


So, in other words, you're saying that anything that can act (an agent, for instance) can act on any other thing. That's fair enough.

quote:Original post by Neoshaman
events are consequences of a ruleset and are created by actions when a condition is met


Do you mean that events occur (i.e., actions are executed, which produce events) any time that their precondition is met?

quote:Original post by Neoshaman
while actions are created by objects, and objects are changed by events and react by producing an action, etc...


Yes... so you get continual activity occurring... this is the premise behind agent functions. For any possible state that the agent is in, it knows an action to perform that takes it to another state that it could be in... and so on...
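That agent-function idea (every state maps to a known action, which leads to the next state) can be written as two plain lookup tables; the states and actions below are invented purely for illustration:

```python
# A tiny agent function: a policy mapping every state to an action,
# and a transition table mapping (state, action) to the next state.
policy = {"hungry": "seek_food", "has_food": "eat", "full": "rest"}
transition = {
    ("hungry", "seek_food"): "has_food",
    ("has_food", "eat"): "full",
    ("full", "rest"): "hungry",   # rest long enough and hunger returns
}

def run(state, steps):
    trace = [state]
    for _ in range(steps):
        action = policy[state]               # the agent function: state -> action
        state = transition[(state, action)]  # the world: (state, action) -> state
        trace.append(state)
    return trace

print(run("hungry", 3))  # -> ['hungry', 'has_food', 'full', 'hungry']
```

Because every state has an entry, activity never stalls - exactly the "continual activity" described above.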

quote:Original post by Neoshaman
how are these (actions and events) normally represented in semantics?

There are many different representations for such agents. Subsumption architecture is one, policies are another. This is a very broad area of research, so you'd be best served by looking at one area - like subsumption architecture - and then expanding your reading from there.
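As a rough illustration of the subsumption idea (a sketch only, not any particular implementation): behaviours are stacked in priority layers, and the first high-priority layer whose trigger fires subsumes everything below it. All the behaviour names here are made up:

```python
# Sketch of a subsumption stack: each layer may claim control; the first
# (highest-priority) layer whose trigger fires suppresses the layers below.
def avoid_obstacle(percept):
    return "turn_left" if percept.get("obstacle_ahead") else None

def seek_goal(percept):
    return "move_toward_goal" if percept.get("goal_visible") else None

def wander(percept):
    return "wander"   # lowest layer: a default that always fires

LAYERS = [avoid_obstacle, seek_goal, wander]  # highest priority first

def act(percept):
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:
            return action

print(act({"obstacle_ahead": True, "goal_visible": True}))  # -> turn_left
print(act({"goal_visible": True}))                          # -> move_toward_goal
print(act({}))                                              # -> wander
```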

quote:Original post by Neoshaman
and do any papers exist about AIs which create their own language and dialect from experience? and rather than labelling each pattern, how do semantic representations deal with things like adjectives?


There was a very interesting research project a few years back regarding computer agents that migrated around to different computers and spoke to other computer agents. They developed language, dialect and even primitive grammar, which of course shocked many linguists out there who believe that grammar is hard-wired into our brain! I only saw the reference in New Scientist, but you might be able to search their archives or back issues for a reference.


Good luck,

Timkin

(By the way Neoshaman, your English is getting better! Keep at it! I for one appreciate how difficult it must be to try and discuss these concepts in a language that is not your first. Heck, I couldn't even begin to discuss this in French, and I learned that in high school!)
Thanks for replying, and thanks for the encouragement :D

I have posted questions that go quite far, even though I won't use the answers yet because I don't need to go that far for my game; but I will definitely toy with this for a while.

About events and actions: these are an attempt to find basic terms for explaining dynamics (supposed to mean changes over time), but I was not sure about the difference between actions and events (this bothers me a little).

quote:Do you mean that events occur (i.e., actions are executed, which produce events) any time that their precondition is met?


Well, events are supposed to be (in my mind) a change of state (any change in the context, not only in agents), and it can only be a change which is possible, so rules and conditions are required to produce an event.
But I'm not sure how to define an action; the difference seems to lie in notions like 'passive' (events) versus 'active' (actions), but it's still blurry (can an action be seen as an event? are active and passive only a matter of point of view? does an action require something which takes the action and something which undergoes that action, like the basic language set subject/verb/complement? and would this basic set be the definition of an event?)
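One way to make that active/passive distinction concrete, following the subject/verb/complement idea above (the class and field names are purely illustrative): an action records who does what to what, while an event records only that some state changed, with no actor required:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:           # active: needs a subject that performs it
    subject: str
    verb: str
    target: str

@dataclass(frozen=True)
class Event:            # passive: just a change of state, no actor required
    obj: str
    attribute: str
    old: object
    new: object

# The same happening seen from two points of view: the action is the cause,
# the event is the observable change it produced.
a = Action("wolf", "bite", "sheep")
e = Event("sheep", "health", 10, 4)
print(a, e)
```

Under this representation an action can indeed "be seen as" its event: an observer who only watches state changes sees `e`, never `a`.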

Does this make some sense?

Well, I have not begun the document research yet, because I always start by defining the basics myself, to have something to compare against others' thoughts.

------------
Well, for the language, I'm just training by toying in forums, even in stupid threads, copying successful patterns of language, testing them by imitation and observing the effect. We have a great NN and I don't like batch training (I was bad at school; that's why I turned artist). This is useful for the work I'm doing, and most of the basics discussed here are directly connected to my experience with this game (language, knowledge and interactions), since it's a social game...
------------


>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

[edited by - neoshaman on November 11, 2003 12:14:42 AM]
i haven't exactly followed this thread, so i'll use my standard answer: it really, really, really helps when people give concrete examples. Often, if you force yourself to fill in the details of a specific real example, you'll find that a) you answer your own question, b) the problem isn't what you thought it was, or c) there are small details that need to be solved that, when solved, solve the bigger problem.

As for what NeoShaman wrote, there's the concept of modeling the world. The issue is, what does that mean?

There was also the issue of recognizing an object (an iris) one has never seen before by thinking of similar objects (a rose). This is the standard generalization/analogy/exemplar issue that cognitive science has been working on for, oh, a bazillion years. Lots and lots of people have done papers on this, although i don't know if any would help. Off the top of my head, you could look for "reasoning by analogy", "exemplar-based learning" and "structure mapping". The latter is by Gentner at Northwestern and is 20 years old. In AI, you might look for Wettschereck's RIBL (relational instance-based learner) work from, i think, 1996. But this is something i'm interested in and i haven't found an answer yet.

Hmm, do you want to recognize an object by a list of attributes or from an image? If from an image, consider looking at "geons", which involves dividing an image into a collection of geometric shapes (it's a common topic, so a Google search should pull up plenty of cites). If from a feature list, you could look at decision trees. In fact, the iris example is one of the more common examples from the UC Irvine data mining test bed. You can find perhaps 80 algorithms that recognize irises in the free Java software Weka.
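The decision-tree-on-a-feature-list idea can be shown in miniature. The first split (petal length below about 2.5 cm separates Iris setosa from the other two species) is a well-known property of the classic iris data, but the tiny sample below is hand-made for illustration, not the real dataset, and the second threshold is only approximate:

```python
# A two-split "decision stump" over a feature list, in the spirit of a
# decision tree on the iris data. Samples are (petal_length_cm, species).
samples = [
    (1.4, "setosa"), (1.6, "setosa"), (1.3, "setosa"),
    (4.5, "versicolor"), (4.1, "versicolor"),
    (5.8, "virginica"), (6.1, "virginica"),
]

def classify(petal_length):
    # The first split a decision tree typically learns on iris:
    # setosa vs the rest.
    if petal_length < 2.5:
        return "setosa"
    # A rough second split separates the remaining two species.
    return "versicolor" if petal_length < 5.0 else "virginica"

accuracy = sum(classify(x) == y for x, y in samples) / len(samples)
print(accuracy)  # -> 1.0 on this toy sample
```

A real learner (Weka's J48, for instance) would choose those thresholds automatically from the data rather than having them hard-coded.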

Then there was something about actions. i can pick a rose, so can i pick an iris? If the question is only "how do i know who can do what to what?", could you model that with normal logic? Say, situation calculus and STRIPS or PDDL.
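The "who can do what to what" question maps naturally onto STRIPS-style operators: an action is applicable when its preconditions hold in the current set of facts, and applying it deletes some facts and adds others. A minimal sketch, with all predicate names invented for illustration:

```python
# STRIPS-style action as a function over a set of (predicate, arg) facts.
state = {("holding", "nothing"), ("pickable", "rose"), ("pickable", "iris")}

def pick(state, thing):
    pre = {("holding", "nothing"), ("pickable", thing)}
    if not pre <= state:                 # all preconditions must hold
        return None
    delete = {("holding", "nothing")}    # facts removed by the action
    add = {("holding", thing)}           # facts added by the action
    return (state - delete) | add

state2 = pick(state, "iris")             # "can i pick an iris?" -> yes
print(("holding", "iris") in state2)     # -> True
print(pick(state2, "rose"))              # -> None: hands are already full
```

This is the same applicability check a PDDL planner performs, just written out by hand for one operator.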

NeoShaman, i'm not sure what you're trying to do, but the basic way it works in real life is that you make the most efficient solution that does mostly what it needs to. The result is our brains, which are made up of lots and lots of special-purpose tools for solving special problems.

Consider perception. The code that makes hearing work can also make vision work, although not as well as a dedicated vision system. This is because both sound and vision have vertical elements - pixels in vision, pitch in hearing. Where you get into a problem is side-to-side vision. Hearing has no concept of a horizontal plane/movement, and so the hearing subsystem doesn't handle visual tasks that rely on side-to-side analysis well.

So the hearing part of your brain could substitute for your vision center if need be, although with certain problems. It can't, however, handle your sense of smell. Smell is a mixture of chemicals that need to be broken apart. Smells do not naturally map to a vertical linear discrimination.

What does all that mean? It means the hearing part of the brain is tied intimately to the data it processes. There is no generic intelligence or processing algorithm; it's tied to certain invariant parts of the environment. And it's cheap. It's a one-dimensional matrix because that's all sound needs. Vision reuses the basic processing plans of hearing (because it's more efficient to store it in the DNA that way) but adds a second dimension because visual information is 2D. Neither data structure nor processing algorithm works for smell, so a completely different solution is used for it, one hard-coded to the structure of scents.

So you talk about being able to model the world. There is no one mental model for that. There are lots and lots of little models that get filtered as needed into different models that can work on higher-level systems, with the obvious loss of precision. Your mind has lots of special-purpose data structures and lots and lots of data loss (which is the basis for "generalization"). Everything that can be hard-coded is hard-coded. Things that cannot be predicted (diet, parents, peers, weather, predators, etc.) are handled by systems that break the world into a few discrete options (to get food from someone, either beg, cry, show off or punch) and give you desires (hunger, horniness, hurt, etc.) to help you figure out which ones are best.
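That "few discrete options plus desires" scheme reads like simple utility-based action selection; here is a sketch using baylor's own food example, with completely made-up scoring formulas:

```python
# Drives score a small fixed menu of options; the agent picks the best.
def choose_food_strategy(hunger, pain, confidence):
    options = {
        "beg": 2 * hunger,
        "cry": 2 * hunger + pain,        # crying also signals being hurt
        "show_off": hunger + 3 * confidence,
        "punch": hunger - 5 + pain,      # risky unless already hurting
    }
    return max(options, key=options.get)

print(choose_food_strategy(5, 0, 0))  # -> beg
print(choose_food_strategy(5, 3, 0))  # -> cry
print(choose_food_strategy(2, 0, 4))  # -> show_off
```

The desires (hunger, pain, confidence) do no world modelling at all; they just bias the choice among a handful of hard-coded options, which is exactly the cheapness being argued for.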

So it's not as easy as saying "give me some Markov software and i'll build a world model". There are lots of little systems you'd need to model all actions, all events, all objects, etc.
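Since the thread title mentions Markov chains, it is worth showing how little "Markov software" actually gives you: just a transition distribution P(next state | current state). Everything listed above (objects, events, actions) still has to be encoded into those states by hand. A minimal sketch with invented weather states:

```python
import random

# A Markov chain is just P(next_state | current_state).
transitions = {
    "sunny": [("sunny", 0.8), ("rainy", 0.2)],
    "rainy": [("rainy", 0.6), ("sunny", 0.4)],
}

def step(state, rng):
    states, weights = zip(*transitions[state])
    return rng.choices(states, weights=weights)[0]

def simulate(start, n, seed=0):
    rng = random.Random(seed)    # seeded for a reproducible run
    chain = [start]
    for _ in range(n):
        chain.append(step(chain[-1], rng))
    return chain

print(simulate("sunny", 5))
```

Note that the chain knows nothing about *why* one state follows another; rules, preconditions and effects all have to be flattened into the transition table before a Markov model can say anything about them.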

Hope that helps

-baylor
Well, actually it's not for a general AI, but a specific AI for a virtual world!!!

I am aware of what you are saying, but don't underestimate the power of emergence.

My system is an appraisal system for a 'known hard-coded world'. Since I want the agent to appraise without knowing the world, I made the agent simulate the world in its "brain" (knowledge), with added values which build 'meaning' (taking 'meaning' as the relation between the subject and the world).

Actually it's not directly my current work, but an emergent property that I have pushed a little further from my work on artificial emotion (specific to games). I think it could work in more advanced work on real robots, but the sensors would have to be very sophisticated and the processor strong enough to process multidimensional data from the sensors; current robots seem to have very poor input.

The brain is an emergent system: the hard-coded layer is in the "lower brain", while the high level (the neocortex, I think) is very flexible, uses the "lower brain" and the data from the 'sensors' as input, and tries to simulate the external world. Actually the "lower brain" is the part which adds depth of meaning by appraisal (emotion), which is important in the discrimination of data and in choices such as what to focus on; emotion has a major role in the inhibition and reinforcement of learning too. This can be in conflict with knowledge (the simulated world, or model of the world), which is the 'logic' part, because goals and motivations are emergent from hard-coded instincts in the lower brain (Maslow's ladder is a good approximation of it).
'Consciousness and awareness' are emergent properties of the simulation of the person in the world, as a part of the world and yet distinct from it.

About what is hard-coded in the brain: it's not that easy. There is evidence that when we lose our sight, or in the case of those who are born blind, the neural area's work is reassigned to tasks from neighbouring areas. This is mostly responsible for the sensation of synesthesia (which is a confusion of the senses, or why you can find a sound 'bold' just like an image), and it is a major influence on the creative process and imagination. The reason there are areas in the brain is that they are emergent from the inputs they are linked to and from the basic wiring scheme (between areas; for example, there is a long link between the sight area and the hearing area which goes beyond the immediate neighbourhood). The fact is, the brain works with a large number of sensors (for example, the eye appraises brightness, colour, form, contrast, movement, etc. with separate sensors, unlike most robotic sensors); the high-level brain simply finds patterns at many layers of depth (abstraction) and returns them (knowledge) to the lower brain, which gives them value and returns orders to the effectors (decision), with feedback from these effectors' sensors to be appraised as well and so improve the "simulation" (the real matrix is not around us but in us ).

In fact, a lot of this is not exclusive to the brain, and a lot could be applied to other complex systems; emotion can be seen as the feedback regulation of the brain, and the brain is one of the most 'complex' systems known to humans, after society.

If you want more insight into what I'm saying,
you may be interested in some new sciences which have emerged recently, like:
memetics, the science of complexity and the science of emergence.
Some books I have read:
for memetics: "The Selfish Gene" by Richard Dawkins
for complexity science: "L'homme symbiotique" by Joël de Rosnay
for the science of emergence: "A New Kind of Science" by Stephen Wolfram

have fun

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>
Just as a sidebar...

quote:Original post by baylor
Smell is a mixture of chemicals that need to be broken apart. Smells do not naturally map to a vertical linear discrimination


That's not strictly true. The array of sensory filaments in the olfactory system maps to an array of neurons in the cortex (actually a neuronal column). When a recognised smell irritates the filaments, a specific pattern of activation occurs in this cortex array. This behaviour corresponds to a reduction in the dimensionality of the attractor governing the dynamics of the activation process. That is, when the olfactory system is not being stimulated, the neuronal array displays chaotic activation patterns of high dimensionality (around 10, I think it was... but I can check that if anyone is particularly interested). When it is stimulated, the dimensionality of the dynamics falls to low single figures (you can think of this dimensionality as the number of independent variables/differential equations needed to describe the dynamics uniquely).

In terms of the 'vertical linear discrimination', smells do map to a 2-D linear discrimination (i.e., a fixed pattern). Hence my statement that baylor's statement wasn't completely true. In 1-D, it is true; in 2-D, it's not!


quote:
It's a one-dimensional matrix because that's all sound needs.


Sound is more than just pitch and our brains certainly understand more about sound than just this one variable.

quote:
Vision reuses the basic processing plans of hearing (because it''s more efficient to store it in the DNA that way) but adds a second dimension because visual information is 2D.


Actually, visual information is of much higher dimension than 2D. Spatial information is 2D; then you have lighting information, texture information, motion information and orientation information. In terms of neurons, we have neurons that are specifically sensitive to oriented movement, others that are sensitive to shape, others that are sensitive to lighting, etc. These combine in some very interesting ways (including phase-synchronised chaotic oscillation with time-varying synchronisation levels!) to generate a breakdown of a visual snapshot of the environment. I don't believe it is therefore appropriate to say that sound processing in the brain is 1D and vision is 2D.



quote:
There is no one mental model for that. There are lots and lots of little models that get filtered as needed into different models that can work on higher-level systems, with the obvious loss of precision.


Sounds like someone's been reading Dennett.

quote:
Your mind has lots of special purpose data structures and lots and lots of data loss (which is the basis for "generalization"). Everything that can be hard coded is hard coded. Things that cannot be predicted (diet, parents, peers, weather, predators, etc.) are handled by systems that break the world into a few discrete options (to get food from someone, either beg, cry, show off or punch) and give you desires (hunger, horny, hurt, etc.) to help you figure out which ones are the best


Definitely a psychologist's view... and perhaps one that should be reserved as a view of the mind and how it works, rather than a view of the brain and how that works!

Cheers,

Timkin
quote:Original post by Timkin
There was a very interesting research project a few years back regarding computer agents that migrated around to different computers and spoke to other computer agents. They developed language, dialect and even primitive grammar, which of course shocked many linguists out there who believe that grammar is hard-wired into our brain! I only saw the reference in New Scientist, but you might be able to search their archives or back issues for a reference.


It was the Talking Heads experiment, from the Computer Science Laboratory, by Luc Steels.

Well, I have not had enough time to get into decoding the math in the Markov doc (but I have found decent docs for the other things you gave me to learn). I have read some, but it's heavy technically speaking (lots of math notation to learn, like a language). I could understand it, but I have reached some limits with the MASS of papers I have read on various subjects (art school work too ).

Mmm, could I ask for some GENERAL explanation of how it works? I will fill it in with what I have read; it's just to make sure I have understood things well so far.

It was supposed to be for a game, so I didn't have to go that deep, but I couldn't keep it that way; eventually there is no more gameplay, it would be a pure experimental simulation, and it may end up at artificial emotion and "consciousness" of agents, at least at some low level. I was getting caught by the temptation; it's evil, that thing.

>>>>>>>>>>>>>>>
be good
be evil
but do it WELL
>>>>>>>>>>>>>>>

This topic is closed to new replies.
