Then the concept of volitional movement should not be possible, since in a clockwork object everything moves in dependence on everything else and nothing truly changes. Yet the brain seems to be a system of cells that can direct its own behaviour and change its own state. Its behaviour is not one routine repeated over and over; there exist ordered patterns and groupings and, more importantly, self-modification that has (to us) reason behind it. Of course, we define reason from the behaviour of the brain, and then turn around and say there seems to be reason in the brain's actions. But is there actually reason to the brain's self-modification? Any motivation? Why should such a thing exist?
How is this possible? What is an AI expert's take on this?
Suppose the brain were deterministic in the sense that, given the same set of inputs (environment), it would output the same result each time. That is, the same morphisms, or changes of configuration, would always occur because the same actions are performed, resulting in a complex change of mental, memory and emotional state, and so on. This would make it seem that behaviour was being directed, when instead a new state is simply triggered, in which one imagines that such things as where to place a limb, some extra attributions to an internal representation of an object, or which object is currently of interest, are what actually occur. But then where does thought fit in? What processes initiate a state of sentience, or even self-awareness? What are the current means of beginning to approach this in AI?
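To make the determinism claim concrete, here is a minimal toy sketch (entirely my own illustrative assumptions, not a model of any real brain): a system whose next state is a pure function of (current state, stimulus). Replaying the same inputs from the same starting configuration always yields the same trajectory of state changes, so behaviour can look "directed" while being fully determined.

```python
def step(state, stimulus):
    # Pure, deterministic transition: mental, memory and "emotional"
    # state folded into one dict. Same (state, stimulus) in, same state out.
    return {
        "memory": state["memory"] + [stimulus],
        "mood": (state["mood"] + sum(ord(c) for c in stimulus)) % 100,
    }

def run(inputs):
    # Replay a whole environment (sequence of stimuli) from a fixed start.
    state = {"memory": [], "mood": 0}
    trajectory = [state]
    for stimulus in inputs:
        state = step(state, stimulus)
        trajectory.append(state)
    return trajectory

# Identical environments produce identical histories of configurations:
assert run(["light", "sound"]) == run(["light", "sound"])
```

The point of the sketch is only that determinism is compatible with complex, seemingly purposeful state changes; it says nothing, of course, about where thought or self-awareness would enter.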
I will offer my non-AI viewpoint that the key lies in language. I feel that the essence of a weak form of the Sapir-Whorf Hypothesis must be correct. Indeed, it supports my view that physics, and any interpretation of reality, is entirely subjective. For example, the article talks about the Native American Hopi, whose language reflects a thought process in which space and time have no true meaning and processes are key instead - for them Relativity would be entirely intuitive! That observation matters because modern physics has begun to drop such notions as space, time and motion. But I digress.
I instead think languages are a reflection of the thought process; the two are related, although I feel thought determines language, which in turn affects the form of the thought processes - i.e. culture. I conjecture, then, that a first step toward an approximation of AI would have some formal system as the background to its operation. That is, place neural nets on a formal system, so that their operation is built on and categorized by its logic and language. Instead of the *from logic* approach, what about an *of logic* approach?
What happens when you have some state which you wish realized by your NN, and a language whose form is semirandom and evolves (using concepts from self-organization in complex systems, and GAs) toward that state based on the form of said language? Is this sensible, Nick? Please correct my misassumptions.
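To ground the GA half of the idea, here is a minimal sketch of evolving a semirandom "language" (represented, crudely, as just a string of symbols) toward a desired target state. Everything here - the string representation, the fitness function, the parameter values - is my own toy assumption, not a serious proposal for the formal-system idea above.

```python
import random

TARGET = "desired state"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # How many symbol positions already match the desired state.
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Semirandom variation: each symbol may flip to a random one.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )

def evolve(pop_size=50, generations=3000, seed=0):
    random.seed(seed)
    # Start from a fully random population of "utterances".
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            break
        # Keep the fittest half; refill with mutated copies of survivors.
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)
```

Calling `evolve()` typically converges on (or very near) the target string. Of course, in your proposal the "fitness" would have to come from the NN's own state rather than a fixed external target, which is exactly the hard part the sketch glosses over.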