MikeD

Members
  • Content count: 500

Community Reputation: 158 Neutral

About MikeD

  • Rank: Advanced Member
  1. masters

    I can recommend the school I took my Masters at: The University of Sussex. They're more specialised in ALife and robotics, but if that's your bag check them out.
  2. Quote: Original post by Alaric
...but, my point is that his understanding of the properties of 'seeing' was completely different from that of someone with 'normal' vision. How would it even be possible for an artificial entity to 'update' its understanding of a particular property based on new experiences, besides a simple mathematical comparison?

How is it possible for us to update our understanding? We do it in a human way; an AI#334 would do it in an AI#334 way. The simplest form of an entity understanding X is the way X changes the entity during any interaction between the entity and X. That is update and understanding; beyond that, each to their own :)

Quote: Original post by Alaric
(really simplified example...bear with me) Another thing would be... It is obviously true that humans do not know everything there is to know. Imagine for a second that research were done and an understood property were to change. For example, imagine if the object known as 'tree' were no longer known as 'tree'. Imagine that tree became 'ball' and ball became 'tree', and then imagine that there were one person who was not told that 'tree' was now 'ball' and vice versa, and no one ever specifically told him that the two had switched. After observation of people exhibiting knowledge of the switch, a person could deduce that a switch had been made...but it's not like there's a formula to determine when the switch had been observed enough times to say that the individual understood that indeed a switch took place. Now imagine the same scenario, but the only being that's "not in the know" is an entity with AI. How would it be able to determine that a switch had taken place without being specifically programmed ahead of time that, after a certain number of observations of other people saying 'tree is ball, ball is tree', it should update its understanding of the world? ...and if it had to be programmed to know when a switch was made, how is that artificial intelligence? To me, that is the intelligence of a specific programmer being represented in an artificial entity...that isn't the intelligence of an artificial entity.

I think your problem here is that you don't know how such an AI would function, not whether such an AI could be built. If an AI was preprogrammed by a programmer to update its understanding automatically, why is that not AI? If you can understand it, can it not be AI? If I make it update in a suitably complex way, so that it appears to be totally autonomous and lifelike, is that AI? If I can tell you precisely how a human being does it, do we stop being intelligent? Or do you want me to program it in such a vague way that its symbolic manipulation isn't built in, but is an emergent property of some ANN or otherwise non-inherently-symbolic system? Mike
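The "certain number of observations" worry in the quoted post can be made concrete. Here is a minimal, hypothetical sketch (my own invention, not from the original discussion; all names are illustrative) of an agent that re-learns a label purely from observed usage. Nothing in it is programmed for any particular switch: it simply believes whichever word-to-object mapping the accumulated evidence currently favours.

```python
from collections import Counter

class LabelLearner:
    """A toy agent that updates its word->object mapping from observed usage.

    There is no hard-coded threshold for any specific 'switch'; the agent
    believes whatever mapping the evidence it has experienced favours.
    """

    def __init__(self):
        # observations[word] counts how often each object was called `word`
        self.observations = {}

    def observe(self, word, obj):
        # one interaction: someone used `word` to refer to `obj`
        self.observations.setdefault(word, Counter())[obj] += 1

    def meaning_of(self, word):
        counts = self.observations.get(word)
        if not counts:
            return None  # no experience of this word yet
        return counts.most_common(1)[0][0]

agent = LabelLearner()
# Old community usage: 'tree' refers to the tall leafy object.
for _ in range(5):
    agent.observe("tree", "tall-leafy-object")
# The community switches: 'tree' now refers to the round bouncy object.
for _ in range(6):
    agent.observe("tree", "round-bouncy-object")
assert agent.meaning_of("tree") == "round-bouncy-object"
```

The point of the sketch is that "update of understanding" need not be a programmer anticipating the switch: a generic evidence-weighing rule handles any relabelling, which is exactly the kind of preprogrammed-yet-autonomous updating the reply above defends.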
  3. Quote: Original post by Scared Eye
There is no such thing as AI. AI is a state in which a machine [un-natural being] can make a choice [to learn, to discard, etc.] while it still has a choice not to take a decision. Just because you can make a machine react to something doesn't make it intelligent; it makes it a program, or a slave.

And explain to me, if you will, why you feel you somehow have free will and are not just a program that's a slave to the rules of physics, and just as deterministic (if a lot more complex) as a "hello world" program?

Quote: Original post by Scared Eye
AI is not possible because the prime factor for intelligence is morality. You will be able to make a machine behave the way you want, allow it to change, but because it is you who allows it to change it cannot be considered to be intelligence.

Morality has nothing to do with intelligence. A person without morals can still be intelligent. Hitler was intelligent. You can create a machine to learn as it acts but, even so, learning is not necessary for intelligence. Take a fully grown human being and ask it a question. Without learning or changing more than a tiny amount (in fact, with some brain-damaged individuals without short-term memory, without changing in any significant way), the individual can be intelligent.

Quote: Original post by Scared Eye
PS: The human being uses 12% of his brain while conscious

This was a mistake made by neurophysiologists a long time ago. I believe you use most of your brain, most of the time.
  4. Quote: Original post by Timkin
Quote: You said "or the objects and events within the Universe would not have properties". I say there are no objects and events within the Universe without observers.

I cannot agree with that. There are ways of measuring "whiteness" that don't rely on humans for the measurement process, only for the translation of that measurement into the human understanding by giving it a label. Paint-matching machines do this regularly. So presumably I can make a sensor to observe something that I cannot observe directly. According to my interpretation of your reasoning, you're saying that if I cannot observe it directly, it doesn't exist. So how could I make something to perceive something that doesn't exist? ... If you're not, however, saying that things don't objectively exist when you remove the observer (and I've misinterpreted you), then could you explain your hypothesis in a little more detail please, so that I can work out what it is you are saying?

Sure, I will try to explain here, because I'm sure we pretty much agree. When I said "there are no objects or events without observers" I meant it in the same way as when I said "there is no white without observers". The fundament is there, but it is a mistake to say that there is even energy or particles in it in any objective way. It just is. Energy and particles are still subjective human terms, defined by our own innate perceptions of such quantities, and are, again, no more objective than "white". Even if you can make a white-detecting machine, it is still only a white-detecting machine to human observers, who define the colour white by their perceptions (their behaviour, essentially) and then build a machine that has certain behaviour when it undergoes similar stimulus to a human observer experiencing white. The machine's understanding of white is, again, not the same as the human understanding, even though it reacts under the same stimulus (not saying you thought otherwise).
In the end, what I'm saying is that if you can point at it, think about it, talk about it, measure it or perceive it, it's subjective reality. The fundament doesn't disappear when we turn our backs. The that-from-which-all-reality-springs still exists, it's just that all perception, all reality, is created second-by-second in/by your head. Mike...still rolling off topic
  5. Quote: Original post by Timkin
...but you've not actually experienced the Unicorn, so by your earlier reasoning you could not say you understand Unicorns, only the components of imagined Unicorns. I do note that you're not actually in disagreement with your earlier comments... but now you claim to have an understanding of the Unicorn based on an understanding of the parts. Or are you still just saying that the only thing you understand is the parts and how they might reasonably interrelate, given your other beliefs about the laws of nature? If this is what you're saying, then I think you're actually closer to my definition of understanding based on models than you might think you are. I still believe that understanding depends only on knowing the properties of a thing (object or event) and the relationships between the thing and other things. What else is there? Therefore, for the Unicorn, understanding could be attained by knowing about the component parts and how they relate to form the Unicorn, and then knowing how the Unicorn relates with its environment. Now, I certainly agree that without actually seeing a Unicorn in its natural environment one's understanding may be limited (because one is unlikely to be able to fully define all of the relationships the Unicorn has with its environment unless one completely understands the Universe). However, does not completely understanding something mean that we don't understand it at all?

As long as you agree that knowing the properties of a thing and the interrelations of things means understanding them, which means having experience of them in some domain, then your understanding of things outside your direct experience is only understanding as far as your application of current experience maps accurately to a similar (if potential rather than instantiated) experience of the described thing.
So, your understanding of the symbol Unicorn might be by analogy, and that is your understanding by having the symbols contained in the analogy grounded in experience. However, that understanding by analogy is only related to your potential understanding of the object Unicorn as far as your grounding of the symbol by analogy is intransient with the potential grounding of the symbol by experience. I've read the above a few times and it's exactly what I'm trying to say. I may have to explain it further, as it's a bit brief and to the point, and my language use might not be self-explanatory enough.

Quote: I'd like to hear some examples of properties that don't require observers.

Quote: Original post by Timkin
Do you believe that if you killed off all of the observers in the Universe, the Universe would not exist, or the objects and events within the Universe would not have properties? I agree that it would no longer exist for those observers. However, imagine an external observer who can see that the Universe exists, both before the internal observers are killed and after. This observer cannot see into the Universe to observe the properties of things in the Universe, but it can verify the continued existence of the Universe. Do the things inside the Universe no longer exist simply because some of the atoms in the Universe change their state? I don't think that's probable. The insides of atomic nuclei still exist even though I cannot see them. I can perform an experiment on a nucleus one day and obtain results, and I can perform that experiment the next day and receive the same results (within my ability to measure comparable results). I cannot see the quarks in the nucleus, nor can I observe them directly. Does that mean they don't have properties? Indeed, while I believe in subjective reality, I believe this only exists in so far as we are subjective about our beliefs about the Universe. That does not mean the Universe itself is subjective.
Thus, I believe that there are properties of objects that exist without observers. However, I do agree that if we were to try and measure these properties, we are necessarily thrust into the realm of subjectivity. Then scientific method is a good tool to use to attempt to narrow down the plausible objective value of that property at the time of measurement. Can science give us absolutes? I don't believe so. But that doesn't mean we are unable to form reasonable (subjective) beliefs about the objective properties of things.

I agree then that understanding is subjective, and I agree that it may therefore be impossible to formulate an objective understanding within communication between two observers, only a communal understanding. However, I do believe that two observers can agree on a joint understanding and that, through independent observation, it may be possible to tune that understanding to reflect the objective reality, even though neither of them will be able to prove that this is the case. I would say that if you were to kill all entities that can interact with the Universe in such a way as to have their behaviour affected intransiently when confronted with certain interactions, which were mass-subjectively labelled as "the colour white", then you would no longer have the colour white. I am not saying that the Universe would disappear on an objective level. I'm simply saying that the human understanding of the colour white would no longer exist, so "white" would no longer exist.

You said "or the objects and events within the Universe would not have properties". I say there are no objects and events within the Universe without observers. There is a fundament, but it just is; it does not have objects. These are created by observers delineating their interactions with their environment. The properties of these objects don't exist either. Without an observer-led differentiation of interaction there is no reality.
There is a fundament from which all reality occurs, but there is no objective reality itself. All reality is subjective and defined by the form of the observer which defines the domain of interactions of that observer with its environment. This includes all attempts to "measure the properties" of the Universe. Measurements are defined by interaction, which is defined by perception and which defines what is measured and how. There are no nuclei without an observer, there are no quarks without an observer, there are no interactions between anything without an observer. There is the fundament that forms the interaction that creates the observation, but as soon as you label it, measure it, observe it, interact with it, it's your subjective reality and no longer the fundament. If you try to talk about it, it becomes the "not that, not that". All our perceptions are defined by an evolutionary process. All our interactions are from a single perspective. There is no white without us, because we define white by our interactions, not because white was always there waiting to be seen. In some ways this is an unimportant distinction, but it's the basis for all science. Which makes it fundamental. I may have wandered off topic here :) Mike
  6. Quote: Original post by Nice Coder
With the unicorn analogy, we were basically given a description of a unicorn. We then made up a new object, with those physical properties, at random. Then, from its behaviour that we know (from the picture-book, or ?), we generate a model of the unicorn's behaviour to stimuli. After all that, we "understand" unicorns, simply by analogy and example.

Remove the word "random" and that's what I believe. Except you could not describe the Unicorn to me by any of those physical properties (in a useful way) without my having some experience of those physical properties. Unless you described those physical properties by analogy to other phenomena I had had experience of. Else it's just ungrounded symbolic processing and contains no understanding :) Mike
  7. On how we learn by analogy:

Quote: Original post by Timkin
But then, by extension, one cannot have an understanding of compositions of things they have experienced, only an understanding of the components, if they've never actually experienced the composite object/event.

In the Unicorn example you could describe to me a horse, a horn and the placement of that horn on the horse's forehead. By my experience of horses and horns and my understanding of how the horn might be placed on the forehead (depending on your explanation), I could have a good visual idea of what you meant by Unicorn. Further, if I had an idea about physiology, I might be able to picture how the horn might attach to the bone. If I'd seen a Narwhal (http://www.worldlandtrust.org/images/paintings/narwhal.jpg) I might have an understanding of the fact it might have evolved from a tooth in some distant ancestor. If I'd seen other creatures with similar head accoutrements, then I might be able to picture how the Unicorn would use its horn in mating displays or in forms of offensive or defensive behaviour. This would be my understanding of Unicorn from analogy, all of which requires experience, none of which might be true.

Quote: Original post by Timkin
I'm not sure that I agree with the definition of qualia... but putting that aside, I think that there are properties separate from observers. We had this discussion in another thread some time ago... I'll try and dig it up and look at it again before continuing...

Shall we find a definition of qualia we both like and discuss things based on that? I'd like to hear some examples of properties that don't require observers. Mike
  8. Quote: Original post by Timkin
Okay... here are some of my surface thoughts on understanding. Feel free to pick them apart and expose the flaws. I'd certainly enjoy refining my ideas. ;)

Always a pleasure in helping people refine their thoughts. Exactly why I posted mine in the first place.

Quote: Original post by Timkin
I'm trying to convince myself that one can understand something without having experienced it, which would mean that understanding and experience are only correlated, rather than causally related (and computers could understand trees). We have the ability to learn by analogy, so in principle we should be able to gain understanding by analogy.

I would not describe this as learning without experience. You must have a domain of experience with the world, and your understanding of the abstracted, communicated ideas is based in that domain of experience. It might not contain the actual phenomenon you're learning about, but your understanding will be limited to the phenomena you've already experienced.

Quote: Original post by Timkin
For those that would say that without the experience there is no understanding, then by deduction I would say that you believe that without qualia, there is no understanding. An interesting notion which fits with Mike's thought that understanding is subjective. So is qualia the key to understanding, or is it just a fancy Latin word for observation and the internal processes generated by observation?

My opinion is that there are no qualia, no ingrained characteristics of anything. Qualia being defined at dictionary.com as "A property, such as whiteness, considered independently from things having that property." There are no properties separate from things. In fact, there are no properties separate from an observer's interaction with a thing (observing being an interaction).
Quote: Original post by Timkin
Put aside for the moment the issue of fluency and assume that both a toddler and an adult learning Chinese know the same limited set of characters and have the same vocabulary and understanding of grammar in Chinese. Does the adult understand Chinese any less than the child, or vice versa? I don't think so, since both could presumably use their limited vocabulary to interact with each other and other Chinese speakers. This would suggest that the difference in experiences of the child and adult does not result in a different understanding of Chinese.

I would still argue that the child and the adult (or, let's make them more generic and call them individual A and individual B) have different understanding in precisely the way that their physiology, and the interaction of that physiology with their environment, differ. If A has seen three trees of different types or, indeed, three different trees, or the same three trees from different angles or at different times of day from B, then they have qualitatively different experiences of trees, even if they are identical twins who have had identical experiences up until that point. That they can communicate is true, but think about the level of communication. They only communicate over the symbol "tree" insofar as their experiences of "tree" share similarities. These similarities may be qualitatively very, very similar, but there will always be a necessary difference (given that the experiences cannot be identical) and, in that difference, no communication or shared understanding occurs. I completely concede that, if you took an individual and created an identical copy and they discussed some experience and used the symbol "tree", then communication would be perfect and their understanding identical. Away from that example, all communication is in degradation and all understanding differentiated by differences in experience.
Quote: Original post by Timkin
Unless, of course, one believes that somehow the toddler and adult encode Chinese differently and the adult is only using pattern-matching algorithms to associate outputs to inputs. Neuropsychology doesn't bear this out. Which areas of the cortex encode language doesn't depend on the way in which you learn that language, so the only possibility is that we encode information within the same area differently when we learn as a child or as an adult. There isn't, to my knowledge (having just spent 2 years working in a neuro team that does research in this and related areas), any evidence that this is the case. That doesn't mean it isn't the case and we may yet learn this... but I doubt it. So, if the toddler and the adult both understand Chinese, then uniqueness of experience does not define unique understanding... and therefore computers could learn Chinese and presumably understand it if they were given the opportunity to learn Chinese as an adult or toddler does. Of course, this means that the computer must be able to ground the symbols of Chinese, and this requires certain sensory abilities.

I completely agree about the neurophysiology part of the above. I don't believe it follows that "uniqueness of experience does not define unique understanding". Understanding is not just about how learning occurs, it is about what learning occurs. A difference in either leads to different understanding. What if individual A had only seen trees and individual B had only smelt them? They would both use the symbol "tree" validly. They would both also be using it differently. There may be some communication and similarity of understanding over the symbol "tree", but you'd agree it would be very limited (if possible at all). This is just an extreme example of what I'm trying to say.
Quote: Original post by Timkin
I think that understanding is the ability to take a model of something and link it to one's other models so as to preserve the consistency of all internal models, and the ability to make predictions not only as to the behaviour of the new thing being modelled, but also the effect on the rest of the things that are understood. Thus, understanding is about building a set of models forming a self-consistent representation of things in the world and how they behave and interact. This would mean that understanding is contextual to the individual's model but not necessarily contingent on their experiences. Because two individuals' models are grounded in the same world, albeit through different interactions with that world, there are sufficient commonalities due to grounding upon which they can share understanding and communicate effectively.

I would not describe that as understanding, although I may be in the minority by not doing so. Maybe we'll find out on this board :) I think ants have understanding. I don't think ants necessarily have to build internal models to have that understanding. I think I have an understanding of heat and cold and pain and happiness and depression that has no origin in internal modelling of any phenomena. I don't think models and understanding are related at all.

Quote: Original post by Timkin
If understanding is then about the consistency of models grounded in the world, then one can understand something without having experienced it, so long as one can ground enough of the larger context of models so as to make accurate predictions with the new model.

Even if internal modelling has nothing to do with understanding, I'd say this bears truth. If I have an understanding of big and grey and leathery skin and prehensile nose and round feet, then your description of an elephant can give me understanding of that concept without my experiencing it.
My understanding is limited by the similarity of experience of those words to the concept in question.

Quote: Original post by Timkin
So, if you want to teach a computer to understand Chinese, you're going to have to teach it to understand a lot of things in addition to Chinese. Of course, this leaves us with an interesting conundrum: how does one understand the first model? My brief statement on this is that, for many animal lifeforms on Earth, it is evident that some understanding is hard-wired into the brain (presumably through evolution). Of course, for human babies, many things are also not understood, and it is a very interesting day indeed spent watching one's child and seeing how they try and build consistent models of the world around them without any starting points!

I honestly don't think it matters whether understanding occurs through ontogenic or phylogenic interaction. It's the final form, and the behaviour this form causes, that defines the understanding (otherwise the copied individual described earlier wouldn't "understand" at all, despite being identical to the individual they're copied from). Mike
  9. Pack Behaviour (Dogs\Hounds)

    As Morbo says, you could google it, or you could just go to the source and read Craig Reynolds' paper "Steering Behaviors for Autonomous Characters", which is available online here: http://www.red3d.com/cwr/steer/ Mike
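For a feel of the paper's approach before reading it, here is a rough Python sketch of one steering behaviour ("seek") in the spirit of Reynolds' formulation: steering is the desired velocity minus the current velocity, truncated to a maximum force. The function name and tuple-based vectors are my own illustrative choices, not taken from the paper's code.

```python
import math

def seek(position, velocity, target, max_speed, max_force):
    """Reynolds-style 'seek': steer towards a target point.

    Returns a steering force (sx, sy) to add to the agent's velocity.
    """
    # Desired velocity points straight at the target, at max speed.
    dx, dy = target[0] - position[0], target[1] - position[1]
    dist = math.hypot(dx, dy)
    if dist == 0:
        return (0.0, 0.0)  # already at the target
    desired = (dx / dist * max_speed, dy / dist * max_speed)
    # Steering = desired - current velocity, truncated to max_force.
    sx, sy = desired[0] - velocity[0], desired[1] - velocity[1]
    mag = math.hypot(sx, sy)
    if mag > max_force:
        sx, sy = sx / mag * max_force, sy / mag * max_force
    return (sx, sy)

# Stationary agent at the origin steering towards (10, 0):
steering = seek((0.0, 0.0), (0.0, 0.0), (10.0, 0.0), max_speed=2.0, max_force=0.5)
# The steering force is truncated to max_force: (0.5, 0.0)
```

The same desired-minus-current pattern underlies the other behaviours in the paper (flee, pursue, separation, cohesion and so on), which is what makes them composable for things like pack behaviour.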
  10. Quote: Original post by Timkin
Mike! It's great to see you around here again! It's been quite some time! Merry Christmas and a Happy New Year to you!

Thanks Timkin, somehow I drifted away from the boards and kinda forgot they existed for a while. I hope your Christmas and New Year were good :)

Quote: Original post by Timkin
ROFLMAO: Heheh... no offence intended, but one could not accuse Mike of not knowing much about AI. He has a postgraduate education in AI from one of Britain's leading schools in that area; he's a member of the AI Interface Standards Committee, the body undertaking the task of developing a common interface standard for AI for the computer games industry; he works for Lionhead Studios and has worked on leading titles like Fable... so, all in all, I'd say Mike knows exactly what he's talking about!

I thought about saying "this is what I've done and who I am" but, in the end, that makes no odds to the discussion. If he thinks I don't know much about AI then he's entitled to his opinion ;) I'm glad you remembered who I am though :)

Quote: Original post by Timkin
Personally, I think that's a very good post Mike. It's somewhat aligned with my own thoughts on the matter, but there are some fundamental issues (differences of opinion) I have with it. I've started to put some down on paper, but my wife is hassling me to get our daughter off to sleep for an afternoon nap. I'll try and put something up tomorrow.

And what were your thoughts in the end? I'd be interested to know; the ideas above were only a first stab and full of inconsistencies and half-baked ideas, I'm sure. Mike
  11. Here are a few thoughts I've been having about AI: what is it we're really trying to achieve, and is it possible? Posting here to gather my thoughts and have them torn apart. I'll start with two definitions, one for intelligence:

intelligence (n.) The capacity to acquire and apply knowledge. The faculty of thought and reason. Superior powers of mind. See Synonyms at mind.

And one for knowledge:

knowledge (n.) The state or fact of knowing. Familiarity, awareness, or understanding gained through experience or study. The sum or range of what has been perceived, discovered, or learned.

So, nice, I've copied and pasted some information from www.dictionary.com; I'm sure you're very proud of me. What do these definitions matter? Intelligence is derived from the state of knowledge and understanding and the process of reasoning over that knowledge. Knowledge is acquired from the process of experience or learning. All experience or learning occurs from the act of interaction and the changes caused by that interaction. Interaction leads to change; we call that change experience; that experience leads to knowledge, which can be measured as a change in behaviour. (Abstract reasoning is covered by this chain of events as the interaction of an entity's brain with itself.) To reduce further, you could say: interaction leads to experience, leads to change in behaviour. Or: interaction leads to internal change, leads to external change. Where does this leave intelligence? It leaves it in the position of being a metric we use to measure the complexity of behaviour and the effects of interaction on behaviour, i.e. how experience changes us. So how can you create an artificial intelligence? Well, what do you mean?
"How can you create an artificial entity with a domain of interactions with an environment that has behaviour of measurable complexity and whose behaviour is altered by it's interactions with its environment"? Well, if that's what you mean, then we already have. If you want some references I'll dig up some papers but, as a thought experiment, imagine a minimal simulation or experiment which contains an entity that matches the above criteria. It's not difficult. Or did you mean "How can you an artificial human being, with the same complexity of behaviour, ability for adaption and understanding of its world as a human being". Well, if that's what you meant then we can't. Why not? Well, we can clearly make a machine or arbitrary complexity, that's no issue. I also have no doubt that we can create a machine with an equal amount of adaptionality as a human being. The only reason why that should not be possible is if you believe we are more than just machines or if you think that there is something special about the chemistry we are constructed from. Something so special that you couldn't model it in a computer of "suitable power". If you do, then I'd like to ask why (beyond spouting "Quantum processes, it's the nano-tubules guv'nor and we can't simulate them, honest", which always feels like an argument of the form "but we must be special else...else...I don't feel special and I like feeling special, can't we just pretend we have souls?"). The problem I have is that of "understanding". Understanding is acquired from experience and experience is formed from interactions with an environment and the changes those interactions cause. By that definition your understanding is regulated by the structure of your being. Of your inputs, internal processes and outputs (as much as you like to arbitrarily delineate such processes from each other or from the act of "being"). 
Change the structure and you change the domain of interactions and the domain of perturbations (how you interact and how you are changed by such interactions). So your understanding is literally that: it is "your understanding". My understanding of "trees" is caused by my every interaction with entities I choose to label "tree". Your understanding of trees is qualitatively different from mine by the differences in our structures and the necessary differences in our experiences: being at different points in space-time for an otherwise similar experience, for instance. A dog's understanding of tree is, again, qualitatively different from our understanding, as it is from every other dog's understanding. There are intransients, which can be measured purely by the intransients in behaviour, as no other metric can tell us anything of any certainty as to the similarities or differences in our understanding, which is why anyone who thinks about other beings for any amount of time comes to the conclusion that they cannot be certain of the realness of anyone but themselves.

So, for a computer to have understanding of trees, it must have experience of trees, which is mediated by its being and defined by its structure and its domains of interactions and perturbations. To have human understanding you need human inputs, human thought processes and human outputs, made from the same chemistry as a human being; else it is computer inputs, computer thought processes and computer outputs, forming computer understanding (and only for that specific computer). So human understanding is impossible for a computer without that computer being of the same form and function as a human being, which would get us absolutely nowhere. In the end, we both have artificial intelligence and we will also never achieve it.
To apply this paradigm to problems in AI, such as John Searle's Chinese Room problem: the man in the room understands Chinese to the point of understanding his environment, the inside of the room, filled with Chinese-symbol input, a book of thought processes and Chinese-symbol output. He does not understand Chinese as a Chinese speaker does, but he understands it by his interactions with its symbols, in a way no Chinese speaker would. That a person talking to the room might measure the intelligence of the room by their interaction with it, and come to the conclusion that the room speaks Chinese in the same way that he speaks Chinese, is unimportant. I no more know that someone replying to this post understands the ideas in the same way I do; in fact, I have guaranteed that they do not (to some degree of understanding). All I can guarantee is that their kind of behavioural response shows some kind of understanding that is different to mine, but has some intransients based on the similarity of the behavioural intransients. And that's all any of us can guarantee, ever. And that's my thinking over for today. Mike
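The "minimal simulation" thought experiment mentioned earlier in this post (an artificial entity with a domain of interactions whose behaviour is altered by those interactions) can be sketched in a few lines. This is a toy illustration under my own assumptions, with invented names, not a reference to any particular paper:

```python
import random

class MinimalEntity:
    """A toy entity whose behaviour changes through interaction.

    Interaction leads to internal change (action weights), which leads
    to external change (a measurable shift in behaviour).
    """

    def __init__(self, actions, seed=0):
        self.weights = {a: 1.0 for a in actions}
        self.rng = random.Random(seed)

    def act(self):
        # Choose an action with probability proportional to its weight.
        total = sum(self.weights.values())
        r = self.rng.uniform(0, total)
        for action, w in self.weights.items():
            r -= w
            if r <= 0:
                return action
        return action  # floating-point fallback

    def experience(self, action, feedback):
        # Internal change caused by the interaction (weights stay positive).
        self.weights[action] = max(0.1, self.weights[action] + feedback)

entity = MinimalEntity(["approach", "avoid"])
for _ in range(50):
    a = entity.act()
    # The environment rewards 'approach' and punishes 'avoid'.
    entity.experience(a, 1.0 if a == "approach" else -0.5)
assert entity.weights["approach"] > entity.weights["avoid"]
```

By the criteria in the post, this entity already qualifies: it has a domain of interactions, its behaviour has measurable complexity, and that behaviour is altered by its interactions with its environment. Whether that deserves the word "intelligence" is exactly the question the post is raising.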