The Programming Paradigm

Started by
23 comments, last by bishop_pass 23 years, 5 months ago
There have been some interesting topics in here lately, such as programming languages, scripts, compilers, etc. OK, who is up for dissecting the programming paradigm, especially as it relates to existing languages, how programs are executed, and AI? The 'standard' programming paradigm is to tell the computer what to do, step by step. You think OOP changes this? Hardly. It all boils down to the same stuff in the end. You're telling the computer what to do. Think about this instead: tell the computer what it needs to know. Think in terms of a language and the constructs that are required for this shift in programming.
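To make the contrast concrete, here is a rough Python sketch (every name in it is invented for illustration, not taken from any real system): the first half tells the computer what to do, step by step; the second half only states facts and a rule and lets a trivial engine draw the conclusion.

parent_of = {"ann": ["bob"], "bob": ["cal"]}

# Imperative: tell the computer what to do, step by step.
def grandparents(person):
    out = []
    for p in parent_of.get(person, []):     # step 1: look up the parents
        for gp in parent_of.get(p, []):     # step 2: look up their parents
            out.append(gp)
    return out

# Declarative: tell the computer what it needs to know.
facts = {("parent", "ann", "bob"), ("parent", "bob", "cal")}

def grandparent_rule(kb):
    # grandparent(X, Z) follows from parent(X, Y) and parent(Y, Z)
    return {("grandparent", x, z)
            for (r1, x, y) in kb if r1 == "parent"
            for (r2, y2, z) in kb if r2 == "parent" and y2 == y}

facts |= grandparent_rule(facts)
print(grandparents("ann"))                     # ['cal']
print(("grandparent", "ann", "cal") in facts)  # True

The same answer comes out of both halves; the difference is that the second half can keep answering new questions as more facts and rules are added, without anyone rewriting the loops.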
_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.
quote:The 'standard' programming paradigm is to tell the computer what to do, step by step. You think OOP changes this? Hardly.


I remember this quote from "Stating the Obvious 101". Of course OOP doesn't change the 'standard' paradigm.

Humans think of life in terms of different objects interacting with each other, having different outcomes and effects depending on how these objects interact. Programmers are human. So OOP makes it easier for the programmer.
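A toy Python illustration of that point (the objects here are made up, just to show the "objects interacting" framing):

class Ball:
    def __init__(self):
        self.moving = False

class Dog:
    def chase(self, ball):
        # the outcome depends on how these two objects interact
        ball.moving = True

rex, tennis_ball = Dog(), Ball()
rex.chase(tennis_ball)
print(tennis_ball.moving)   # True

It is still instructions in the end, but the instructions are grouped around things rather than around one global procedure.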

=======================================
Better to reign in hell than serve in heaven.
John Milton, Paradise Lost
quote:Original post by Zipster

Humans think of life in terms of different objects interacting with each other, having different outcomes and effects depending on how these objects interact.


Really? That sums up how humans think?

quote:

Programmers are human. So OOP makes it easier for the programmer.


Sure. Maybe easier. Shift your thinking a little bit more there. I'm stating the obvious?


_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.
Have you ever tried to think of two things at the same time? I'd say it's impossible. You can have more than one thing on your mind (meaning it sits in your "back buffer"), but once you start thinking, you think of one and only one thing. You may have the impression of thinking about several things at once, but what you really do is jump from one item to the next.

Computers do the same, but they can process more than one statement at a time (think of parallel programming, etc.). They have more than one thing in the back buffer.

So what's the difference between human thinking and computer programming?

The structure! Humans are able to jump from one thing to another without forgetting what they've been thinking about. Computers need a straight line to follow. That straight line has been programmed, and nowadays it is represented by OOP, which gives a better overview of what has to be processed and how it is done.

I don't think it will be possible (in the near future) to program a computer to react to or handle information the way the human mind does.

Just my 2 cents,
Metron
----------------------------------------http://www.sidema.be----------------------------------------
quote:
metron wrote:

I don't think it will be possible (in the near future) to program a computer to react to or handle information the way the human mind does.


Well, I don't know. How about creating a program that can modify its own programming based on its interactions with other objects? For this to happen, someone would have to program an environment similar to Earth inside a computer and then let the computer interact with it. It would be interesting to see all of this. Would we then be glued to the screen all the time, interacting with the computer and not with other humans? Are we doing this already? Something to think about...
quote:Really? That sums up how humans think?


I was giving a broad, generalized view. I never said "this is the way it works". You knew that's what I meant. Don't be critical.

quote:Sure. Maybe easier. Shift your thinking a little bit more there. I'm stating the obvious?


You said that OOP doesn't really change the fact that you still have to tell the computer what to do, step by step. Isn't that pretty obvious? Every language boils down to this.


=======================================
Better to reign in hell than serve in heaven.
John Milton, Paradise Lost
Zipster, no hard feelings here.

_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.
OK, think about a few concepts here.

Reevaluate the traditional architecture of a computer (a microprocessor, a sequence of instructions fetched from memory, and the memory itself) and ask how it is either helpful or a hindrance.

Reevaluate the concept of sequenced instructions. Think instead about axioms (bits of knowledge) that can be fired in parallel. In other words, knowledge is pulled up as it is triggered.
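As a very small Python sketch of what "knowledge pulled up as it is triggered" could look like (the rule names are made up for the example): each rule fires whenever its preconditions are present, and nothing fixes the order in which the rules run.

# Each rule is (preconditions, conclusion); a rule fires when its
# preconditions are already in the set of known facts.
rules = [
    ({"raining"}, "ground_wet"),
    ({"ground_wet", "freezing"}, "ground_icy"),
]

def infer(facts, rules):
    changed = True
    while changed:
        changed = False
        for pre, concl in rules:
            if pre <= facts and concl not in facts:
                facts.add(concl)     # knowledge is pulled up as it is triggered
                changed = True
    return facts

print(infer({"raining", "freezing"}, rules))
# -> contains ground_wet and ground_icy in addition to the two inputs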

Reevaluate the notion of the standard program-compile-debug cycle. Instead, imagine that the program you are creating is executing as you create it. How would this work? You are giving it knowledge, not a sequence of instructions. Each additional piece of knowledge you give it increases its domain of awareness. Also, each additional axiom, rule, or piece of knowledge must be logically consistent with all prior pieces of knowledge. Any that are not are rejected, because the program is running as you are creating it.
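A crude Python sketch of "the program runs while you write it" (the fact names and the not_ convention are invented for the example): every new axiom is checked against everything already accepted, and a direct contradiction is rejected on the spot.

class LiveKB:
    def __init__(self):
        self.facts = set()

    def tell(self, fact):
        # reject any axiom whose negation is already believed
        negation = fact[len("not_"):] if fact.startswith("not_") else "not_" + fact
        if negation in self.facts:
            print("rejected:", fact, "contradicts", negation)
            return False
        self.facts.add(fact)
        return True

kb = LiveKB()
kb.tell("sphere_is_quadric")         # accepted
kb.tell("not_sphere_is_quadric")     # rejected: inconsistent with prior knowledge

A real system would need actual inference here (contradictions are rarely this literal), but the shape is the same: accepting new knowledge is itself a computation that happens immediately.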

So you want a program that will raytrace? (A bad example, maybe, but follow along.) First, give the program knowledge of, or expertise in, programming in C. We will never have to tell it how to program in C again. Now give it knowledge about what raytracing is all about. This involves knowledge about pixels, screens, vectors, sphere equations, etc. As we increase its knowledge in this domain, it will catch many possible bugs for us, because anything which is logically inconsistent with its evolving knowledge base is rejected.
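One way to picture "knowledge about pixels, vectors, sphere equations" is as small, named definitions rather than a hand-ordered pipeline. A Python sketch (the knowledge registry and the axiom decorator are invented purely for illustration):

knowledge = {}

def axiom(name):
    # register a piece of domain knowledge under a name
    def register(fn):
        knowledge[name] = fn
        return fn
    return register

@axiom("ray_hits_sphere")
def ray_hits_sphere(origin, direction, center, radius):
    # ray: origin + t*direction; sphere: |p - center|^2 = radius^2
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    return b * b - 4 * a * c >= 0      # real roots exist => intersection

print(knowledge["ray_hits_sphere"]((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # True

The point is not the math itself but that each piece of knowledge is stated once, checked against what is already known, and then available to be combined.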

Ultimately, the more it knows, the easier it is to give it additional knowledge, because most of what it needs is already there. In theory, it becomes easier to work with and more flexible, as opposed to the standard programming methodology, where the larger a program grows, the more difficult it is to work with.

This all may sound farfetched, but it embodies what AI is all about. Tell the computer what it needs to know, not what it should do. Give the program a true understanding of the domain it is working in, and let it derive solutions.
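A last hedged Python sketch of "let it derive solutions" (the goal names are made up): ask for a result and let the system chain backward through what it knows, instead of spelling out the call order ourselves.

# head <- list of subgoals it depends on
rules = {
    "image_rendered": ["rays_traced", "pixels_written"],
    "rays_traced":    ["intersections_found"],
}
known = {"intersections_found", "pixels_written"}

def prove(goal):
    if goal in known:
        return True
    return goal in rules and all(prove(sub) for sub in rules[goal])

print(prove("image_rendered"))   # True: derived from the knowledge, not dictated step by step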

_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.
That doesn't sound very viable, or even efficient. Basically it all comes down to this: we want the computer to do something for us. In order to get it to do that something, we need to tell it how.

The innovations come in making it easier to tell the computer what we want. Newer languages and paradigms are the correct way to proceed. Once you start applying an AI layer, though, it makes things harder, and we don't actually want that. Humans aren't precise; they make mistakes and interpret things in varying ways. That's what the AI would do too. So in order to get the AI to understand what we want, we'd have to give it a massive description, of untold size. So I really think the way to improve is to make small and continuous improvements (and yes, that does mean abandoning all current languages in favor of something slightly better, and doing so every decade).
Programming a super-programmer? Well, this was already done in 1994 by a scientist in New Mexico. The experiment, which lasted three years, failed to be of any use, because the "electronic super-programmer" would never actually work on any particular problem; instead it would simply program a copy of itself to do the job for it.

cmaker

- It's not the principle. It's the money.
cmaker- I do not make clones.

