
bishop_pass

The Programming Paradigm


There have been some interesting topics in here lately, such as programming languages, scripts, compilers, etc. OK, who is up for dissecting the programming paradigm, especially as it relates to existing languages, how programs are executed, and AI? The 'standard' programming paradigm is to tell the computer what to do, step by step. You think OOP changes this? Hardly. It all boils down to the same stuff in the end. You're telling the computer what to do. Think about this instead: tell the computer what it needs to know. Think in terms of a language and the constructs that are required for this shift in programming.
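To make the contrast concrete, here is a minimal sketch (in Python; the facts and the single rule are invented for illustration) of what "tell it what it needs to know" could look like: you assert knowledge, and a small inference loop draws the conclusions on its own.

facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

# One rule: two chained "parent" facts imply a "grandparent" fact.
def derive(facts):
    new = set()
    for (r1, a, b) in facts:
        for (r2, c, d) in facts:
            if r1 == r2 == "parent" and b == c:
                new.add(("grandparent", a, d))
    return new

# Fire the rule until no new knowledge appears (a fixed point).
while True:
    fresh = derive(facts) - facts
    if not fresh:
        break
    facts |= fresh

print(("grandparent", "alice", "carol") in facts)  # True: derived, never scripted step by step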

quote:
The 'standard' programming paradigm is to tell the computer what to do, step by step. You think OOP changes this? Hardly.


I remember this quote from "Stating the Obvious 101". Of course OOP doesn't change the 'standard' paradigm.

Humans think of life in terms of different objects interacting with each other, having different outcomes and effects depending on how these objects interact. Programmers are human. So OOP makes it easier for the programmer.

=======================================
Better to reign in hell than serve in heaven.
John Milton, Paradise Lost

quote:
Original post by Zipster

Humans think of life in terms of different objects interacting with each other, having different outcomes and effects depending on how these objects interact.


Really? That sums up how humans think?

quote:


Programmers are human. So OOP makes it easier for the programmer.


Sure. Maybe easier. Shift your thinking a little bit more there. I'm stating the obvious?


Have you ever tried to think of two things at the same time? I'd say it's impossible. You can have more than one thing on your mind (that is, in your "backbuffer"), but once you start thinking, you think of one and only one thing. You may have the impression of thinking about several things at once, but what you actually do is jump from one item to the next.

Computers do the same, but they can process more than one statement at a time (think of parallel programming, etc.). They have more than one thing in the back buffer.

So what's the difference between human thinking and computer programming?

The structure! Humans are able to jump from one thing to another without forgetting what they've been thinking about. Computers need a straight line to follow. That straight line has been programmed, and nowadays it is represented by OOP, which gives a better overview of what has to be processed and how it is done.

I don't think it will be possible (in the near future) to program a computer to react to, or handle, information the way a human mind does.

Just my 2 cents,
Metron

quote:

metron wrote:

I don't think it will be possible (in the near future) to program a computer to react to, or handle, information the way a human mind does.



Well, I don't know. How about creating a program that can modify its own programming based on its interactions with other objects? For this to happen, someone would have to program an environment similar to Earth inside a computer and then let the computer interact with it. It would be interesting to see all of this. Would we then be glued to the screen all the time, interacting with the computer and not with other humans? Are we doing this already? Something to think about...

quote:
Really? That sums up how humans think?


I was giving a broad, generalized view. I never said "this is the way it works". You knew that's what I meant. Don't be so critical.

quote:
Sure. Maybe easier. Shift your thinking a little bit more there. I'm stating the obvious?


You said that OOP doesn't really change the fact that you still have to tell the computer what to do, step by step. Isn't that pretty obvious? Every language boils down to this.


=======================================
Better to reign in hell than serve in heaven.
John Milton, Paradise Lost

OK, think about a few concepts here.

Reevaluate the traditional architecture of a computer (microprocessor, a sequence of instructions fetched from memory, and the memory itself) and whether it is helpful or a hindrance.

Reevaluate the concept of sequenced instructions. Think instead about axioms (bits of knowledge) that can be fired in parallel. In other words, knowledge is pulled up as it is triggered.

Reevaluate the notion of the standard program, compile, debug cycle. Instead, imagine the program you are creating is executing as you are creating it. How would this work? You are giving it knowledge, not a sequence of instructions. Each additional piece of knowledge you give it increases its domain of awareness. Also, each additional axiom, rule, or piece of knowledge must be logically consistent with all prior pieces of knowledge. Any which are not are rejected because the program is running as you are creating it.
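As a toy illustration of that last point, here is a hedged sketch (propositional facts only; the class and its naive negation scheme are invented) of a knowledge base that is "live" while you build it, rejecting any assertion that contradicts what it already holds.

class KnowledgeBase:
    def __init__(self):
        self.facts = set()  # plain strings, e.g. "spheres are round" or "not spheres are round"

    def negation(self, fact):
        return fact[4:] if fact.startswith("not ") else "not " + fact

    def assert_fact(self, fact):
        # The "program" is running as it is built: every new axiom is
        # checked against all prior knowledge before it is accepted.
        if self.negation(fact) in self.facts:
            print(f"rejected: '{fact}' contradicts existing knowledge")
            return False
        self.facts.add(fact)
        return True

kb = KnowledgeBase()
kb.assert_fact("spheres are round")
kb.assert_fact("not spheres are round")  # rejected on the spot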

So you want a program that will raytrace? (Bad example, maybe, but follow along.) First off, give the program knowledge of, or expertise in, programming in C. We will never have to tell it how to program in C again. Now, give it knowledge about what raytracing is all about. This involves knowledge about pixels, screens, vectors, sphere equations, etc. As we increase its knowledge in this domain, it will catch many possible bugs for us, because anything which is logically inconsistent with its evolving knowledgebase is rejected.

Ultimately, the more it knows, the easier it is to give it additional knowledge, because most of what it needs is already there. In theory, it becomes easier to work with and more flexible, as opposed to the standard programming methodology, where the larger a program grows, the more difficult it is to work with.

This all may sound far-fetched, but it embodies what AI is all about. Tell the computer what it needs to know, not what it should do. Give the program a true understanding of the domain it is working in, and let it derive solutions.
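For a flavor of "let it derive solutions", here is a minimal sketch of goal-directed (backward-chaining) reasoning; the rule and capability names are invented for illustration.

# Rules of the form goal <- subgoals, plus primitives the system already "knows how" to do.
rules = {
    "render_image": ["trace_rays", "write_pixels"],
    "trace_rays":   ["have_camera", "have_scene"],
    "write_pixels": ["have_framebuffer"],
}
known = {"have_camera", "have_scene", "have_framebuffer"}

def achievable(goal):
    # Work backward from the goal instead of being handed the steps.
    if goal in known:
        return True
    return goal in rules and all(achievable(sub) for sub in rules[goal])

print(achievable("render_image"))  # True: the plan is derived, not spelled out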

Guest Anonymous Poster
That doesn't sound that viable, or even efficient. Basically, it all comes down to this: we want the computer to do something for us. In order to get it to do that something, we need to tell it how.

The innovations come in making it easier to tell the computer what we want. Newer languages and paradigms are the correct way to proceed. Once you start applying an AI layer, though, it makes things harder, and we don't actually want that. Humans aren't precise; they make mistakes; they interpret things in varying ways. That's what the AI would do too. So in order to get the AI to understand what we want, we'd have to give it a massive description of untold size. So I really think the way to improve is to make small and continuous improvements (and yes, that does mean abandoning all current languages in favor of something slightly better, and doing so every decade).

Programming a super-programmer? Well, this was already done in 1994 by a scientist in New Mexico. The experiment, which lasted 3 years, failed to be of any use because the "electronic super-programmer" would never actually work on any particular problem; instead it would simply program a copy of itself to do the job for it.

cmaker

- its not the principle. its the money.

quote:
Original post by clonemaker

...the "electronic super-programmer" would never actually work on any particular problem; instead it would simply program a copy of itself to do the job for it.


Why do what you can delegate to clones of yourself? Not a bad idea...



OK, bishop_pass, my mistake. I just didn't get what you meant and interpreted it the wrong way.

quote:
Give the program a true understanding of the domain it is working in, and let it derive solutions.


Yes, this is an ideal AI program. But how?


=======================================
Better to reign in hell than serve in heaven.
John Milton, Paradise Lost

quote:

That doesn't sound that viable, or even efficient. Basically, it all comes down to this: we want the computer to do something for us. In order to get it to do that something, we need to tell it how.



I wouldn't be so sure... that's probably what a lot of people said when some "crackpot" came up with the idea of OOP.

quote:

Ultimately, the more it knows, the easier it is to give it additional knowledge, because most of what it needs is already there. In theory, it becomes easier to work with and more flexible, as opposed to the standard programming methodology, where the larger a program grows, the more difficult it is to work with.



I actually had a similar idea to bishop_pass's a while ago. You have some sort of base program, which is run, and it asks you, "What do you want me to do?" You might reply, "Check my e-mails for me."

The program in return will start asking you more questions about its task, e.g. "What is e-mail?", "Who is me?", "How do I check something?" (The base program might have some sort of basic grammatical dissector which can distinguish between verbs/nouns/pronouns/etc., and can therefore handle them in different ways.)

As you answer more questions, the program finds more and more missing knowledge and asks you about it. It may seem that it would just keep finding missing knowledge forever, but I'm fairly sure that eventually the whole thing would stabilize and give you a completely working, bug-free program.

I dunno, obviously this is very theoretical and I have no idea whether it would work or not, but it's an idea. I think one of the major hurdles would be explaining to the program what a user is.
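As a toy sketch of that question-asking loop (the vocabulary, the request, and the naive word-splitting "parser" are all invented; a real system would need genuine language understanding):

known = {"check": "fetch and display", "my": "belonging to the user"}

def learn_task(request):
    # Every word the program does not recognize becomes a question back to the user.
    for word in request.lower().split():
        if word not in known:
            known[word] = input(f"What is '{word}'? ")
    print("Task understood:", {w: known[w] for w in request.lower().split()})

learn_task("check my e-mail")  # asks "What is 'e-mail'?" and records the answer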

This sounds too much like a modern "focus" group - lots of people sitting around talking about anything that strikes their fancy, and they end up going in circles or moving backwards.

I think you've got some good ideas; ya just need some direction. You are trying to come up with a "new" paradigm like OOP? There are other paradigms, too. I think generic programming is in vogue right now (think C++ templates).

Personally, I think attaching oneself to any one paradigm is a bit silly (anyone want to argue that OOP is the best in all situations? heh), though it does make for an interesting discussion. In real life I'd rather just use the language as well as I can in a given situation. Polymorphism? Fine. Multiple inheritance? Fine. Global functions? Fine. Void pointers? Fine. Templates? Fine. Use the right tool for the right situation. When you start lopping off parts of the language as "useless" or "bad" according to some paradigm, you are probably just limiting your abilities.

What I am saying is: why use a paradigm at all?



- null_pointer
Sabre Multimedia

For a program to do the type of knowledge learning some of you are suggesting would take some fairly clever AI. More clever than any AI I've seen functioning. More clever than I expect any AI to be in the foreseeable future.

Many intelligent people have been working in the field of AI for decades now, and they still haven't gotten much past 'Eliza'-style programs (if any AI researchers here bother to flame me, please back it up by posting links to some useful AI that can parse natural language (as many of the posts above suggest should be done) and do anything near the complexity that it would take for the computer to write its own code)...

So you may be asking a bit much at this point in time.

quote:
Original post by gmcbay

For a program to do the type of knowledge learning some of you are suggesting would take some fairly clever AI. More clever than any AI I've seen functioning. More clever than I expect any AI to be in the foreseeable future.


Have you heard of predicate calculus and resolution theorem proving? These are ways of describing and reasoning about knowledge. How about planners and belief systems?
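For readers who haven't met it, here is a minimal sketch of one propositional resolution step, the inference rule at the heart of resolution theorem proving (the clause representation and the example literals are my own invention):

def resolve(c1, c2):
    # Clauses are frozensets of literals; "-p" stands for not-p.
    # Each complementary pair of literals yields one resolvent clause.
    out = []
    for lit in c1:
        neg = lit[1:] if lit.startswith("-") else "-" + lit
        if neg in c2:
            out.append((c1 - {lit}) | (c2 - {neg}))
    return out

# From "rainy or cloudy" and "not rainy", resolution concludes "cloudy".
print(resolve(frozenset({"rainy", "cloudy"}), frozenset({"-rainy"})))  # [frozenset({'cloudy'})]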

quote:

Many intelligent people have been working in the field of AI for decades now, and they still haven't gotten much past 'Eliza'-style programs


'Eliza' is terribly superficial, and AI has progressed way beyond that. One of the trends I notice among Gamedev posters is how they think problems should be solved. They often look at the results they want and move only one step into the problem space to produce those results. This is what 'Eliza' did. The results are very superficial and essentially useless.

quote:

...and do anything near the complexity that it would take for the computer to write its own code)...


LISP programs can write their own code quite easily. As for writing C code, here's a start on how it would be done. Give it knowledge like this:

CodeBody:
InSequenceHas: CodeBodyHeader, OpenCurlyBracket, Statements, ClosingCurlyBracket

CodeBodyHeader:
ValidTypes: ifthen, if, while, dowhile, for

for:
performs: looping
looptestedby: IterationTest

forCodeBodyHeader:
InSequenceHas: forconstruct, OpenParenthesis, Initialization, Semicolon, IterationTest, Semicolon, IterationFunction, CloseParenthesis

IterationTest:
IsA: Expression

looping:
UsedIn: RepetitiveTasks
HasComponents: IterationTest

The above is of course just a start on thinking how it would be done, but the point is, the knowledgebase provides conceptual knowledge on what looping is, what an IterationTest is, and so on.
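One hedged way to make that concrete: store each concept above as a frame (a dict of slots) and walk the InSequenceHas slot to emit a code skeleton, leaving visible gaps where knowledge is missing. The frame and slot names follow the post; the terminal mapping and the emit function are invented for illustration.

kb = {
    "forCodeBodyHeader": {"InSequenceHas": [
        "forconstruct", "OpenParenthesis", "Initialization", "Semicolon",
        "IterationTest", "Semicolon", "IterationFunction", "CloseParenthesis"]},
}

# Terminal symbols map straight to C text; everything else is conceptual.
terminals = {"forconstruct": "for", "OpenParenthesis": "(", "Semicolon": ";",
             "CloseParenthesis": ")"}

def emit(symbol):
    if symbol in terminals:
        return terminals[symbol]
    if symbol in kb:                       # expand from knowledge
        return " ".join(emit(s) for s in kb[symbol]["InSequenceHas"])
    return f"<{symbol}>"                   # knowledge gap, left visible

print(emit("forCodeBodyHeader"))
# for ( <Initialization> ; <IterationTest> ; <IterationFunction> )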

quote:

So you may be asking a bit much at this point in time.


I (we) am not asking for anything. All I'm doing is seeing what people's opinions are, and how familiar people are with these concepts. I'm also not actually proposing this as a project.

As for links, I don't have time to sort through my bookmarks, but if you want to see some real AI concepts representative of this kind of stuff, try searches on any of the following:

Cyc
Soar
Loom


Guest Anonymous Poster
It seems to me that what you are proposing in your last post is just a higher-level language, which has the same upsides and downsides as all higher-level languages. Your initial post seems just to be concerned with an AI able to produce code using that high-level language.

OOP was actually a much larger change than you give it credit for. Self-contained code objects that can create other code objects to deal with tasks as they arise are closer to what you are really looking for (think of it as a kind of code nanotechnology).

quote:
Original post by Anonymous Poster

It seems to me that what you are proposing in your last post is just a higher-level language, which has the same upsides and downsides as all higher-level languages. Your initial post seems just to be concerned with an AI able to produce code using that high-level language.



No. Wrong on both counts. OOP does not have a conceptualization of the interrelation of all components. OOP does have links between different components, but there is a fundamental difference: OOP cannot reason about those links. Functionality must be added to an object in OOP before you can call on it. Knowledge, on the other hand, uses functionality where available, recognizes the absence of functionality, and attempts to find alternative solutions. Also, OOP does not validate new functionality against the logical framework of the existing knowledge.



Guest Anonymous Poster
I think you're failing to understand the concept of 'code nanotechnology' that I was trying to get across. The ideas behind OOP lead naturally to breaking a large problem down into smaller and smaller independent chunks until a solution is found. There is no need to reason about the links between different components, as they are self-contained and are created to deal with a specific set of circumstances.

I agree with you that functionality must be in place in a code object before it can be used, but it is also true that knowledge must be provided before it can be used.

I am not talking about a specific OOP application here by the way - I am merely talking about ideas and the way that OOP principles naturally lead to different ways of thinking, in some ways similar to what you initially proposed.

The original idea sounds interesting, but is it possible? It would be great if computers were like brains, but as far as I'm aware, they're only superficially similar, in that they store information and communicate with the outside world. A computer (as currently available) has no understanding of anything. It's just a big, powerful calculator.

So, if you have a paradigm that says 'tell the computer' something, you need a lot of underlying stuff to make it understand anything. Possible? Maybe, but I don't think so. By all means prove me wrong, but I'll keep on telling my PC what to do in small sequential steps; it'd only get 'confused' otherwise.

Dave

quote:
Original post by Heraldin

So, if you have a paradigm that says 'tell the computer' something, you need a lot of underlying stuff to make it understand anything. Possible? Maybe, but I don't think so. By all means prove me wrong, but I'll keep on telling my PC what to do in small sequential steps; it'd only get 'confused' otherwise.


You need to look into cognitive modeling languages, predicate calculus, situation calculus, first-order logic, resolution theorem proving, symbolic reasoning, CycL, production systems, knowledge-base development, and ontologies.

Everything above is about telling the computer what it needs to know, starting from the notion that the computer knows nothing at the outset.

I think if we moved away from this abstract discussion and gave a solid example, the discussion would be better.

=======================================
Better to reign in hell than serve in heaven.
John Milton, Paradise Lost

AI seems like one of the most interesting computer fields out there. There are many divisions of what AI is used for, but I think that in its purest form it is trying to create a copy of ourselves. So then we have to ask ourselves: what makes us human? What is it exactly that gives us a consciousness, an awareness? Computers have no awareness of their surroundings. They are simply electrons travelling along wires as a result of the opening and closing of circuits. I think all that needs to be done (as if it were easy) is to give the computer an existence, a consciousness of its own. Granted, this is quite probably impossible, but we will keep trying. Once we have a consciousness, though, things like teaching it how to program are easy.

I agree with the theorists who say that the algorithm for consciousness will boil down to something really simple, like a = 5b or something weird like that. Of course it's all completely theoretical, so everyone can think whatever they please, but it's still interesting.

Sidenote: computers are based on binary coding, i.e. two voltage levels. Wouldn't it be neat to design a computer based on 10 or even 16 different voltage levels? It would change things considerably. I think this would be an efficient way to store data and send it faster.

Come on, flame me, I dare ya...

chaos1111@hotmail.com
ICQ: 22527985

If you are looking for a super-Eliza, check out Neuromedia.com and talk to Nicole (formerly known as Red). It very rarely makes a mistake... almost like talking to a real person.

--------------------


You are not a real programmer until you end all your sentences with semicolons; (c) 2000 ROAD Programming


You are unique. Just like everybody else.

Yanroy@usa.com

Visit the ROAD Programming Website for more programming help.
