Tutorial Doctor

The difference between Logic, Reasoning, and Thinking; Data, Information, and Knowledge?


I am digging deep into intelligence programming, and to do so I need to draw up some very basic, accurate, non-superfluous definitions of Logic, Reasoning, and Thinking. I also need the same for Data, Information, and Knowledge. And I need to know how they are different.

I have searched over the internet, but most definitions are too grandiloquent and wordy; I end up having to look up definitions for the words in the definition.

I figure this site would be the best bet.

So, what are Logic, Reasoning, and Thinking, and how are they different?

What are Data, Information, and Knowledge, and how are they different?


Nypyren:
I'm in agreement with Alvaro.

My approach would be to pick a concrete problem of any kind (regardless of whether computers are good at it or not), and break that problem down until I can implement it. Existing definitions, especially informal ones, will not help you. When you make a program, you get to decide what everything means.

Tutorial Doctor:
Here is the thing with deciding what everything means myself...

Something may mean one thing to one object and a completely different thing to something (or someone) else, depending on how it is interpreted.

I don't want to have to reason about every possible occurrence that "could" happen. I would rather generalize some things and leave them open to interpretation.

Because as situations change, so can the answer to a problem.

I am looking for formal, but not superfluous, definitions.

The word "define" itself denotes that one particular object is unique from another in whatsoever way it may be (whether small or great).

Computers are not good at a lot of things that humans are good at, mainly when it comes to autonomous adaptive learning, where things can be defined and later re-defined for the problem at hand.

I plan on taking this formal information and making it more measurable. For now, I am looking for some carefully considered definitions.

Nypyren:

Quoting the above: "I don't want to have to reason about every possible occurrence that 'could' happen. I would rather generalize some things and leave them open to interpretation. Because as situations change, so can the answer to a problem."
Concrete requirements! Concrete!

You can achieve a "calculation that gives a different answer in different situations" with any function that has at least one input and one output.
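For example, here's a minimal Python sketch; the situation keys and decision rules are just made up for illustration:

```python
# A hypothetical example: the "answer" depends entirely on the input situation.
def answer_for(situation: dict) -> str:
    """Return a different answer depending on the situation passed in."""
    if situation.get("enemy_visible") and situation.get("health", 100) < 30:
        return "retreat"
    if situation.get("enemy_visible"):
        return "attack"
    return "patrol"

print(answer_for({"enemy_visible": True, "health": 20}))  # retreat
print(answer_for({"enemy_visible": True, "health": 80}))  # attack
print(answer_for({"enemy_visible": False}))               # patrol
```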

Other than that, your requirements are too vague to do anything with.

Nail down your requirements. What *exactly* do you want to do? Your goal must be something that you can break down into smaller sub-problems. The terms you use (heck, terms ANYBODY uses) don't mean anything to a computer. You need to convert your terms into things that a computer cares about. A computer only knows what bytes and instructions are. You need to take the concepts you're thinking of in your brain and refactor them until they are something that you can give to a computer.
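For example, a vague term like "knowledge" has to be pinned down into some concrete representation before the computer can touch it. Here's one deliberately simplistic sketch; the field names and facts are arbitrary choices for illustration, not a standard:

```python
from dataclasses import dataclass
from typing import Optional

# One possible way to pin down "a piece of knowledge":
# a subject, a relation, and a value.
@dataclass(frozen=True)
class Fact:
    subject: str
    relation: str
    value: str

knowledge = {
    Fact("water", "state_at_room_temp", "liquid"),
    Fact("lava", "is", "hot"),
}

def knows(kb: set, subject: str, relation: str) -> Optional[str]:
    """Look up what the system 'knows' about a subject, if anything."""
    for fact in kb:
        if fact.subject == subject and fact.relation == relation:
            return fact.value
    return None

print(knows(knowledge, "lava", "is"))  # hot
```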

Perhaps when I said "you GET to decide what everything means," I really should have said "you HAVE to decide what everything means."

Think about any program based on what its inputs and outputs are. If it's just a black box which sits there and thinks, but never produces any output, then it doesn't matter what happens inside the box. It could be curing cancer or creating a utopian society, but nobody would know. It's like data which has been sucked into a black hole - nobody outside will ever know what's in there.

If a program has output but no input, its output is *guaranteed* to always be the same (even random number generators are seeded by timers, and the current time is input that comes from outside the program). This may be useful in some cases, but not for anything that should have changing behavior.
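A tiny illustration of the difference (just a sketch):

```python
import time

def no_input() -> int:
    """No input at all: this can only ever produce one value."""
    return 2 + 2  # always 4, on every run

def with_input(now: float) -> int:
    """The current time is input smuggled in from outside the program,
    so the output can differ from run to run."""
    return int(now) % 10

print(no_input())               # always 4
print(with_input(time.time()))  # varies between runs
```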


So for your goal, what do you want to use as input, and what do you want to get as output?

wodinoneeye:

It might not fit the specific words you asked about, but:

Part of intelligence is the ability to learn (from experience).

Another part (cognition) involves finding generalizations in the data the system is supposed to 'read' (its inputs). You usually can't just use the exact data patterns you were given if you want to interpret real-world data that is slightly different.

That feeds into the learning aspect, since the system somehow needs to generalize from the data it is given to some conclusion so that it can make a decision about it.

Somehow the system has to determine right from wrong (good data from bad, what is significant, etc.), i.e., whether it made a correct decision before adding it to what it has learned. That judgment usually comes from something else telling it how to judge (usually a human sets the criteria, and there are a lot of them if the system is working with a complex set of data), and THAT often turns into the chokepoint: the time a human has to be involved so the system can learn.
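A toy sketch of what I mean: the "learning" below is just memorization, and the human-supplied labels are the judge of right and wrong. All the data is invented for illustration:

```python
from collections import Counter

# Human-labelled examples act as the external judge of right and wrong.
training = [
    ({"wings": True,  "legs": 2}, "bird"),
    ({"wings": False, "legs": 4}, "dog"),
    ({"wings": True,  "legs": 2}, "bird"),
]

def learn(examples):
    """'Learning' here is just remembering which label each feature pattern got."""
    memory = {}
    for features, label in examples:
        key = tuple(sorted(features.items()))
        memory.setdefault(key, Counter())[label] += 1
    return memory

def decide(memory, features):
    """Generalizes only as far as exact pattern matching allows."""
    key = tuple(sorted(features.items()))
    if key in memory:
        return memory[key].most_common(1)[0][0]
    return "unknown"  # slightly different data defeats this naive approach

model = learn(training)
print(decide(model, {"wings": True, "legs": 2}))  # bird
print(decide(model, {"wings": True, "legs": 3}))  # unknown
```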


Tutorial Doctor:
Good tips, wodinoneeye and Nypyren.

I will create concrete scenarios and use them to test the intelligence of my system. For now, though, I do want generalizations. Perhaps definitions mean nothing to computers, but I want to teach the computer the meanings.

I used an AI program a while ago that you could teach. The things you taught it were stored in a .txt file in a certain format. You could edit the text file to add more knowledge, or teach it interactively.

I think it was MsAgent. I can't find it anymore though.

Someone suggested it is similar to a program called Answerpad.
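I don't know the exact format that program used, but the general teach-and-store idea could be sketched like this (the file name and the "question | answer" format are invented, not whatever MsAgent or Answerpad actually did):

```python
# A rough sketch of a teach-and-recall loop backed by a plain text file.
KB_FILE = "knowledge.txt"  # hypothetical file name

def load_kb():
    """Read 'question | answer' lines into a dictionary."""
    kb = {}
    try:
        with open(KB_FILE, encoding="utf-8") as f:
            for line in f:
                if "|" in line:
                    question, answer = line.rstrip("\n").split("|", 1)
                    kb[question.strip().lower()] = answer.strip()
    except FileNotFoundError:
        pass  # no knowledge yet
    return kb

def teach(question: str, answer: str):
    """Append a new piece of knowledge; the file can also be edited by hand."""
    with open(KB_FILE, "a", encoding="utf-8") as f:
        f.write(f"{question.strip()} | {answer.strip()}\n")

def ask(kb: dict, question: str) -> str:
    return kb.get(question.strip().lower(), "I don't know that yet. Teach me?")

teach("what is lava", "molten rock")
print(ask(load_kb(), "What is lava"))  # molten rock
```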

Here is all I have for AI so far:
http://forum.maratis3d.com/viewtopic.php?id=763

Algorithmic Ecology:

I'm going to play devil's advocate here and argue that everything you're looking to do already exists, and that the definitions you're looking for are already clearly drawn, distinctions and all. There is already a calculus dedicated to knowledge representation and logical actions, along with a programming language (Prolog) designed for the purpose you seem to be describing.
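To give a flavor of the fact-and-rule style that calculus (and Prolog) encourages, here is a minimal forward-chaining sketch in Python; the facts and rules are invented for illustration, and this is not how Prolog itself works internally:

```python
# Forward chaining over Prolog-style facts and rules (illustrative only).
facts = {"has_wings(tweety)", "lays_eggs(tweety)"}

# Each rule: if all premises are already known, conclude the consequence.
rules = [
    ({"has_wings(tweety)", "lays_eggs(tweety)"}, "bird(tweety)"),
    ({"bird(tweety)"}, "can_fly(tweety)"),
]

def forward_chain(facts, rules):
    """Keep applying rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Derives bird(tweety) and then can_fly(tweety) from the starting facts.
print(forward_chain(facts, rules))
```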

 

I believe the information you're looking for begins in chapter 7 of Artificial Intelligence: A Modern Approach by Russell and Norvig. I'm not going to say you should download the PDF from a Google search (because you really should just buy the book if you're interested), but it isn't hard to find.

 

If you're looking to expand more on ideas of generalization, you want Bayesian inference and fuzzy logic, which you will encounter if you read to the end of the book (but they won't make sense until you have a good understanding of logical operations and knowledge representation). For a sense of the current academic state of the art, try some searches on the subject on IEEE Xplore.

 

Hope this helps.

