Artificial Neural Networks

71 comments, last by Victor-Victor 14 years, 6 months ago
Quote:Original post by Victor-Victor
Yes, it's silly, because that is not learning.

That's what she said.

Quote:
What you're talking about is one-to-one memory mapping; that's nothing like what an NN does. A neural network is not just memory, it's also a CPU: it integrates *logic* together with information.

You could represent the entire thing using TinkerToy too.

Quote:
Why would anyone "train" some independent values in some static table if any such memory array can be populated by simply addressing memory one-to-one?

What if you don't know what the values are supposed to be?

Quote:
When you train an NN you change the dynamics of the whole system, not just memory, but the way it processes the information, the way it "thinks".

Changing the memory facilitates this.

Quote:
Static tables have only one static function, a simple "recall". The program searches the input database, and if and when it finds an exact match it returns whatever is stored there as the output pair. It's one-to-one mapping, and if it wasn't, it would be random or probabilistic. The important thing is that there is no *processing* here, which is why we call it a 'look-up table' and use it for optimization.

Please refer to the definition of a finite state machine.
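
For the record, the "look-up table" being described is literally just exact-match recall. Here's a minimal sketch in Python (the stored pairs are made up purely for illustration):

# Minimal sketch of the "static table" described above: exact-match recall only.
lookup = {
    (0, 0): 0,
    (0, 1): 1,
    (1, 0): 1,
    (1, 1): 0,
}

def recall(inputs):
    # One-to-one mapping: either this exact input was stored, or we get nothing back.
    return lookup.get(inputs)

print(recall((1, 0)))   # 1
print(recall((2, 3)))   # None -- no interpolation, no generalization

That's the whole "function": a key search and a return.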

Quote:
There is simply not enough space in the universe to map logic one-to-one.

Can you prove it? I wonder how the universe does it?? Let's ask God.

Quote:
The number of COMBINATIONS for anything over two bits of input increases very dramatically.

The term is 'exponential'. Real ANNs don't work in binary. They use trinary. We learned this during the Syndicate Wars, when the KGB tried to infiltrate the Society of Artificial Neural Networks League of Intelligent Designers.
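
To put a number on "very dramatically": a complete table over n binary inputs needs 2^n entries, so it doubles with every extra input bit. A quick back-of-the-envelope sketch:

# A complete lookup table over n binary inputs needs 2**n entries.
for n in (2, 8, 16, 32, 64):
    print(n, "input bits ->", 2 ** n, "table entries")

At 64 bits you're already at roughly 1.8e19 entries, which is why nobody enumerates inputs past toy sizes.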

Quote:
So, instead of memorizing the answers, an ANN can somehow generalize them, like humans do, and use this generalized *logic* as a function to really calculate, not just recall, the answers and produce correct output even for inputs it has never seen before.

Can you prove it?

Quote:
"Give a man a fish and he will eat for a day. Teach him how to fish and he will eat for a lifetime."

What if you teach him how to fish but then his fishing pole breaks? Ha! A real ANN would foresee this. Are you sure you're not really a robot?

Quote:
What you have is no threshold, it was just some initial value.

Threshold = some initial value. Initial value = number. Number = sequence of bits. Sequence = Medusa. Flash Gordon = Kaptian Krunch. Kaptian Krunch = BlueBox. BlueBox = 2600. 2600 = number. Number = threshold. Step 3 = profit.

Quote:
Why do you think some static table would ever need to be initialized in such an indirect way?

Why do you think some static table would ever need to be initialized in such an indirect way?

Quote:
Where did you ever see anyone using this kind of learning method on anything but neural networks?

That is interesting. Tell me more.

Quote:
Are you seriously suggesting any of those minimax or whatever other algorithms can compete with this:

Nobody ever said it would.


Quote:
Yes, it does not make any sense to use AI for physics equations. What EJH meant, most likely, is that physics is getting more and more complex, requiring more and more complex AI to be able to handle it, like driving a car.

Physics changes every day. Just yesterday someone had to change the speed of light because it exceeded the State of Nevada's single-occupancy vehicle regulations. Note to arcade game players: do not eat the urinal cakes.

Quote:
Taking it further, eventually we might see AI walking and actually looking where it's gonna step next

You, sir, are a visionary. Maybe one day a Japanese car company will build a walking robot and use it as a PR tool. Could you imagine if a Boston-based robotics company could build a robotic pony that is able to walk over uneven terrain without even falling down? Maybe someday in the far distant future DARPA will hold a contest to see who can build a fully autonomous car that can drive all on its own. If we're really lucky, and Joshua doesn't explode us all, a computer might finally be able to beat a grandmaster at chess. That would be something!!

Who am I kidding-- those things will never happen!



Quote:Original post by Victor-Victor
I don't understand what you mean by "toy example" and "not practical suggestion". Is it true or not? ANN research was frozen for 30 years just because everyone assumed ANNs can't do XOR.


This is utter BS. ANNs can do XOR no problem. One-layer networks can't, because XOR is not linearly separable. No one was stupid enough to think that an ANN cannot learn XOR; you just couldn't do it with one line in 2D space.
Quote:Original post by ibebrett
Quote:Original post by Victor-Victor
I don't understand what you mean by "toy example" and "not practical suggestion". Is it true or not? ANN research was frozen for 30 years just because everyone assumed ANNs can't do XOR.


This is utter BS. ANNs can do XOR no problem. One-layer networks can't, because XOR is not linearly separable. No one was stupid enough to think that an ANN cannot learn XOR; you just couldn't do it with one line in 2D space.


Do not underestimate the power of human stupidity.


http://en.wikipedia.org/wiki/Perceptron
http://en.wikipedia.org/wiki/Perceptrons_(book)

In 1969 a famous book entitled Perceptrons by Marvin Minsky and Seymour Papert showed that it was impossible for this class of network (single-layer perceptrons) to learn an XOR function. They conjectured (incorrectly) that a similar result would hold for a perceptron with three or more layers.

The often-cited Minsky/Papert text caused a significant decline in interest in and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s.

The XOR affair - Critics of the book state that the authors imply that, since a single artificial neuron is incapable of implementing some functions such as the XOR logical function, larger networks also have similar limitations and should therefore be dropped. Later research on three-layered perceptrons showed how to implement such functions, thereby saving the technique from obliteration.
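
For anyone who'd rather see the point than argue it, here is a minimal sketch (plain Python, hand-picked weights and step activations, no training involved) of the kind of three-layer construction the excerpt refers to. One hidden layer between input and output is enough to compute XOR, which no single threshold unit can do:

# Two-input XOR from a perceptron with one hidden layer.
# Weights and thresholds are hand-picked for illustration, not learned.

def step(x):
    return 1 if x >= 0 else 0

def xor_net(a, b):
    h1 = step(a + b - 0.5)          # hidden unit 1: fires for "a OR b"
    h2 = step(a + b - 1.5)          # hidden unit 2: fires for "a AND b"
    return step(h1 - 2 * h2 - 0.5)  # output: "OR and not AND" = XOR

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))

The hidden units carve the input plane with two lines instead of one, which is exactly the thing a single-layer perceptron cannot do.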


Yes, people are stupid. They can blindly believe even the most ridiculous of books, if you only convince them it was written by some authority, Minsky in this particular case. Yes, people do not think; they will believe the Sun revolves around the Earth, and they will put you in jail if you think otherwise. Now, look back at the history of science and you will realize nothing has changed since then. "All truth passes through three stages. First it is ridiculed, then it is violently opposed, finally it is accepted as self-evident." (Schopenhauer)

This topic is closed to new replies.
