
Using neural networks to replace math


Recommended Posts

My questions shoot for broad, abstract formulations; the specifics of different types of neural/logic networks probably aren't pertinent insofar as my questions go. Is there any research in the field of using a neural/logic network as a language to describe physics simulations *instead* of using mathematical language? I am not talking about implementing AI; I'm talking about using a completely different underlying language to describe the physics. Maybe constructs similar to those used in AI could instead be used to describe the rigid body. I don't know if the questions/formulations I am making are valid or just noise, and if they are not valid, why.

The question I might "really" be asking isn't so much whether a 'neural network' could replace the mathematical language, but whether there is any language at all that could replace 'math' to describe the constructs we are programming on a daily basis. And if there were such a language, and it produced nearly the same results [as the math language], would the program itself ultimately be any different (at the register level of the computer while executing)?

My understanding and experience* is that a mathematical language invents formulations that do not exist**, with all particulars included, and from these inventions logical deductions can be made that hold 100% accurate. (Does a 'sphere/circle' exist? Does continuity exist? How long does it take a mathematically perfect sphere rolling across a mathematically 'flat' plane to come to rest?) One alternative is a 'verbal' language, which would try to describe the characteristics of a 'circular shape' at the microscopic/macroscopic level, but from that the logical deductions quickly fall apart (what is the 'area' of an imperfect 'circular shape'?). If there *is* a language that can make such far-sighted logical deductions, would it still be a 'math' language, even if it is a so-called 'neural network' construct instead of 'matrices' and 'vectors'?

*I am familiar with most things posted on the math forums on this site, having implemented much of it and being in an engineering major at school.

**"As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality." -Einstein

Hey, well, I'm done with my rant. I don't even know exactly what I'm talking about. Meanwhile, I'm going to continue implementing my physics simulation using plain old math.

I'd have to disagree with Einstein. Math does, in fact, model our world accurately. But typically, one does not program the small imperfections of the real world into one's programs. By imperfections, I mean the events that occur at the atomic level or lower which can affect the final result, but that we disregard or are simply unaware of.

But this is an interesting idea. I'm not knowledgeable enough about neural networks to give you any advice, but it sounds like a fairly interesting concept. Neural networks are capable, to my knowledge, of storing fuzzy data very well. Take, for example, one's own ability to imagine 3D worlds in one's mind without knowing any math. So if neural networks can perform such a task not on their own, but given input about what to imagine, you've got something.

But first, you gotta make neural networks work well enough to accomplish such a task, which takes a lifetime to do by itself.

Quote:

By imperfections, I mean the events that occur at the atomic level or lower which can affect the final result, but that we disregard or are simply unaware of.

Yes, see, we are on the same page here. A mathematical language, rather than taking into account all 'particulars' or 'imperfections', invents a construct called a 'circle' and simply disregards the microscopic molecular imperfections. From that, you can then make logical deductions that hold 100% accurate: you can assign an area to the circle (which doesn't exist), you can stack these circles on top of each other and create 'cylinders', and all sorts of other formulations that hold 100% accurate, but only because of the invention of the circle.

>>Math does, in fact, model our world accurately

Saying that it *models* our world accurately is exactly what I perceived Einstein to be saying, so that doesn't seem like a disagreement at all! The important note here is that any sort of 'language', whether verbal or mathematical, can be viewed as a terrain map. The map itself *is not* the terrain it represents; the only connection between the terrain and the map is that they have a similar structure. If the map does not have a structure similar to the terrain, you may get lost. The usefulness of the map (the language, whether mathematical or verbal) depends on how similar its structure is to the terrain, but the map *is not* the terrain. The terrain is 'empirical reality' (truth?).

A few explanations of the mathematical constructs I mentioned above, and why they *do not* exist:

-There 'is no' such thing as a circle/sphere that exists in empirical reality
-There 'is no' such thing as continuity in empirical reality
-There 'is no' such thing as 'absolute space' or 'absolute time' in empirical reality

I can make these assertions solidly, because anyone who wishes to counter them bears the burden of proof that these things exist. Take the example of the sphere on the plane: a 'mathematical' sphere rolling on a 'mathematically' flat plane never comes to rest! A real 'sphere' on a 'flat plane' comes to rest because of surface deformations at the contact areas, meaning a 'perfect' or 'mathematical' sphere *does not* exist!
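To make that concrete, here is a minimal sketch (my own toy example, not anyone's actual simulator; the rolling-resistance coefficient below is an assumed stand-in for contact deformation) contrasting the idealized sphere with a deforming one:

    # Hypothetical illustration: the ideal sphere sees no decelerating torque on a
    # perfectly flat plane, so its speed never changes; the 'real' sphere loses speed
    # to deformation at the contact patch, crudely modeled here as rolling resistance.
    g, c_rr, dt = 9.81, 0.02, 0.01     # gravity (m/s^2), assumed resistance coeff., timestep (s)
    v_ideal = v_real = 2.0             # both spheres start rolling at 2 m/s

    for _ in range(100_000):
        v_real = max(0.0, v_real - c_rr * g * dt)   # deforming sphere decelerates
        # v_ideal is never touched: the mathematical sphere rolls forever

    print(v_ideal, v_real)             # 2.0 vs. 0.0: only the deforming sphere came to rest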

Continuity: take a 'continuous' force such as gravity. How long does it take for the effects of gravity to propagate through 'space' (whatever that is)? In any frame of reference, the effects of gravity propagate at the speed of light (about 670 million miles per hour), meaning the interaction is not instantaneous, meaning it is not continuous (as the ball gets closer to Earth, the 'spatial' distance decreases, and the effects of gravity arrive sooner). You could use a similar argument for, say, integrating pressure over a hull: there is no such thing as an 'infinitely small' point; rather, the smallest unit of pressure comes down to the 'size' (energy region?) of a molecule, which is measurable and finite (really freaking small, but *not* infinitely small). The invention of continuity works well; it is a map whose structure is similar to the terrain of empirical reality, but it *does not* "actually" exist! And the invention of continuity works a hell of a lot better than, say, a 'verbal' Aristotelian representation.
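As a quick back-of-the-envelope check of the 'not instantaneous' point (the distances below are rounded, approximate values), the propagation delay over familiar distances is easy to compute:

    c = 299_792_458.0        # speed of light, m/s (roughly 670 million mph)
    earth_moon = 3.84e8      # approximate Earth-Moon distance, m
    earth_sun = 1.50e11      # approximate Earth-Sun distance, m

    print(earth_moon / c)    # ~1.3 seconds
    print(earth_sun / c)     # ~500 seconds, about 8.3 minutes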

My point, ultimately, is that any language is arbitrary and probably contains inventions of things that *do not* exist. Subsequently, instead of using 'math' to describe a physics simulation, why not a neural network language? And could a neural network language (or some other language) be more useful than 'math'?

Quote:

But first, you gotta make neural networks work well enough to accomplish such a task, which takes a lifetime to do by itself.

Well, I have experience writing simple non-player-character agents that learn how to aim projectiles, find pathways, etc. Maybe I could use similar constructs to replace my implementation of physics and collision detection? One potential benefit is that you could set up a few 'test case' scenarios, tell the computer what outputs you expect, and it could automatically program your physics for you.
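As a rough sketch of that 'test case' idea (purely illustrative: the projectile scenario, the features, and the use of a least-squares fit in place of an actual neural network are all my own assumptions), you generate a handful of scenarios from a known formula and fit a generic approximator to the expected outputs:

    import numpy as np

    # 'Test case' scenarios: launch speed and angle in, expected range out.
    rng = np.random.default_rng(0)
    g = 9.81
    speed = rng.uniform(5.0, 50.0, 200)             # m/s
    angle = rng.uniform(0.2, 1.3, 200)              # rad
    expected = speed**2 * np.sin(2.0 * angle) / g   # projectile range from the math model

    # Generic approximator: least squares over simple polynomial features
    # (a neural network would play the same role, with many more knobs).
    X = np.column_stack([np.ones_like(speed), speed, angle, speed * angle,
                         speed**2, speed**2 * angle, speed**2 * angle**2,
                         speed**2 * angle**3])
    w, *_ = np.linalg.lstsq(X, expected, rcond=None)

    print(np.abs(X @ w - expected).mean())          # small average error: examples 'programmed' the physics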

Thanks for the response.

A large number of the mathematical concepts used in physics have been either created or adapted to be useful. They are used because they're the best thing we scientists have for actually predicting things.

By using an existing mathematical concept, you are basically avoiding all the necessary work to

  • Identify all relevant information in a scene.
  • Represent that information in a form that is both concise and understandable.
  • Prove that the operations performed keep the predicted final result within an acceptable margin of error (when compared to the empirical result).


Every approach to modeling physics has to go through these three steps before it can be used, and such would be the case for an approach based on neural networks (neural networks, by themselves, do not qualify; you need a translation system first). These three steps have been consistently gone through for all mathematical models currently in use, and there has been constant progress on both the "concise" and "acceptable error" axes.

As far as I am concerned, our brain processes visual information (the relevant kind) along with some mass data, and outputs correct predictions for both visual movement AND spatial positioning in both gravity and non-gravity situations. All of this is mainly based on the orientation-sensitive (V1) and movement-sensitive (MT) areas of the brain, along with, possibly, premotor-area-friendly output for actual catching, as it is evolutionarily useful for brachiation. In this respect it serves as a partially preordered general-purpose system that learns the shape of parabolas from experimenting with the universe (and quite often doesn't get it right when only a little input data is presented).

The important part here is that the representation is neither concise nor understandable by humans, so neural networks would not be a choice at all in real physics. As for simulations, well... most simulations use mathematical models because they are easier for humans to manipulate than a set of neural inhibitor/exciter weights. These base themselves on an instantaneous representation of the world as initial state (time, position, and velocity), while the brain has knowledge only of time and position. This means that the brain uses a lot more space to store velocity as displacements (usually as delayed feedback loops between one or more areas) when a simple vector would be enough for a mathematical model.

I have good reason to believe that a neural network simulator would also use a lot of memory to store velocity information if it modeled physics like the human brain does.
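A tiny sketch of that storage difference, with made-up numbers: a mathematical model holds velocity as one explicit vector, while a position-only system has to recover it from a short history of past positions (roughly the delayed-feedback idea above):

    import numpy as np

    dt = 0.1
    velocity = np.array([5.0, 9.8])                 # what the math model stores directly (m/s)

    positions = np.array([[0.0, 0.00],              # what a position-only system keeps instead:
                          [0.5, 0.98],              # position at t - 2*dt, t - dt, and t
                          [1.0, 1.96]])

    estimated = (positions[-1] - positions[-2]) / dt   # velocity recovered from displacements
    print(velocity, estimated)                         # same information, more storage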

Of course, I might just be misunderstanding what you mean by neural network language.

NeuroAnimator from Siggraph '98.

Neural Networks are also used to evaluate your FICO scores.

Robert Hecht-Nielsen's company was acquired by Fair Isaac. As I recall, Hecht-Nielsen's company built the first NN hardware.

NNs have fantastic promise (for complex applications), but they require fantastic computing power (though they scale very well in parallel implementations). For game applications, I can't think of a problem that could be solved more easily and efficiently with an NN than with a traditional solution.

Good post.

Why is a bubble round? :)

...

For the same reason, mathematics is the best tool we may ever use to reason about our universe.

Regardless, I am unclear on what you mean by a mathematical language. You say things such as 'a mathematical language'; this could be anything, and so it is nothing.

Let x be an element of some space. What is x? Exactly. So what do you mean by the string 'mathematical language'?

You say things like circles do not exist. Neither do the natural numbers 1, 2, ... . Your arguments about the lack of continuity, etc., can easily be gotten around within the appropriate mathematical framework (the notion of continuity was one of the ancestors of topology). As for space, time, absolutes? Nothing to do with the mathematical language. Essentially, your argument about what exists is not why science has difficulty describing nature.

Note too that the entire functionality of some neural network is constructible using mathematics.
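For instance (the numbers here are arbitrary), a single artificial neuron is nothing more than a weighted sum pushed through a fixed nonlinearity:

    import numpy as np

    w, b = np.array([0.4, -1.2, 0.7]), 0.1   # arbitrary weights and bias
    x = np.array([1.0, 0.5, -2.0])           # arbitrary input
    print(np.tanh(w @ x + b))                # one 'neuron' = dot product + tanh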


>>My questions shoot for broad, abstract formulations...the specifics of
>>different types of neural/logic networks probably aren't pertinent
>>insofar as my questions go.

Neural networks are, AFAIK, used mostly as a regression technique: you teach them to map input (e.g. a physical world state) to output (e.g. an updated physical world state).

Of course, "mapping" is the heart of all mathematics and algorithmics: you give your physics simulator a scene description, it gives you an updated scene. You plug X into a mathematical formula and you get Y.

The "mappings" produced by neural networks have the amazing property that they have an incredible number of degrees of freedom (i.e. independent variables). For example, even a smallish feedforward network can have a +1000-dimensional weight-space.

This is simultaneously exciting and depressing, since our human brains cannot cope with the intricate relationships between these thousands of degrees of freedom. The mathematics produced by human mathematicians has always been rather high-level; humans seem able to consciously understand reality only through "languages" such as mathematics, from which we form "sentences" such as "g = 9.81 m/s^2". High-level languages are our tool for understanding the low-level languages of reality.

Neural networks have their own problems as well. Not all networks can learn everything, neither in theory nor in practice. Curiously, the learning formalisms for neural networks are still formed by human mathematicians, and tend to be sentences of very high level languages, such as "minimize the error against teaching set".

-- Mikko

The universe itself is the only way to compute, with 100% accuracy, anything that goes on within the universe ;) anything else is just a model.

How would you train a neural network to simulate the forward motion of physics? You'd probably end up having it watch mathematically-simulated physics and slowly learn to imitate that behavior. And where does that get you? A simulation so complex in method (although simple in result) that it becomes cumbersome to update the simulation with new physical data (such as a new understanding of aerodynamics).
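A minimal sketch of that 'watch and imitate' loop (my own toy example: a falling body, a plain Euler integrator as the teacher, and a least-squares fit standing in for the network):

    import numpy as np

    dt, g = 0.02, 9.81

    def math_step(state):                    # state = [height, vertical velocity]
        h, v = state
        return np.array([h + v * dt, v - g * dt])

    # "Watch" the mathematical simulator: collect (state, next state) pairs.
    rng = np.random.default_rng(1)
    states = rng.uniform([0.0, -20.0], [100.0, 20.0], size=(500, 2))
    targets = np.array([math_step(s) for s in states])

    # Fit next_state ~ A @ state + b; a neural network would fill the same role
    # for dynamics that a linear map cannot capture.
    X = np.column_stack([states, np.ones(len(states))])
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)

    learned_step = lambda s: np.append(s, 1.0) @ W

    s0 = np.array([50.0, 0.0])
    print(math_step(s0), learned_step(s0))   # the learned step imitates the teacher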

First off, I'd like to say that artificial neural networks are themselves a mathematical concept. So you can't replace math with them, since they are still math. In fact, even if you could come up with something to replace what we now think of as "math" for physical simulations, it would probably be quickly grouped under the term math. Math is simply a set of numerical tools to model and predict the universe. Whenever we come upon something we don't understand, we make new math to explain it, and each new element of math is designed to do something different.

Neural networks were not designed for physics. They were designed primarily for adapting, generalizing, and pattern recognition (in evolution and in computers). On the other hand, calculus and linear algebra were designed specifically for modelling physics and the natural world. I personally think neural networks are more likely to be replaced than to do any replacing. Calculus and vector math have been well established and polished over many years, whereas adaptive systems technology is still relatively new, arriving with the invention of computers.

SIDENOTE: ANY description of the whole universe within that universe is the universe itself. Therefore that description is unique, and it is equivalent to the universe.

When you apply lossy compression to the universe (get rid of the dirty details) you have an idealization; then you describe what you have in a language.

The idealization is NOT part of the language. And you can include as much detail as you want/need/can.

Will check this post later for replies, am interested!

Quote:
Original post by arithma
SIDENOTE: ANY description of the whole universe within that universe is the universe itself.


Are you sure? A lot of things can be fully described by only a subset of themselves (Turing machines, second-order logic on the integers). Why could this not be the case for the universe?

Quote:
Original post by BeanDog
How would you train a neural network to simulate the forward motion of physics?


See the NeuroAnimator link from SIGGRAPH '98 in my post above: they simulated not only forward physics motion, but also multi-linked pendulums, spacecraft, dolphins, and cars, claiming huge speed-ups over traditional simulation (for '98 sim tech).

Quote:
Original post by Christer Ericson
Quote:
Original post by Daerax
Why is a bubble round? :)
Because it accurately solved a minimization problem.


Precisely; so also is mathematics the most efficient language with which to describe nature. Put the other way around, it is the language of maximal complexity that may be used to describe nature most efficiently.

Any more complexity and we will run into paradoxes in our math-replacement system. Consider what happens when you apply Cantor's diagonalization method to a real number that encodes all information, i.e. the number constructed as s = .000102030405060708091011... 0000 0001 0002 0003... 0011... 000000 000001 000002... To encode all information into s, we simply assign two-digit blocks to symbols (00 -> a, 01 -> b, etc.) until we have assigned all letters and punctuation to our digit pairs, and let the remainder be blanks, up to 99. Note that we have 100 of these blocks; pairs, triples, etc. of blocks also make new blocks, so numbers such as n = 0000 or 0001 are also blocks in s, and there are 10,000 of such blocks as n. The pattern repeats ad infinitum. Now use s to define some real number r by diagonalizing across s: if the nth block of s describes a real r(n), then change the nth digit of r; if not, set the nth digit of r to 1. We have thus just described an undescribable real. A paradox! The paradox, of course, is due to the use of a language with ambiguities, such as English, and boils down to what we mean by the description of a real. This is known as Richard's Paradox.

In essence, it does not do to do math in a language that serves as its own metalanguage. That rules out a lot of possibilities for what we may use as a mathematical language, including all natural languages (French, English, Chinese).

This method can be extended to computing machines, as Turing did, by defining digits in terms of blocks of programs. But even if we use computable numbers, which allow us to define precisely what a description of a number is, a paradox still ensues, seemingly implying the inconsistency of the reals. Of course, this is explained away by Turing's halting theorem, and it hints that Turing machines will not allow us to precisely present much of math. Note that I am not saying you cannot do math *on* computers, but rather that you should not do it *as* computers. Constructive logics are certainly well representable, and we may define maths from there; what I mean is that it does not do to do math, or talk about math, in terms of computing machines themselves (as opposed to the logics we may implement on them). Thus neural networks or 'verbal' languages are out.


Quote:
Original post by ToohrVyk
Quote:
Original post by arithma
SIDENOTE: ANY description of the whole universe within that universe is the universe itself.


Are you sure? A lot of things can be fully described by only a subset of themselves (Turing machines, second-order logic on the integers). Why could this not be the case for the universe?


I think you may have made a misstatement: it is _second-order arithmetic_, not _second-order logic_, that is quantified over the power set of the naturals and with which we may do analysis (work with the reals).

I think that Einstein quote doesn't mean what you think it means; it's out of context.
