Error Upper Bounds for Initial-Value Problems

2 comments, last by arnero 8 years, 8 months ago

Hello!

After using off-the-shelf ODE solvers and learning how to use them, I now want to embed a solver in an application for users who know nothing about ODEs. From real physics and engineering I know that numerical results need an error margin. If I fly somewhere (interplanetary), I would gladly accept doubled computation time if I know for sure that my probe reaches Mars. I am dreaming of using modern language features from C#, Java, or C++11 to give an ODE solver more awareness of the problem. The user is supposed to enter a model much like in "The Incredible Machine", but may in later versions also enter text formulas. By keeping everything in objects and incorporating a simple algebra system, I should be able to go beyond Fortran-inspired code! With car racing games or snooker I can estimate errors, but then 4D Sports Driving and the Blender game engine break down occasionally.

Even for simpler problems, I read that I should use proven code. Maybe I can translate C code, but I have trouble finding anything. Since someone else pays for this and wants a return on investment, the code needs to be under an MIT or BSD license.

Not enough "coming of age": http://www.hindawi.com/journals/mpe/2012/565896/

Or is it just not enough motivation for me? Why polynomials, why Legendre? Dispersion? An estimate? I want to be sure.

Writing fast programs is more fun. I sense I will hit a speed problem once the AI uses the simulation to optimize its behavior. Premature optimization and all that. The AI only wants the end result, not a fixed-time-step animation for display.

Off topic, regarding my last posts: the web project with authentication went to an expensive company with a matching track record. For all other projects I use third-party authentication.

Greetings
Arne


I tinkered around with this problem a bit and now think I will use a trial function (a polynomial). I can easily take the derivative of a polynomial. I then have to check that the user-given function, which relates the function to its derivative, stays within an error margin of my polynomial. For this I need a library that can take derivatives of formulas for a Taylor series, or even other things like transformations into orthogonal polynomial bases (Chebyshev, Legendre, Laguerre).
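A rough sketch of what I mean, in C++ (my own throwaway types and names, not from any library): store the polynomial as coefficients, differentiate it term by term, and sample the residual p'(t) - f(t, p(t)) against the user's right-hand side f.

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Trial polynomial, coefficients stored lowest order first:
// p(t) = c[0] + c[1]*t + c[2]*t^2 + ...
typedef std::vector<double> Poly;

double eval(const Poly& p, double t) {
    double r = 0.0;
    for (int i = (int)p.size() - 1; i >= 0; --i) r = r * t + p[i];  // Horner
    return r;
}

Poly derivative(const Poly& p) {
    Poly d;
    for (std::size_t i = 1; i < p.size(); ++i) d.push_back(i * p[i]);
    return d;
}

// Residual of the ODE y' = f(t, y) for the trial polynomial at time t.
// If its magnitude stays below a tolerance over the whole step, the
// polynomial is an acceptable local approximation.
double residual(const Poly& p, std::function<double(double, double)> f,
                double t) {
    return eval(derivative(p), t) - f(t, eval(p, t));
}
```

For example, p(t) = t^2 against f(t, y) = 2t has residual zero everywhere, since p is an exact solution of that ODE.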

You can take the derivative of anything with a bit of template magic. You can read about the basic ideas here: https://en.wikipedia.org/wiki/Automatic_differentiation . I found some libraries in the past that implemented both forward and reverse modes using C++ templates, and I have written my own implementations as well.
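The simplest version of the forward-mode idea is a dual number: each value carries its derivative along, and every operator applies the corresponding differentiation rule. A minimal sketch, assuming C++11 (the types and names here are my own, not from any particular library):

```cpp
#include <cmath>

// Forward-mode automatic differentiation with dual numbers. Each Dual
// holds a value and the derivative of that value with respect to one
// chosen input variable.
struct Dual {
    double val;  // f(x)
    double der;  // f'(x)
};

Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.der + b.der}; }

Dual operator*(Dual a, Dual b) {
    // Product rule: (fg)' = f'g + fg'
    return {a.val * b.val, a.der * b.val + a.val * b.der};
}

// Chain rule for an elementary function: (sin f)' = cos(f) * f'
Dual sin(Dual a) { return {std::sin(a.val), std::cos(a.val) * a.der}; }

// Example user formula: f(x) = x*x + sin(x)
Dual f(Dual x) { return x * x + sin(x); }
```

Seeding `der = 1` differentiates with respect to x, so `f({2.0, 1.0})` returns both f(2) and f'(2) = 2*2 + cos(2) in one evaluation, with no symbolic manipulation and no finite-difference error.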

Thank you for the link. Bookmarked. It is surely better than trying to find the roots of a function. I could calculate derivatives to some order and compute pessimistic bounds. That would allow me to modify the highest-order term of a Taylor series to get bounds. Say I work with RK4, so everything is already accurate to fourth order; I would have an x^5 growth of my trust interval with time. Thank IEEE for 64-bit floating point (and Intel for 80-bit).
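To put a number on that fifth-order local error, step doubling is one standard trick (not mine, and an estimate rather than a rigorous bound): take one step of size h and two of size h/2, and use the difference. A sketch, assuming C++11:

```cpp
#include <cmath>
#include <functional>

typedef std::function<double(double, double)> Rhs;  // y' = f(t, y)

// One classical RK4 step; the local truncation error is O(h^5).
double rk4_step(Rhs f, double t, double y, double h) {
    double k1 = f(t, y);
    double k2 = f(t + h / 2, y + h / 2 * k1);
    double k3 = f(t + h / 2, y + h / 2 * k2);
    double k4 = f(t + h, y + h * k3);
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4);
}

// Step doubling: compare one h-step against two h/2-steps. The
// difference is a practical estimate of the local error, usable for
// step-size control, though it is not a guaranteed bound.
double rk4_error_estimate(Rhs f, double t, double y, double h) {
    double big = rk4_step(f, t, y, h);
    double half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2);
    return std::abs(big - half);
}
```

For y' = y with h = 0.1 the estimate comes out around 1e-7, matching the h^5 scaling.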

Pessimistic bounds: sine gives a lower bound of -1 and an upper bound of 1. For a product, the bounds are the minimum and maximum of the four endpoint products lo*lo, lo*hi, hi*lo, hi*hi (this is where the sign cases come in).
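Sketched as code (my own toy types; a real interval library would also round the bounds outward, which I ignore here):

```cpp
#include <algorithm>

// Pessimistic bounds: if the inputs lie in their intervals, the true
// result is guaranteed to lie in [lo, hi] (ignoring rounding of the
// bounds themselves).
struct Interval {
    double lo, hi;
};

Interval mul(Interval a, Interval b) {
    // The product's range is spanned by the four endpoint products.
    double p1 = a.lo * b.lo, p2 = a.lo * b.hi;
    double p3 = a.hi * b.lo, p4 = a.hi * b.hi;
    return {std::min({p1, p2, p3, p4}), std::max({p1, p2, p3, p4})};
}

// Crudest possible bound for sine over any interval: always in [-1, 1].
Interval sin_bound(Interval) { return {-1.0, 1.0}; }
```

For example, mul([-2, 3], [-1, 4]) yields [-8, 12]: the lower bound comes from -2 * 4 and the upper from 3 * 4, which no single sign rule would have picked.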

RK and Adams-Moulton assume that the trajectory is a polynomial. A Taylor series is a polynomial. I can insert them, multiply them, subtract them. It sounds too good to be true. I certainly need some proofs involving Weierstrass, Jacobi, Hilbert, orthonormality, idempotence, or the like.

