Non-linear constrained minimization

10 comments, last by Moeller 17 years, 6 months ago
By introducing the max() operation, you've taken away the differentiability of your constraint. That's a big problem for many methods, particularly those that rely on well-conditioned Jacobians and Hessians. When you're bouncing around on your generated equality constraint, the kinks at 0 and ε will be kicking you quite often. Likewise, when you aren't near the constraint, the constraint pretends not to give a damn, which confuses the optimizer whenever it passes between the active and inactive regions.
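One standard workaround, which the post above only hints at, is to smooth the corner of max(x, 0) so the optimizer always sees a continuous derivative. A minimal C++ sketch; smooth_max0 and the width parameter eps are illustrative names of my own, not anything from the thread:

#include <cmath>
#include <cstdio>

// Replace max(x, 0) with a C1 approximation: the corner at 0 is
// rounded off over a width controlled by eps, and the result
// converges to max(x, 0) as eps -> 0.
double smooth_max0(double x, double eps)
{
    return 0.5 * (x + std::sqrt(x * x + eps * eps));
}

// The derivative exists everywhere and ramps smoothly from 0 to 1
// instead of jumping at x = 0.
double smooth_max0_deriv(double x, double eps)
{
    return 0.5 * (1.0 + x / std::sqrt(x * x + eps * eps));
}

int main()
{
    const double eps = 1e-3;
    const double xs[] = { -1.0, -0.01, 0.0, 0.01, 1.0 };
    for (double x : xs)
        std::printf("x=%+.3f  smooth=%.6f  d/dx=%.6f\n",
                    x, smooth_max0(x, eps), smooth_max0_deriv(x, eps));
}

The price is that the smoothed constraint is only satisfied to within O(eps), so eps trades accuracy against conditioning.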
This idea is used to solve NCPs by transforming the NCP into a system of nonlinear equations. Usually one complementarity condition (x >= 0, y >= 0, x * y = 0) is transformed into one nonlinear equation, e.g. min(x, y) = 0, which can be written with max as x - max(x - y, 0) = 0; this is known to me as exact regularisation. The big problem with this approach is that the function max(x, 0) is NOT differentiable at 0, which Newton-Raphson methods especially do not like. Depending on the case, you can still find a way to get a more or less robust solution to the system of nonlinear equations.
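For what it's worth, an alternative NCP function avoids the max kink: the Fischer-Burmeister function phi(a, b) = sqrt(a^2 + b^2) - a - b is zero exactly when a >= 0, b >= 0 and a * b = 0, and adding a small smoothing term 2*mu under the root makes it differentiable everywhere, so a plain Newton-Raphson iteration applies. A toy C++ sketch on the scalar complementarity problem with F(x) = x - 1 (the example problem and all names are my own, not from the thread):

#include <cmath>
#include <cstdio>

// Toy problem: find x with x >= 0, F(x) >= 0, x * F(x) = 0.
double F(double x)      { return x - 1.0; }
double dF(double /*x*/) { return 1.0; }

// Smoothed Fischer-Burmeister function: for mu = 0 its zeros are
// exactly the complementarity solutions; mu > 0 removes the kink
// at the origin.
double phi(double a, double b, double mu)
{
    return std::sqrt(a * a + b * b + 2.0 * mu) - a - b;
}

int main()
{
    const double mu = 1e-8;
    double x = 5.0;                       // arbitrary starting point
    for (int it = 0; it < 50; ++it)
    {
        double a = x, b = F(x);
        double r = std::sqrt(a * a + b * b + 2.0 * mu);
        double g = phi(a, b, mu);         // residual of the nonlinear equation
        if (std::fabs(g) < 1e-12) break;
        // chain rule: dg/dx = (a + b * F'(x)) / r - 1 - F'(x)
        double dg = (a + b * dF(x)) / r - 1.0 - dF(x);
        x -= g / dg;                      // Newton step
    }
    std::printf("x = %.9f, F(x) = %.9f\n", x, F(x)); // expect x ~ 1
}

Starting from x = 5 the iteration settles on x = 1, where x > 0 and F(x) = 0, i.e. the complementarity condition holds.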

