s***@arcor.de
2011-02-16 12:39:27 UTC
Hi,
I would like to support the request that Alexander Riess has posted on
January 28. Just like him, I keep getting std::runtime_errors for large
gradients. I am using the C++ library, not the Python interface. I could
provide additional test cases, if necessary, but I suppose the simple
example posted by Alexander already illustrates the problem very well.
Best regards,
Peter
Thanks for the bugfix.
Today I am back with a class of new but similar test problems:
min -s*x on [-1,1]
with different s.
from numpy import array
import nlopt

s = 1.0e6  # varied across the test cases below

def f(x, grad):
    if grad.size > 0:
        grad[0] = -s
    return -s * x[0]

# Algorithm choice assumed: SLSQP, the "SQP" referred to below.
opt = nlopt.opt(nlopt.LD_SLSQP, 1)
opt.set_lower_bounds([-1.0])
opt.set_upper_bounds([1.0])
opt.set_min_objective(f)
opt.set_xtol_abs(1e-10)
opt.set_xtol_rel(1e-10)
opt.set_ftol_abs(1e-10)
opt.set_ftol_rel(1e-10)
x0 = array([0.0])
1.) s = 1.0
Iteration x f(x) f'(x)
1 0.0 -0 -1.0
2 1.0 -1.0 -1.0
xopt = 1.0
fopt = -1.0
2.) s = 1.0E4
Iteration x f(x) f'(x)
1 0.0 -0 -10000.0
2 0.999806062342 -9998.06062342 -10000.0
3 0.999873394578 -9998.73394578 -10000.0
xopt = 0.999873394578
fopt = -9998.73394578
3.) s = 1.0E5
Iteration x f(x) f'(x)
1 0.0 -0 -100000.0
2 0.562577480974 -56257.7480974 -100000.0
3 1.0 -100000.0 -100000.0
xopt = 1.0
fopt = -100000.0
4.) s = 1.0E6
Iteration x f(x) f'(x)
1 0.0 -0 -1000000.0
xopt = 0.0
fopt = -0.0
5.) s = 1.0E7
Iteration x f(x) f'(x)
1 0.0 -0 -10000000.0
xopt = 0.0
fopt = -0.0
6.) s = 1.0E8
Iteration x f(x) f'(x)
1 0.0 -0 -100000000.0
File "XXX\minExample.py", line 35, in <module>
xOpt = opt.optimize(x0)
File "XXX\Python265\lib\site-packages\nlopt.py", line 231, in optimize
def optimize(*args): return _nlopt.opt_optimize(*args)
RuntimeError: nlopt failure
Conclusion: SQP fails on this class of test problems whenever the
gradient is large ( abs(grad) >= 1.0E6 ). Can you please tell me what is
going wrong?
Note: Using MMA, everything works fine.
Kind regards
Alexander Riess
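One plausible reading of where the failure sets in (an editorial sketch, not taken from NLopt's sources): with ftol_abs = 1e-10 and objective values of magnitude s near the optimum, the smallest representable change in f is roughly eps * s for IEEE double precision. That quantity exceeds the requested absolute tolerance exactly when s >= 1.0E6, which matches the cases above:

```python
import sys

# Machine epsilon for IEEE double precision (~2.22e-16).
eps = sys.float_info.epsilon

ftol_abs = 1e-10  # the tolerance requested in the script above

for s in [1.0, 1.0e4, 1.0e5, 1.0e6, 1.0e7, 1.0e8]:
    # Near the optimum |f| is about s, so the smallest representable
    # change in f is roughly eps * s.
    resolution = eps * s
    attainable = resolution < ftol_abs
    print(f"s = {s:9.1e}  eps*s = {resolution:9.2e}  "
          f"ftol_abs attainable: {attainable}")
```

The cases that succeed (s <= 1.0E5) are exactly those where eps * s < 1e-10. If this reading is right, loosening ftol_abs (or relying on ftol_rel only), or rescaling the objective by 1/s so its gradient has magnitude near 1, might avoid the failure; MMA may simply cope better with the unscaled problem.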