# More general multidimensional minimization in Rb-GSL?

Dear all,

In Ruby-GSL's multidimensional minimization, the FMinimizer interface seems to
require an equal number of variables and parameters, as illustrated in this
example:

```ruby
include GSL::MultiMin

my_f = Proc.new { |v, params|
  x = v[0]; y = v[1]
  p0 = params[0]; p1 = params[1]
  10.0*(x - p0)*(x - p0) + 20.0*(y - p1)*(y - p1) + 30.0
}

my_df = Proc.new { |v, params, df|
  x = v[0]; y = v[1]
  p0 = params[0]; p1 = params[1]
  df[0] = 20.0*(x - p0)
  df[1] = 40.0*(y - p1)
}

my_func = Function_fdf.alloc(my_f, my_df, 2)
my_func.set_params([1.0, 2.0]) # parameters
```
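For what it's worth, the Procs themselves don't appear to tie the number of variables to the number of parameters. Here is a pure-Ruby check (no GSL involved; the objective and its names are my own invention) with three variables but only two parameters — whether `Function_fdf.alloc` accepts such a mismatch is the open question, but the Proc shape at least is unconstrained:

```ruby
# Hypothetical 3-variable objective with only 2 parameters, evaluated
# directly in plain Ruby (GSL not required for this shape check).
my_f = Proc.new { |v, params|
  a, b = params
  # squared deviations of v[0], v[1] from the parameters, plus v[2] squared
  (v[0] - a)**2 + (v[1] - b)**2 + v[2]**2
}

puts my_f.call([1.0, 2.0, 0.0], [1.0, 2.0])  # at the minimum: 0.0
```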

Can this be generalized somehow, without tinkering with the C code?

More precisely, I'd like to minimize over the entries of a vector under some
constraints, something like:

minimize m*(target - vector),

where m is a matrix, all entries of the vectors target and vector are whole
numbers, and the two vectors differ in a prescribed number of entries.
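One way this *might* be fed to a derivative-free minimizer (such as GSL's Nelder-Mead simplex, which needs no `my_df`) is to fold the constraints into the objective with a quadratic penalty. A sketch, under assumptions of my own: plain Ruby arrays stand in for GSL vectors/matrices, and the penalty approach is my suggestion, not something Ruby-GSL provides:

```ruby
# Penalized objective: ||m * (target - v)||^2 plus a quadratic penalty
# when v differs from target in a number of entries other than k.
def objective(m, target, k, weight, v)
  # residual r = m * (target - v), using plain Ruby arrays
  d = target.zip(v).map { |t, x| t - x }
  r = m.map { |row| row.zip(d).map { |a, b| a * b }.sum }
  # count entries in which v differs from target
  differing = target.zip(v).count { |t, x| t != x }
  # penalize deviation from the prescribed number of differing entries
  r.map { |x| x * x }.sum + weight * (differing - k)**2
end

m = [[1.0, 0.0], [0.0, 2.0]]
target = [3.0, 4.0]
# v differs from target in exactly 1 entry, so only the residual term remains
puts objective(m, target, 1, 100.0, [3.0, 5.0])  # prints 4.0
```

The integrality constraint is harder: GSL's minimizers work over real vectors, so you would either round the result or treat this as an integer-programming problem, which GSL doesn't address.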

(Pseudo-)inverting m is not really an option, as it can be quite big
(several thousand rows/columns).
I've tried to implement the constraints in a Proc involving m and
target, called on vector, but can't get it to work with the Ruby-GSL
FMinimizer.

Any ideas?

Thank you very much,

Axel

Axel E. wrote:


I'm not familiar with the minimization algorithms in GSL, but if you
don't mind using R, most of the common constrained and unconstrained
algorithms are behind an R function called "optim" (in the base "stats"
package):

optim package:stats R Documentation

General-purpose Optimization

Description:

```
General-purpose optimization based on Nelder-Mead, quasi-Newton
and conjugate-gradient algorithms. It includes an option for
box-constrained optimization and simulated annealing.
```

There is an R-Ruby interface called, I think, RSRuby, and I think it’s
available as a gem. I personally like Nelder-Mead for small problems. It
converges very slowly but it seldom gets trapped in false local minima,
is robust, will handle constraints easily and works where fancier things
don’t. For large problems, it’s probably too slow to be practical.

The “optim” code itself is written in C and is available in the package
source if you want to interface it directly to Ruby. Or you could just
pick up Nash’s book and implement the algorithm directly.
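If implementing it directly appeals, the simplex method is compact enough to sketch in plain Ruby. This is an illustrative toy along the lines of the algorithm Nash describes (standard coefficients: reflection 1, expansion 2, contraction 0.5, shrink 0.5), not the optim C code; for real work use GSL's simplex minimizer or R's optim:

```ruby
# Minimal Nelder-Mead simplex minimizer (toy sketch, fixed iteration count).
def nelder_mead(f, x0, step: 0.1, iters: 500)
  n = x0.size
  # initial simplex: x0 plus n points offset along each coordinate axis
  simplex = [x0.dup] + (0...n).map { |i| p = x0.dup; p[i] += step; p }
  iters.times do
    simplex.sort_by! { |p| f.call(p) }
    best, worst = simplex.first, simplex.last
    # centroid of all points except the worst
    c = (0...n).map { |i| simplex[0...-1].sum { |p| p[i] } / n }
    refl = (0...n).map { |i| c[i] + (c[i] - worst[i]) }
    if f.call(refl) < f.call(best)
      # reflection is the new best: try expanding further
      exp = (0...n).map { |i| c[i] + 2.0 * (c[i] - worst[i]) }
      simplex[-1] = f.call(exp) < f.call(refl) ? exp : refl
    elsif f.call(refl) < f.call(simplex[-2])
      simplex[-1] = refl
    else
      # contract toward the centroid; if even that fails, shrink everything
      contr = (0...n).map { |i| c[i] + 0.5 * (worst[i] - c[i]) }
      if f.call(contr) < f.call(worst)
        simplex[-1] = contr
      else
        simplex.map! { |p| (0...n).map { |i| best[i] + 0.5 * (p[i] - best[i]) } }
      end
    end
  end
  simplex.min_by { |p| f.call(p) }
end

quad = Proc.new { |v| (v[0] - 3.0)**2 + (v[1] + 1.0)**2 }
p nelder_mead(quad, [0.0, 0.0])  # converges to approximately [3.0, -1.0]
```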

```
Nash, J. C. (1990) _Compact Numerical Methods for Computers.
Linear Algebra and Function Minimisation._ Adam Hilger.
```