jml.optimization
Class PrimalDualInteriorPoint

java.lang.Object
  extended by jml.optimization.PrimalDualInteriorPoint

public class PrimalDualInteriorPoint
extends java.lang.Object

A Java implementation of the primal-dual interior-point method. General nonlinear objective functions and nonlinear inequality constraints are supported, but equality constraints must be linear.

Example:

fval = innerProduct(x, Q.multiply(x)) / 2 + innerProduct(c, x);
H_x = Q;
DF_x = B;
F_x = B.multiply(x).subtract(d);
G_f_x = Q.multiply(x).add(c);
boolean[] flags = null;
int k = 0;
while (true) {
  flags = PrimalDualInteriorPoint.run(A, b, H_x, F_x, DF_x, G_f_x, fval, x, l, v);
  if (flags[0])
    break;
  // Re-evaluate the objective function value; if flags[1] is true,
  // the gradient must be re-evaluated as well.
  fval = innerProduct(x, Q.multiply(x)) / 2 + innerProduct(c, x);
  F_x = B.multiply(x).subtract(d);
  if (flags[1]) {
    k = k + 1;
    // Compute the gradient
    G_f_x = Q.multiply(x).add(c);
  }
}
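The convergence test above (flags[0]) is driven by the primal, dual, and centering residuals tracked in the r_prim, r_dual, and r_cent fields, together with the surrogate duality gap eta_t. As a rough illustration of what these quantities are, the following self-contained sketch computes them for the standard primal-dual interior-point formulation; the exact formulas used inside this class are not shown in this page, so treat the definitions here as an assumption.

```java
// Illustrative sketch only: the textbook primal-dual interior-point residuals.
// Names mirror this class's fields, but the class's internal formulas are an
// assumption, not taken from its source.
public class ResidualSketch {

    // r_dual = grad f(x) + DF(x)^T * lambda + A^T * nu
    static double[] dualResidual(double[] gradF, double[][] DF, double[] lambda,
                                 double[][] A, double[] nu) {
        int n = gradF.length;
        double[] r = new double[n];
        for (int j = 0; j < n; j++) {
            r[j] = gradF[j];
            for (int i = 0; i < DF.length; i++) r[j] += DF[i][j] * lambda[i];
            for (int k = 0; k < A.length; k++) r[j] += A[k][j] * nu[k];
        }
        return r;
    }

    // r_cent(i) = -lambda(i) * F(i) - 1/t, where t is the barrier parameter
    static double[] centeringResidual(double[] F, double[] lambda, double t) {
        double[] r = new double[F.length];
        for (int i = 0; i < F.length; i++) r[i] = -lambda[i] * F[i] - 1.0 / t;
        return r;
    }

    // r_prim = A x - b
    static double[] primalResidual(double[][] A, double[] x, double[] b) {
        double[] r = new double[A.length];
        for (int k = 0; k < A.length; k++) {
            r[k] = -b[k];
            for (int j = 0; j < x.length; j++) r[k] += A[k][j] * x[j];
        }
        return r;
    }

    // Surrogate duality gap eta = -F(x)^T * lambda (cf. the eta_t field).
    static double surrogateGap(double[] F, double[] lambda) {
        double eta = 0;
        for (int i = 0; i < F.length; i++) eta -= F[i] * lambda[i];
        return eta;
    }
}
```

For example, for f(x) = x'x/2 + 1'x with the single inequality x1 - 2 <= 0 and the single equality x1 + x2 = 1, at x = (0.5, 0.5), lambda = 1, nu = 0, t = 10, these formulas give dual residual (2.5, 1.5), centering residual 1.4, primal residual 0, and surrogate gap 1.5.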

Version:
Feb. 28th, 2013
Author:
Mingjie Qian

Field Summary
private static double alpha
          Parameter alpha for backtracking line search.
private static double beta
          Step shrinkage factor beta for backtracking line search.
private static boolean converge
          Whether the algorithm has converged.
private static double epsilon
          Tolerance for the surrogate duality gap.
private static double epsilon_feas
          Tolerance for primal and dual feasibility.
private static double eta_t
          Surrogate duality gap.
private static boolean gradientRequired
          Whether the gradient is required for the next step.
private static java.util.ArrayList<java.lang.Double> J
          A list holding the sequence of objective function values.
private static int k
          Iteration counter.
private static org.apache.commons.math.linear.RealMatrix l
          Lambda, i.e., the Lagrangian multipliers for inequality constraints.
private static org.apache.commons.math.linear.RealMatrix l_nt
          Newton step for Lambda.
private static int m
          Number of inequality constraints.
private static org.apache.commons.math.linear.RealMatrix Matrix
          Coefficient matrix of the linear system solved for the Newton step.
private static double mu
          Factor by which the barrier parameter t is increased in each outer iteration.
private static int n
          Number of unknown variables
private static int p
          Number of equality constraints
private static org.apache.commons.math.linear.RealMatrix r_cent
          Centering residual.
private static org.apache.commons.math.linear.RealMatrix r_dual
          Dual residual.
private static org.apache.commons.math.linear.RealMatrix r_prim
          Primal residual.
private static double residual
          Norm of the current overall residual.
private static double s
          Step length for backtracking line search.
private static int state
          State for the state machine.
private static double t
          Barrier parameter.
private static org.apache.commons.math.linear.RealMatrix v
          Nu, i.e., the Lagrangian multipliers for equality constraints.
private static org.apache.commons.math.linear.RealMatrix v_nt
          Newton step for Nu.
private static org.apache.commons.math.linear.RealMatrix Vector
          Right-hand side vector of the linear system solved for the Newton step.
private static org.apache.commons.math.linear.RealMatrix x
          Unknown variable vector.
private static org.apache.commons.math.linear.RealMatrix x_nt
          Newton step for x.
private static org.apache.commons.math.linear.RealMatrix z_pd
          Concatenation of the primal and dual variables.
 
Constructor Summary
PrimalDualInteriorPoint()
           
 
Method Summary
static org.apache.commons.math.linear.RealMatrix getOptimalLambda()
          Get Lambda.
static org.apache.commons.math.linear.RealMatrix getOptimalNu()
          Get Nu.
static void main(java.lang.String[] args)
           
static boolean[] run(org.apache.commons.math.linear.RealMatrix A, org.apache.commons.math.linear.RealMatrix b, org.apache.commons.math.linear.RealMatrix H_x, org.apache.commons.math.linear.RealMatrix F_x, org.apache.commons.math.linear.RealMatrix DF_x, org.apache.commons.math.linear.RealMatrix G_f_x, double fval, org.apache.commons.math.linear.RealMatrix x)
          Run this optimization algorithm.
static boolean[] run(org.apache.commons.math.linear.RealMatrix A, org.apache.commons.math.linear.RealMatrix b, org.apache.commons.math.linear.RealMatrix H_x, org.apache.commons.math.linear.RealMatrix F_x, org.apache.commons.math.linear.RealMatrix DF_x, org.apache.commons.math.linear.RealMatrix G_f_x, double fval, org.apache.commons.math.linear.RealMatrix x_s, org.apache.commons.math.linear.RealMatrix l_opt, org.apache.commons.math.linear.RealMatrix v_opt)
          Run this optimization algorithm.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

x

private static org.apache.commons.math.linear.RealMatrix x
Unknown variable vector.


l

private static org.apache.commons.math.linear.RealMatrix l
Lambda, i.e., the Lagrangian multipliers for inequality constraints.


v

private static org.apache.commons.math.linear.RealMatrix v
Nu, i.e., the Lagrangian multipliers for equality constraints.


gradientRequired

private static boolean gradientRequired
Whether the gradient is required for the next step.


converge

private static boolean converge
Whether the algorithm has converged.


state

private static int state
State for the state machine:
0: Initialization
1: Before ensuring f_i(x) <= 0, i = 1, 2, ..., m
2: Ensuring f_i(x) <= 0, i = 1, 2, ..., m
3: Backtracking line search
4: After backtracking line search
5: Convergence


t

private static double t
Barrier parameter.

k

private static int k
Iteration counter.


n

private static int n
Number of unknown variables


p

private static int p
Number of equality constraints


m

private static int m
Number of inequality constraints


mu

private static double mu
Factor by which the barrier parameter t is increased in each outer iteration.

epsilon

private static double epsilon
Tolerance for the surrogate duality gap.

epsilon_feas

private static double epsilon_feas
Tolerance for primal and dual feasibility.

alpha

private static double alpha
Parameter alpha for backtracking line search.

beta

private static double beta
Step shrinkage factor beta for backtracking line search.

eta_t

private static double eta_t
Surrogate duality gap.

s

private static double s
Step length for backtracking line search.
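The fields s, alpha, and beta together implement a backtracking line search: the step length s starts at 1 and is repeatedly multiplied by beta until the residual decreases sufficiently. The sketch below illustrates the standard residual-norm variant on a scalar root-finding problem; the constants alpha = 0.01 and beta = 0.5 are typical textbook choices, not necessarily the values this class uses.

```java
import java.util.function.DoubleUnaryOperator;

// Illustrative sketch of backtracking line search on a residual norm, as used
// in primal-dual interior-point methods. alpha and beta here are assumed
// textbook defaults; this class's actual values are not documented.
public class BacktrackingSketch {

    // Shrink s by beta until |r(x + s*dx)| <= (1 - alpha*s) * |r(x)|.
    static double backtrack(DoubleUnaryOperator r, double x, double dx,
                            double alpha, double beta) {
        double s = 1.0;
        double r0 = Math.abs(r.applyAsDouble(x));
        while (Math.abs(r.applyAsDouble(x + s * dx)) > (1 - alpha * s) * r0) {
            s *= beta;
        }
        return s;
    }

    public static void main(String[] args) {
        // The Newton step for r(x) = x^3 - 1 from x = 0.1 overshoots badly
        // (dx = -r/r' = 33.3), so the line search must shrink s.
        DoubleUnaryOperator r = x -> x * x * x - 1;
        double x = 0.1;
        double dx = -(x * x * x - 1) / (3 * x * x);
        double s = backtrack(r, x, dx, 0.01, 0.5);
        System.out.println(s); // a small step for this overshooting direction
    }
}
```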


residual

private static double residual
Norm of the current overall residual.

r_prim

private static org.apache.commons.math.linear.RealMatrix r_prim
Primal residual.

r_dual

private static org.apache.commons.math.linear.RealMatrix r_dual
Dual residual.

r_cent

private static org.apache.commons.math.linear.RealMatrix r_cent
Centering residual.

Matrix

private static org.apache.commons.math.linear.RealMatrix Matrix
Coefficient matrix of the linear system solved for the Newton step.

Vector

private static org.apache.commons.math.linear.RealMatrix Vector
Right-hand side vector of the linear system solved for the Newton step.

z_pd

private static org.apache.commons.math.linear.RealMatrix z_pd
Concatenation of the primal and dual variables.

x_nt

private static org.apache.commons.math.linear.RealMatrix x_nt
Newton step for x.

l_nt

private static org.apache.commons.math.linear.RealMatrix l_nt
Newton step for Lambda.

v_nt

private static org.apache.commons.math.linear.RealMatrix v_nt
Newton step for Nu.

J

private static java.util.ArrayList<java.lang.Double> J
A list holding the sequence of objective function values.

Constructor Detail

PrimalDualInteriorPoint

public PrimalDualInteriorPoint()
Method Detail

main

public static void main(java.lang.String[] args)
Parameters:
args -

getOptimalLambda

public static org.apache.commons.math.linear.RealMatrix getOptimalLambda()
Get Lambda.

Returns:
Lambda

getOptimalNu

public static org.apache.commons.math.linear.RealMatrix getOptimalNu()
Get Nu.

Returns:
Nu

run

public static boolean[] run(org.apache.commons.math.linear.RealMatrix A,
                            org.apache.commons.math.linear.RealMatrix b,
                            org.apache.commons.math.linear.RealMatrix H_x,
                            org.apache.commons.math.linear.RealMatrix F_x,
                            org.apache.commons.math.linear.RealMatrix DF_x,
                            org.apache.commons.math.linear.RealMatrix G_f_x,
                            double fval,
                            org.apache.commons.math.linear.RealMatrix x)
Run this optimization algorithm.

Example:

fval = innerProduct(x, Q.multiply(x)) / 2 + innerProduct(c, x);
H_x = Q;
DF_x = B;
F_x = B.multiply(x).subtract(d);
G_f_x = Q.multiply(x).add(c);
boolean[] flags = null;
int k = 0;
while (true) {
  flags = PrimalDualInteriorPoint.run(A, b, H_x, F_x, DF_x, G_f_x, fval, x);
  if (flags[0])
    break;
  // Re-evaluate the objective function value; if flags[1] is true,
  // the gradient must be re-evaluated as well.
  fval = innerProduct(x, Q.multiply(x)) / 2 + innerProduct(c, x);
  F_x = B.multiply(x).subtract(d);
  if (flags[1]) {
    k = k + 1;
    // Compute the gradient
    G_f_x = Q.multiply(x).add(c);
  }
}

Parameters:
A - coefficient matrix for the linear equality constraints
b - constant vector for the linear equality constraints
H_x - Hessian at the current point x for the objective function and the nonlinear inequality-constraint functions
F_x - vector of inequality-constraint function values; F_x(i) is the value of the i-th inequality-constraint function at x
DF_x - Jacobian of F_x at the current point x
G_f_x - gradient of the objective function at the current point x
fval - objective function value at the current point x
x - current point; updated in place during the procedure
Returns:
a boolean array of two elements: {converge, gradientRequired}

run

public static boolean[] run(org.apache.commons.math.linear.RealMatrix A,
                            org.apache.commons.math.linear.RealMatrix b,
                            org.apache.commons.math.linear.RealMatrix H_x,
                            org.apache.commons.math.linear.RealMatrix F_x,
                            org.apache.commons.math.linear.RealMatrix DF_x,
                            org.apache.commons.math.linear.RealMatrix G_f_x,
                            double fval,
                            org.apache.commons.math.linear.RealMatrix x_s,
                            org.apache.commons.math.linear.RealMatrix l_opt,
                            org.apache.commons.math.linear.RealMatrix v_opt)
Run this optimization algorithm.

Example:

fval = innerProduct(x, Q.multiply(x)) / 2 + innerProduct(c, x);
H_x = Q;
DF_x = B;
F_x = B.multiply(x).subtract(d);
G_f_x = Q.multiply(x).add(c);
boolean[] flags = null;
int k = 0;
while (true) {
  flags = PrimalDualInteriorPoint.run(A, b, H_x, F_x, DF_x, G_f_x, fval, x, l, v);
  if (flags[0])
    break;
  // Re-evaluate the objective function value; if flags[1] is true,
  // the gradient must be re-evaluated as well.
  fval = innerProduct(x, Q.multiply(x)) / 2 + innerProduct(c, x);
  F_x = B.multiply(x).subtract(d);
  if (flags[1]) {
    k = k + 1;
    // Compute the gradient
    G_f_x = Q.multiply(x).add(c);
  }
}

Parameters:
A - coefficient matrix for the linear equality constraints
b - constant vector for the linear equality constraints
H_x - Hessian at the current point x for the objective function and the nonlinear inequality-constraint functions
F_x - vector of inequality-constraint function values; F_x(i) is the value of the i-th inequality-constraint function at x
DF_x - Jacobian of F_x at the current point x
G_f_x - gradient of the objective function at the current point x
fval - objective function value at the current point x
x_s - current point; updated in place during the procedure
l_opt - optimal Lambda
v_opt - optimal Nu
Returns:
a boolean array of two elements: {converge, gradientRequired}