jml.optimization
Class AcceleratedProximalGradient

java.lang.Object
  extended by jml.optimization.AcceleratedProximalGradient

public class AcceleratedProximalGradient
extends java.lang.Object

A Java implementation of the accelerated proximal gradient method. It solves optimization problems of the form:

min_X g(X) + t * h(X).

This is a general algorithmic interface: only the gradient of g and the objective function values need to be computed outside the class; the proximal mapping of h is supplied through the prox field.
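
At each iteration, the method forms an extrapolated search point from the two most recent iterates and applies the proximal mapping to a gradient step taken from that point. A standard (FISTA-style) update, consistent with the field documentation below, is

Y_k = X_k + ((k - 1) / (k + 2)) * (X_k - X_{k - 1}),
X_{k + 1} = prox_th(Y_k - t * Grad_Y_k),

where t is the step length chosen by backtracking line search. The momentum coefficient (k - 1) / (k + 2) is the textbook choice; the exact coefficient used by this class is an implementation detail.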

A simple example:

AcceleratedProximalGradient.prox = new ProxPlus(); // Set the proximal mapping function
double epsilon = ...; // Convergence tolerance
RealMatrix X = ...; // Initial matrix (vector) you want to optimize
RealMatrix G = ...; // Gradient of g at the initial matrix (vector) you want to optimize
double gval = ...; // Initial objective function value for g(X)
double hval = ...; // Initial objective function value for t * h(X)

boolean[] flags = null;
while (true) {
  flags = AcceleratedProximalGradient.run(G, gval, hval, epsilon, X); // Update X in place
  if (flags[0]) // flags[0] indicates whether the algorithm has converged
    break;
  gval = ...; // Compute the new objective function value for g(X) at the updated X
  hval = ...; // Compute the new objective function value for t * h(X) at the updated X
  if (flags[1]) // flags[1] indicates whether the gradient at the updated X is required
    G = ...; // Compute the gradient at the new X
}
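
As a more concrete sketch, the fragment below solves a small nonnegative least squares problem, min_{X >= 0} 0.5 * ||A * X - B||_F^2. It assumes that ProxPlus implements projection onto the nonnegative orthant, so that h is the indicator of {X >= 0} and hval remains 0 at every iterate the prox returns; the data are purely illustrative.

import org.apache.commons.math.linear.Array2DRowRealMatrix;
import org.apache.commons.math.linear.RealMatrix;

// g(X) = 0.5 * ||A * X - B||_F^2, with gradient A' * (A * X - B); h = indicator of X >= 0
RealMatrix A = new Array2DRowRealMatrix(new double[][] { {1, 2}, {3, 4}, {5, 6} });
RealMatrix B = new Array2DRowRealMatrix(new double[][] { {7}, {8}, {9} });
RealMatrix X = new Array2DRowRealMatrix(2, 1); // Start from X = 0

AcceleratedProximalGradient.prox = new ProxPlus(); // Assumed: projection onto {X >= 0}

RealMatrix R = A.multiply(X).subtract(B);          // Residual A * X - B
RealMatrix G = A.transpose().multiply(R);          // Gradient of g at X
double gval = 0.5 * Math.pow(R.getFrobeniusNorm(), 2);
double hval = 0;  // Indicator of the feasible set is 0 at feasible points
double epsilon = 1e-6;

boolean[] flags = null;
while (true) {
  flags = AcceleratedProximalGradient.run(G, gval, hval, epsilon, X); // Update X in place
  if (flags[0])
    break;
  R = A.multiply(X).subtract(B);
  gval = 0.5 * Math.pow(R.getFrobeniusNorm(), 2);
  hval = 0; // ProxPlus keeps X nonnegative, so the indicator term stays 0
  if (flags[1])
    G = A.transpose().multiply(R);
}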

Version:
1.0 Mar. 11th, 2013
Author:
Mingjie Qian

Field Summary
private static double beta
          Shrinkage factor for the backtracking line search.
private static boolean converge
          Whether the algorithm has converged.
private static double fval_Y_k
          f(Y_k) = g(Y_k) + h(Y_k).
private static org.apache.commons.math.linear.RealMatrix G_Y_k
          X_{k + 1} = prox_th(Y_k - t * Grad_Y_k) = Y_k - t * G_Y_k.
private static org.apache.commons.math.linear.RealMatrix Grad_Y_k
          Current gradient.
private static boolean gradientRequired
          Whether the gradient is required for the next step.
private static double gval_Y_k
          g(Y_k).
private static double hval_Y_k
          h(Y_k).
private static java.util.ArrayList<java.lang.Double> J
          A list holding the sequence of objective function values.
private static int k
          Iteration counter.
static ProximalMapping prox
          Proximity operator: prox_th(X).
private static int state
          State of the internal finite-state machine.
private static double t
          Step length for backtracking line search.
private static org.apache.commons.math.linear.RealMatrix X
          Current matrix variable that we want to optimize.
private static org.apache.commons.math.linear.RealMatrix X_pre
          Matrix variable from the previous iteration.
private static org.apache.commons.math.linear.RealMatrix Y
          Extrapolated search point computed from the current and previous iterates.
 
Constructor Summary
AcceleratedProximalGradient()
           
 
Method Summary
static void main(java.lang.String[] args)
           
static boolean[] run(org.apache.commons.math.linear.RealMatrix Grad_t, double gval_t, double hval_t, double epsilon, org.apache.commons.math.linear.RealMatrix X_t)
          Main entry for the accelerated proximal gradient algorithm.
 
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Field Detail

prox

public static ProximalMapping prox
Proximity operator: prox_th(X).
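
For example, when h(X) = ||X||_1, prox_th applies elementwise soft-thresholding, (prox_th(X))_{ij} = sign(X_{ij}) * max(|X_{ij}| - t, 0); when h is the indicator of the nonnegative orthant, as ProxPlus is assumed to implement in the class-level example, prox_th(X) = max(X, 0) elementwise.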


Grad_Y_k

private static org.apache.commons.math.linear.RealMatrix Grad_Y_k
Current gradient.


X

private static org.apache.commons.math.linear.RealMatrix X
Current matrix variable that we want to optimize.


X_pre

private static org.apache.commons.math.linear.RealMatrix X_pre
Matrix variable from the previous iteration.


Y

private static org.apache.commons.math.linear.RealMatrix Y
Extrapolated search point computed from the current and previous iterates.


G_Y_k

private static org.apache.commons.math.linear.RealMatrix G_Y_k
X_{k + 1} = prox_th(Y_k - t * Grad_Y_k) = Y_k - t * G_Y_k, i.e., G_Y_k = (Y_k - X_{k + 1}) / t is the composite gradient mapping at Y_k.


gval_Y_k

private static double gval_Y_k
g(Y_k).


hval_Y_k

private static double hval_Y_k
h(Y_k).


fval_Y_k

private static double fval_Y_k
f(Y_k) = g(Y_k) + h(Y_k).


gradientRequired

private static boolean gradientRequired
Whether the gradient is required for the next step.


converge

private static boolean converge
Whether the algorithm has converged.


state

private static int state
State of the internal finite-state machine:
0: Initialization
1: Before backtracking line search
2: Backtracking line search
3: After backtracking line search
4: Convergence


t

private static double t
Step length for backtracking line search.
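
In standard proximal-gradient backtracking, the candidate step X_{k + 1} = prox_th(Y_k - t * Grad_Y_k) is accepted once the sufficient-decrease condition

g(X_{k + 1}) <= g(Y_k) + <Grad_Y_k, X_{k + 1} - Y_k> + (1 / (2 * t)) * ||X_{k + 1} - Y_k||_F^2

holds; otherwise t is shrunk by the factor beta and the step is retried. This is the textbook rule; the exact condition used by this class is an implementation detail.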


beta

private static double beta
Shrinkage factor for the backtracking line search.

k

private static int k
Iteration counter.


J

private static java.util.ArrayList<java.lang.Double> J
A list holding the sequence of objective function values.

Constructor Detail

AcceleratedProximalGradient

public AcceleratedProximalGradient()
Method Detail

main

public static void main(java.lang.String[] args)
Parameters:
args -

run

public static boolean[] run(org.apache.commons.math.linear.RealMatrix Grad_t,
                            double gval_t,
                            double hval_t,
                            double epsilon,
                            org.apache.commons.math.linear.RealMatrix X_t)
Main entry for the accelerated proximal gradient algorithm. The matrix variable to be optimized will be updated in place to a better solution point with a lower objective function value.

Parameters:
Grad_t - gradient at X_t, required on the first invocation
gval_t - g(X_t)
hval_t - h(X_t)
epsilon - convergence precision
X_t - current matrix variable to be optimized; it will be updated in place to a better solution point with a lower objective function value
Returns:
a boolean array with two elements: {converge, gradientRequired}
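
Note that the algorithm's state (iterate, step length, line-search status, and so on) is kept in static fields, so this class can drive only one optimization at a time; the typical driver loop is shown in the class-level example above.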