The toolbox is just one file, the qmm.py module. The module contains essentially three parts:

- The optimization algorithms, implemented as functions, that minimize a criterion built as a sum of objective functions.
- The Objective class that implements \(\mu \Psi(Vx)\), where \(\Psi(u) = \sum_i \varphi(u_i)\), \(u\) is a vector, and \(\varphi\) a scalar function.
- The Loss class that implements \(\varphi\).
The first step is to implement the forward operator \(V\) and its adjoint \(V^T\). The user is in charge of this. They are callables that can be Python functions or methods of objects.
    def forward(array):
        # ...
        # do computation
        # ...
        return out  # an array or a list of arrays

    def adjoint(out):
        # ...
        # do computation
        # ...
        return array
The forward parameter must accept a numpy.ndarray \(x\), of any shape, as its unique parameter. If the callable needs extra arguments, I recommend using partial from the functools module of the standard library, or using object-oriented programming; a sketch of both approaches is given below.
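For illustration only, here is a minimal sketch of both approaches; the explicit matrix V and the names matvec and MatrixOperator are hypothetical and only serve to keep the example self-contained:

    import functools
    import numpy as np

    # Hypothetical explicit operator, only to keep the sketch self-contained.
    V = np.random.default_rng(0).normal(size=(20, 50))

    def matvec(matrix, array):
        # Apply `matrix` to a 1-D ndarray.
        return matrix @ array

    # functools.partial binds `matrix`, so the resulting callables accept
    # the array as their unique parameter, as required.
    forward = functools.partial(matvec, V)
    adjoint = functools.partial(matvec, V.T)

    # Object-oriented alternative: bound methods are valid callables too.
    class MatrixOperator:
        def __init__(self, matrix):
            self.matrix = matrix

        def forward(self, array):
            return self.matrix @ array

        def adjoint(self, out):
            return self.matrix.T @ out

    op = MatrixOperator(V)  # pass op.forward and op.adjoint to the toolbox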
The output of the forward operator must be an ndarray of any shape, or a list of ndarray (of any shape as well). Consequently, the adjoint operator must accept an ndarray or a list of ndarray as parameter, and return a unique ndarray, of any shape, as output.
The list-of-arrays convention allows mixed operators, like a combination of data observation models or multiple regularizations (see the sketch below). Everything is internally vectorized; therefore, the use of lists of arrays implies memory copies of the arrays.
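As a sketch of such a mixed operator (the operators V1 and V2 below are hypothetical, only to illustrate the convention): the forward returns a list with one array per model, and the adjoint accepts that list and sums the individual back-projections.

    import numpy as np

    rng = np.random.default_rng(1)
    V1 = rng.normal(size=(20, 50))  # hypothetical first observation model
    V2 = rng.normal(size=(30, 50))  # hypothetical second observation model

    def forward_mixed(array):
        # One output array per model, returned as a list.
        return [V1 @ array, V2 @ array]

    def adjoint_mixed(out_list):
        # The adjoint of a stacked operator is the sum of the adjoints.
        return V1.T @ out_list[0] + V2.T @ out_list[1]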
The second step is to instantiate the loss function \(\varphi\) and the objective functions:

    from qmm import Huber, Objective, QuadObjective, mmmg

    phi = Huber(delta=10)
The objective function \(\mu \Psi(Vx)\), named prior here, can be instantiated as

    prior = Objective(forward, adjoint, phi, hyper=0.01)
If a quadratic objective like \(\|y - H x\|_2^2\) is needed, the specific QuadObjective can be used:

    data_adeq = QuadObjective(H, Ht, data=data)
In the example above, the hyperparameter value is \(\mu = 1\) and the data term is nonzero. For the prior term, the data is 0 by default and the hyperparameter is set to \(\mu = 0.01\).
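For instance, assuming QuadObjective accepts the same hyper keyword as Objective (an assumption here, not stated above), the weight of the data term could be set explicitly:

    # Assumption: QuadObjective accepts the same `hyper` keyword as Objective.
    data_adeq = QuadObjective(H, Ht, data=data, hyper=2.0)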
Then you can run the algorithm, mmmg() for instance:

    result = mmmg([data_adeq, prior], init, max_iter=200)
where init is the initial array and the list [data_adeq, prior] means that the two objective functions are summed. The output result is an instance of OptimizeResult. Equivalently, the objectives can be summed beforehand:

    fun = data_adeq + prior
    result = mmmg(fun, init, max_iter=200)
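As a usage note, and assuming the result object follows a scipy-like OptimizeResult convention with the minimizer stored in the x attribute (an assumption here):

    # Assumption: the minimizer is exposed as the `x` attribute,
    # as in a scipy-like OptimizeResult.
    x_hat = result.x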
Two algorithms are proposed:

- mmcg() that implements a Polak-Ribière Conjugate Gradient.
- mmmg() that implements a subspace method by Memory Gradient with a 2D step (which therefore includes the conjugacy parameter).

Both algorithms have a closed-form formula for the 1D or 2D step thanks to Quadratic Majorization-Minimization.
In addition, a Linear Conjugate Gradient lcg() is implemented for quadratic objectives.
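As a sketch, and assuming mmcg() and lcg() share the calling convention shown above for mmmg() (an assumption, not verified here):

    from qmm import mmcg, lcg

    # Assumption: same signature as mmmg() (list of objectives, initial
    # array, maximum number of iterations).
    result_cg = mmcg([data_adeq, prior], init, max_iter=200)

    # lcg() targets quadratic objectives such as QuadObjective.
    result_lcg = lcg([data_adeq], init, max_iter=200)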