# Tutorial

The toolbox is just one file, the `qmm.py` module. The module contains
essentially three parts:

- The optimization algorithms, implemented as functions, that minimize a sum
  of objectives.
- The `Objective` class that implements \(\mu \Psi(Vx)\) with
  \(\Psi(u) = \sum_i \varphi(u_i)\), where \(u\) is a vector and \(\varphi\)
  a scalar function.
- The `Loss` class that implements \(\varphi\).

## Operators

The first thing to do is to implement the forward operator \(V\) and its adjoint \(V^T\). The user is in charge of this. They are callables that can be Python functions or methods of objects.

```
def forward(array):
    # ...
    # do computation
    # ...
    return out  # an ndarray or a list of ndarray

def adjoint(out):
    # ...
    # do computation
    # ...
    return array
```

The `forward` callable must accept a `numpy.ndarray` \(x\), of any shape, as
its unique parameter. If extra arguments are needed, I recommend using
`partial` from the `functools` module of the standard library, or using
object-oriented programming. The output of the forward operator must be an
`ndarray` of any shape, or a list of `ndarray` (each of any shape).

Consequently, the adjoint operator must accept an `ndarray`, or a list of
`ndarray`, as parameter, and must return a unique `ndarray`, of any shape, as
output.

Note

The list of arrays allows mixed operators, like a combination of data observation models, or multiple regularizations.

Everything is internally vectorized. Therefore, the use of lists of arrays implies memory copies of arrays.
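As an illustration, here is a minimal sketch (not part of the toolbox) of a forward/adjoint pair given by an explicit matrix, with `functools.partial` used to turn the two-argument functions into the single-argument callables the toolbox expects. The names `forward`, `adjoint`, and `matrix` are illustrative; real applications would typically use something like an FFT-based convolution instead of a dense matrix.

```python
import numpy as np
from functools import partial

rng = np.random.default_rng(0)
matrix = rng.standard_normal((5, 3))  # a toy linear operator V

def forward(array, mat):
    # V x
    return mat @ array

def adjoint(out, mat):
    # V^T y
    return mat.T @ out

# functools.partial fixes the extra argument so that each callable
# takes a single array, as required.
V = partial(forward, mat=matrix)
Vt = partial(adjoint, mat=matrix)

# Sanity check of the adjoint property <Vx, y> == <x, V^T y>.
x = rng.standard_normal(3)
y = rng.standard_normal(5)
assert np.isclose((V(x) * y).sum(), (x * Vt(y)).sum())
```

Checking the adjoint identity numerically, as above, is a cheap way to catch a mismatched forward/adjoint pair before running the optimization.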

## Losses

The second step is to instantiate the loss function \(\varphi\), `Huber` for
instance

```
from qmm import Huber, Objective, QuadObjective, mmmg
phi = Huber(delta=10)
```

Several losses are implemented, see Background and the qmm module.
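For intuition, the Huber loss is quadratic near the origin and linear in the tails, with the switch at the threshold `delta`. A minimal NumPy sketch of one common convention (illustrative only, not the `qmm.Huber` class itself, whose scaling may differ):

```python
import numpy as np

def huber(u, delta=10.0):
    # Quadratic for |u| <= delta, linear beyond, with matching value
    # and slope at |u| = delta.
    u = np.asarray(u, dtype=float)
    return np.where(
        np.abs(u) <= delta,
        u**2 / 2,
        delta * np.abs(u) - delta**2 / 2,
    )

print(huber([0.0, 5.0, 20.0]))  # 0.0, 12.5, 150.0
```

The linear tails are what make this loss robust: large residuals are penalized less severely than with a purely quadratic loss.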

## Objectives

Then, an `Objective` \(\mu \Psi(Vx)\), named `prior`, can be instantiated

```
prior = Objective(forward, adjoint, phi, hyper=0.01)
```

If a quadratic objective like \(\|y - H x\|_2^2\) is needed, the specific
class `QuadObjective` can be used

```
data_adeq = QuadObjective(H, Ht, data=data)
```

Note

In the example above, the hyperparameter value is set to \(\mu = 1\) and the data term is different from 0. For the prior term, the data is 0 by default and the hyperparameter is set to 0.01.
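To make the quadratic term concrete, here is a small NumPy sketch of the value \(\tfrac{1}{2}\|y - Hx\|_2^2\) and its gradient \(H^T(Hx - y)\). This is illustrative only; `QuadObjective` handles this internally, and its scaling convention may differ (the \(\tfrac{1}{2}\) factor here is just one common choice).

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((4, 3))
y = rng.standard_normal(4)
x = rng.standard_normal(3)

def quad_value(x):
    # 1/2 ||y - Hx||^2
    r = H @ x - y
    return 0.5 * r @ r

def quad_grad(x):
    # gradient H^T (Hx - y)
    return H.T @ (H @ x - y)

# Finite-difference check that the gradient matches the value.
eps = 1e-6
g = quad_grad(x)
for i in range(3):
    e = np.zeros(3)
    e[i] = eps
    fd = (quad_value(x + e) - quad_value(x - e)) / (2 * eps)
    assert np.isclose(fd, g[i], atol=1e-5)
```

At the least-squares solution the gradient vanishes (the normal equations \(H^T H x = H^T y\)), which is what the optimization algorithms below exploit.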

## Optimization algorithms

Then you can run the algorithm, `mmmg()` for instance,

```
result = mmmg([data_adeq, prior], init, max_iter=200)
```

where the list `[data_adeq, prior]` means that the two objective functions
are summed. The output `result` is an instance of `OptimizeResult`.

Note

`BaseObjective` instances can be summed, returning a `MixedObjective` that
behaves like a list with additional functionality. The above is equivalent
to

```
fun = data_adeq + prior
result = mmmg(fun, init, max_iter=200)
```

Two algorithms are proposed:

- `mmcg()`, which implements a Polak-Ribière Conjugate Gradient.
- `mmmg()`, which implements a subspace method by Memory Gradient with a 2D
  step (that, therefore, includes the conjugacy parameter).

Both algorithms have closed-form formulas for the 1D or 2D step, obtained by Majorization-Minimization with a quadratic majorant.
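To see why a closed-form step is possible, consider the purely quadratic case \(J(x) = \tfrac{1}{2} x^T B x - b^T x\): minimizing \(J\) along a direction \(d\) gives the exact step \(\alpha = d^T r / (d^T B d)\) with residual \(r = b - Bx\). A minimal sketch of this 1D step (illustrative; the toolbox internals handle the general non-quadratic case through the quadratic majorant):

```python
import numpy as np

B = np.array([[2.0, 0.0], [0.0, 4.0]])  # symmetric positive definite
b = np.array([1.0, 1.0])

def value(x):
    # J(x) = 1/2 x^T B x - b^T x
    return 0.5 * x @ B @ x - b @ x

x = np.zeros(2)
d = b - B @ x                   # steepest-descent direction (the residual)
alpha = (d @ d) / (d @ B @ d)   # closed-form 1D step
x_new = x + alpha * d

assert value(x_new) < value(x)  # the step strictly decreases J
```

For a non-quadratic \(\varphi\), the same formula is applied to the quadratic majorant of \(J\) at the current point, which is what makes the 1D and 2D steps cheap at every iteration.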

In addition, a Linear Conjugate Gradient `lcg()` is implemented for quadratic
objectives.