Forward Operators#

In computational imaging, the ability to accurately model the acquisition system is a cornerstone of solving inverse problems. The LinOp()🔗 class in Pyxu serves as a foundational abstraction for defining linear forward operators, which map the original signal or image to the data we observe. This guide distills the knowledge needed to implement custom forward operators using the LinOp() class and related classes in the Pyxu library.

Important Note: LinOp and its derivatives are designed as abstract base classes. This means they serve as templates or “blueprints” for creating specific operators that suit your project needs. You’re not supposed to instantiate them directly. Instead, you have two main options:

  1. Subclassing: Extend these classes to create customized operators tailored to your problem.

  2. Generic Constructor Routine: Utilize the from_source🔗 function to define new operators from their core methods, like apply()🔗, adjoint()🔗, or jacobian()🔗.

Additionally, don’t forget to explore our comprehensive Reference API. It features pre-implemented versions of many commonly used forward operators, serving as both a shortcut for common tasks and a useful learning resource.

The Importance of Matrix-Free Operators#

Computational imaging often involves working with (very) large datasets, which pose challenges in terms of scalability. In Pyxu, the strategy for addressing such problems is to employ matrix-free operators, which are defined implicitly via methods rather than explicitly via matrices. This strategy saves memory and improves computational efficiency.

For example, the following code snippet implements a matrix-free sum operator operating over 1D inputs:

import numpy as np
from pyxu.abc import LinOp

class Sum(LinOp):
    def __init__(self, dim: int):
        super().__init__(shape=(1, dim))  # (codim, dim): maps R^dim to a single value

    def apply(self, arr):
        return arr.sum(keepdims=True)

    def adjoint(self, arr):
        return arr * np.ones(self.shape[1])

This code snippet demonstrates how to define a simple matrix-free linear operator that computes the sum of elements in an array. Let’s break down the components:

  • Class Definition:

    class Sum(LinOp):
        pass
    

    Here, a new class called Sum() is defined, inheriting from Pyxu’s LinOp()🔗 base class, the primary abstraction for designing real-valued linear operators. This inheritance means that Sum() is expected to implement certain core methods, namely apply()🔗 and adjoint()🔗. The constructor in the snippet above simply records the operator’s shape, (codim, dim) = (1, dim), via super().__init__().

  • The apply() Method:

    def apply(self, arr):
        return arr.sum(keepdims=True)
    

    The apply()🔗 method takes an array arr as input and returns the sum of its elements. This is the core functionality of the operator. Instead of representing the sum operation as a matrix and performing matrix-vector multiplication, the method directly computes the sum, making it “matrix-free.”

    Note: keepdims=True is required because, per the LinOp.apply() signature, the method must return an NDArray, not a scalar.

  • The adjoint() Method:

    def adjoint(self, arr):
        return arr * np.ones(self.shape[1])
    

    The adjoint()🔗 method also takes an array arr as input. For this sum operator, the adjoint is effectively a broadcast operation, mapping a scalar to a constant vector with the same dimension as the original input space.
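
Putting the pieces together, here is a quick usage sketch. It relies on the constructor shown above, which fixes the operator shape to (1, dim):

    op = Sum(dim=5)
    x = np.arange(5.0)       # [0., 1., 2., 3., 4.]
    y = op.apply(x)          # array([10.]) -- an NDArray, not a scalar
    op.adjoint(y)            # array([10., 10., 10., 10., 10.])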

Demystifying the Adjoint#

The concept of an adjoint operator can be elusive, so let’s use some intuition to understand it. When we talk about a forward operator in computational imaging or signal processing, we’re usually describing a transformation that takes an input (often some form of data or signal) and produces an output (often some form of measurement or transformed data).

Now, what if you wanted to go in the opposite direction? What if you had a blurry image and wanted to reconstruct the original, clear image? This reverse operation is what the adjoint operator aims to approximate.

In mathematical terms, the adjoint operator is formally defined to be the dual of the forward operator with respect to a particular inner product space. It’s often considered a kind of inverse to the forward operator. However, it is important to note that it is not necessarily the true inverse. Rather, it can be viewed as a linear approximation to reverse the forward operator’s action.
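
In symbols: for a linear operator \(A: \mathbb{R}^{M} \to \mathbb{R}^{N}\), the adjoint \(A^{\ast}: \mathbb{R}^{N} \to \mathbb{R}^{M}\) is the unique linear operator satisfying

\[
\langle A\mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, A^{\ast}\mathbf{y} \rangle
\qquad \text{for all } \mathbf{x} \in \mathbb{R}^{M},\ \mathbf{y} \in \mathbb{R}^{N}.
\]

For the Sum operator above, \(A\mathbf{x} = \sum_{i} x_{i}\), so \(\langle A\mathbf{x}, y \rangle = y \sum_{i} x_{i} = \langle \mathbf{x}, y\mathbf{1} \rangle\), i.e. \(A^{\ast}y = y\mathbf{1}\): precisely the broadcast implemented by adjoint() earlier.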

In computational imaging and inverse problems, the adjoint operator is frequently used in iterative algorithms to reconstruct the original data from measurements. The forward operator models the measurement process, and the adjoint helps in the reconstruction process.

So, when you see the term “adjoint,” you can think of it as the backward counterpart to your forward operator, designed to undo or approximate the undoing of whatever your forward operator did.

Computing the adjoint of a complex forward operator can be challenging, especially in fields like tomography where even well-established implementations can have mismatched adjoints for the Radon transform. The mismatch often arises due to approximations or discretizations that make the adjoint non-trivial to compute directly.

Divide and Conquer Strategy#

A useful approach to tackle this issue is to divide and conquer: break down the complex operator into a chain of simpler, well-understood operators for which the adjoints are known or easily computed. Once you have this chain, you can leverage the properties of adjoint operators to calculate the adjoint of the entire composition. Mathematically, if you have operators \(A\) and \(B\), the adjoint of \(AB\) is \(B^* A^*\), where \(^*\) denotes the adjoint.

Pyxu excels in this context, thanks to its operator algebra logic. It allows you to compose operators easily, and when you ask for the adjoint, Pyxu will automatically compute it for you by leveraging the adjoint properties and the chain of simpler operators you’ve defined. In effect, Pyxu performs the heavy lifting, letting you focus on defining the core aspects of your operators.

So, instead of getting bogged down with the mathematical intricacies of calculating a complex adjoint operator, you can use Pyxu to implement your forward operator as a chain of simpler ones. Pyxu takes care of the rest, giving you a powerful, yet computationally efficient, way to tackle challenging problems in computational imaging. Divide and conquer! 🛠️
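
As an illustration of this operator-algebra logic, here is a minimal sketch. It assumes that the * operator denotes composition in Pyxu's operator arithmetic, and uses LinOp.from_array() (covered below) to build two small explicit operators:

    import numpy as np
    from pyxu.abc import LinOp

    A = LinOp.from_array(np.arange(12.0).reshape(3, 4))   # operator R^4 -> R^3
    B = LinOp.from_array(np.arange(20.0).reshape(4, 5))   # operator R^5 -> R^4
    AB = A * B                                             # composition: (AB)(x) = A(B(x))

    y = np.ones(3)
    # The adjoint of the composition is B* A*, applied right-to-left.
    assert np.allclose(AB.adjoint(y), B.adjoint(A.adjoint(y)))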

Quality Assurance for Adjoint Operators in Pyxu#

Pyxu goes the extra mile to ensure the accuracy and integrity of the adjoint operators it ships. Our comprehensive test suite rigorously validates the correctness of adjoint implementations for every operator. This gives you the confidence that you are indeed working with mathematically valid adjoints, offering a robust foundation for your computational imaging projects. 🛡️
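
You can run the same kind of sanity check on your own operators with a simple “dot test”: draw random vectors and verify the defining identity \(\langle A\mathbf{x}, \mathbf{y} \rangle = \langle \mathbf{x}, A^{\ast}\mathbf{y} \rangle\) numerically. A minimal sketch using the Sum operator defined earlier (sizes and seed are arbitrary):

    rng = np.random.default_rng(seed=0)
    op = Sum(dim=7)
    x = rng.standard_normal(7)
    y = rng.standard_normal(1)
    lhs = np.dot(op.apply(x), y)     # <A x, y>
    rhs = np.dot(x, op.adjoint(y))   # <x, A* y>
    assert np.isclose(lhs, rhs)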

Additional Features of LinOp: Batteries Included#

Pyxu’s LinOp()🔗 is designed to be as versatile and useful as possible for computational imaging. Here are some of the extra capabilities baked right into the class to facilitate both simple and complex tasks:

  • Lipschitz Constant: Knowing the Lipschitz constant of an operator is invaluable for optimization routines. If you know it, you can directly store it in the lipschitz🔗 property. Don’t know it? No problem! The estimate_lipschitz()🔗 method will compute it for you, offering various methods to balance accuracy and execution time.

    my_operator.lipschitz = my_operator.estimate_lipschitz()
    
  • Singular Value Decomposition: Ever needed to compute the singular values of your operator? Use the svdvals()🔗 method, which employs the Arnoldi algorithm to get the job done without breaking the matrix-free paradigm (see the sketch after this list).

  • Trace of Operator: Sometimes you may need to compute the trace of an operator. The trace()🔗 method uses a stochastic approximation with the Hutch++ algorithm to efficiently estimate it.

  • Pseudo-Inverse: If you’re looking to solve an inverse problem, the pinv()🔗 method helps you evaluate the pseudo-inverse by solving the (dampened) normal equations, making it the most straightforward way to reverse your operations.

    pseudo_inv_result = my_operator.pinv(my_array, damp=value)
    
  • Support for Explicit Matrices: Though LinOp()🔗 is primarily built for matrix-free operations, you are not restricted to them. If you prefer, you can work with explicit matrices using the from_array()🔗 method, supporting a variety of array formats including numpy, cupy, sparse, and dask.

    op = LinOp.from_array(np.ones((N, N)))
    
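For instance, a brief sketch of the spectral tooling mentioned above; the keyword k is assumed here to select how many of the largest singular values to return:

    import numpy as np
    from pyxu.abc import LinOp

    op = LinOp.from_array(np.diag([3.0, 2.0, 1.0]))  # explicit 3x3 operator
    op.svdvals(k=2)                                   # the two largest singular values (3 and 2)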

With all these tools and methods at your disposal, LinOp()🔗 aims to be your go-to solution for implementing linear operators in computational imaging tasks.

DiffMap for Non-linear Forward Operators#

Linear models are often an idealized representation. In some situations forward operators are inherently non-linear. Take phase retrieval or some forms of tomography as examples: these are scenarios where the relationship between the object and its measurements isn’t just a linear transformation. For such cases, Pyxu provides the DiffMap()🔗 class to implement these non-linear forward operators.

Core Methods#

  • apply()🔗: This method is where you’ll implement your forward model. Just like in LinOp()🔗, you override this function to define how your specific forward model operates on an input array.

  • jacobian()🔗: In calculus, the Jacobian matrix represents the best linear approximation to a function at a given point. For a function \(\mathbf{f}: \mathbb{R}^{M} \to \mathbb{R}^{N}\), the Jacobian \(\mathbf{J}_{\mathbf{f}}(\mathbf{x})\) at a point \(\mathbf{x}\) is an \(N \times M\) matrix where the entry \((i, j)\) is \(\frac{\partial f_{i}}{\partial x_{j}}\). When the forward operator is non-linear, knowing the Jacobian is crucial for optimization algorithms.

The jacobian() method lets you evaluate this matrix implicitly, giving you the flexibility to supply the best linear approximation of your non-linear operator at a specific point, as sketched below.
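
To make this concrete, here is a minimal sketch of a non-linear operator with a hand-derived Jacobian. The element-wise squaring map is a hypothetical example, and the shape-based constructor mirrors the assumption made for Sum earlier; jacobian() returns a LinOp representing the linearization at the requested point:

    import numpy as np
    from pyxu.abc import DiffMap, LinOp

    class Square(DiffMap):
        def __init__(self, dim: int):
            super().__init__(shape=(dim, dim))  # maps R^dim -> R^dim

        def apply(self, arr):
            return arr ** 2                     # f(x) = x**2, element-wise

        def jacobian(self, arr):
            # The Jacobian of x -> x**2 at a point x is diag(2x);
            # returned as an explicit LinOp for simplicity.
            return LinOp.from_array(np.diag(2 * arr))

Calling Square(dim=3).jacobian(np.array([1.0, 2.0, 3.0])) then yields the linear operator with matrix diag([2., 4., 6.]), which iterative algorithms can use as the local linear model.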

A Nod to Deep Learning 🚀#

When the Jacobian is unknown or too complex to derive analytically, techniques from deep learning come in handy. Libraries like JAX or PyTorch offer automatic differentiation (autograd) to compute derivatives, which you can use to obtain the Jacobian efficiently in a matrix-free fashion.

In Pyxu, the from_torch🔗 and from_jax🔗 functions allow you to define operators whose Jacobians are computed via autograd. This means you can utilize the power of these deep learning libraries to implement non-linear operators seamlessly.

With DiffMap()🔗, you’re well-equipped to tackle the challenges posed by non-linear forward operators in computational imaging.

Wrapping Up#

Pyxu offers a robust and scalable framework for computational imaging professionals to deal with both linear and non-linear forward operators. With the abstraction provided by classes like LinOp()🔗 and DiffMap()🔗, you can focus on the problem you’re solving rather than computational limitations. These classes are meant to be subclassed to tailor to your specific problem domain. Happy coding! 🚀