pyxu.operator.blocks
- stack(ops, axis, **kwargs)
Construct a stacked operator.
A stacked-operator \(V: \mathbb{R}^{d} \to \mathbb{R}^{c}\) is an operator containing (vertically or horizontally) blocks of smaller operators \(\{O_{1}, \ldots, O_{N}\}\).
This is a convenience function around hstack() and vstack().
- Parameters:
- Returns:
op – Stacked operator.
- Return type:
- vstack(ops, **kwargs)
Construct a vertically-stacked operator.
A vstacked-operator \(V: \mathbb{R}^{d} \to \mathbb{R}^{c_{1} + \cdots + c_{N}}\) is an operator containing (vertically) blocks of smaller operators \(\{O_{1}: \mathbb{R}^{d} \to \mathbb{R}^{c_{1}}, \ldots, O_{N}: \mathbb{R}^{d} \to \mathbb{R}^{c_{N}}\}\), i.e.
\[\begin{split}V = \left[ \begin{array}{c} O_{1} \\ \vdots \\ O_{N} \\ \end{array} \right]\end{split}\]
- Parameters:
- Returns:
op – Vertically-stacked (c1+…+cN, d) operator.
- Return type:
Notes
All sub-operator domains must have compatible shapes, i.e. all integer-valued dims must be identical.
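The vertical-stacking semantics can be sketched with plain NumPy matrices standing in for the sub-operators (a matrix analogy only, not the Pyxu API): every block sees the same input, and the block outputs are concatenated.

```python
import numpy as np

# Two "operators" sharing the same domain (dim = 3), written as matrices.
O1 = np.arange(6).reshape(2, 3)  # R^3 -> R^2
O2 = np.ones((4, 3))             # R^3 -> R^4

V = np.vstack([O1, O2])          # R^3 -> R^(2+4)

x = np.array([1.0, 2.0, 3.0])
# Applying V is equivalent to applying each block to x and concatenating:
assert np.allclose(V @ x, np.concatenate([O1 @ x, O2 @ x]))
```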
- hstack(ops, **kwargs)
Construct a horizontally-stacked operator.
A hstacked-operator \(H: \mathbb{R}^{d_{1} + \cdots + d_{N}} \to \mathbb{R}^{c}\) is an operator containing (horizontally) blocks of smaller operators \(\{O_{1}: \mathbb{R}^{d_{1}} \to \mathbb{R}^{c}, \ldots, O_{N}: \mathbb{R}^{d_{N}} \to \mathbb{R}^{c}\}\), i.e.
\[H = \left[ \begin{array}{ccc} O_{1} & \cdots & O_{N} \end{array} \right]\]
- Parameters:
- Returns:
op – Horizontally-stacked (c, d1+…+dN) operator.
- Return type:
Notes
All sub-operator domains must have compatible shapes, i.e. all
codim
s must be identical.
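Dually, horizontal stacking splits the input across the blocks and sums their partial outputs. Again as a plain-NumPy matrix sketch (an analogy, not the Pyxu API):

```python
import numpy as np

# Two "operators" sharing the same codomain (codim = 2), written as matrices.
O1 = np.arange(6).reshape(2, 3)  # R^3 -> R^2
O2 = np.ones((2, 4))             # R^4 -> R^2

H = np.hstack([O1, O2])          # R^(3+4) -> R^2

x = np.arange(7.0)
# Applying H splits the input per-block, then sums the partial results:
assert np.allclose(H @ x, O1 @ x[:3] + O2 @ x[3:])
```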
- block_diag(ops, **kwargs)
Construct a block-diagonal operator.
A block-diagonal operator \(D: \mathbb{R}^{d_{1} + \cdots + d_{N}} \to \mathbb{R}^{c_{1} + \cdots + c_{N}}\) is an operator containing (diagonally) blocks of smaller operators \(\{O_{1}: \mathbb{R}^{d_{1}} \to \mathbb{R}^{c_{1}}, \ldots, O_{N}: \mathbb{R}^{d_{N}} \to \mathbb{R}^{c_{N}}\}\), i.e.
\[\begin{split}D = \left[ \begin{array}{ccc} O_{1} & & \\ & \ddots & \\ & & O_{N} \\ \end{array} \right]\end{split}\]
- Parameters:
- Returns:
op – Block-diagonal (c1+…+cN, d1+…+dN) operator.
- Return type:
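In matrix form, each diagonal block acts independently on its own slice of the input. A plain-NumPy sketch (matrices standing in for operators; not the Pyxu API):

```python
import numpy as np

O1 = np.arange(6).reshape(2, 3)  # R^3 -> R^2
O2 = np.ones((4, 5))             # R^5 -> R^4

# Assemble the block-diagonal matrix: off-diagonal cells are zero.
D = np.zeros((2 + 4, 3 + 5))
D[:2, :3] = O1
D[2:, 3:] = O2

x = np.arange(8.0)
# Each block transforms its own input slice, independently of the others:
assert np.allclose(D @ x, np.concatenate([O1 @ x[:3], O2 @ x[3:]]))
```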
- block(ops, order, **kwargs)
Construct a (dense) block-defined operator.
A block-defined operator is an operator containing blocks of smaller operators. Blocks are stacked horizontally/vertically in a user-specified order to obtain the final shape.
- Parameters:
- Returns:
op – Block-defined operator. (See below for examples.)
- Return type:
Notes
Each row/column may contain a different number of operators.
Examples
>>> block(
...     [
...         [A],        # ABEEGGG
...         [B, C, D],  # ACEEHHH
...         [E, F],     # ADFFHHH
...         [G, H],
...     ],
...     order=0,
... )

>>> block(
...     [
...         [A, B, C, D],  # ABBCD
...         [E],           # EEEEE
...         [F, G],        # FFGGG
...     ],
...     order=1,
... )
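For intuition on dense block composition, NumPy's np.block assembles matrices in the same row-major fashion: each inner list is stacked horizontally, and the resulting rows vertically (a matrix analogy only; block() additionally supports the order argument and the ragged rows shown in the examples above):

```python
import numpy as np

A = np.ones((2, 2))
B = np.zeros((2, 3))
C = np.full((1, 2), 2.0)
D = np.full((1, 3), 3.0)

# Rows are hstacked, then the rows are vstacked.
M = np.block([[A, B],
              [C, D]])  # shape (2+1, 2+3) = (3, 5)
assert np.array_equal(M, np.vstack([np.hstack([A, B]), np.hstack([C, D])]))
```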
- coo_block(ops, grid_shape, *, parallel=False)
Construct a (possibly-sparse) block-defined operator in COOrdinate format.
A block-defined operator is an operator containing blocks of smaller operators. Blocks must align on a coarse grid, akin to the COO format used to define sparse arrays.
- Parameters:
- ops (tuple[Sequence[OpT], tuple[Sequence[Integral], Sequence[Integral]]]) – (data, (i, j)) sequences defining block placement, i.e.
  - data[:] are OpT instances, in any order.
  - i[:] are the row indices of the block entries on the coarse grid.
  - j[:] are the column indices of the block entries on the coarse grid.
- grid_shape (OpShape) – (M, N) shape of the coarse grid.
- parallel (bool) – If true, use Dask to evaluate the following methods: .apply(), .prox(), .grad(), .adjoint(), .[co]gram().[apply,adjoint]()
- Returns:
op – Block-defined operator. (See below for examples.)
- Return type:
Notes
- Blocks on the same row/column of the coarse grid must have the same codim/dim respectively.
- Each row/column of the coarse grid must contain at least one entry.
- parallel=True only parallelizes execution when inputs to the listed methods are NumPy arrays.
Warning
When processing Dask inputs, or when parallel=True, operators which are not thread-safe may produce incorrect results. There is no easy way to ensure thread-safety at the level of blocks without impacting performance of all operators involved. Users are thus responsible for executing block-defined operators correctly, i.e. if thread-unsafe operators are involved, stick to NumPy/CuPy inputs.

Examples
>>> coo_block(
...     ([A(500,1000), B(1,1000), C(500,500), D(1,3)],  # data
...      [
...          [0, 1, 0, 2],  # i
...          [0, 0, 2, 1],  # j
...      ]),
...     grid_shape=(3, 3),
... )

| coarse_idx | 0            | 1       | 2           |
|------------|--------------|---------|-------------|
| 0          | A(500, 1000) |         | C(500, 500) |
| 1          | B(1, 1000)   |         |             |
| 2          |              | D(1, 3) |             |
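The assembly rule can be sketched densely with NumPy: the row heights and column widths of the coarse grid are fixed by the blocks present, and unoccupied cells are implicitly zero (a toy dense analogy; the variable names and bookkeeping below are illustrative, not part of the Pyxu API):

```python
import numpy as np

# Toy COO-defined block matrix on a 2x2 coarse grid, mirroring the
# (data, (i, j)) convention above with small matrices as "operators".
A = np.ones((2, 3))      # block at coarse cell (0, 0)
B = 2 * np.ones((2, 4))  # block at coarse cell (0, 1)
C = 3 * np.ones((5, 4))  # block at coarse cell (1, 1)
data, i, j = [A, B, C], [0, 0, 1], [0, 1, 1]

# Row heights / column widths are determined by the blocks present;
# empty coarse cells stay zero (the "sparse" part of the format).
row_h, col_w = [2, 5], [3, 4]
M = np.zeros((sum(row_h), sum(col_w)))
for blk, r, c in zip(data, i, j):
    r0, c0 = sum(row_h[:r]), sum(col_w[:c])
    M[r0:r0 + blk.shape[0], c0:c0 + blk.shape[1]] = blk
```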