## Author's note

Every six months, I have to spend a day re-deriving how Quadratic Error Functions work. This write-up is to save my future self from doing that work again.

This page is based on notes that I took while working to implement "Isosurfaces Over Simplicial Partitions of Multiresolution Grids" in libfive.

It's a fairly raw dump of my research notebook, so it's well-structured to be reloaded into my brain, but may not make as much sense to others. Please reach out via email, Twitter, or carrier pigeon if you have any feedback or questions.

### Primary sources

- Feature Sensitive Surface Extraction from Volume Data
- Dual Contouring of Hermite Data
- Dual Contouring: The "Secret Sauce"
- Singular Value Decomposition (SVD)
- Dual Marching Cubes
- Constrained Least Squares

## The basics

Let's imagine we have a solid model (shown in grey), and we have sampled it at the surface in a few positions.

- Each sample's position is drawn in red and written as $ p_i $
- Each sample's normal is drawn in blue and written as $ n_i $

Each sampled point + normal implies a plane. The core insight of Dual Contouring is that sharp features on the model (e.g. corners and edges) will be located where these planes intersect.

Here, the two planes intersect at point $ x $, which is a corner:

We want to find the point $ x $ which is on each plane.

For a given plane $(p_i, n_i)$, the signed distance from a point $ x $ to the plane is

$$ n_i \cdot (x - p_i) $$

We construct an error function by taking the sum-of-squares of each distance:

$$ E(x) = \sum_i (n_i \cdot (x - p_i))^2 $$

This is the error function that we try to minimize, solving for $ x $.
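As a quick numerical sanity check, here's $E(x)$ evaluated directly (a minimal NumPy sketch; the two sample planes are made up to meet at a corner):

```python
import numpy as np

# Two made-up samples on a square corner at (1, 1):
# one on the vertical wall, one on the horizontal wall.
samples = [
    (np.array([1.0, 0.5]), np.array([1.0, 0.0])),  # (p_0, n_0)
    (np.array([0.5, 1.0]), np.array([0.0, 1.0])),  # (p_1, n_1)
]

def qef_error(x):
    """E(x) = sum_i (n_i . (x - p_i))^2"""
    return sum(np.dot(n, x - p) ** 2 for p, n in samples)

# The corner where the two planes intersect has zero error;
# every other point has positive error.
assert np.isclose(qef_error(np.array([1.0, 1.0])), 0.0)
assert qef_error(np.array([0.5, 0.5])) > 0.0
```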

## Building the A and B matrices

We can expand the previous equation: $$ E(x) = \sum_i (n_i \cdot x - n_i \cdot p_i)^2 $$

Keeping that in mind, let me show you two matrices:

$ A $ is an $ s \times d $ matrix, where $ s $ is the number of samples and $ d $ is the dimension. It is made from the sampled normals, each oriented as a row vector.

$$ A = \left[ \begin{array}{c} –n_0– \\ –n_1– \\ ... \end{array} \right ] $$

$ B $ is an $ s \times 1 $ matrix, with one row per sample.

$$ B = \left[ \begin{array}{c} p_0 \cdot n_0 \\ p_1 \cdot n_1 \\ ... \end{array} \right ] $$

Consider $ Ax - B $, and notice each row represents one term in $ E(x) $.

$$ A x - B = \left[ \begin{array}{c} n_0\cdot x - n_0 \cdot p_0 \\ n_1\cdot x - n_1 \cdot p_1 \\ ... \end{array} \right] $$

To find the sum-of-squares, we multiply by the transpose:

$$ (A x - B)^T (A x - B) = (n_0\cdot x - n_0 \cdot p_0)^2 + (n_1\cdot x - n_1 \cdot p_1)^2 + ... $$

This is the same as our error function, so we can say:

$$ E(x) = (Ax - B)^T(Ax - B) $$
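The same identity, checked numerically (a sketch with made-up samples; `A` holds the normals as rows, `B` the per-sample dot products):

```python
import numpy as np

P = np.array([[1.0, 0.5], [0.5, 1.0]])    # one sample position per row
N = np.array([[1.0, 0.0], [0.0, 1.0]])    # one sample normal per row

A = N                                     # s x d: normals as row vectors
B = np.sum(N * P, axis=1, keepdims=True)  # s x 1: rows are n_i . p_i

def E(x):
    r = A @ x.reshape(-1, 1) - B          # one residual per sample
    return float(r.T @ r)                 # sum of squares

# Agrees with the summation form of the error function.
x = np.array([0.3, 0.7])
summed = sum(np.dot(n, x - p) ** 2 for p, n in zip(P, N))
assert np.isclose(E(x), summed)
```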

## Solving the QEF

The least-squares solution to $ E(x) $ is found by solving

$$ A^TAx = A^TB $$

It looks like we can multiply both sides by $ (A^TA)^{-1} $, i.e.

$$ x = (A^TA)^{-1}A^TB $$

However, $ A^T A $ isn't always nicely invertible. Consider the following system:

We have $n_0 = n_1 = \left[\begin{array}{cc}0 & 1\end{array}\right]$, $ p_0 = \left[\begin{array}{cc}0 & 0\end{array}\right]$, $ p_1 = \left[\begin{array}{cc}1 & 0\end{array}\right]$.

The combined matrix $A^TA$ is

$$ \left[\begin{array}{cc} 0 & 0 \\ 0 & 2 \end{array}\right] $$

which is not invertible.

Instead of (failing to) take the inverse, we use a more complex process that handles these kinds of rank-deficient matrices. For physical intuition: only corners produce a full-rank matrix; lower-dimensional features (edges and planes) produce matrices of lower rank.

This algorithm uses the SVD to find the matrix inverse, with an intermediate step to handle the rank-deficient case.

We start by finding the eigenvalues ($v_i$) and eigenvectors ($e_i$, which are column vectors) of $ A^T A $.

We build a diagonal matrix $ D_0^{-1} $, whose diagonal values are the inverses of $A^TA$'s eigenvalues. (The strange variable name is because this is the inverse of the SVD's singular value matrix $D$, with problematic values set to zero; read on for details.)

In the 3D case, $ D_0^{-1} $ looks like this:

$$ D_0^{-1} = \left[ \begin{array}{ccc} \text{check}(v_0) & 0 & 0 \\ 0 & \text{check}(v_1) & 0 \\ 0 & 0 & \text{check}(v_2) \\ \end{array}\right] $$

The $ \text{check} $ function is what protects us against rank deficiencies:

$$ \text{check}(v) = \begin{cases} 0&\text{if }v \lt \epsilon \\ \frac{1}{v}&\text{otherwise.}\end{cases} $$

There are various ways to select the cutoff value $ \epsilon $. If normals are guaranteed to be of length 1, then you can use a fixed value (as shown above); in other cases, it's common to clamp based on the ratio of largest to smallest eigenvalue.

Then, we construct the eigenvector matrix, with each vector spanning a row:

$$ U = \left[\begin{array}{c} – e_0 – \\ – e_1 – \\ – e_2 – \end{array}\right] $$

The pseudo-inverse of $A^TA$ is then

$$ (A^TA)^{-1} = U^T D_0^{-1} U $$

And the least-squares solution is

$$ x = (A^TA)^{-1} A^TB $$
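Here's the whole solve as a NumPy sketch. Since $ A^TA $ is symmetric, `np.linalg.eigh` gives its eigendecomposition; the $\epsilon$ cutoff value is made up:

```python
import numpy as np

def solve_qef(AtA, AtB, epsilon=1e-6):
    """Minimize E(x) using the pseudo-inverse of AtA.

    Eigenvalues below epsilon are dropped (the check() function),
    which keeps rank-deficient systems from blowing up."""
    eigenvalues, eigenvectors = np.linalg.eigh(AtA)  # AtA is symmetric
    U = eigenvectors.T                  # each eigenvector spans a row
    D0_inv = np.diag([1.0 / v if v >= epsilon else 0.0
                      for v in eigenvalues])
    return U.T @ D0_inv @ U @ AtB

# The rank-deficient example above: two parallel planes on y = 0.
A = np.array([[0.0, 1.0], [0.0, 1.0]])
B = np.zeros((2, 1))
x = solve_qef(A.T @ A, A.T @ B)
# x lands on the line y = 0, at the point closest to the origin.
```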

## Compact representation

$A$ and $B$ can be large, especially as we accumulate more and more samples.

However, we only need $A^TA$, $A^TB$ and $B^TB$ to solve the QEF.

It's typical to store only these three terms, instead of the full matrices. To combine QEFs (e.g. when merging cells in an octree), we sum the matrices.

## Finding the error value

Remember, our error function is

$$ E(x) = (Ax - B)^T(Ax - B) $$

We can expand this:

$$ E(x) = x^TA^TAx - 2x^TA^TB + B^TB $$

This uses only our compact matrices, so we can evaluate it for a given $x$ and get back the error value.
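This expansion is easy to verify numerically (a sketch with made-up samples):

```python
import numpy as np

# Made-up samples: A holds unit normals as rows, B the n_i . p_i values.
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])
B = np.array([[1.0], [1.0], [1.4]])

AtA, AtB, BtB = A.T @ A, A.T @ B, float(B.T @ B)  # compact representation

def qef_error(x):
    x = x.reshape(-1, 1)
    return float(x.T @ AtA @ x - 2.0 * x.T @ AtB) + BtB

# Agrees with the explicit (Ax - B)^T (Ax - B) form.
x = np.array([0.2, 0.9])
r = A @ x.reshape(-1, 1) - B
assert np.isclose(qef_error(x), float(r.T @ r))
```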

## Zero-allocation construction

Building the $A$ and $B$ matrices may require dynamic allocation, because we don't know how many rows they'll have until run-time.

However, we can instead iterate over samples and construct the compact representation directly. With $n_i$ and $p_i$ as row vectors,

$$ A^TA = \sum_i n_i^Tn_i $$ $$ A^TB = \sum_i n_i (n_i \cdot p_i) $$ $$ B^TB = \sum_i (n_i \cdot p_i)^2 $$

We initialize each compact matrix to $ 0 $, then accumulate.
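As a sketch, here's the per-sample accumulation, checked against the full-matrix products (sample data is made up):

```python
import numpy as np

# Made-up samples: two planes meeting at the corner (1, 1).
samples = [(np.array([1.0, 0.5]), np.array([1.0, 0.0])),
           (np.array([0.5, 1.0]), np.array([0.0, 1.0]))]

# Accumulate the compact matrices one sample at a time,
# without ever building the full A and B.
d = 2
AtA = np.zeros((d, d))
AtB = np.zeros((d, 1))
BtB = 0.0
for p, n in samples:
    b = np.dot(n, p)                  # this sample's row of B
    AtA += np.outer(n, n)
    AtB += (n * b).reshape(-1, 1)
    BtB += b * b

# Same result as building A and B explicitly.
A = np.array([n for _, n in samples])
B = np.array([[np.dot(n, p)] for p, n in samples])
assert np.allclose(AtA, A.T @ A)
assert np.allclose(AtB, A.T @ B)
assert np.isclose(BtB, float(B.T @ B))
```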

## Whither to place points?

Using the strategy above, $ x $ will minimize our error function. However, when the system isn't fully determined (e.g. placing 3D points on an edge or plane, rather than on a corner), it will pick the point closest to 0.

This is usually not what we want.

We can work around this by subtracting some center $ c $ from each position $ p_i $, then adding $ c $ back to the solution. However, if we're passing around the compact representation, we don't have the original position values to modify.

Let's walk through the math in detail: consider such a shifted matrix $ B' $:

$$ \begin{align*} B' & = \left[\begin{array}{c} n_0 \cdot (p_0 - c) \\ n_1 \cdot (p_1 - c) \\ ...\end{array}\right] \\ & = \left[\begin{array}{c} n_0 \cdot p_0 - n_0 \cdot c \\ n_1 \cdot p_1 - n_1 \cdot c \\ ...\end{array}\right] \\ & = B - \left[\begin{array}{c} n_0 \cdot c \\ n_1 \cdot c \\ ...\end{array}\right] \\ & = B - \left[\begin{array}{c} —n_0— \\ —n_1— \\ ...\end{array}\right] \left[\begin{array}{c} | \\ c \\ | \end{array}\right] \\ & = B - A c \end{align*} $$

Then, the compact matrix $ A^TB' $ is

$$ A^TB' = A^TB - A^TAc $$

And the positioned vertex is at

$$ x = (A^TA)^{-1}\left(A^TB - A^TAc\right) + c $$
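As a sketch (with `np.linalg.pinv` standing in for the check()-based pseudo-inverse, and a made-up rank-deficient example):

```python
import numpy as np

def solve_qef_centered(AtA, AtB, c):
    """Solve the QEF, biasing under-determined solutions toward c.

    np.linalg.pinv stands in for the check()-based pseudo-inverse."""
    c = c.reshape(-1, 1)
    return np.linalg.pinv(AtA) @ (AtB - AtA @ c) + c

# Two parallel planes along y = 0: x is unconstrained, so a plain
# solve would snap to the origin, while the centered solve stays
# at the cell center's x coordinate.
A = np.array([[0.0, 1.0], [0.0, 1.0]])
B = np.zeros((2, 1))
c = np.array([0.5, 0.5])
x = solve_qef_centered(A.T @ A, A.T @ B, c)
```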

## Sharp features on the distance field itself

Here's where things start getting harder to visualize.

In all of the previous drawings, I've shown a solid 2D shape, which is the region of some function $ f(x, y) $ where $ f(x, y) \leq 0 $.

The function also implies a *3D* surface, with $ z = f(x, y) $.
This 3D surface is the shape of the distance field itself.

Certain algorithms – like Dual Marching Cubes
and Isosurfaces Over Simplicial Partitions of Multiresolution Grids
(IOSPMR for short) –
require placing vertices on sharp features
of the *distance field*, rather than the model's surface.

To visualize this, let's drop a dimension.

Consider a 1D shape function, e.g. $f(p) \leq 0$. This implies a 2D distance field with $ f(p) $ as its height.

Here's an example:

- Positions along the axis are labelled $ p_i $ in red
- Normals are labelled $ n_i $ in blue
- Values are labelled $v_i$ in green (and $v_i = f(p_i)$ )

Notice that the normals are the same as the function's slope, i.e. $\frac{df(p)}{dp} $.

By inspection, one term of our error function will be $$ (x - p_0) \cdot n_0 + v_0 - f(x) $$

We'll add an extra dimension to our solution as well, defining $ w = f(x) $ for some solution point $ x $. Then, our closed-form error function is

$$ E(x, w) = \sum_i \left( n_i \cdot (x - p_i) + v_i - w \right)^2 $$

Solving this QEF will give you both the position of a sharp feature in the distance field, and the estimated distance field value $w$ at that point.

(It's usually wise to re-evaluate $f(x)$, instead of trusting the estimate $w$ )

This form generalizes to higher dimensions without any changes.

However, to run through our previous process of QEF solving, we do a slight re-arrangement of the equation:

$$\begin{align*} E(x, w) &= \sum_i \left( n_i \cdot x - w - (n_i \cdot p_i - v_i) \right)^2 \\ &= \sum_i \left( [\begin{array}{cc}n_i & -1\end{array}] \cdot [\begin{array}{cc}x & w\end{array}] - ([\begin{array}{cc}n_i & -1\end{array}] \cdot [\begin{array}{cc}p_i & v_i\end{array}]) \right)^2 \end{align*}$$

We've created augmented versions of $n$ and $p$, i.e.

$$ \begin{align*} n'_i &= \left[\begin{array}{cc}n_i & -1\end{array}\right] \\ p'_i &= \left[\begin{array}{cc}p_i & v_i\end{array}\right]\end{align*} $$

These augmented vectors can be used as-is for the rest of the QEF solving process (constructing $A$ and $B$, then $A^TA$, $A^TB$, etc).
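A sketch of the whole pipeline on a 1D field (the field $f(p) = |p| - 1$ and the sample positions are made up; its distance field has a kink at $p = 0$):

```python
import numpy as np

f = lambda p: abs(p) - 1.0        # made-up 1D field with a kink at p = 0

# Two samples, one on each side of the kink.
samples = []
for p in (-0.5, 0.5):
    n = -1.0 if p < 0 else 1.0    # df/dp at the sample
    v = f(p)
    samples.append((np.array([p, v]),      # p'_i = [p_i  v_i]
                    np.array([n, -1.0])))  # n'_i = [n_i  -1]

# The rest of the pipeline is unchanged from the plain QEF.
A = np.array([n for _, n in samples])
B = np.array([[np.dot(n, p)] for p, n in samples])
x, w = (np.linalg.pinv(A.T @ A) @ (A.T @ B)).flatten()
# x is the kink's position; w estimates f(x).
```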

## Solving with constraints

In IOSPMR, we want to solve a QEF constrained to a particular space (e.g. a cell, face, edge, or corner).

Note that this is different from building a reduced-dimensionality QEF and solving it on a subspace, which is discussed in the next section; here, we want to solve the full-dimension QEF, but make sure that the solution point doesn't escape a particular region.

We do this by solving the QEF in a particular dimension, returning immediately if the vertex is within the region, and otherwise reducing the dimension then re-solving with constraints on the ignored axes.

For example, in 2D, we'd first try to place the vertex on a face, then try the four edges (and pick the one with minimum error), then try the four corners (again picking the one with minimum error).

If we want to apply constraints on $ m $ axes, then we build augmented versions of $ A^TA$ (with $m$ extra rows and columns, marking which axes should be constrained) and $A^TB$ (with $m$ extra rows containing the constraint values).

For example, to apply a constraint on the Y axis of a 3D QEF, we'd construct

$$ \begin{align*} (A^TA)_\text{constrained} &= \left[\begin{array}{cccc}(A^TA)_{xx} & (A^TA)_{xy} & (A^TA)_{xz} & 0 \\\\ (A^TA)_{yx} & (A^TA)_{yy} & (A^TA)_{yz} & 1 \\\\ (A^TA)_{zx} & (A^TA)_{zy} & (A^TA)_{zz} & 0 \\\\ 0 & 1 & 0 & 0 \end{array}\right] \\\\ (A^TB)_\text{constrained} &= \left[\begin{array}{c}(A^TB)_{x} \\\\ (A^TB)_{y} \\\\ (A^TB)_{z} \\\\ y_\text{fixed}\end{array}\right] \end{align*}$$

Then, we solve with the usual technique.
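A sketch of the bordered-matrix solve (the example system, three axis-aligned planes meeting at $(1, 2, 3)$, is made up; `np.linalg.pinv` stands in for the check()-based pseudo-inverse):

```python
import numpy as np

# Made-up example: three axis-aligned planes meeting at (1, 2, 3).
AtA = np.eye(3)
AtB = np.array([[1.0], [2.0], [3.0]])

def solve_constrained(AtA, AtB, axis, value):
    """Solve the QEF with one axis pinned to a fixed value."""
    d = AtA.shape[0]
    M = np.zeros((d + 1, d + 1))
    M[:d, :d] = AtA
    M[axis, d] = M[d, axis] = 1.0     # mark the constrained axis
    rhs = np.vstack([AtB, [[value]]])
    solution = np.linalg.pinv(M) @ rhs
    return solution[:d]               # drop the extra multiplier row

x = solve_constrained(AtA, AtB, axis=1, value=5.0)
# Y is pinned to 5; X and Z still minimize the error.
```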

### Caveat: matching matrix scale

As discussed earlier, when solving for the pseudo-inverse of $ A^T A $, it's common to discard eigenvalues which are below a certain fraction of the largest eigenvalue. This helps to detect and compensate for rank-deficient matrices.

However, this can backfire when finding the pseudo-inverse of $ \left(A^T A\right)_\text{constrained} $: the newly injected eigenvalues can be small, get discarded, and mess up the constrained solution.

One useful trick is to scale all of the constraints to the mean absolute value of terms in $ A^T A $, which we'll call $ s $. Then, pad $ A^T A$ with $s$ instead of $1$, and multiply the constrained positions by $s$ before appending them to $A^TB$.

## An alternate strategy for constrained solving

I've recently developed an alternate strategy for solving constrained QEFs. I haven't seen it in the literature, but it emerges naturally from the raw matrices.

It's best explained with an example. Remember, we're solving $$ A^TA x = A^TB $$ which, in the 3D case, expands to $$ \left[\begin{array}{ccc} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \\ a_{20} & a_{21} & a_{22} \end{array}\right] \left[\begin{array}{c}x_0 \\ x_1 \\ x_2 \end{array}\right] = \left[\begin{array}{c}b_0 \\ b_1 \\ b_2 \end{array}\right] $$ (where $a$ and $b$ are terms in $A^TA$ and $A^TB$ respectively).

Applying a constraint is setting one or more of $x_i$ to a fixed value. For example, let's constrain $x_1$ to a fixed value $x_{1f}$. Looking only at the top row, we have

$$ a_{00}x_0 + a_{01}x_1 + a_{02}x_2 = b_0 $$

Applying this constraint, we can rephrase as

$$ a_{00}x_0 + a_{02}x_2 = b_0 - a_{01}x_{1f}$$

We can think of this as removing a row and column from $A^TA$, and subtracting an offset from $A^TB$ to compensate.

Dropping the middle row and column from $A^TA$, we end up with

$$ \left[\begin{array}{cc} a_{00} & a_{02} \\ a_{20} & a_{22} \end{array}\right] \left[\begin{array}{c}x_0 \\ x_2 \end{array}\right] = \left[\begin{array}{c}b_0 - a_{01}x_{1f} \\ b_2 - a_{21}x_{1f} \end{array}\right] $$

The left and right matrices can then be used in the same solver algorithm as before (i.e. through finding the pseudo-inverse of the left-hand matrix).
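A sketch of this elimination strategy, fuzz-checked against the bordered-matrix construction from the "Solving with constraints" section (sample data is random; both solves use `np.linalg.pinv`):

```python
import numpy as np

def solve_eliminated(AtA, AtB, axis, value):
    """Constrain one axis by deleting its row and column from AtA,
    compensating on the right-hand side."""
    keep = [i for i in range(AtA.shape[0]) if i != axis]
    reduced = AtA[np.ix_(keep, keep)]
    rhs = AtB[keep] - AtA[np.ix_(keep, [axis])] * value
    partial = np.linalg.pinv(reduced) @ rhs
    return np.insert(partial.flatten(), axis, value).reshape(-1, 1)

# Fuzz against the bordered-matrix construction.
rng = np.random.default_rng(0)
for _ in range(20):
    A = rng.normal(size=(5, 3))
    B = rng.normal(size=(5, 1))
    AtA, AtB = A.T @ A, A.T @ B
    M = np.zeros((4, 4))
    M[:3, :3] = AtA
    M[1, 3] = M[3, 1] = 1.0           # constrain the Y axis
    bordered = (np.linalg.pinv(M) @ np.vstack([AtB, [[0.5]]]))[:3]
    assert np.allclose(solve_eliminated(AtA, AtB, 1, 0.5),
                       bordered, atol=1e-6)
```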

I haven't worked out a proof for why this works; fuzzing it with Python, it seems to consistently give the same answers as the algorithm above.

This form has a major advantage: solutions get easier to compute as we apply more constraints. This can be a serious speed boost when we're solving gazillions of QEFs for vertex placement.

## Saving and merging subspace QEFs

*Note*: This is closely related to implementing IOSPMR,
and is where we start going off the edge of the map.
The IOSPMR paper uses top-down construction,
so doesn't need to merge QEFs from different subspaces.

In IOSPMR, instead of building one QEF per cell (a 3D spatial region), we end up with one QEF per subspace, where a subspace is a cell, face, edge, or corner (in descending order of dimensionality).

There are a few ways to do this. I'll present the **broken way** first.

For a given leaf cell (i.e. a minimum-size cell in the quad/octree), we'll sample all four corners, storing their positions, normals, and values.

Then, for each subspace, we'll build a QEF based on the corners that touch that subspace. We'll strip out unused dimensions, e.g. an edge QEF will only have one dimension.

Here's how samples are used in the 2D case. Sampled corners (on the left) are combined into QEFs for corners, edges, and the face itself.

Why is this approach incorrect? Look at the edge QEFs, and think about what happens if you merge two cells by summing their QEFs: summing the edge QEFs of two adjacent cells will double-count the corner they share.

The **correct solution** is to make higher-dimension stored QEFs
non-inclusive, e.g. an edge QEF doesn't contain its two corners.

Then, when we want to solve for a particular subspace's vertex, we can construct the inclusive version by summing the non-inclusive QEFs for each subspace of the target. For an edge, we would sum that edge's and the two corners' QEFs.

This means that a (lowest-level) leaf node will only contain QEFs at the corners, with the edge / face / cell QEFs all being zero.
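Here's a minimal sketch of the non-inclusive scheme on a single 2D edge (the representation and helper names are my own, not from the IOSPMR paper):

```python
import numpy as np

# A compact QEF is just its accumulated matrices, so merging
# QEFs is element-wise summation.
def empty_qef(d):
    return (np.zeros((d, d)), np.zeros((d, 1)), 0.0)

def merge(q1, q2):
    return tuple(a + b for a, b in zip(q1, q2))

def from_sample(p, n):
    b = np.dot(n, p)
    return (np.outer(n, n), (n * b).reshape(-1, 1), b * b)

# A leaf cell stores samples only in its corner QEFs; the edge
# QEF between two corners starts out empty (non-inclusive).
corner_a = from_sample(np.array([0.0, 0.0]), np.array([0.0, 1.0]))
corner_b = from_sample(np.array([1.0, 0.0]), np.array([0.0, 1.0]))
edge = empty_qef(2)

# To solve for the edge's vertex, build the inclusive QEF on demand.
inclusive = merge(edge, merge(corner_a, corner_b))

# Merging two adjacent edges no longer double-counts the shared
# corner, because that corner's samples live in its own QEF.
```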

## Semi-compact representation for dropping dimensions

*Note*: This is fully off the edge of the map.
I haven't seen this idea discussed in the literature,
though I'm sure it's out there somewhere.
Send me a link if it sounds familiar.

If you paid very close attention to the last section, you may have noticed a subtle problem:

To construct an inclusive QEF by summing all of the subspace QEFs, those subspace matrices must be of full dimensionality.

However, we also need a way to solve a reduced-dimension QEF for the subspaces, e.g. solving for a position along the edge using that edge's axis.

For example, if we're solving for an edge vertex, we may end up using only $[n_x]$ and $[p_x]$, instead of $n_i = \left[\begin{array}{ccc}n_x & n_y & n_z \end{array}\right]$ and $p_i = \left[\begin{array}{ccc}p_x & p_y & p_z \end{array}\right]$.

It's easy to ignore an axis in $A^TA$: simply construct a smaller square matrix, skipping that axis's row and column.

Unfortunately, it's not immediately obvious how to do this with the other matrices; remember, each row of $B$ is $n_i \cdot p_i$, which mixes the axes in a way that isn't reversible.

The solution is to construct a different matrix, which we'll call $\beta$:

$$ \beta = \left[\begin{array}{ccc} n_{0x}p_{0x} & n_{0y}p_{0y} & n_{0z}p_{0z} \\ n_{1x}p_{1x} & n_{1y}p_{1y} & n_{1z}p_{1z} \\ n_{2x}p_{2x} & n_{2y}p_{2y} & n_{2z}p_{2z} \\ ... & ... & ...\end{array}\right] $$

Notice this looks a lot like $B$, skipping the summing stage of the per-row dot product. We can reconstruct $B$ from $\beta$:

$$ B = \beta \left[\begin{array}{c} 1 \\ 1 \\ 1 \\ \end{array}\right] $$

We don't want to store $\beta$ directly, because it grows linearly with the number of sampled points. What's the equivalent of $A^TB$ and $B^TB$?

Clutching at the simplest possible solution, it turns out that we can store $A^T\beta$ and $\beta^T\beta$ instead. These matrices have the same property as $A^TA$, where we can reduce their dimensionality by skipping rows and columns.

From these matrices, we can reconstruct our usual QEF matrices when it comes time to solve the equations:

$$ A^TB = A^T\beta\left[\begin{array}{c} 1 \\ 1 \\ 1 \\ \end{array}\right] $$ $$ B^TB = \left[\begin{array}{ccc} 1 & 1 & 1 \end{array}\right]\beta^T\beta\left[\begin{array}{c} 1 \\ 1 \\ 1 \\ \end{array}\right] $$

This takes a bit more space: $A^T\beta$ and $\beta^T\beta$ are both $d \times d$, rather than $ d \times 1$ and $1 \times 1$ respectively (where $d$ is the number of dimensions).
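The reconstruction and the dimension-dropping trick, as a NumPy sketch (sample data is made up):

```python
import numpy as np

# Made-up 3D samples: positions P and normals N, one per row.
P = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.5, 0.5, 3.0]])
N = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])

A = N
beta = N * P                 # element-wise products: row i is n_i * p_i
ones = np.ones((3, 1))

At_beta = A.T @ beta         # d x d, stored in place of A^T B
betaT_beta = beta.T @ beta   # d x d, stored in place of B^T B

# Reconstructing the usual compact matrices at solve time:
B = beta @ ones
AtB = At_beta @ ones
BtB = float(ones.T @ betaT_beta @ ones)

# Dropping an axis (say, Y) is now just deleting rows and columns.
keep = [0, 2]
AtB_xz = At_beta[np.ix_(keep, keep)] @ np.ones((2, 1))
```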

## Fin

That's all for now! Stay tuned for the pretty pictures of IOSPMR in action.

Send me an email, tweet, or telegram if you have any questions or spot any errors.

## Changelog

- 2018-11-25: Added details on matching constraint scale to $ A^TA $ size
- 2018-12-02: Wrote up a new strategy for constrained QEF solving