$\begingroup$

In John Hart's paper, on page 5-94, he mentions that "The surface normal function can be avoided by using a general six-sample numerical gradient approximation of the distance bound gradient". I have some difficulty understanding how a distance bound (as opposed to an exact distance) function can be used in this way.
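
For context, the "six-sample numerical gradient" is the standard central-difference estimate: two extra evaluations per axis. A minimal sketch (the sphere SDF, the sample point, and the step size `h` are my own illustrative choices, not from the paper):

```python
import math

def sphere_sdf(p):
    # Exact signed distance to the unit sphere (positive outside).
    return math.hypot(p[0], p[1], p[2]) - 1.0

def numerical_normal(f, p, h=1e-4):
    # Six-sample central-difference gradient of f at p, normalized.
    # The common factor 1/(2h) cancels under normalization.
    x, y, z = p
    g = (f((x + h, y, z)) - f((x - h, y, z)),
         f((x, y + h, z)) - f((x, y - h, z)),
         f((x, y, z + h)) - f((x, y, z - h)))
    m = math.sqrt(g[0] ** 2 + g[1] ** 2 + g[2] ** 2)
    return (g[0] / m, g[1] / m, g[2] / m)

n = numerical_normal(sphere_sdf, (0.0, 1.0, 0.0))  # surface point on +y axis
```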

Suppose, for example, that we apply a non-uniform scaling along the x-axis. Distances along the y-axis remain unaffected, while those along the x-axis are scaled. In general, how much a distance changes depends on its inclination with respect to the x-axis.

If we only have the original distance function (before non-uniform scaling) at hand, we can divide uniformly by the scaling factor to get a distance bound. But this function would serve poorly as an approximation for the gradient, as distances in different directions are essentially in different "units", so the normal would come out skewed, wouldn't it? (Even if we sample points very close to the surface.)

Is it that only specific kinds of distance bounds can be used for approximating the normal in this way? What technique is generally used for computing normals when you only have a distance bound?

$\endgroup$
  • $\begingroup$ My guess is that since $f(p)=0$ must still hold at the points of the surface, and that $f$ is continuous, as you get closer to the boundary the normal must converge to the actual one. $\endgroup$ Commented Sep 29, 2024 at 10:45
  • $\begingroup$ Hmm, I see. This is quite enlightening, thanks for the comment, that makes sense. I was reasoning this way - if f approximates g, then it doesn't necessarily follow that the derivative of f approximates that of g. Which is true of course, but approximation at one point, and approximation over a surface are quite different. $\endgroup$ Commented Sep 30, 2024 at 9:07
  • $\begingroup$ To be precise you probably need a bit more than continuity. $\endgroup$ Commented Sep 30, 2024 at 10:15
  • $\begingroup$ Yeah, it would also need to be differentiable, at least $\endgroup$ Commented Sep 30, 2024 at 14:42
  • $\begingroup$ Actually that's not necessary with the numerical approximation, it would still work for functions that are almost everywhere differentiable (you can see this from the box sdf for example), and I think those constraints can be relaxed further. I would prefer using the analytical gradient most of the time though, as the step size in the numerical approximation is notoriously hard to get right. $\endgroup$ Commented Sep 30, 2024 at 16:05

1 Answer

$\begingroup$

Let $f:\mathbb{R}^n\to \mathbb{R}$ be a signed distance function for the set $\mathbb{R}^n \setminus \Omega$, meaning: $$f(p) = \begin{cases} d(p, \partial\Omega), & p \in \Omega, \\ -d(p,\partial\Omega), & p \not\in \Omega,\end{cases}$$ where $d(p, A) = \inf_{q\in A} \|p-q\|_2$ is the shortest Euclidean distance between $p$ and $A$. In other words, the surface $\partial\Omega$ is given as the zero set of $f$: $$\partial\Omega = f^{-1}(0) = \{p\in \mathbb{R}^n\,:\,f(p)=0\}.$$

Now consider a signed distance bound function $g$ for $f$. This means that $g^{-1}(0) = \partial\Omega = f^{-1}(0)$, $g$ is continuous, and $|g|$ is bounded above by $|f|$: $|g(p)|\leq |f(p)|$ for all $p\in\mathbb{R}^n$.

If you assume that $f$ is continuously differentiable almost everywhere on some open set covering $\partial\Omega$, then you can unambiguously define the normal as $n(p) = \nabla f(p)$ for almost every $p\in\partial\Omega$ (note that $\|\nabla f(p)\|_2=1$, so the normal is unit length). For what you asked to work, one needs $g$ to be differentiable almost everywhere on $\partial\Omega$ with $\nabla g(p) = \lambda \nabla f(p)$, $\lambda>0$, for $p\in\partial\Omega$, i.e. $g$ is a first-order approximation of $f$ around $\partial\Omega$ up to a positive constant $\lambda$: $$g(p+v) = g(p) + v\cdot \nabla g(p) + o(\|v\|) = v\cdot (\lambda\nabla f(p)) + o(\|v\|),\quad p\in\partial\Omega.$$
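
The non-uniform scaling example from the question fits this condition: dividing the un-scaled SDF by the largest scale factor gives a bound whose gradient at a surface point is parallel to the true ellipsoid normal, so normalization recovers the correct normal. A numeric sketch (the scale factor, sample point, and step size are illustrative; sign convention here is positive outside, which changes nothing in the parallelism argument):

```python
import math

S = 2.0  # scale along x (illustrative)

def sphere_sdf(p):
    # Unit-sphere SDF, positive outside.
    return math.hypot(p[0], p[1], p[2]) - 1.0

def ellipsoid_bound(p):
    # Distance bound for the x-scaled sphere: un-scale x, then evaluate
    # the sphere SDF. The Lipschitz constant is max(1/S, 1) = 1 here,
    # so no further division is needed for S >= 1.
    return sphere_sdf((p[0] / S, p[1], p[2]))

def numerical_normal(f, p, h=1e-5):
    # Six-sample central-difference gradient of f at p, normalized.
    x, y, z = p
    g = [f((x + h, y, z)) - f((x - h, y, z)),
         f((x, y + h, z)) - f((x, y - h, z)),
         f((x, y, z + h)) - f((x, y, z - h))]
    m = math.sqrt(sum(c * c for c in g))
    return [c / m for c in g]

# Surface point of the ellipsoid at 45 degrees in the xy-plane.
t = math.pi / 4
p = (S * math.cos(t), math.sin(t), 0.0)
n = numerical_normal(ellipsoid_bound, p)

# True ellipsoid normal: normalized gradient of (x/S)^2 + y^2 + z^2 - 1.
a = [p[0] / S ** 2, p[1], p[2]]
m = math.sqrt(sum(c * c for c in a))
a = [c / m for c in a]
```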

Now consider $g(p) = -f(p)$. The function $g$ clearly satisfies all of the conditions for a signed distance bound, but its gradient points in the opposite direction of the normal. That's one possible failure case. A condition to disallow it is to require $0\leq g(p) \leq f(p)$ for $p\in\Omega$ and $0\leq -g(p) \leq -f(p)$ for $p\not\in\Omega$, instead of $|g(p)|\leq |f(p)|$, in the definition of a signed distance bound function. Then $\nabla g(p)$ cannot lie outside of the hemisphere around $\nabla f(p)$: otherwise we could move along $\nabla g(p)$ into $\Omega$ (where $f$ is positive) and obtain $g < 0$ there, violating $g(p)\geq 0$ for $p\in\Omega$. So this case is taken care of by a small modification of the definition in Hart's paper.

Now let $\nabla g(p)\ne 0$ and $\nabla g(p) \not \parallel \nabla f(p)$. Consider the vector $w = \nabla g(p) - \bigl(\nabla f(p) \cdot \nabla g(p)\bigr) \nabla f(p)$, which is perpendicular to $\nabla f(p)$ (recall $\|\nabla f(p)\|_2 = 1$) but has a positive dot product with $\nabla g(p)$. The function $\psi(h) = g(p+hw)$ then grows faster in $h$ than $\phi(h) = f(p+hw)$ (at least for $h$ sufficiently close to $0$), which means we can make $g(q) > f(q)$ for $q=p+hw$ with $h$ small enough and $p\in\partial\Omega$, since: $$g(p+hw) \approx g(p) + hw \cdot \nabla g(p)> 0 = f(p)+h w \cdot \nabla f(p) \approx f(p+hw).$$ So a non-zero $\nabla g(p)$ that is not parallel to $\nabla f(p)$ at $p\in\partial\Omega$ would also violate the signed distance bound conditions.
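
A quick numeric sanity check of this construction, with hypothetical gradient vectors ($\nabla f$ unit length, $\nabla g$ deliberately not parallel to it):

```python
# Hypothetical gradients at a surface point (illustrative values only).
nf = (0.0, 0.0, 1.0)   # unit-length grad f
ng = (0.3, 0.0, 0.9)   # grad g, not parallel to nf

dot = sum(a * b for a, b in zip(nf, ng))         # nf . ng
w = tuple(g - dot * f for f, g in zip(nf, ng))   # w = ng - (nf . ng) nf

w_dot_nf = sum(a * b for a, b in zip(w, nf))     # should be 0 (perpendicular)
w_dot_ng = sum(a * b for a, b in zip(w, ng))     # should be > 0 (g grows along w)
```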

Finally, if $\nabla g(p) = 0$ for every point $p\in\partial\Omega$ (or at least on a subset of positive measure), we have a problem. This can occur in practice, for example, for the half-space SDF $f(x,y,z) = z$ and the following SDBF: $$g(x,y,z) = \begin{cases} z, & |z|\geq 1, \\ z^3, & |z|<1.\end{cases}$$ So we additionally need to assume that the SDBF $g$ is chosen such that $\nabla g(p) \ne 0$ almost everywhere on $\partial\Omega$.
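
This degenerate case is easy to verify numerically: the sketch below (sample points chosen arbitrarily) confirms that $g$ satisfies the bound conditions yet its derivative vanishes on the surface $z=0$, so there is nothing to normalize there:

```python
def f(z):
    # Half-space SDF: signed distance to the plane z = 0.
    return z

def g(z):
    # Valid SDBF: same zero set as f and |g| <= |f| everywhere,
    # but flat (zero derivative) at the surface.
    return z if abs(z) >= 1.0 else z ** 3

def dg(z):
    # Analytic derivative of g.
    return 1.0 if abs(z) >= 1.0 else 3.0 * z ** 2

samples = [-2.0, -0.5, -0.01, 0.01, 0.5, 2.0]
```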

TLDR: You can use the normalized gradient $\frac{\nabla g(p)}{\|\nabla g(p)\|_2}$ of a signed distance bound function $g$ (whose gradient is non-zero) as $\pm$ the normal $n(p)$ at a surface point $p\in\partial\Omega$, since the signed distance bound conditions guarantee that it is parallel to the gradient $\nabla f(p) \equiv n(p)$ of the corresponding signed distance function $f$. Replacing the signed distance bound condition $|g(p)|\leq |f(p)|$ from Hart's paper with $0\leq g(p)\leq f(p)$ for $p\in\Omega$ and $0\leq -g(p)\leq -f(p)$ for $p\not\in\Omega$ lets you dispense with the $\pm$ (i.e. you get a correctly facing normal).

Practical considerations: If $g$ deviates strongly from $f$ near the boundary, then the numerical approximation of the gradient may require smaller step sizes. But as already mentioned in my comments, getting the step size right is a difficult problem even when using the SDF $f$ rather than an SDBF $g$.
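
One mitigating observation: a constant positive factor between $g$ and $f$ cancels under normalization, so the step size is insensitive to how tight the bound is globally; the trouble comes from $\nabla g$ varying near the boundary (as in the $z^3$ example above). A sketch (the sphere SDF, the factor $\tfrac{1}{2}$, and the sample point are illustrative):

```python
import math

def sphere_sdf(p):
    # Unit-sphere SDF, positive outside.
    return math.hypot(p[0], p[1], p[2]) - 1.0

def half_bound(p):
    # A conservative distance bound: half the true distance everywhere.
    return 0.5 * sphere_sdf(p)

def numerical_normal(f, p, h=1e-4):
    # Six-sample central-difference gradient of f at p, normalized.
    x, y, z = p
    g = [f((x + h, y, z)) - f((x - h, y, z)),
         f((x, y + h, z)) - f((x, y - h, z)),
         f((x, y, z + h)) - f((x, y, z - h))]
    m = math.sqrt(sum(c * c for c in g))
    return [c / m for c in g]

p = (0.6, 0.8, 0.0)  # point on the unit sphere
n_f = numerical_normal(sphere_sdf, p)  # normal from the SDF
n_g = numerical_normal(half_bound, p)  # normal from the bound: identical
```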

$\endgroup$
  • $\begingroup$ Thanks for writing up a formal argument, a pleasure to read. (Though I admit that having done only so much math, it takes effort for someone like me to understand well, but I got the key idea of the argument) $\endgroup$ Commented Oct 4, 2024 at 15:19
  • $\begingroup$ @jmichael I have updated the answer since, notably my assumption that $\nabla g(p) = 0$ cannot occur on regions of non-zero measure was false and I have provided a counterexample. The initial counterexample was not provided by me: math.stackexchange.com/a/4980330/463794 This means that you have to make sure that your SDBF has a non-zero gradient almost everywhere on $\partial\Omega$, otherwise you can run into degenerate cases w.r.t. the normal. $\endgroup$ Commented Oct 4, 2024 at 23:10
