
I've implemented the least-squares method of ellipse fitting, but I want a fitting method that works with fewer points, and convex optimization may be a strong candidate. From my study, a general conic is $ax^2+bxy+cy^2+dx+ey+f=0$,

so $(a,\ b,\ c,\ d,\ e,\ f) \cdot (x^2,\ xy,\ y^2,\ x,\ y,\ 1)^T = 0$, s.t. $b^2-4ac=1$. Writing $\theta = (a,\ b,\ c,\ d,\ e,\ f)^T$ and $z = (x^2,\ xy,\ y^2,\ x,\ y,\ 1)^T$, this simplifies to $\theta^T z = 0$, s.t. $\theta^T C \theta = 1$, where the upper-left $3 \times 3$ block of $C$ is $\begin{bmatrix} 0 & 0 & 2 \\ 0 & -1 & 0 \\ 2 & 0 & 0 \end{bmatrix}$ and all other entries are zero.
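For concreteness, expanding the quadratic form with this $C$ gives

$$ \theta^T C \theta = a(2c) + b(-b) + c(2a) = 4ac - b^2. $$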

Then I convert it to an unconstrained optimization, $\min_{\theta} \ \|Z\theta\|_1 + \lambda(\theta^T C\theta - 1)$, where the $i$-th row of $Z$ is $z_i^T$ for the $i$-th data point (the matrix D in my code below). Searching Google, I found a method called split Bregman. Introducing the split variable $d = Z\theta$, which is enforced through the Bregman penalty term, the problem becomes

$$ \min_{d,\theta} \ \|d\|_1 + \lambda(\theta^T C\theta - 1) + \frac{\mu}{2}\left\| d - Z\theta - b \right\|_2^2. $$

It splits into two subproblems. One is

$$ \theta^{k+1} = \arg\min_{\theta} \ \lambda(\theta^T C\theta - 1) + \frac{\mu}{2}\left\| d^k - Z\theta - b^k \right\|_2^2; $$

setting the derivative to zero, the solution is $\theta^{k+1} = \left( 2\lambda C + \mu Z^T Z \right)^{-1} \mu Z^T (d^k - b^k)$.
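For completeness, the stationarity condition behind that solution (using the symmetry of $C$) is

$$ \nabla_{\theta}\left[ \lambda(\theta^T C\theta - 1) + \frac{\mu}{2}\left\| d^k - Z\theta - b^k \right\|_2^2 \right] = 2\lambda C\theta - \mu Z^T\left( d^k - Z\theta - b^k \right) = 0, $$

which rearranges to $\left( 2\lambda C + \mu Z^T Z \right)\theta = \mu Z^T (d^k - b^k)$.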

The other is

$$ d^{k+1} = \arg\min_{d} \ \|d\|_1 + \frac{\mu}{2}\left\| d - Z\theta^{k+1} - b^k \right\|_2^2; $$

according to the paper, $d_j^{k+1} = \operatorname{shrink}\!\left( (Z\theta^{k+1})_j + b_j^k,\ \frac{1}{\mu} \right)$, where the shrink operator is defined as $\operatorname{shrink}(x, r) = \frac{x}{|x|}\max(|x| - r,\ 0)$.
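Each iteration then closes with the Bregman variable update from Goldstein and Osher, which my code applies as

$$ b^{k+1} = b^k + Z\theta^{k+1} - d^{k+1}. $$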

The rest is the loop. I wrote it in MATLAB and found it produces a hyperbola, even when I give it plenty of points.

I've checked for dimension errors exhaustively. Also, I have written out the derivation many times (I think it has no obvious errors). None of my code produces an ellipse. Below is my code, which runs without errors.

Can you give me any advice? Thanks!

Refs: Goldstein and Osher, "The Split Bregman Method for L1-Regularized Problems", SIAM Journal on Imaging Sciences 2(2), 2009.


clear(); clc();

% Sample points on the ellipse x = 3*sin(2t + pi/4), y = 2*cos(2t)
t = 0:0.1:4;
x = 3*sin(2*t + pi/4);
y = 2*cos(2*t);
f1 = [x; y]';

% Design matrix D: row i is [x_i^2, x_i*y_i, y_i^2, x_i, y_i, 1]
D = [f1(:,1).^2, f1(:,1).*f1(:,2), f1(:,2).^2, f1(:,1), f1(:,2), ones(size(f1,1),1)];

% Constraint matrix: theta'*c*theta = 4ac - b^2
c0 = [0 0 2; 0 -1 0; 2 0 0];
c  = [c0, zeros(3); zeros(3), zeros(3)];

n_max  = 300;
lambda = 1000;
mu     = 1;
b  = randn(size(D,1), 1);     % Bregman variable
d  = 2*randn(size(D,1), 1);   % split variable
ii = 1;

% theta-subproblem (note: the derivation above gives 2*lambda*c here, not lambda*c)
alpha_k0 = zeros(6, 1);
alpha_k1 = pinv(lambda*c + mu*(D'*D)) * mu * D' * (d - b);

% Stop when the iterates stop changing. (The original condition compared
% abs(sum(D*alpha)) across iterations, which lets signed residuals cancel
% and exits as soon as the objective decreases.)
while norm(alpha_k1 - alpha_k0) > 1e-10 && (ii < n_max)
    alpha_k0 = alpha_k1;
    d = shrink(D, alpha_k1, b, mu);       % d-subproblem: soft thresholding
    b = b - d + D*alpha_k1;               % Bregman variable update
    alpha_k1 = pinv(lambda*c + mu*(D'*D)) * mu * D' * (d - b);   % theta-subproblem
    ii = ii + 1;
end

function out = shrink(D, alpha, b, mu)
    % shrink(x, 1/mu) = x/|x| * max(|x| - 1/mu, 0), applied elementwise.
    % Bug fix: keep the sign of x (the original took abs() here, so the
    % output was always nonnegative and the sign information was lost).
    x = D*alpha + b;
    out = max(abs(x) - 1/mu, 0) .* (x ./ abs(x));
    out(isnan(out)) = 0;   % handle 0/0 when x == 0
end
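To see directly whether a run produced an ellipse or a hyperbola, one can check the discriminant of the fitted coefficients (a small diagnostic, not part of the algorithm):

% b^2 - 4ac < 0 for an ellipse, > 0 for a hyperbola
disc = alpha_k1(2)^2 - 4*alpha_k1(1)*alpha_k1(3);
if disc < 0
    disp('fitted conic is an ellipse');
else
    disp('fitted conic is NOT an ellipse');
end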
  • For ellipses the discriminant should be $b^2-4ac < 0$, so in your case $b^2-4ac=-1$. Commented Sep 8, 2021 at 12:53
  • @Cesareo I want to use the split Bregman method, so I first need to convert the constrained optimization into an unconstrained one, adding a $\lambda$ term in place of the original constraint. Maybe I'm wrong about this? But $b^2-4ac < 0$ converted to an unconstrained penalty is $\lambda((b^2-4ac)-0)$, and when I take the derivative and set it to 0, it disappears as well. Commented Sep 9, 2021 at 7:26

1 Answer


For a point ${\left[ x, y \right]}^{\top} \in \mathbb{R}^{2}$ to be on an ellipse parameterized by $\boldsymbol{s} = {\left[ a, b, c, d, e, f \right]}^{\top}$, it must obey:

$$ a {x}^{2} + b x y + c {y}^{2} + d x + e y + f = 0 $$

Given a set $\left\{ {\left[ {x}_{i}, {y}_{i} \right]}^{T} \right\}_{i = 1}^{n}$ of points on an ellipse, one could formulate the problem:

$$ \arg \min_{\boldsymbol{s}} \frac{1}{2} {\left\| \boldsymbol{Z} \boldsymbol{s} \right\|}_{2}^{2} $$

Where the $i$-th row of $\boldsymbol{Z}$ is given by $\left[ {x}_{i}^{2}, {x}_{i} {y}_{i}, {y}_{i}^{2}, {x}_{i}, {y}_{i}, 1 \right]$.

In order to guarantee that the 2nd-order polynomial describes an ellipse (avoiding a parabola / hyperbola), one can enforce ${b}^{2} - 4 a c < 0$. For instance, the unit circle ${x}^{2} + {y}^{2} - 1 = 0$ has ${b}^{2} - 4 a c = -4 < 0$.

Which yields:

$$ \begin{align*} \arg \min_{\boldsymbol{s}} \quad & \frac{1}{2} {\left\| \boldsymbol{Z} \boldsymbol{s} \right\|}_{2}^{2} \\ \text{subject to} \quad & \begin{aligned} \boldsymbol{s}^{\top} \boldsymbol{C} \boldsymbol{s} & = 1 \\ \end{aligned} \end{align*}, \quad \boldsymbol{C} = \begin{bmatrix} 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & -1 & 0 & 0 & 0 & 0 \\ 2 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} $$

Since the solution for $\boldsymbol{s}$ is defined only up to scale, one can fix $\boldsymbol{s}^{\top} \boldsymbol{C} \boldsymbol{s}$ at any positive value in order to enforce the inequality.
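To spell out why the scale is free: if $\boldsymbol{s}$ solves the conic equation, so does $\alpha \boldsymbol{s}$ for any $\alpha \neq 0$, while the quadratic form scales as

$$ {\left( \alpha \boldsymbol{s} \right)}^{\top} \boldsymbol{C} \left( \alpha \boldsymbol{s} \right) = {\alpha}^{2} \, \boldsymbol{s}^{\top} \boldsymbol{C} \boldsymbol{s}, $$

so any $\boldsymbol{s}$ with $\boldsymbol{s}^{\top} \boldsymbol{C} \boldsymbol{s} > 0$ can be rescaled so that the form equals exactly $1$.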

The above is not a convex problem (though it can be solved efficiently), as the equality constraint is not linear and $\boldsymbol{C}$ is indefinite.
On top of that, as is well known, the least-squares formulation is highly sensitive to outliers.
One way to make it robust is to use the ${L}_{1}$ norm instead (see Least Absolute Deviations (LAD) line fitting / robust regression):

$$ \begin{align*} \arg \min_{\boldsymbol{s}} \quad & {\left\| \boldsymbol{Z} \boldsymbol{s} \right\|}_{1} \\ \text{subject to} \quad & \begin{aligned} \boldsymbol{s}^{\top} \boldsymbol{C} \boldsymbol{s} & = 1 \\ \end{aligned} \end{align*} $$

You may formulate it as:

$$ \begin{align*} \arg \min_{\boldsymbol{s}} \quad & {\left\| \boldsymbol{Z} \boldsymbol{s} \right\|}_{1} \\ \text{subject to} \quad & \begin{aligned} 1 - \boldsymbol{s}^{\top} \boldsymbol{C} \boldsymbol{s} & \leq 0 \\ \end{aligned} \end{align*} $$

Formulate it as a Lagrangian and solve it with an ADMM-based method.
The issue is that this direct solution is numerically unstable when all the points are practically noiseless, as in your case.
If you check your code, the condition number of $\boldsymbol{D}$ is extreme.
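A quick way to verify this on the question's data (a diagnostic sketch; the points lie exactly on a conic, so $\boldsymbol{D}$ has the true coefficient vector in its null space and is effectively rank deficient):

% Rebuild the question's design matrix and inspect its conditioning
t = 0:0.1:4;
x = 3*sin(2*t + pi/4);
y = 2*cos(2*t);
D = [x'.^2, x'.*y', y'.^2, x', y', ones(numel(t), 1)];
cond(D)   % enormous, since the noiseless points make D nearly singular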

One way to overcome it is to use the formulation in Radim Halir, Jan Flusser - Numerically Stable Direct Least Squares Fitting of Ellipses.
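A minimal MATLAB sketch of that formulation, assuming column vectors x and y of point coordinates (this follows the block elimination in the paper and is meant as an illustration, not a polished implementation):

% Halir-Flusser: split the design matrix into quadratic and linear parts
D1 = [x.^2, x.*y, y.^2];           % quadratic part
D2 = [x, y, ones(size(x))];        % linear part
S1 = D1' * D1;
S2 = D1' * D2;
S3 = D2' * D2;
T  = -(S3 \ S2');                  % expresses [d e f]' through [a b c]'
M  = S1 + S2 * T;                  % reduced scatter matrix
M  = [M(3,:) / 2; -M(2,:); M(1,:) / 2];   % left-multiplied by inv(C1), C1 = [0 0 2; 0 -1 0; 2 0 0]
[V, ~] = eig(M);
q  = 4 * V(1,:) .* V(3,:) - V(2,:).^2;    % 4ac - b^2 > 0 selects the ellipse solution
a1 = V(:, q > 0);
s  = [a1; T * a1];                 % full conic coefficients [a b c d e f]'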


