I've recently begun experimenting with LPC, and while I understand that it works, I'm still slightly lost on why it works.

Specifically, I understand that LPC involves finding coefficients $a_1, a_2, \ldots a_p$ such that for a signal $h$,

\begin{align} h(n) &\approx \sum_{i=1}^{p} a_i h(n-i) \\ &= a_1 h(n-1)+a_2 h(n-2)+\ldots+a_p h(n-p)\end{align}

In other words, we want to find $a_1, a_2, \ldots a_p$ which will let us linearly approximate the next sample given the previous $p$.
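(For concreteness, here is a minimal NumPy sketch, not part of the original question, that estimates such coefficients by plain least squares; the decaying sinusoid and the order $p=2$ are just illustrative choices.)

```python
import numpy as np

def lpc_by_least_squares(h, p):
    """Estimate a_1..a_p so that h[n] ~ a_1*h[n-1] + ... + a_p*h[n-p]."""
    # Each row of X holds the p previous samples used to predict one sample of y.
    X = np.column_stack([h[p - i:len(h) - i] for i in range(1, p + 1)])
    y = h[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a  # a[0] multiplies h[n-1], a[1] multiplies h[n-2], ...

# A decaying sinusoid obeys an exact 2nd-order recursion, so p = 2 predicts it almost perfectly.
n = np.arange(200)
h = 0.98 ** n * np.sin(0.2 * np.pi * n)
a = lpc_by_least_squares(h, 2)
prediction = a[0] * h[1:-1] + a[1] * h[:-2]
print(a, np.max(np.abs(h[2:] - prediction)))   # tiny prediction error
```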

However, when LPC is actually used, these coefficients $a_1, a_2, \ldots a_p$ are usually treated as the coefficients of a polynomial:

$$x^p + \sum_{i=1}^{p} a_i x^{p-i} = x^p + a_1 x^{p-1} + a_2 x^{p-2} + \ldots + a_p$$

Matlab's LPC documentation refers to this as "the prediction filter polynomial", and it's very useful for e.g. finding the formants of a signal (which are simply the zeroes of this polynomial).
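(Purely as an illustration of that root-finding step, and using the sign convention that the answers below spell out; the decaying-sinusoid coefficients here are assumed, not taken from any real signal.)

```python
import numpy as np

# Exact prediction coefficients of h[n] = 0.98**n * sin(0.2*pi*n):
# h[n] = a1*h[n-1] + a2*h[n-2] with a1 = 2*r*cos(w), a2 = -r**2 (assumed example).
r, w = 0.98, 0.2 * np.pi
a = np.array([2 * r * np.cos(w), -r ** 2])

# The "prediction filter polynomial" in the form MATLAB's lpc returns it: [1, -a1, -a2].
A = np.concatenate(([1.0], -a))
roots = np.roots(A)
print(roots)                           # ~ 0.98 * exp(+/- 1j*0.2*pi)
print(np.abs(roots), np.angle(roots))  # radius 0.98, angle +/- 0.2*pi: the signal's resonance
```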

But I don't understand the connection between these two uses. Why is it useful to turn the linear prediction coefficients into a polynomial like this? And why does it work?

2 Answers

The prediction polynomial is the representation in the $z$-domain of the first equation you wrote.

When performing LPC, one assumes an all-pole model: the synthesis filter's transfer function is $\frac{1}{A(z)}$, where

$$A(z)=1 - \sum_{i=1}^{p} a_i z^{-i}$$

That's the polynomial you are talking about. Note that it should have only negative powers of $z$, not positive.

As you wrote in your question, one estimates the signal $h(n)$ from past values of that same signal. The $z^{-i}$ factors in the polynomial represent those delays in the $z$-domain, while the coefficients $a_i$ are just constants and, by the linearity of the $z$-transform, carry over unchanged.
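Spelling that correspondence out (a short derivation added here for clarity, with $e(n)$ denoting the prediction error):

\begin{align} e(n) &= h(n) - \sum_{i=1}^{p} a_i h(n-i) \\ E(z) &= H(z) - \sum_{i=1}^{p} a_i z^{-i} H(z) = A(z)\,H(z) \end{align}

so $H(z) = \frac{E(z)}{A(z)}$: the signal is modeled as a small residual $e(n)$ driving the all-pole filter $\frac{1}{A(z)}$.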


The other answer states: "That's the polynomial you are talking about. Note that it should have only negative powers of $z$, not positive."

No. Let $A(z)=1 - \sum_{i=1}^{p} a_i z^{-i}$. Then $$z^p A(z)=z^p- \sum_{i=1}^{p} a_i z^{p-i}= z^p - a_1 z^{p-1} - a_2 z^{p-2} - \ldots - a_p = \prod_{i=1}^{p}(z - b_i)$$ for some $b_i$. Therefore we can refactor $A(z)$ as $$A(z)=\prod_{i=1}^{p} (1-b_i z^{-1}),$$ where the $b_i$ are the roots of $x^p - \sum_{i=1}^{p} a_i x^{p-i} = x^p - a_1 x^{p-1} - a_2 x^{p-2} - \ldots - a_p.$ The point to note is that Matlab's lpc function actually returns the vector $[1, -a_1, -a_2, \ldots, -a_p]$.
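(A quick numerical check of that refactoring, with made-up coefficients $a_1, a_2, a_3$; here `np.poly` simply inverts `np.roots`.)

```python
import numpy as np

a = np.array([1.2, -0.5, 0.1])     # hypothetical prediction coefficients a_1..a_3

# The vector MATLAB's lpc (and librosa.lpc) would return: [1, -a_1, -a_2, -a_3].
A = np.concatenate(([1.0], -a))

# The b_i are the roots of z^3 - a_1 z^2 - a_2 z - a_3.
b = np.roots(A)

# Rebuilding A(z) from prod_i (1 - b_i z^{-1}) recovers the same coefficients.
print(np.allclose(A, np.poly(b)))  # True
```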

As for the question: "But I don't understand the connection between these two uses. Why is it useful to turn the linear prediction coefficients into a polynomial like this? And why does it work?"

It is useful to rewrite $A(z)$ in this form because the zeros of $A(z)$ (and hence the poles of $\frac{1}{A(z)}$) are exactly the $b_i$. The formant frequencies are then given by the angles of the $b_i$, converted to Hz using the sampling rate.
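(A rough end-to-end sketch of that idea, not from the original answer: the toy "vowel", its formants at 700 Hz and 1200 Hz, the sampling rate, and the plain least-squares fit standing in for lpc()/librosa.lpc are all assumptions of this example.)

```python
import numpy as np
from scipy.signal import lfilter

fs = 8000                                    # assumed sampling rate (Hz)

# Toy "vowel": a 100 Hz impulse train passed through two resonators
# at 700 Hz and 1200 Hz (the known formants we hope to recover).
def resonator(f, bw):
    r = np.exp(-np.pi * bw / fs)
    return [1.0], [1.0, -2 * r * np.cos(2 * np.pi * f / fs), r ** 2]

x = np.zeros(4000)
x[::80] = 1.0                                # pitch pulses every 80 samples = 100 Hz
for f in (700, 1200):
    num, den = resonator(f, 100)
    x = lfilter(num, den, x)

# Least-squares linear prediction (standing in for lpc()/librosa.lpc), order 4.
p = 4
X = np.column_stack([x[p - i:len(x) - i] for i in range(1, p + 1)])
a, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
A = np.concatenate(([1.0], -a))              # [1, -a_1, ..., -a_p]

roots = np.roots(A)
roots = roots[np.imag(roots) > 0]            # keep one root per conjugate pair
formants = np.sort(np.angle(roots) * fs / (2 * np.pi))
print(formants)                              # approximately [700, 1200] Hz
```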

  • The note that Matlab's lpc function actually returns $[1, -a_1, -a_2, \ldots]$ is very important, and I struggled with this for hours. It is worth noting that the output of librosa.core.lpc is in the same format ("filter denominator polynomial").
