
Consider the space of polynomials of degree at most 2. The canonical basis of this space is $<1,x,x^2>$ and another basis is $<1, 1+x, 1-x+x^2>$. Calculate the change of basis matrix $A_{b_1,b_{can}}$ and apply it to the vector $2+2x+3x^2$.

So I arrived at the matrix

$ M = \begin{bmatrix} 1 & -1 & -2 \\[0.3em] 0 & 1 & 1 \\[0.3em] 0 & 0 & 1 \end{bmatrix} $

Now I applied the matrix to $2+2x+3x^2$ written in the canonical basis, i.e. to the vector:

$ a = \begin{bmatrix} 2 \\[0.3em] 2 \\[0.3em] 3 \end{bmatrix} $

And I obtained the vector

$ b = \begin{bmatrix} -6 \\[0.3em] 5\\[0.3em] 3 \end{bmatrix} $

If I take the components of this vector and multiply each of them by the corresponding vector of the new basis, I get back the same polynomial (am I making myself clear?), $2+2x+3x^2$.
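In symbols, what I mean is:

$ -6\cdot 1 + 5\cdot(1+x) + 3\cdot(1-x+x^2) = (-6+5+3) + (5-3)x + 3x^2 = 2 + 2x + 3x^2 $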

Is that supposed to happen?

Then I didn't even need to apply the matrix to the vector in the canonical basis, right? I could simply have written the coordinate vector of the polynomial directly in the given basis.

But is that still valid if my initial basis is different from the canonical one?

I mean, if the polynomial were given in a basis $b_2$ and I wanted to write it in a basis $b_1$, I would need to left-multiply by the change of basis matrix, right?

Sorry, I'm a bit confused about this topic and need some clarification.

Thanks!


2 Answers


You ask, "Is that supposed to happen?"

Yes, it is. The polynomial is still the same polynomial; it is just written differently in the new basis.

As you say, to find the new vector $b= \begin{bmatrix}p \\q\\r \end{bmatrix}$ you could have tried to solve the equation:

$p(1)+q(1+x)+r(1-x+x^2)=2(1)+2(x)+3(x^2)$
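Matching the coefficients of $1$, $x$ and $x^2$ on both sides gives a triangular system that is quick to solve by back-substitution:

$ \begin{cases} p + q + r = 2 \\ q - r = 2 \\ r = 3 \end{cases} \qquad\Longrightarrow\qquad r = 3,\; q = 5,\; p = -6 $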

Using matrix transformations is just one way to do that.
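For reference, one standard way to obtain the matrix in the question is to write the new basis vectors as columns in canonical coordinates and then invert:

$ A_{b_1,b_{can}} = \begin{bmatrix} 1 & 1 & 1 \\[0.3em] 0 & 1 & -1 \\[0.3em] 0 & 0 & 1 \end{bmatrix}^{-1} = \begin{bmatrix} 1 & -1 & -2 \\[0.3em] 0 & 1 & 1 \\[0.3em] 0 & 0 & 1 \end{bmatrix} $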

It doesn't matter whether your initial basis is the canonical one or not.

You could, for example, have initial basis $<1+x^2,x-1,x^2>$ and new basis $<2x+1,x+3,x^2>$.

Consider the polynomial $f=3+4x+5x^2$.

In the initial basis its coordinate vector is $\begin{bmatrix}7 \\4\\ -2 \end{bmatrix}$, and in the new basis its coordinate vector is $\begin{bmatrix}1.8 \\0.4\\ 5 \end{bmatrix}$.
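You can check both directly: $7(1+x^2)+4(x-1)-2(x^2) = 3+4x+5x^2$ and $1.8(2x+1)+0.4(x+3)+5(x^2) = 3+4x+5x^2$.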

You can work that out without the change of basis matrix, but the matrix is useful if you are going to do lots of changes between the bases for many different polynomials.
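To illustrate that last point, here is a minimal numpy sketch (the variable names and the particular layout, with each basis polynomial stored as a column of canonical coordinates, are my own choices): it builds the change of basis matrix between the two bases above and applies it to several polynomials at once.

```python
import numpy as np

# Each column is one basis polynomial written in canonical coordinates (1, x, x^2).
B1 = np.array([[1, -1, 0],   # columns: 1+x^2,  x-1,  x^2
               [0,  1, 0],
               [1,  0, 1]], dtype=float)
B2 = np.array([[1, 3, 0],    # columns: 2x+1,  x+3,  x^2
               [2, 1, 0],
               [0, 0, 1]], dtype=float)

# If v_can = B1 @ v_b1 and v_can = B2 @ v_b2, then v_b2 = B2^{-1} B1 v_b1,
# so the change of basis matrix from the initial basis to the new one is:
M = np.linalg.solve(B2, B1)

f_b1 = np.array([7, 4, -2], dtype=float)   # f = 3 + 4x + 5x^2 in the initial basis
print(M @ f_b1)                            # approximately [1.8, 0.4, 5]

# The same matrix converts many polynomials at once (one column per polynomial).
many_b1 = np.array([[7, 1, 0],
                    [4, 0, 2],
                    [-2, 3, 1]], dtype=float)
print(M @ many_b1)
```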


1) You have the polynomial $p(x) = 3x^2+2x+2$. This doesn't depend on what basis you choose. Informally, a basis is just "a way to look at" polynomials; no matter how you look, $p(0) = 2$, $p(5) = 87$ and so on.

2) A basis of a vector space lets you represent your vector $p$ as a list of numbers. In the basis $<1,x,x^2>$, $p = 2\cdot 1 + 2\cdot x + 3\cdot x^2$, so $p$ is represented by $(2,2,3)^T$. In the basis $<1, 1+x, 1-x+x^2>$, $p = -6\cdot 1+5\cdot(1+x)+3\cdot(1-x+x^2)$, so $p$ is represented by $(-6,5,3)^T$. Note that in both cases facts such as $p(0)=2$ and $p(5)=87$ do not change; you merely shift from one representation to another.

3) When you are given the representation of a vector in basis $b_1$ (let it be $(a,b,c)^T$) and want to find its representation in basis $b_2$ (let's write it as $(x,y,z)^T$), you can use the change of basis matrix $M_{b_2,b_1}$. The change of basis matrix has this useful property (hence the name): whatever $a,b,c$ are, $(x,y,z)^T = M_{b_2,b_1}(a,b,c)^T$.
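With the concrete data from the question ($b_1$ the canonical basis and $b_2 = <1, 1+x, 1-x+x^2>$), this property reads:

$ \begin{bmatrix}-6\\5\\3\end{bmatrix} = \begin{bmatrix} 1 & -1 & -2 \\[0.3em] 0 & 1 & 1 \\[0.3em] 0 & 0 & 1 \end{bmatrix}\begin{bmatrix}2\\2\\3\end{bmatrix} $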

4) If you want to solve the reverse problem of finding $(a,b,c)^T$ from $(x,y,z)^T$, you need another matrix, $M_{b_1,b_2}$. It can be found from $M_{b_2,b_1}$ by the condition $M_{b_1,b_2}M_{b_2,b_1} = M_{b_2,b_1}M_{b_1,b_2} = I$, where $I$ is the identity matrix. Or, shorter, $M_{b_1,b_2} = M_{b_2,b_1}^{-1}$. Note that in the general case $(a,b,c)^T = M_{b_1,b_2}(x,y,z)^T \neq [(x,y,z)M_{b_2,b_1}]^T$ (check this with your vectors).
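A quick numerical check of both claims with the matrix and vectors from the question (a minimal numpy sketch; the variable names are mine):

```python
import numpy as np

# Change of basis matrix from the question: canonical coords -> coords in <1, 1+x, 1-x+x^2>.
M = np.array([[1, -1, -2],
              [0,  1,  1],
              [0,  0,  1]], dtype=float)

xyz = np.array([-6, 5, 3], dtype=float)   # p in the new basis
abc = np.linalg.inv(M) @ xyz              # back to canonical coordinates via M^{-1}
print(abc)                                # approximately [2, 2, 3]

# Multiplying by the transpose is NOT the same as multiplying by the inverse:
print(M.T @ xyz)                          # [-6, 11, 20] -- not the canonical coordinates
```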
