So far in class we've been setting up hypothesis tests using confidence intervals, which we could calculate because the distribution of the data was given.
My question is about how I would approach this:
The true radius of a piece of wire is $x$, which is known from a very accurate but slow test.
A new test is created, one which is much faster but does not give as accurate results. We apply this new test $n$ times to an identical wire. The results were: sample mean $\bar x$ and sample variance $s^2$.
We want a hypothesis test which can determine whether the new, faster test gives an average value different from the true value $x$.
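The standard tool for this setup (known hypothesized mean, unknown population variance estimated from the sample) is a one-sample t-test of $H_0: \mu = x$ against $H_1: \mu \neq x$. Here is a minimal sketch from summary statistics only; all of the numeric values are hypothetical, chosen purely for illustration:

```python
import math

def one_sample_t_stat(xbar, s2, n, mu0):
    """t statistic for H0: mean = mu0, computed from summary statistics.

    xbar: sample mean, s2: sample variance (with n-1 in the denominator),
    n: sample size, mu0: hypothesized true mean (the known radius x).
    """
    se = math.sqrt(s2 / n)        # standard error of the sample mean
    return (xbar - mu0) / se

# Hypothetical numbers: n = 25 measurements, sample mean 2.03,
# sample variance 0.04, true radius 2.00 (same units throughout).
t_stat = one_sample_t_stat(xbar=2.03, s2=0.04, n=25, mu0=2.00)

# Compare |t| with the two-sided 5% critical value for n - 1 = 24
# degrees of freedom, t_{0.025, 24} ≈ 2.064 (from a t table);
# reject H0 if |t| exceeds it.
reject = abs(t_stat) > 2.064
```

Under $H_0$ the statistic follows a t-distribution with $n-1$ degrees of freedom, which is why the critical value comes from a t table rather than the normal table used when the variance is known.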