Timeline for Finding Chi^2 of a Non-Linear Fit Model
Current License: CC BY-SA 4.0
18 events
| when | what | | by | license | comment |
|---|---|---|---|---|---|
| Mar 9, 2021 at 17:12 | comment | added | Julien Kluge | | Well, it all depends on the uncertainty. Usually, when reporting an estimate with its error, the rounding is based on the first significant digit of the uncertainty (this is what happens with the 0.0078 +/- ...). There is an exception when the uncertainty starts with the digit 1 or 2: then the precision is usually extended to two significant digits. At least that's how DIN handles it. Mathematica, however, puts this border at digits 1 to 3.5. Why 3.5 instead of 2? I don't know. Thanks. |
| Mar 9, 2021 at 16:00 | vote | accept | Epideme | ||
| Mar 9, 2021 at 15:27 | vote | accept | Epideme | ||
| Mar 9, 2021 at 16:00 | |||||
| Mar 9, 2021 at 15:27 | comment | added | Epideme | | That's a nice tip to know. At what point does Mathematica decide it's close enough to just show '88.' instead of, say, '87.9' or '87.99'? Sorry, completely forgot; I'll do that now. Thank you for the answer and help. |
| Mar 9, 2021 at 15:18 | comment | added | Julien Kluge | | I do get the same values. Maybe I did not copy the whole dataset the first time. By the way, it's not 88 +/- 12 exactly. Mathematica, behind the scenes, tracks the precision to its full extent; it just displays a rounded-off variant to you. You can force it to show the full precision by using FullForm[]. Also: if you want to, you can accept the answer above. |
| Mar 9, 2021 at 14:47 | comment | added | Epideme | | Might it be a rounding error? Maybe different versions of Mathematica round differently at high numbers of significant figures? It's just odd that the difference should be so small. |
| Mar 5, 2021 at 10:07 | comment | added | Epideme | | Okay, that makes sense. I think I was counting t as a parameter as well. Mine are: 88 +/- 12, and 0.0078 +/- 0.0008 exactly. |
| Mar 4, 2021 at 15:06 | comment | added | Julien Kluge | | Well, I can confirm what I wrote so far: chi^2=90.7548 and chi^2/dof=3.94586. Maybe you took slightly different data points? What is your output of Around@@@Transpose[{fit["BestFitParameters"][[All,2]],fit["ParameterErrors"]}]? Mine is List[Around[87.89476991568526,12.334923681777726],Around[0.0077664529581594425,0.000816278519064779]] |
| Mar 3, 2021 at 17:25 | comment | added | Julien Kluge | | The degrees of freedom are the number of data points minus the number of parameters. Since you have two parameters, A and k, you get dof=25-2=23. As for the discrepancy: I rounded to two decimal digits after the point when giving you the results. I'll check whether I did something wrong there when I can. |
| Mar 3, 2021 at 16:07 | comment | added | Epideme | | I see, thank you, that's much clearer. Why does it use 23 as the dof here, when there are 25 points? I've recreated your answer (edited the question); it's almost exactly the same, but do you know why there is a slight difference? For example, X^2/dof = 3.94 and X^2 = 90.68. |
| Mar 3, 2021 at 14:24 | comment | added | Julien Kluge | | I've seen worse. 3.95 isn't great, but not the worst either. Weighting depends on what you want to achieve with it. The 1/u you suggested is called statistical weighting. For measured data, instrumental weighting is the de facto standard, which is one over the variance (the squared uncertainty) and thus 1/u^2. The relevant metric for judging the goodness of fit is chi^2/dof. Its expectation value is n/(n-p), with n the number of data points and p the number of parameters. For n>>p you thus expect chi^2/dof = 1. |
| Mar 3, 2021 at 12:35 | comment | added | Epideme | | Yeah, it's not great data, is it? Might I ask why the weighting 1/uncertainty^2 is used rather than 1/uncertainty? And whether chi^2 or chi^2/dof is the more relevant metric here? |
| Mar 3, 2021 at 10:21 | comment | added | Julien Kluge | | I edited the answer accordingly. The weighted answer of 3.95 for the chi^2/dof is realistic given your data. |
| Mar 3, 2021 at 10:21 | history | edited | Julien Kluge | CC BY-SA 4.0 | Answer to the edit |
| Mar 3, 2021 at 9:10 | comment | added | Epideme | | In the case of Origin and Python it's definitely not log data that's fitted, because I haven't written any code that does that yet. I'd like to get the chi^2 of the exponential decay curve on the graph shown, and it's probably safe to say I'm out of my depth Mathematica-wise. I've included the full real dataset as requested. |
| Mar 2, 2021 at 19:55 | comment | added | Julien Kluge | | If you look into the documentation for PearsonChiSquareTest, you see that the function does something totally different: namely, it uses the Pearson X^2 test to check for an underlying distribution of given data, not to evaluate the X^2 value of a fit. The X^2 value depends only on the fit result, so if Origin and Python came to the same parameter set, then all values should be the same. Maybe you compared your 'logData' fit with a fit of the real data? If you post the real data this would be easy to show. |
| Mar 2, 2021 at 16:12 | comment | added | Epideme | | Thank you. Why was the PearsonChiSquareTest command not working, by the way? I'm getting a result of 0.000125, which seems like a very low figure, suggesting big overfitting, but it doesn't look that way. And when I've fitted this graph in other things such as Python and Origin it's returned figures in the 10^2 range (which is still not great); do you know why there might be such a large difference? |
| Mar 2, 2021 at 15:21 | history | answered | Julien Kluge | CC BY-SA 4.0 |
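The weighted chi^2/dof bookkeeping discussed in the comments (instrumental weighting 1/u^2, dof = n - p = 25 - 2 = 23) can be sketched outside Mathematica. The snippet below is a minimal illustration in Python with hypothetical data; the model A*exp(-k*t) and the 25-point layout follow the thread, but none of the numbers are the original question's data:

```python
import numpy as np

# Sketch of the weighted chi^2 / dof computation discussed above.
# All data here are hypothetical stand-ins, not the original dataset.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 300.0, 25)                    # 25 points, as in the thread
A_true, k_true = 90.0, 0.008
u = 2.0 + 0.05 * A_true * np.exp(-k_true * t)      # made-up uncertainties
y = A_true * np.exp(-k_true * t) + rng.normal(0.0, u)

def chi2_and_reduced(y, y_model, u, n_params):
    """Instrumental weighting w = 1/u^2; dof = n - p (here 25 - 2 = 23)."""
    resid = (y - y_model) / u     # each residual scaled by its uncertainty
    chi2 = float(np.sum(resid**2))
    dof = len(y) - n_params
    return chi2, chi2 / dof

y_model = A_true * np.exp(-k_true * t)             # stand-in for the fitted curve
chi2, chi2_dof = chi2_and_reduced(y, y_model, u, n_params=2)
```

Since the model here is the true curve rather than a fit, chi^2/dof should come out near 1; a value like the 3.95 in the thread signals that the scatter exceeds the stated uncertainties.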