
Is it necessary to scale double values, or are they more precise when using real-world values? What is the best range for calculations with double to get the best precision?

For example, I have a cubic Bezier curve defined. Should I use real-world position values for the curve, or should I normalize the values for the calculations and then scale them back up when I want to read real-world values?

I'll give an example; look at this code:

void CCubic::getTAtDistance(const CCubicPoint& pFrom, const double& distance, double& t)
{
    CPoint pEnd = getP(t);
    double cLength = (pEnd - pFrom).getLength();
    int compare = GLOBAL::DoubleCompare(cLength, distance);
    if(compare > 0)
    {
        t -= (t - pFrom.t)*0.5;
        getTAtDistance(pFrom, distance, t);
    }
    else if(compare < 0)
    {
        t += (t - pFrom.t)*0.5;
        getTAtDistance(pFrom, distance, t);
    }//else if
}

This method calculates a point on a cubic curve at the distance from another point on the cubic curve.

  • pFrom is the point to calculate the distance from.
  • t is calculated incrementally and, when the iteration finishes, defines the new point on the curve at the specified distance.
  • The method getP calculates and returns a point on the cubic curve at the specified t.

Initially, when the method is called, t is set to 1.0 (the end of the curve). As the cubic Bezier is not linear, I need to incrementally search for the point at the specified distance from pFrom.

Code for creating all points on the curve looks like this:

void CCubic::initEvenPointList(double distance, double offset)
{
    //TODO: Check if r can be 0 in the 1/r code, and how to handle/show it.
    lPoints.clear();
    minRadius = DBL_MAX;
    double t = 0;
    CCubicPoint ccP = getCubicPoint(t);
    lPoints.push_back(ccP);
    if(ccP.radius < minRadius) minRadius = ccP.radius;
    if(offset > 0)
    {
        t = 1.0;
        getTAtDistance(getCubicPoint(0), offset, t);
        ccP = getCubicPoint(t);
        lPoints.push_back(ccP);
        if(ccP.radius < minRadius) minRadius = ccP.radius;
    }//if
    std::cout << "CCubic::initEvenPointList -- Starting loop\n";
    while(t < 1.0)
    {
        double newT = 1.0;
        getTAtDistance(ccP, distance, newT);
        if(newT > 1) break;
        t = newT;
        ccP = getCubicPoint(t);
        lPoints.push_back(ccP);
        if(ccP.radius < minRadius) minRadius = ccP.radius;
    }
    ccP = getCubicPoint(1.0);
    lPoints.push_back(ccP);
    if(ccP.radius < minRadius) minRadius = ccP.radius;
    std::cout << "P(" << 0 << "): t = " << lPoints[0].t << "\n";
    double d = 0;
    for(int i = 1; i < lPoints.size(); i++)
    {
        d += (lPoints[i] - lPoints[i-1]).getLength();
        std::cout << "P(" << i - 1 << "): t = " << lPoints[i].t << ", d = " << d*400 << "\n";
    }//for
}

If I define a cubic Bezier curve in real-world values:

  • A(-400, 0, 0) (Start point)
  • B(0, -200, 0) (Control point for A)
  • C(0, -200, 0) (Control point for D)
  • D(400, 0, 0) (End point)

and set the distance to 25.

I get about 34 points with the correct distance between them. Everything is okay.

Then I noticed a problem if I define the cubic Bezier curve normalized (maximum values of 1.0) and scale it up afterwards, i.e.:

  • A(-1, 0, 0) (Start point)
  • B(0, -0.5, 0) (Control point for A)
  • C(0, -0.5, 0) (Control point for D)
  • D(1, 0, 0) (End point)

And then I set the distance to 25/400 (a scale of 400). If I calculate the whole curve, I get only about 4 points after scaling it back up. Mathematically this should not happen, so there must be a rounding error, or a bug in my code.

Here is the code for getCubicPoint and getP, as well as DoubleCompare:

CPoint CCubic::getP(double f) const
{
    CPoint rP = pA*pow(1-f, 3) + pB*3*f*pow(1-f, 2) + pC*3*(1-f)*pow(f, 2) + pD*pow(f, 3);
    return rP;
}

CCubicPoint CCubic::getCubicPoint(double f) const
{
    CPoint cP = pA*pow(1-f, 3) + pB*3*f*pow(1-f, 2) + pC*3*(1-f)*pow(f, 2) + pD*pow(f, 3);
    CPoint pI = (pB - pA)*3 + (pA + pC - pB*2)*6*f + (pD + pB*3 - pC*3 - pA)*3*pow(f, 2);
    CPoint pII = (pA + pC - pB*2)*6 + (pD + pB*3 - pC*3 - pA)*6*f;
    double r = (pI.x*pII.y - pII.x*pI.y) / pow((pow(pI.x, 2) + pow(pI.y, 2)), 3.0/2.0);
    r = 1/r;
    if(r < 0) r = -r;
    pII = pI.getNormal(true); //Right normal
    pII = pII.getNormalized();
    return CCubicPoint(cP, pII, r, f);
}

int GLOBAL::DoubleCompare(double A, double B)
{
    if(abs(A - B) < std::numeric_limits<double>::epsilon()) return 0;
    if(A < B) return -1;
    return 1;
}

1 Answer


A double has 11 bits of exponent and 53 bits of precision. This means that any finite double has the same relative precision, whether its magnitude is around 4, 400, or 4e300. Normalized versus "real" ranges versus any other magnitude shouldn't matter.

The one caveat is when the numbers you're working with have vastly different magnitudes. For example, in floating point math, 1e300 + 1 == 1e300, because 53 bits of precision aren't enough to represent the 1 at that magnitude.
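Both effects are easy to demonstrate. Here is a minimal, self-contained sketch; std::nextafter gives the next representable double above a value, so the difference is one unit in the last place ("ulp"):

#include <cmath>
#include <cstdio>

int main()
{
    // The spacing between adjacent doubles (one ulp) grows with the
    // magnitude, but the spacing relative to the value stays roughly constant:
    for(double x : {1.0, 400.0, 4e300})
    {
        double ulp = std::nextafter(x, INFINITY) - x;
        std::printf("x = %g, ulp = %g, ulp/x = %g\n", x, ulp, ulp / x);
    }

    // When operands differ too much in magnitude, the smaller one is
    // absorbed entirely:
    std::printf("1e300 + 1 == 1e300 is %s\n", (1e300 + 1.0 == 1e300) ? "true" : "false");
    return 0;
}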

I think this difference in magnitude is causing your problem. Within DoubleCompare:

if(abs(A - B) < std::numeric_limits<double>::epsilon()) return 0; 

epsilon is defined as the difference between 1.0 and the next representable double, i.e. the smallest representable difference at a magnitude of 1.0. I understand that your intent is to allow for floating point error, but different magnitudes require correspondingly different absolute tolerances. Bruce Dawson's "Comparing Floating Point Numbers" has more detail and a review of other techniques.
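For illustration, a magnitude-aware comparison in the spirit of Dawson's article could look like the sketch below. The function name and the tolerance values are placeholders, not recommendations:

#include <cmath>
#include <cstdio>

// Sketch: treat A and B as equal when their difference is small relative
// to the larger of the two, with an absolute floor for values near zero.
int DoubleCompareRelative(double A, double B,
                          double maxRelDiff = 1e-12,
                          double maxAbsDiff = 1e-12)
{
    double diff = std::fabs(A - B);
    if(diff <= maxAbsDiff) return 0;                        // near-zero case
    double largest = std::fmax(std::fabs(A), std::fabs(B));
    if(diff <= largest * maxRelDiff) return 0;              // tolerance scales with magnitude
    return (A < B) ? -1 : 1;
}

int main()
{
    // The same relative error compares equal at any magnitude:
    std::printf("%d\n", DoubleCompareRelative(400.0, 400.0 * (1 + 1e-13)));  // 0
    std::printf("%d\n", DoubleCompareRelative(1.0,   1.0   * (1 + 1e-13)));  // 0
    return 0;
}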

(The use of abs here makes me a bit nervous too, because abs in C only takes integers. As a result, whether or not you get a floating point absolute value depends on what headers you've included and whether or not you did using namespace std; or using std::abs; previously in your code. Maybe I'm just being paranoid, but I'd prefer C's fabs or an explicit std::abs.)

Your code isn't complete enough to compile, so I can't be certain, but I think that fixing this comparison in DoubleCompare will give you consistent results.


1 Comment

Thanks, I missed that one, fabs! Though epsilon seems to be too exact for my iterations, as somehow I never get below it. I had to set my "own" epsilon to 0.0001, as otherwise I would never get close to 0. I don't think that is practical, since it makes me lose precision and masks any infinite loops. I also had to rewrite the code in getTAtDistance, as it does not work optimally at all and falls easily into infinite loops. With that, precision was much better, but still worse than with the real-world values, probably because of my custom epsilon.
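(For reference only, and not the commenter's actual rewrite: one way to keep the halving search from looping forever is to make it iterative with an explicit cap on the number of steps, reusing the question's own types and DoubleCompare. A sketch:)

void CCubic::getTAtDistance(const CCubicPoint& pFrom, double distance, double& t)
{
    // Cap the number of halvings so the search always terminates;
    // 64 halvings exhaust the precision of a double anyway.
    for(int i = 0; i < 64; ++i)
    {
        double cLength = (getP(t) - pFrom).getLength();
        int compare = GLOBAL::DoubleCompare(cLength, distance);
        if(compare == 0) break;                     // close enough
        if(compare > 0) t -= (t - pFrom.t) * 0.5;   // overshot: move back toward pFrom.t
        else            t += (t - pFrom.t) * 0.5;   // undershot: move further out
    }
}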
