The Remez algorithm is a method for finding optimal polynomial approximations, or at the very least for converging towards them. To approximate $f$ by an $N$th degree polynomial, the algorithm samples $N+2$ points and solves a particular system of equations at those points to produce a first approximation. That part is fairly straightforward. The next step, however, is vague and confusing to me: it asks you to find the "points of local maximum error", where the error is $|f(x) - p(x)|$ and $p$ is the approximating polynomial. Finally, it asks you to use those points as your samples in the next iteration (or to stop if a certain condition on the error is met), which implies that you need $N+2$ such samples.
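For concreteness, here is a minimal NumPy sketch of how I understand that first step, using Chebyshev nodes as the initial samples (that node choice and the helper name `remez_initial` are my own assumptions, not something the algorithm dictates):

```python
import numpy as np

def remez_initial(f, a, b, N):
    # First Remez step: solve the (N+2)x(N+2) linear system
    #   sum_j c_j * x_i^j + (-1)^i * E = f(x_i),   i = 0..N+1
    # for the polynomial coefficients c_0..c_N and the levelled error E.
    i = np.arange(N + 2)
    # Chebyshev nodes on [a, b] as initial samples (my assumption).
    x = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * i / (N + 1))
    A = np.vander(x, N + 1, increasing=True)  # columns x_i^0 .. x_i^N
    A = np.column_stack([A, (-1.0) ** i])     # alternating-sign column for E
    sol = np.linalg.solve(A, f(x))
    return sol[:-1], sol[-1], x               # coefficients, E, sample points
```

For example, `coeffs, E, x = remez_initial(np.exp, -1.0, 1.0, 5)` gives a degree-5 first approximation of $e^x$ on $[-1, 1]$.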
If I'm using single-precision floating-point numbers then I'm sure I could loop over every representable value in the range I care about, but that's very inefficient. Assuming the function I'm trying to approximate is twice differentiable, I can probably find at least one critical point of the error using Newton's method. But I need to find $N+2$ points of maximum local error, not just one!
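Concretely, the single-critical-point version I have in mind is Newton's method applied to $e'(x) = f'(x) - p'(x)$, whose iteration needs $e''$ and hence the second derivative of $f$; this sketch assumes I have $f'$ and $f''$ available in closed form:

```python
import numpy as np

def newton_critical_point(fp, fpp, coeffs, x0, tol=1e-12, max_iter=50):
    # Newton's method on e'(x) = f'(x) - p'(x): its roots are the critical
    # points of the error e(x) = f(x) - p(x). Requires f twice differentiable.
    dp = np.polynomial.polynomial.polyder(coeffs)      # coefficients of p'
    ddp = np.polynomial.polynomial.polyder(coeffs, 2)  # coefficients of p''
    x = x0
    for _ in range(max_iter):
        e1 = fp(x) - np.polynomial.polynomial.polyval(x, dp)    # e'(x)
        e2 = fpp(x) - np.polynomial.polynomial.polyval(x, ddp)  # e''(x)
        step = e1 / e2  # assumes e''(x) != 0 near the extremum
        x -= step
        if abs(step) < tol:
            break
    return x
```

This converges to whichever critical point is nearest $x_0$, which is exactly my problem: one starting point yields only one extremum.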
I know that if I start Newton's method from several different points then I should likely be able to find $N-1$ points of maximum error, but I'm not entirely sure how that should be done. Are there any algorithms for achieving this?
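The closest thing I've come up with is the multi-start idea below: scan a fine grid for sign changes of $e'(x)$, then refine each bracketed root (I use bisection here for robustness, though Newton started inside each bracket would also fit). The grid size and every name in this sketch are my own choices, and I don't know whether this is how Remez implementations actually do it:

```python
import numpy as np

def local_error_extrema(fp, coeffs, a, b, grid_size=2000):
    # Locate the critical points of e(x) = f(x) - p(x) on [a, b] by
    # bracketing the sign changes of e'(x) on a fine grid, then refining
    # each bracket by bisection. The endpoints are kept too, since |e|
    # typically attains local maxima there as well.
    dp = np.polynomial.polynomial.polyder(coeffs)
    def e1(x):
        return fp(x) - np.polynomial.polynomial.polyval(x, dp)
    xs = np.linspace(a, b, grid_size)
    vals = e1(xs)
    extrema = [a, b]
    for i in range(grid_size - 1):
        if vals[i] * vals[i + 1] < 0:  # sign change brackets a root of e'
            lo, hi, flo = xs[i], xs[i + 1], vals[i]
            for _ in range(60):        # bisect down to floating-point noise
                mid = 0.5 * (lo + hi)
                fmid = e1(mid)
                if flo * fmid <= 0:
                    hi = mid
                else:
                    lo, flo = mid, fmid
            extrema.append(0.5 * (lo + hi))
    return np.sort(np.array(extrema))
```

If the grid is fine enough to separate the extrema, this should return the endpoints plus every interior critical point, from which the $N+2$ alternating samples can be picked. But is there a more principled way?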