I am solving optimization problems in which I try to find the minimum of a function over some sample space $\mathcal{X}$, i.e., $\min_{x\in\mathcal{X}} f(x)$. The optimization algorithm I am using is based on trial points $x'$ sampled from $\mathcal{X}$. For the sake of argument, let's say $\mathcal{X} = [0,1]$ is the unit interval.
I have been solving some problems where the solution lies on the boundary, i.e., $x=0$ or $x=1$ could be the solution to the minimization problem. The way I have been picking my potential solutions (trial points) is to sample $x'$ from a Uniform(0,1) distribution.
Now, what my question really is, is whether or not I will ever sample 0 or 1 from that uniform distribution. From a practical point of view I don't think it will occur; from a theoretical point of view I am not sure either. Isn't the probability of sampling any one single number from a continuous distribution exactly 0? Or is there some positive probability that I will sample the endpoints of the interval?
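To make the theoretical point concrete: for a continuous random variable, the probability of any single point is indeed exactly zero, because it is the integral of the density over a degenerate interval. For $X \sim \text{Uniform}(0,1)$ with density $f_X(x) = 1$ on $[0,1]$:

```latex
P(X = c) = \int_c^c f_X(x)\,\mathrm{d}x = 0 \quad \text{for every } c \in [0,1],
\qquad \text{so} \quad P(X \in \{0, 1\}) = 0 + 0 = 0.
```

So in the idealized continuous model, hitting either endpoint is a probability-zero event; what a floating-point sampler actually returns is a separate, discrete question.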
However, running some R code that samples from a Uniform(0,1) distribution 100,000,000 times, I am able to sample 1, but not 0 (or maybe, up to machine precision, I am?):
```r
> x = runif(100000000)
> min(x)
[1] 2.142042e-08
> max(x)
[1] 1
```
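One caveat about the session above: `max(x)` printing as `1` may just be R's default 7-significant-digit printing of a value slightly below 1, and `?runif` states that `runif` will not generate either of the extreme values (when `max > min`). A minimal sketch of how to check exactly rather than relying on printed output (smaller sample size used here for speed):

```r
# Sample from Uniform(0,1) and test the endpoints by exact comparison,
# rather than trusting the default 7-digit printed value of max(x).
set.seed(1)
x <- runif(1e6)

any(x == 0)   # FALSE: per ?runif, the extreme values are never generated
any(x == 1)   # FALSE

# Show the maximum at full double precision; it is strictly below 1.
print(max(x), digits = 17)
print(1 - max(x), digits = 17)
```

Here `1 - max(x)` makes the gap visible even when `max(x)` rounds to `1` in the default printing.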