In astronomy, the Doppler effect is used to estimate the radial velocities of stars and other celestial bodies and, via Hubble's law, the approximate distances to distant galaxies.
Using the same principle, let's say we divide an image (photograph) into pixels and that we know the frequency of the light corresponding to the color of every pixel. (As pointed out in some answers and comments, no camera can do this now, so assume that somehow we have this information.)
We could then sort the pixels by frequency, choose a base frequency (the median or the mode perhaps), and then conceptually interpret the other frequencies as "shifts" to the red or blue, from the base frequency.
Using that information together with the Doppler formulas, we could calculate a "virtual speed difference" for each pixel, which would depend on the frequency of the color of the pixel's light.
Additionally, if we know the time over which the image was captured (for example, the shutter speed of the camera that was used), we could calculate a "virtual distance" for each pixel: the pixel's "virtual speed difference" times the capture time.
Combining the above with the real distance from the camera to any one object in the image, we could then estimate all other real distances, for all pixels of the image, relative to the one that we know.
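To make the pipeline concrete, here is a minimal sketch of the "virtual Doppler" idea as I understand it. The function name and the choice of the non-relativistic Doppler formula v = c·(f − f0)/f0 are my own assumptions for illustration; the median is used as the base frequency, as suggested above.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s


def virtual_doppler_distances(freqs, capture_time, known_idx, known_distance):
    """Hypothetical 'virtual Doppler' pipeline (illustrative assumption, not
    an established technique).

    freqs         -- per-pixel light frequencies in Hz (1-D array)
    capture_time  -- exposure/shutter time in seconds
    known_idx     -- index of the one pixel whose real distance we know
    known_distance -- that pixel's real distance in meters
    """
    freqs = np.asarray(freqs, dtype=float)
    f0 = np.median(freqs)  # base frequency (median, per the question)

    # Non-relativistic Doppler: f = f0 * (1 + v/c)  =>  v = c * (f - f0) / f0
    virtual_speed = C * (freqs - f0) / f0

    # "Virtual distance" = virtual speed difference x capture time.
    virtual_dist = virtual_speed * capture_time

    # Rescale so the known pixel gets its real distance. Note this breaks
    # down if the known pixel happens to sit at the base frequency
    # (virtual_dist == 0 there), which already hints at a problem.
    scale = known_distance / virtual_dist[known_idx]
    return virtual_dist * scale
```

For example, with three pixels at 4, 5, and 6 × 10^14 Hz, the middle pixel is the base and gets distance 0, while the other two come out symmetric with opposite signs, which illustrates that pixels "bluer" than the base get negative virtual distances under this scheme.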
Could this work?
If not, how could the process/algorithm be improved to make it work?
Assume we only have one photograph/image and can't use ML models.
Clarification
I understand that whatever actual movement of the objects/subjects in the image there is, and hence any real Doppler effect, will be far too small to measure/detect. What I intended to convey with the question is: what happens if we interpret the colors of an image as if they were purely the effect of a Doppler shift from an arbitrary base color we pick from the image? (hence the term "virtual Doppler")
I know this isn't what actually happens (i.e., all light travels at c), but what if we interpret it that way?