I'm trying to find a way to read any .png, .jpg, or .tiff image and return the coordinates of all black or grey pixels in it.
I'm thinking of picking a threshold grey value and writing out the coordinates of every pixel darker than that. I'm not sure how to handle reading the image, however. I'm aiming for my result to be a list of all black pixels in the image, like so:
[x-coord, y-coord, black]
I've looked into using cv.imread to read out the coordinates of pixels, but as far as I can tell, it works the opposite way to what I want: it takes coordinates as a parameter and returns the RGB values. Does anyone have tips or methods to make this work?
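A minimal sketch of the thresholding idea described above, assuming NumPy (and optionally OpenCV for loading). The function name `dark_coords` and the default threshold of 50 are my own choices, not from the question; it operates on a 2-D grayscale array, which is what `cv2.imread(path, cv2.IMREAD_GRAYSCALE)` would return:

```python
import numpy as np

def dark_coords(gray, threshold=50):
    """Return [x, y, "black"] for every pixel darker than `threshold`.

    `gray` is a 2-D array of intensities 0-255, e.g. from
    cv2.imread(path, cv2.IMREAD_GRAYSCALE).
    """
    # np.nonzero on the boolean mask yields row (y) and column (x) indices.
    ys, xs = np.nonzero(gray < threshold)
    return [[int(x), int(y), "black"] for x, y in zip(xs, ys)]

# Example on a tiny synthetic "image": two dark pixels in column 1.
img = np.array([[255, 10],
                [200, 40]], dtype=np.uint8)
print(dark_coords(img))  # [[1, 0, 'black'], [1, 1, 'black']]
```

Note that image arrays are indexed row-first, so the row index is the y-coordinate and the column index is the x-coordinate; the function swaps them to match the `[x, y, black]` format asked for.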
For anyone with similar questions: I solved this using the answer below, then turned the NumPy array into a list using np.ndarray.tolist(). Additionally, since I only got a truncated version of the results, I used:
import sys
np.set_printoptions(threshold=sys.maxsize)
Now it was simple to print any element from the list using indices.
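For illustration, here is what those follow-up steps might look like together (the answer being referred to isn't shown here, so the `np.argwhere` call is an assumption about the approach; the array contents are made up):

```python
import sys
import numpy as np

# Lift NumPy's print truncation so full arrays print without "...".
np.set_printoptions(threshold=sys.maxsize)

# Hypothetical grayscale image; argwhere returns (row, col) = (y, x) pairs
# for every pixel darker than the threshold.
img = np.array([[0, 255],
                [30, 255]], dtype=np.uint8)
coords = np.argwhere(img < 50)

# Convert to a plain Python list so elements are easy to index and print.
coord_list = coords.tolist()
print(coord_list[0])  # [0, 0]
```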



