I am building an input dataset that will contain a couple of thousand images. They do not all have the same size, but they do have the same number of channels. I need to combine these differently sized images into one stack.
The axis order is `(channels, height, width)`; image shapes are `(3, 240, 270)`, `(3, 100, 170)`, etc. I have tried appending along axis 0 and axis 1, and inserting too.
```python
Images = np.append(Images, image, axis=0)
```

This fails with:

```
  File "d:/Python/advanced3DFacePointDetection/train.py", line 25, in <module>
    Images = np.append(Images, item, axis=0)
  File "C:\Users\NIK\AppData\Roaming\Python\Python37\site-packages\numpy\lib\function_base.py", line 4694, in append
    return concatenate((arr, values), axis=axis)
ValueError: all the input array dimensions except for the concatenation axis must match exactly
```

The ideal output shape would be something like `(number of images, 3, ...)`: 3 for the number of channels, with the varying image dimensions after that.
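A minimal sketch of the error (with made-up image shapes matching those above): `np.append` delegates to `np.concatenate`, which requires every dimension except the concatenation axis to match exactly, so two images of different sizes cannot be joined this way.

```python
import numpy as np

# Two hypothetical images: same channel count, different heights/widths.
a = np.zeros((3, 240, 270))
b = np.zeros((3, 100, 170))

# Add a leading axis to each and try to concatenate along it.
# All other dimensions (240 vs 100, 270 vs 170) differ, so this raises.
try:
    stacked = np.append(a[np.newaxis], b[np.newaxis], axis=0)
except ValueError as e:
    print("ValueError:", e)
```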
`np.concatenate`, and the related `stack` and `append`, create a single multidimensional array, for example of shape `(n_images, height, width, channels)`. Every image in that array has to have the same dimensions; there's no way around it. And it seems that most, if not all, ML packages assume a similar consistency in size. You could, of course, keep a list of images with different sizes, and even build an object-dtype array from that list, but what's the point if you can't train or test with such a list?
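To make the two options concrete, here is a hedged sketch (the image shapes are made up to match the question). Option 1 wraps the ragged list in an object-dtype array, which preserves each image's shape but supports no vectorized math across elements. Option 2 forces every image to a common shape so `np.stack` can build the regular block that ML frameworks expect; `np.resize` here merely crops or repeats raw values, so in practice you would use a proper interpolating resize from PIL or OpenCV instead.

```python
import numpy as np

# Ragged input: same channel count, different spatial sizes.
images = [np.zeros((3, 240, 270)), np.zeros((3, 100, 170))]

# Option 1: object-dtype array -- essentially a NumPy wrapper around the list.
ragged = np.empty(len(images), dtype=object)
for i, img in enumerate(images):
    ragged[i] = img
print(ragged.shape)      # (2,) -- one slot per image
print(ragged[0].shape)   # (3, 240, 270) -- each element keeps its own shape

# Option 2: coerce everything to one target shape, then stack normally.
# np.resize crops/repeats raw values; swap in a real image resize for training.
target = (3, 100, 170)
uniform = np.stack([np.resize(img, target) for img in images])
print(uniform.shape)     # (2, 3, 100, 170)
```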