A file provides me with a 400x400x200x1 array and a shape. Depending on the data in the array, the shape changes. My task is to adapt the 400x400x200x1 array to the data it contains.
For example:
```python
import numpy as np

shape = np.array([20, 180, 1, 1])
b = []
l = np.load("testfile.npy")
d = np.reshape(l[:shape[0], :shape[1], :shape[2], :shape[3]],
               (shape[0], shape[1])).transpose()
b.append(d)
```

The idea is to create a new array whose size is adapted to its data. Now comes the problem: I have to do this process several times, but each time I do it, my RAM usage increases:
```python
import numpy as np
import time

shape = np.array([20, 180, 1, 1])
b = []
for j in range(9):
    l = np.load("testfile.npy")
    d = np.reshape(l[:shape[0], :shape[1], :shape[2], :shape[3]],
                   (shape[0], shape[1])).transpose()
    time.sleep(2)
    b.append(d)
```
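For reference, here is a self-contained version of the loop that shows the same behavior; a zero-filled array stands in for the contents of `testfile.npy`, which I am assuming is a 400x400x200x1 float array:

```python
import numpy as np

shape = np.array([20, 180, 1, 1])
b = []
for j in range(9):
    # stand-in for np.load("testfile.npy"): an assumed 400x400x200x1 array
    l = np.zeros((400, 400, 200, 1), dtype=np.float32)
    d = np.reshape(l[:shape[0], :shape[1], :shape[2], :shape[3]],
                   (shape[0], shape[1])).transpose()
    b.append(d)

# each appended d is only 180x20, yet memory grows with every iteration
print(len(b), b[0].shape)
```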
Is that just because the appended arrays are so big? The output array that I append only has a size of 180x20, but the RAM usage increases by about 0.12 GB each time. Is there a more efficient way to store the arrays, without temp files?
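A quick check of the sizes involved (assuming float32 data, since 400 * 400 * 200 * 4 bytes is roughly the 0.12 GB I see per iteration) shows that the appended array itself is tiny, but it still shares memory with the full loaded array:

```python
import numpy as np

# zero-filled stand-in for the assumed contents of testfile.npy
l = np.zeros((400, 400, 200, 1), dtype=np.float32)
d = np.reshape(l[:20, :180, :1, :1], (20, 180)).transpose()

print(d.nbytes)                # 14400 bytes -- the 180x20 result is tiny
print(np.shares_memory(d, l))  # True -- d is a view into the full array
print(l.nbytes)                # 128000000 bytes, i.e. ~0.12 GB
```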
Thanks, and sorry for my English.