
I'm trying to do raster algebra with two rasters, but when I convert the rasters to arrays I can't do the operation, because the files do not have the same extent.

example:

    from osgeo import gdal
    import numpy as np

    IMG1 = gdal.Open('C:/IMG1.TIF')
    IMG2 = gdal.Open('C:/IMG2.TIF')

    img1 = IMG1.ReadAsArray()
    img2 = IMG2.ReadAsArray()

    img1 + img2
    >>>>> ValueError: operands could not be broadcast together with shapes (5353,3352) (5548,3232)

So, if I look at the geotransform of each image, I can see that img1 and img2 have the same pixel size (100) but a different extent:

    gt1 = IMG1.GetGeoTransform()
    gt1
    >>>>> (482084.999999999, 100.0, 0.0, 5652715.00001374, 0.0, -100.0)
    gt2 = IMG2.GetGeoTransform()
    gt2
    >>>>> (475184.999999999, 100.0, 0.0, 5643715.00001535, 0.0, -100.0)

So I used this operation to get the intersection extent, but I don't know how to apply it:

    r1 = [gt1[0], gt1[3], gt1[0] + (gt1[1] * IMG1.RasterXSize), gt1[3] + (gt1[5] * IMG1.RasterYSize)]
    r2 = [gt2[0], gt2[3], gt2[0] + (gt2[1] * IMG2.RasterXSize), gt2[3] + (gt2[5] * IMG2.RasterYSize)]
    intersection = [max(r1[0], r2[0]), min(r1[1], r2[1]), min(r1[2], r2[2]), max(r1[3], r2[3])]
  • You need to get those coordinates into row/column addressing for both images (the offsets will be different, since the images have different origins), then use ReadAsArray with parameters to read only the common extent. When you write out a new raster you need to set its geotransform to the intersection extent, using the rows and columns of that intersection. Have a look at pcjericks.github.io/py-gdalogr-cookbook/raster_layers.html, section "Calculate zonal statistics", for an example of reading as an array with offset parameters; a sketch of this idea follows below. Commented Mar 31, 2017 at 3:18
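A minimal sketch of what the comment suggests, assuming both rasters are aligned to the same 100 m grid and share the pixel size. The world_to_pixel helper and the variable names below are illustrative, not part of the original post; they reuse gt1, gt2 and intersection from the question's code.

    # Convert a map coordinate (x, y) to column/row indices for a given geotransform.
    # Illustrative helper; assumes a north-up raster (no rotation terms).
    def world_to_pixel(gt, x, y):
        col = int(round((x - gt[0]) / gt[1]))
        row = int(round((y - gt[3]) / gt[5]))
        return col, row

    # Upper-left corner of the intersection, expressed in each image's own pixel coordinates.
    xoff1, yoff1 = world_to_pixel(gt1, intersection[0], intersection[1])
    xoff2, yoff2 = world_to_pixel(gt2, intersection[0], intersection[1])

    # Size of the common window in pixels (the same for both images, since the pixel size matches).
    cols = int(round((intersection[2] - intersection[0]) / gt1[1]))
    rows = int(round((intersection[1] - intersection[3]) / -gt1[5]))

    img1 = IMG1.ReadAsArray(xoff1, yoff1, cols, rows)
    img2 = IMG2.ReadAsArray(xoff2, yoff2, cols, rows)
    result = img1 + img2  # both arrays now cover the same ground area and have the same shape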

1 Answer


Found the solution:

    # Clip both images to their common (intersection) extent before doing the algebra.
    gt1 = IMG1.GetGeoTransform()
    gt2 = IMG2.GetGeoTransform()

    pixelWidth = gt1[1]
    pixelHeight = gt1[5]

    # Full extent of each image: [xmin, ymax, xmax, ymin]
    r1 = [gt1[0], gt1[3], gt1[0] + (gt1[1] * IMG1.RasterXSize), gt1[3] + (gt1[5] * IMG1.RasterYSize)]
    r2 = [gt2[0], gt2[3], gt2[0] + (gt2[1] * IMG2.RasterXSize), gt2[3] + (gt2[5] * IMG2.RasterYSize)]
    intersection = [max(r1[0], r2[0]), min(r1[1], r2[1]), min(r1[2], r2[2]), max(r1[3], r2[3])]
    xmin = intersection[0]
    ymax = intersection[1]
    xmax = intersection[2]
    ymin = intersection[3]

    # Offsets and window size to read, computed separately for each image
    # because the two rasters have different origins (pixels are square, so
    # pixelWidth is used for both axes).
    xoff1 = int(round((xmin - gt1[0]) / pixelWidth))
    yoff1 = int(round((gt1[3] - ymax) / pixelWidth))
    xoff2 = int(round((xmin - gt2[0]) / pixelWidth))
    yoff2 = int(round((gt2[3] - ymax) / pixelWidth))
    xcount = int(round((xmax - xmin) / pixelWidth))
    ycount = int(round((ymax - ymin) / pixelWidth))

    srs = IMG1.GetProjectionRef()  # needed later to export the result with a SRS

    # In-memory dataset covering only the intersection extent.
    target_ds = gdal.GetDriverByName('MEM').Create('', xcount, ycount, 1, gdal.GDT_Byte)
    target_ds.SetGeoTransform((xmin, pixelWidth, 0, ymax, 0, pixelHeight))

    # Read only the common window from each image; both arrays now have the same shape.
    img1 = IMG1.ReadAsArray(xoff1, yoff1, xcount, ycount)
    img2 = IMG2.ReadAsArray(xoff2, yoff2, xcount, ycount)
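One way to finish the script and actually export the result of the raster algebra (a minimal sketch, not part of the original answer; the output path 'C:/SUM.TIF' and the simple img1 + img2 operation are illustrative, and creating target_ds with gdal.GDT_Float32 instead of GDT_Byte may be safer if the result can exceed 255):

    # Write the result of the algebra into the in-memory dataset, attach the SRS,
    # then copy it out to a GeoTIFF on disk.
    result = img1 + img2                      # any NumPy raster algebra works here
    target_ds.GetRasterBand(1).WriteArray(result)
    target_ds.SetProjection(srs)
    gdal.GetDriverByName('GTiff').CreateCopy('C:/SUM.TIF', target_ds)
    target_ds = None                          # flush and close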
  • I'm not quite sure what you are doing with the target_ds? With the script as a standalone, the srs and target_ds lines lead to nothing. Perhaps you can add the rest of the script to make it complete. Commented Feb 16, 2021 at 15:55
