I'm trying to do raster algebra with two rasters, but when I read them as arrays I can't do the operation because the files do not have the same extent.
example:
    from osgeo import gdal
    import numpy as np

    IMG1 = gdal.Open('C:/IMG1.TIF')
    IMG2 = gdal.Open('C:/IMG2.TIF')

    img1 = IMG1.ReadAsArray()
    img2 = IMG2.ReadAsArray()

    img1 + img2
    >>>>> ValueError: operands could not be broadcast together with shapes (5353,3352) (5548,3232)

Looking at the geotransform of each image, img1 and img2 have the same pixel size (100) but a different extent:
    gt1 = IMG1.GetGeoTransform()
    gt1
    >>>>> (482084.999999999, 100.0, 0.0, 5652715.00001374, 0.0, -100.0)

    gt2 = IMG2.GetGeoTransform()
    gt2
    >>>>> (475184.999999999, 100.0, 0.0, 5643715.00001535, 0.0, -100.0)

So I used this calculation to get the intersection extent, but I don't know how to apply it to the arrays:
    r1 = [gt1[0], gt1[3], gt1[0] + (gt1[1] * IMG1.RasterXSize), gt1[3] + (gt1[5] * IMG1.RasterYSize)]
    r2 = [gt2[0], gt2[3], gt2[0] + (gt2[1] * IMG2.RasterXSize), gt2[3] + (gt2[5] * IMG2.RasterYSize)]
    intersection = [max(r1[0], r2[0]), min(r1[1], r2[1]), min(r1[2], r2[2]), max(r1[3], r2[3])]
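What I think I need is something like the sketch below, which converts the intersection extent back into pixel offsets for each raster and slices both arrays to the common window so they have the same shape. This is only my own untested attempt; the variable names (off_x1, cols, etc.) are just placeholders I made up, and I'm not sure the rounding is handled correctly:

    # intersection = [left, top, right, bottom] in map coordinates
    left, top, right, bottom = intersection

    # Pixel offsets of the intersection window inside each raster:
    # column = (x - origin_x) / pixel_width, row = (y - origin_y) / pixel_height
    off_x1 = int(round((left - gt1[0]) / gt1[1]))
    off_y1 = int(round((top - gt1[3]) / gt1[5]))
    off_x2 = int(round((left - gt2[0]) / gt2[1]))
    off_y2 = int(round((top - gt2[3]) / gt2[5]))

    # Size of the intersection window in pixels (the same for both rasters,
    # since the pixel size is identical)
    cols = int(round((right - left) / gt1[1]))
    rows = int(round((bottom - top) / gt1[5]))

    # Slice both arrays to the common window and do the algebra there
    win1 = img1[off_y1:off_y1 + rows, off_x1:off_x1 + cols]
    win2 = img2[off_y2:off_y2 + rows, off_x2:off_x2 + cols]

    result = win1 + win2

Is this the right way to apply the intersection, or is there a better way to align the two arrays before doing the algebra?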