I'm working on a project that renders water simulated through smoothed-particle hydrodynamics (SPH) with a non-photorealistic look, intended for use in games.
At the current stage of the project, all of this rendering is done in C++ with OpenGL and GLSL, using many shader passes that store intermediate textures in FBOs; a final shader then combines everything.
Everything is working fine except the framerate, which sits between 25 and 30 FPS at best, rendering everything, including the intermediate textures, at 1024x768 pixels. Since some of these intermediate steps involve time-consuming image-smoothing algorithms, I thought of shrinking the textures generated in the intermediate steps to make those passes faster, trying to find a sweet spot between the reduction factor (divided by 2, 4, 8...) and the final rendered image quality.
My question is precisely this: how can I work with smaller textures in the intermediate steps and then use them in a final shader pass that renders the image at a larger resolution?