How to work with smaller intermediate textures in OpenGL and GLSL?


I'm working on a project that renders water simulated through smoothed-particle hydrodynamics (SPH) with a non-photorealistic look, for use in games.

At the current stage of the project, all of this rendering is done through OpenGL and GLSL in C++, with many shader passes that store intermediate textures in FBOs; those textures are later used by a final shader that brings everything together.
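For context, each intermediate pass is set up roughly like this (the handles and the 16-bit float format are illustrative, not my actual code):

```cpp
// One intermediate pass of the pipeline: render into a texture attached
// to an FBO, then feed that texture to the next pass as input.
GLuint fbo, colorTex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &colorTex);

glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1024, 768, 0,
             GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
// ... draw this pass (particle splatting, smoothing, etc.) ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```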

Everything works fine except the framerate, which sits between 25 and 30 FPS in the best case, rendering everything, including the intermediate textures, at 1024x768. Since some of the intermediate steps involve time-consuming image-smoothing algorithms, I thought of reducing the dimensions of the textures generated in those steps to make them faster, looking for a sweet spot between the reduction factor (divided by 2, 4, 8...) and the quality of the final rendering.
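The reduction I have in mind would look something like the sketch below, reusing the handles from the snippet above; `divisor` stands for the 2/4/8 factor I mentioned:

```cpp
// Same setup as above, but the intermediate texture is allocated at a
// fraction of 1024x768 and the viewport is matched before running the
// expensive smoothing pass into it.
const int divisor = 4;               // sweet spot still to be found
const int smallW  = 1024 / divisor;  // 256
const int smallH  = 768  / divisor;  // 192

glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, smallW, smallH, 0,
             GL_RGBA, GL_FLOAT, nullptr);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, smallW, smallH);    // fragment count drops by divisor^2
// ... run the smoothing shader here ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, 1024, 768);         // restore for the final full-size pass
```

Since fragment work scales with pixel count, a divisor of 4 should cut the cost of that pass by roughly a factor of 16, which is what makes this attractive.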

My question is precisely this: how can I work with smaller textures in the intermediate steps and then use them in a final shader pass that renders an image at a larger resolution?
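What I picture for the final, full-resolution pass is simply sampling the smaller textures with normalized coordinates and letting GL_LINEAR filtering do the upscaling, roughly like this (uniform and variable names are made up), but I don't know whether this is the correct approach or whether it will look acceptable:

```cpp
// Final full-resolution composite: sampler2D uniforms can be bound to
// textures of any size, since texture() takes normalized coordinates.
const char* finalFrag = R"glsl(
#version 330 core
in vec2 vTexCoord;                // 0..1 across the full 1024x768 target
out vec4 fragColor;

uniform sampler2D uSmoothedSmall; // e.g. 256x192 intermediate result
uniform sampler2D uSceneFull;     // full-resolution input

void main() {
    // GL_LINEAR filtering upsamples the small texture on the fly.
    vec4 water = texture(uSmoothedSmall, vTexCoord);
    vec4 scene = texture(uSceneFull, vTexCoord);
    fragColor = mix(scene, water, water.a);
}
)glsl";
```

As far as I understand, the small and the full-size textures could be sampled with the same vTexCoord since coordinates are normalized; my doubt is mainly about the quality of the implicit upscaling.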
