In my game, I have a 3D shader that does lighting using color ramps on models with very low resolution textures. Basically, the vertex shader calculates color values for direct light and ambient occlusion, and then I look up into a color ramp what color to draw, and multiply this by the texture map.
Here's some pseudocode for that:
```
// vertex shader:
color_vert.r = ambient_occlusion(vertex)
color_vert.g = direct_lighting(vertex)
color_vert.b = shadow_lighting(vertex)

// fragment shader:
// First, the color is whatever the direct lighting was
color_fragment = direct_lighting_ramp_lookup(color_vert.g)
// Multiply by ambient occlusion and shadow
color_fragment *= ambient_lighting_ramp_lookup(color_vert.r)
color_fragment *= shadow_lighting_ramp_lookup(color_vert.b)
// Multiply by the surface texture
color_fragment *= texture_lookup(color_vert.uv)
```

Now, this kind of lighting would be fine if I had very high resolution textures, but unfortunately mine are very low resolution. So each texel (a single pixel of the texture) covers a large block of screen pixels, and the lighting gets interpolated smoothly across that block.
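To make the problem concrete, here's a small Python sketch of the mismatch. All the numbers are made up for illustration: a 16-texel-wide texture stretched over 256 screen pixels, with a lighting value linearly interpolated between two vertices. Many screen pixels land in the same texel, yet each gets a different lighting value:

```python
TEX_SIZE = 16        # texture width in texels (assumed for illustration)
SCREEN_SPAN = 256    # screen pixels the textured surface covers (assumed)

def interpolated_light(x):
    """Lighting value linearly interpolated across the span: 0.0 to 1.0."""
    return x / (SCREEN_SPAN - 1)

def texel_index(x):
    """Which texel a given screen pixel samples from."""
    return int(x * TEX_SIZE / SCREEN_SPAN)

# Screen pixels 0..15 all sample texel 0, but each one receives a
# different interpolated lighting value -- the smooth gradient I'm seeing:
for x in range(0, 16, 5):
    print(f"pixel {x:3d} -> texel {texel_index(x)}, light {interpolated_light(x):.3f}")
```

What I want instead is for all pixels inside one texel to receive a single, constant lighting value.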
Here's an image of what that looks like:
And here's a closeup to show you what I mean: 
Notice how the color is smoothly interpolated across the low-resolution texels? How can I write my shader so that the lighting is clamped on a per-texel basis? Is such a thing even possible? The only solution I can think of is to first generate a lightmap for the whole scene with a 1:1 mapping to texels, but I'm afraid that would be too slow or use too much memory.
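To put a rough number on the memory worry, here's a back-of-envelope estimate for the lightmap idea. The atlas dimensions and channel count are made up; substitute the real sizes for your scene:

```python
# Hypothetical lightmap sizing: one lightmap texel per texture texel,
# storing the three lighting terms (AO, direct, shadow) at one byte each.
atlas_w, atlas_h = 1024, 1024   # assumed total texel resolution of the scene
channels = 3                    # AO, direct, shadow

bytes_needed = atlas_w * atlas_h * channels
print(f"{bytes_needed / 2**20:.1f} MiB")
```

At these (assumed) sizes the storage itself is small; my bigger concern is the cost of regenerating the lightmap whenever the lighting changes.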

