Background:
I'm trying to render lightning within a volumetrically rendered cloud. I'm currently representing the lightning as a series of line segments.
At first I used a distance field stored in a 128^3 texture to calculate the lighting from the strike at each cloud sample. However, I wanted to render the actual core lightning bolt, and a distance check against this field produced a blocky, blobby bolt with lots of artifacts.
I then switched to storing the ID of the closest segment in each cell of a 128^3 buffer and analytically computing the distance from the current sample point to that segment. While this isn't 100% accurate, the distances are generally good, and I can render the core lightning bolt without nearly as many artifacts.
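Roughly, the per-sample query looks like this (a C++-style sketch; the float3 helpers and function names are just illustrative, and I'm assuming the voxel index for the sample point has already been computed):

```
#include <algorithm>
#include <cmath>

struct float3 { float x, y, z; };

static float3 sub(float3 a, float3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float  dot(float3 a, float3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float  len(float3 a)           { return std::sqrt(dot(a, a)); }

struct Segment { float3 a, b; };

// Exact point-to-segment distance: project p onto the segment, clamp to [0,1], measure.
float segmentDistance(float3 p, const Segment& s)
{
    float3 pa = sub(p, s.a);
    float3 ba = sub(s.b, s.a);
    float  t  = std::clamp(dot(pa, ba) / dot(ba, ba), 0.0f, 1.0f);
    float3 closest = { s.a.x + ba.x * t, s.a.y + ba.y * t, s.a.z + ba.z * t };
    return len(sub(p, closest));
}

// Per-sample query: the 128^3 grid stores only the ID of the nearest segment,
// so the distance itself stays analytic and free of voxelisation artifacts.
float lightningDistance(float3 p, int voxelIndex,
                        const unsigned char* idGrid,   // 128^3 cells, one segment ID each
                        const Segment* segments)
{
    return segmentDistance(p, segments[idGrid[voxelIndex]]);
}
```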
The problem is that I need to be able to do this for an arbitrarily large scene, and for a more or less arbitrary number of segments.
At 128^3 that's 2 MB at 1 byte per cell (enough IDs for 256 segments). At 1024^3 it's 1 GB at 1 byte per cell.
This scaling is unsustainable, yet the data required to merely represent a bunch of line segments is quite small.
The real issue is quickly calculating the distance for billions of sample points. Because the clouds are volumetric, I have to take many 3D samples per pixel while ray marching. I can't afford to add another factor of N to my per-ray complexity and still hit real-time framerates.
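To make the cost concrete, here's roughly what the per-pixel march looks like if I query the distance by brute force (reusing the float3/Segment/segmentDistance helpers from the sketch above; the step count and exponential falloff are placeholders):

```
// Every one of the numSteps samples pays for the inner distance loop,
// so brute force over all segments is O(numSteps * numSegments) per pixel.
float marchLightningGlow(float3 origin, float3 dir,
                         int numSteps, float stepSize,
                         const Segment* segments, int numSegments)
{
    float glow = 0.0f;
    for (int i = 0; i < numSteps; ++i)
    {
        float3 p = { origin.x + dir.x * stepSize * i,
                     origin.y + dir.y * stepSize * i,
                     origin.z + dir.z * stepSize * i };

        float d = 1e30f;
        for (int s = 0; s < numSegments; ++s)              // the extra N I can't afford
            d = std::min(d, segmentDistance(p, segments[s]));

        glow += std::exp(-d);                              // placeholder glow falloff
    }
    return glow;
}
```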
But given the simplicity of what I'm measuring distance to (the closest line segment, or even more generally "something that looks like lightning"), how specific the use case is, and the sheer amount of blank space, I get the feeling there should be a way to simplify the whole structure and reduce its memory footprint without majorly impacting performance.
Question:
Is there any method to reduce the memory usage of a signed distance field or closest-object map, given that it represents the distance to a series of connected line segments?
There are only a few things I can think of, and all have major issues; what's more, none of these solutions is really specific to lightning in the way I have described:
Maybe I could put a bunch of boxes around the core of the lightning, somehow use that to skip the blank space, take the closest distance to the boxes instead, and grab the closest line segment stored in the associated edge cell. The issue is that this requires testing distances several times, and I'm not sure I can bound how many tests I do. I would also still need some way to traverse these boxes.
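Roughly what I imagine, assuming each box stores a small range into a packed list of segment IDs (again reusing the helpers from the first sketch; maxGlowRadius and the linear loop over boxes are placeholders for whatever traversal scheme would actually be needed):

```
// Each box wraps part of the bolt and points at a packed list of segment IDs.
struct LightningBox
{
    float3 lo, hi;                       // axis-aligned bounds
    int    firstSegment, segmentCount;   // range in a packed ID array
};

// Distance from a point to an axis-aligned box (0 if inside).
float boxDistance(float3 p, const LightningBox& b)
{
    float dx = std::max(std::max(b.lo.x - p.x, p.x - b.hi.x), 0.0f);
    float dy = std::max(std::max(b.lo.y - p.y, p.y - b.hi.y), 0.0f);
    float dz = std::max(std::max(b.lo.z - p.z, p.z - b.hi.z), 0.0f);
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

float lightningDistanceViaBoxes(float3 p, float maxGlowRadius,
                                const LightningBox* boxes, int numBoxes,
                                const int* segmentIds, const Segment* segments)
{
    float best = 1e30f;
    for (int b = 0; b < numBoxes; ++b)
    {
        // Cheap reject: skip boxes that can't hold anything closer than what
        // we've found so far, or anything near enough to contribute glow.
        if (boxDistance(p, boxes[b]) > std::min(best, maxGlowRadius))
            continue;
        for (int i = 0; i < boxes[b].segmentCount; ++i)
        {
            int id = segmentIds[boxes[b].firstSegment + i];
            best = std::min(best, segmentDistance(p, segments[id]));
        }
    }
    return best;
}
```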
Maybe I could build some sort of list of lists, where each pixel holds an index into another array containing the IDs of the segments that could overlap that pixel in the viewport within a given distance. I'd have to rebuild the lists every frame lightning is active, which could be time-consuming, but basically I'd do the following (see the sketch after this list):
1. For each line segment:
   - 1.a project its AABB, accounting for radius, onto the viewport
   - 1.b fill in the pixels within the capsule footprint
   - 1.c for each pixel within the capsule distance
   - 1.d increment that pixel's counter
2. Sum up the per-pixel increments.
3. Allocate the total space for the lists (probably a pre-allocated size); offsets can now be calculated for each pixel. Possibly use stream compaction.
4. For each line segment:
   - 4.a project its AABB, accounting for radius, onto the viewport
   - 4.b fill in the pixels within the capsule footprint
   - 4.c for each pixel within the capsule
   - 4.d append the segment ID at that pixel (will need atomic ops per pixel)
5. Within each pixel's segment list, sort by depth to the screen.
6. When rendering, use the sorted list to figure out which segment's distance to use; I may have to compare several, but hypothetically there are very few candidates for the closest most of the time.
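A single-threaded sketch of steps 1-4, just to show the count / prefix-sum / fill structure I have in mind (on the GPU the increments and writes would become atomics and the prefix sum a parallel scan; the per-segment screen footprints are assumed to be precomputed from the projected AABB plus radius, and the depth sort of step 5 is omitted):

```
#include <cstdint>
#include <vector>

// Screen-space footprint of one segment (projected AABB + radius), precomputed.
struct Footprint { int x0, y0, x1, y1; };    // inclusive pixel bounds

void buildPerPixelSegmentLists(const std::vector<Footprint>& footprints, // one per segment
                               int width, int height,
                               std::vector<uint32_t>& counts,   // segments touching each pixel
                               std::vector<uint32_t>& offsets,  // start of each pixel's list
                               std::vector<uint32_t>& lists)    // packed segment IDs
{
    counts.assign(size_t(width) * height, 0u);

    // Step 1: count how many segments touch each pixel (atomicAdd on the GPU).
    for (const Footprint& f : footprints)
        for (int y = f.y0; y <= f.y1; ++y)
            for (int x = f.x0; x <= f.x1; ++x)
                ++counts[size_t(y) * width + x];

    // Steps 2-3: exclusive prefix sum turns counts into per-pixel offsets,
    // and the running total is the size of the packed list buffer.
    offsets.assign(counts.size(), 0u);
    uint32_t total = 0;
    for (size_t i = 0; i < counts.size(); ++i)
    {
        offsets[i] = total;
        total += counts[i];
    }
    lists.assign(total, 0u);

    // Step 4: write each segment's ID into every pixel it touches
    // (the cursor increments would be atomics on the GPU).
    std::vector<uint32_t> cursor(offsets);
    for (uint32_t id = 0; id < footprints.size(); ++id)
    {
        const Footprint& f = footprints[id];
        for (int y = f.y0; y <= f.y1; ++y)
            for (int x = f.x0; x <= f.x1; ++x)
                lists[cursor[size_t(y) * width + x]++] = id;
    }
    // Step 5 (per-pixel sort by depth) would follow here.
}
```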
This solution seems like a lot of effort, and I'm afraid of uncapped memory usage and just bugs in general.