Say I have a parametric shape, WLOG a Coons patch.
I would like to choose the tessellation resolution of this parametric shape based on the rendering parameters, e.g. by sending the AABB of the patch plus the camera position.
This tells me how far the patch is from the camera, and from that I can estimate the sampling resolution needed for the shape.
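To make the estimate concrete, here is one possible CPU-side heuristic (a sketch only; the function name, the 8-pixels-per-quad target, and the clamp to 64 are all illustrative choices, not from any particular engine or API): project the patch's bounding sphere to screen space and size the grid so each quad edge covers a few pixels.

```python
import math

def estimate_grid_resolution(aabb_min, aabb_max, cam_pos, fov_y, screen_h,
                             target_px_per_quad=8.0, max_res=64):
    """Heuristic sketch: pick an n x n sampling grid so each quad edge
    spans roughly `target_px_per_quad` pixels on screen.  All parameters
    and constants here are illustrative assumptions."""
    center = tuple((a + b) * 0.5 for a, b in zip(aabb_min, aabb_max))
    radius = 0.5 * math.dist(aabb_min, aabb_max)   # bounding-sphere radius
    dist = max(math.dist(center, cam_pos), 1e-6)   # camera-to-patch distance
    # Pixels per world unit at that distance (pinhole projection).
    px_per_world = screen_h / (2.0 * dist * math.tan(fov_y * 0.5))
    projected_px = 2.0 * radius * px_per_world     # projected diameter
    n = math.ceil(projected_px / target_px_per_quad)
    return max(1, min(max_res, n))
```

A nearby patch then gets a finer grid than a distant one, which is exactly the "no more resources than necessary" behaviour I am after.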
I would like, if possible, to use no more resources than strictly necessary per frame.
One option is to first compute the required resolution on the CPU, then do a vertex-buffer-less draw where the vertex shader generates the vertex positions from the integer vertex id (gl_VertexID / SV_VertexID).
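The per-vertex work this implies can be sketched on the CPU (illustrative Python standing in for the shader; the function names are mine): map the flat vertex id to a (u, v) grid coordinate, then evaluate the bilinearly blended Coons patch there from its four boundary curves.

```python
def vertex_uv(vertex_id, n):
    """Map a flat vertex id to (u, v) on an (n+1) x (n+1) sampling grid,
    as a vertex shader would from gl_VertexID (CPU stand-in)."""
    row, col = divmod(vertex_id, n + 1)
    return col / n, row / n

def coons_point(u, v, c0, c1, d0, d1):
    """Evaluate a bilinearly blended Coons patch at (u, v).
    c0/c1 are the bottom/top boundary curves (functions of u),
    d0/d1 the left/right boundary curves (functions of v)."""
    lerp = lambda a, b, t: tuple(x + (y - x) * t for x, y in zip(a, b))
    ruled_u = lerp(c0(u), c1(u), v)                 # blend bottom -> top
    ruled_v = lerp(d0(v), d1(v), u)                 # blend left -> right
    bilin = lerp(lerp(c0(0), c0(1), u),             # bilinear corner patch
                 lerp(c1(0), c1(1), u), v)
    # Classic Coons formula: sum of the two ruled surfaces minus the
    # bilinear interpolant of the corners.
    return tuple(a + b - c for a, b, c in zip(ruled_u, ruled_v, bilin))
```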
However, this requires pre-computing the topology (index) buffer and passing it. Since the topology is trivially known (i.e. given the index of a triangle I can immediately tell you the three vertices it touches), is it possible to avoid using an index buffer at all? Can mesh shaders be used for this?
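For what it's worth, the arithmetic that would replace the index buffer is simple. In a non-indexed draw of 6*n*n vertices over an n x n quad grid, each vertex's (u, v) follows purely from its id (again a CPU sketch of the shader math; names are mine):

```python
# Which of the quad's 4 corners each of its 6 non-indexed vertices uses
# (two counter-clockwise triangles per quad).
CORNER_OF = [0, 1, 2, 0, 2, 3]
CORNER_OFFSET = [(0, 0), (1, 0), (1, 1), (0, 1)]

def vertex_uv_nonindexed(vid, n):
    """For a non-indexed draw of 6*n*n vertices over an n x n quad grid,
    compute the (u, v) of vertex `vid` purely from its id -- i.e. what a
    vertex shader could do with gl_VertexID, with no index buffer bound."""
    quad, corner = divmod(vid, 6)
    row, col = divmod(quad, n)
    dc, dr = CORNER_OFFSET[CORNER_OF[corner]]
    return (col + dc) / n, (row + dr) / n
```

As I understand it, mesh shaders fit this pattern even more directly: each workgroup can compute a sub-grid's vertices and write the triangle indices into its output primitive array procedurally, so no buffer-side topology is needed at all (the trade-off being duplicated vertex work in the non-indexed vertex-shader path, which mesh shaders avoid by sharing vertices within a meshlet).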