Timeline for Unity - Creating light that's visible to a shader but not to the camera/player?
Current License: CC BY-SA 3.0
10 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| Apr 27, 2017 at 22:06 | comment | added | Draco18s no longer trusts SE | | Think I'm going to create a shader later that does something like this because it would look cool. (Lights would still be lights; I mean using that same data to deform the mesh.) |
| Apr 27, 2017 at 18:43 | vote | accept | papathor | | |
| Apr 27, 2017 at 13:55 | answer | added | Zebraman | | timeline score: 3 |
| Apr 27, 2017 at 6:41 | comment | added | papathor | | Just added more explanation. This is for a hobby project, so I don't have anything set in stone; I'm mostly just interested in creating a reactive shader that can dynamically morph based on the objects around it. |
| Apr 27, 2017 at 6:40 | history | edited | papathor | CC BY-SA 3.0 | added 343 characters in body |
| Apr 27, 2017 at 5:37 | comment | added | DMGregory♦ | | Edit your question to tell us more about how this data is being used and what end goal or effect you're trying to achieve. You might not need to access this data all in a single shader pass; a lot of graphics techniques are based on flipping the problem around in some way (e.g. moving from forward to deferred rendering, changing geometry × lights to geometry + lights). |
| Apr 27, 2017 at 5:34 | comment | added | papathor | | The main problem I'm having is blending from a potentially large number of different sources. The few examples I've found hard-code x number of properties (e.g. "source0" .. "source25"). Is there a better way you know of than hard-coding a few dozen properties, having CPU logic to switch between them, and then some calculations in the shader to blend them? Thanks for the help :) |
| Apr 27, 2017 at 3:55 | comment | added | DMGregory♦ | | Think about this: how does the GPU know where the light is pointing? Because the CPU told it. So you still have the same data flow from the CPU to the GPU; you haven't necessarily made it any cheaper by calling that bundle of data a "light". While you're right that coupling the CPU and GPU too tightly can be a bottleneck, setting a few uniform vectors each frame is not an unreasonable cost. So, rather than going by what you've heard is slow, profile it and see if this particular case is really a problem that needs solving. |
| Apr 27, 2017 at 3:33 | review | First posts | | | Completed Apr 27, 2017 at 15:03 |
| Apr 27, 2017 at 3:28 | history | asked | papathor | CC BY-SA 3.0 | |
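The comment thread above asks how to blend a potentially large number of light sources without hard-coding dozens of named shader properties ("source0" .. "source25"). A common pattern, not spelled out in this thread, is to pick the nearest K sources on the CPU each frame, pack them into one fixed-size array, and upload it in a single call (in Unity, e.g. `Material.SetVectorArray`, with a matching `float4 _Sources[K]` in the shader). A minimal Python sketch of the CPU-side packing step; all names here are illustrative:

```python
import math

# Fixed array size the shader would declare, e.g. "float4 _Sources[8]" in HLSL.
MAX_SOURCES = 8

def pack_nearest_sources(target, sources, max_sources=MAX_SOURCES):
    """Pick the sources nearest to `target` and pack them into a fixed-size
    list of (x, y, z, intensity) tuples, zero-padded to max_sources.

    This mirrors uploading one uniform array per frame instead of switching
    between dozens of individually named shader properties on the CPU.
    """
    nearest = sorted(sources, key=lambda s: math.dist(target, s["pos"]))
    nearest = nearest[:max_sources]

    packed = [(float(s["pos"][0]), float(s["pos"][1]), float(s["pos"][2]),
               float(s["intensity"])) for s in nearest]
    # Pad with zero-intensity entries so the shader can loop over a fixed count.
    packed += [(0.0, 0.0, 0.0, 0.0)] * (max_sources - len(packed))
    return packed, len(nearest)
```

The shader side then loops over the array and blends, e.g. weighting each entry by inverse distance, so neither side needs per-source property names.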