This work was done for the Unity Neon Challenge, a contest challenging Unity developers to create a real-time environment using the tools that Unity developed for the Adam series. As part of the team, I handled the cloud blanket effect seen in the video.
The clouds used in our environment are rendered using a technique very familiar to the industry at this point: raymarching. Our initial concept came from the well-known Clouds entry on ShaderToy, where Inigo Quilez uses value noise sampling and uniform raymarching to render clouds on the GPU. Of course, raymarching is not natively supported in Unity, so we needed to bring the solution over and adapt it. This first pass ended in lukewarm results:
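As a rough illustration of the approach, here is a minimal Python sketch of uniform raymarching over value noise. The hash function, constants, and step counts are our own placeholders for illustration, not the actual shader code:

```python
import math

def value_noise_3d(x, y, z, seed=0):
    """Hash-based 3D value noise with trilinear interpolation (illustrative)."""
    def hash3(ix, iy, iz):
        # Arbitrary integer hash; any decent lattice hash works here.
        h = (ix * 374761393 + iy * 668265263 + iz * 2147483647 + seed) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 0xFFFF  # value in [0, 1]

    x0, y0, z0 = math.floor(x), math.floor(y), math.floor(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    # Smoothstep fade for C1-continuous interpolation
    fx, fy, fz = (f * f * (3 - 2 * f) for f in (fx, fy, fz))

    def lerp(a, b, t):
        return a + (b - a) * t

    # Sample the 8 lattice corners and trilinearly interpolate
    c = [[[hash3(x0 + i, y0 + j, z0 + k) for k in (0, 1)]
          for j in (0, 1)] for i in (0, 1)]
    return lerp(lerp(lerp(c[0][0][0], c[0][0][1], fz),
                     lerp(c[0][1][0], c[0][1][1], fz), fy),
                lerp(lerp(c[1][0][0], c[1][0][1], fz),
                     lerp(c[1][1][0], c[1][1][1], fz), fy), fx)

def raymarch_density(origin, direction, steps=64, step_size=0.5):
    """Uniform raymarch: accumulate noise density at fixed intervals along a ray."""
    density = 0.0
    for i in range(steps):
        p = tuple(o + d * step_size * i for o, d in zip(origin, direction))
        density += value_noise_3d(*p) * step_size
    return density
```

The per-pixel cost is easy to see from this sketch: every pixel pays for `steps` noise lookups, which is exactly why a naive port performs poorly.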
After some more research, we found a SIGGRAPH presentation on fast and stable volume rendering, with source code on GitHub. Combining this with the previous work done on the Clouds shader, we achieved a less complex but equally accurate representation of the clouds:
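The core idea shared by that style of volume renderer is front-to-back integration with Beer-Lambert transmittance. Below is a minimal Python sketch of that integral; the flat white "emission" and the extinction constant are simplifying assumptions of ours, not the presentation's actual lighting model:

```python
import math

def integrate_transmittance(density_at, origin, direction,
                            steps=64, step_size=0.5, extinction=1.0):
    """Front-to-back volume integration with Beer-Lambert attenuation.

    `density_at(p)` is any scalar density field.
    Returns (radiance, remaining transmittance)."""
    transmittance = 1.0
    radiance = 0.0
    for i in range(steps):
        p = tuple(o + d * step_size * i for o, d in zip(origin, direction))
        sigma = density_at(p) * extinction
        # Beer-Lambert: fraction of light surviving this segment
        seg_t = math.exp(-sigma * step_size)
        # Light scattered toward the eye in this segment (flat white emission)
        radiance += transmittance * (1.0 - seg_t)
        transmittance *= seg_t
    return radiance, transmittance
```

A useful property of this formulation is that radiance and remaining transmittance always sum to at most one, which keeps the result stable no matter how dense the volume gets.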
Unfortunately, this came at a cost. A huge one on the GPU, to be exact. Volumetric rendering isn’t exactly fast, and doing it without a highly optimized or intelligent solution will always drag a simulation down to detestable framerates. Some further optimizations to the volume itself added a much-needed speedup, but a good three-quarters of our render time was still being eaten up by the clouds.
Fortunately, we had prior experience with a GitHub library that uses upsampled rendering, and porting over the upsampling scripts was straightforward. As one might expect, upsampling the clouds from a smaller resolution did wonders for the framerate since we only had to render a quarter or half of the pixels. On top of this, the clouds looked almost exactly the same!
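The upsampling itself is conceptually simple: render the clouds into a smaller buffer, then filter it back up to screen size. A toy Python version of bilinear upsampling (standing in for the GPU's filtered texture fetch; the integer `scale` parameter is our simplification) might look like:

```python
def bilinear_upsample(img, scale):
    """Upsample a 2D grayscale image (list of lists) by an integer factor
    using bilinear filtering."""
    h, w = len(img), len(img[0])
    out_h, out_w = h * scale, w * scale
    out = [[0.0] * out_w for _ in range(out_h)]
    for oy in range(out_h):
        for ox in range(out_w):
            # Map the output pixel back into source coordinates
            sy = min(oy / scale, h - 1)
            sx = min(ox / scale, w - 1)
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            fy, fx = sy - y0, sx - x0
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            out[oy][ox] = top * (1 - fy) + bot * fy
    return out
```

The framerate win comes from the raymarch, not the filter: at half resolution only a quarter of the expensive cloud pixels are actually marched, while the cheap upsample fills in the rest.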
Thanks to the fluffy, undetailed nature of clouds, most of the upsampled render looked equivalent to the full-resolution version. Two problem areas remained: the empty edges of the clouds, and the edges shared with geometry. The empty edges aren’t terrible and tend to be hidden in the distance, so we left those as-is, but we could not go on without fixing the shared edges. After a couple of attempts at fudging the results with some blur, we altered the cloud rendering script to make two rendering passes: one of full-resolution clouds and one of less-than-full-resolution clouds. The trick to keeping our performance up was to render the full-resolution clouds only where there is geometry, then overlay that on top of the quarter- or half-resolution render. Since our raymarched sampling is deterministic, no strange breaks would happen between pure clouds and the geometry:
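The compositing step reduces to a per-pixel choice driven by a geometry mask. A hedged sketch (here the mask is a plain boolean array; in the real renderer this information would come from the depth buffer):

```python
def composite_clouds(low_res_upsampled, full_res, geometry_mask):
    """Overlay full-resolution cloud pixels wherever geometry is present;
    keep the upsampled low-resolution render everywhere else.
    All three inputs are same-size 2D lists; the mask holds booleans."""
    h, w = len(geometry_mask), len(geometry_mask[0])
    return [[full_res[y][x] if geometry_mask[y][x] else low_res_upsampled[y][x]
             for x in range(w)]
            for y in range(h)]
```

Because both passes march the exact same deterministic density field, the two renders agree wherever they overlap, so the hard switch in the mask does not produce visible seams.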
At this point, the properties of the cloud renderer started to get unwieldy, so we ported all of the fields over to a ScriptableObject that our artists or programmers could check out and change without issue. It would have been nice to be done with it here, but some parts of the render were still bothersome. For instance, one of our shots pans the camera over the vast cloudscape from a point fairly high above the clouds, but in the distance the clouds collapse into a flat mess of pixels. To fix this, we introduced a distance-based “cloud granularity” property which influences how much larger the clouds grow as they get farther from the camera. As one might expect, this causes problems when that factor grows too large, as the samples will take wider steps and eventually break down altogether. Fortunately for our case, we never allow this problem to occur.
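One plausible reading of such a granularity control (this is a hypothetical sketch of ours with made-up parameter names, not the project's actual formula) is a noise sampling frequency that falls off with distance, so distant clouds form larger, softer shapes instead of sub-pixel detail:

```python
def sample_frequency(distance, base_frequency=1.0, granularity=0.001):
    """Hypothetical distance-based granularity: scale down the noise sampling
    frequency with distance, making far-away cloud features larger.
    `granularity` controls how aggressively detail coarsens."""
    return base_frequency / (1.0 + granularity * distance)
```

A large `granularity` value is where the trouble described above would come from: the effective features get so big relative to the sample spacing that the march starts skipping over them.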
Alongside the distant cloud property, we also optimized one more portion of the code. We still weren’t getting the breadth that we wanted (i.e. clouds to the horizon), so we made the step size of the raymarcher dependent on its distance from the camera. It worked like a charm and clouds could render all the way to the horizon, only at the cost of slightly sparser clouds and sampling error at large step-multiplier values or for very, very distant clouds.
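A sketch of the distance-dependent step size, assuming a simple linear growth factor (the multiplier and cap values below are our own placeholders):

```python
def adaptive_steps(near, far, base_step=1.0, step_multiplier=0.01, max_steps=256):
    """Generate sample distances whose step size grows linearly with distance
    from the camera, so a fixed step budget can reach much farther than a
    uniform march, at the cost of sparser sampling in the distance."""
    t, steps = near, []
    while t < far and len(steps) < max_steps:
        steps.append(t)
        t += base_step * (1.0 + step_multiplier * t)
    return steps
```

With these placeholder values, a ray reaches roughly distance 1000 in around 240 samples, where a uniform march at `base_step` would need 1000. The trade-off matches what we saw in practice: the wider spacing far from the camera is exactly where the sparser clouds and sampling error show up.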