Screen-Space Subsurface Scattering

Image rendered in real time

The new generation of game engines is out there! They are full of new rendering techniques, including classic topics like shadows, water rendering and post-processing, as well as more recent techniques such as deferred shading and post-processing anti-aliasing. But there is a rendering topic that has been dormant for a few years and is, at last, being awakened:

Subsurface Scattering

Unreal Engine 3 features Subsurface Scattering (SSS), and so does CryEngine 3 and Confetti RawK. Specifically, these last two engines are using what is called Screen-Space Subsurface Scattering (update: it seems Unreal Engine 3 also uses SSSSS), an idea we devised for the first book in the GPU Pro series two years ago. Given the importance of human beings in storytelling, the inclusion of some sort of subsurface scattering simulation will be, in my opinion, the next revolution in games. Bear with me and discover why.

Let’s start the story from the beginning. What is SSS? Simply put, SSS is a mechanism of light transport that makes light travel inside objects. Usually, light rays are reflected from the same point where the light hits the surface. But in translucent objects, which are greatly affected by SSS, light enters the surface, scatters inside the object, and finally exits all around the incident point. So, making a long story short, light is blurred, giving a soft look to translucent objects: wrinkles and pores are filled with light, creating a less harsh aspect; reddish gradients can be seen on the boundaries between light and shadow; and last, but not least, light travels through thin slabs like ears or nostrils, coloring them with the bright and warm tones that we quickly associate with translucency.
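
For readers who like the formal side of this, the “light enters at one point and exits at another” behavior is usually modeled with a BSSRDF S, which relates the light arriving at a point x_i to the light leaving a different point x_o (this is the standard formulation from the offline rendering literature, not something specific to the screen-space technique):

    L_o(x_o, \omega_o) = \int_A \int_{2\pi} S(x_i, \omega_i; x_o, \omega_o) \, L_i(x_i, \omega_i) \, (n \cdot \omega_i) \, d\omega_i \, dA(x_i)

The farther apart x_i and x_o can be while S remains significant, the more translucent the material looks.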

Now you may be thinking, why should I care about this SSS thing? Every year, advances in computer graphics allow offline rendering to get closer to photorealism. In a similar fashion, real-time rendering is evolving, trying to squeeze every available resource to catch up with offline rendering. We have all seen how highly detailed normal maps have improved the realism of character faces. Unfortunately, using such detailed maps without further attention to light and skin interactions inevitably leads characters to fall into the much-dreaded uncanny valley. The next time you see a detailed face in a game, take your time to observe the pores, the wrinkles, the scars, the transition from light to shadows, the overall lighting… Then, ask yourself the following question: is what you see skin… or… maybe a piece of stone? If you manage to look past the colors and shapes you observe, you will probably agree with me that it resembles a cold statue instead of a soft, warm and translucent face. So, it’s time to take offline SSS techniques and bring them to the real-time realm.

Some years ago, Henrik Wann Jensen and colleagues came up with a solution to the challenging diffusion theory equations that made practical offline renderings of subsurface scattering possible. His technology was used to render Gollum’s skin in The Lord of the Rings, for which Henrik received a technical Oscar. This model evolved to accurately render complex multilayered materials with the work of Craig Donner and Henrik Wann Jensen, and was later translated to the GPU by Eugene d’Eon and David Luebke, who managed to obtain photorealistic skin images in real time for the first time. Unfortunately, it took all the processing power of a GeForce 8800 GTX to render a single head.
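
For reference (this comes from Jensen’s 2001 dipole model, not from the screen-space technique), the diffuse reflectance profile that results from that solution has the form:

    R_d(r) = \frac{\alpha'}{4\pi} \left[ z_r \left( 1 + \sigma_{tr} d_r \right) \frac{e^{-\sigma_{tr} d_r}}{d_r^3} + z_v \left( 1 + \sigma_{tr} d_v \right) \frac{e^{-\sigma_{tr} d_v}}{d_v^3} \right]

where d_r = \sqrt{r^2 + z_r^2} and d_v = \sqrt{r^2 + z_v^2} are the distances to a real and a virtual point source placed below and above the surface, and r is the distance between the points where light enters and exits. It is this kind of radially symmetric, rapidly decaying profile that the real-time techniques below approximate with blurs.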

It was then that Screen-Space Subsurface Scattering was born, aiming to make skin rendering practical in game environments. Among its multiple advantages, I would like to mention its ability to run the shader only on the visible parts of the skin, and its capability to perform the calculations at screen resolution (instead of texture resolution). In other words, subsurface scattering is computed at the resolution actually required, which can be seen as an SSS level of detail. The cost depends on the screen-space coverage: if the head is small, the cost is small; if the head is big, the cost will be bigger.

The idea is similar to deferred shading. The scene is rendered as usual, marking skin pixels in the stencil buffer. After that, skin calculations are executed in screen space for each marked skin pixel, trying to follow the shape of the surface, which is inferred from the depth buffer. This makes it possible to render photorealistic crowds of people at minimal cost. One thing I love about our approach is that, even given the amazing mathematical complexity behind skin rendering theory, in the end it reduces to executing this simple post-processing shader only six times (three horizontally and three vertically); a sketch of how those passes can be chained follows the shader:

float4 BlurPS(PassV2P input, uniform float2 step) : SV_TARGET {
    // Gaussian weights for the six samples around the current pixel:
    //   -3 -2 -1 +1 +2 +3
    float w[6] = { 0.006,   0.061,   0.242,  0.242,  0.061, 0.006 };
    float o[6] = {  -1.0, -0.6667, -0.3333, 0.3333, 0.6667,   1.0 };

    // Fetch color and linear depth for current pixel:
    float4 colorM = colorTex.Sample(PointSampler, input.texcoord);
    float depthM = depthTex.Sample(PointSampler, input.texcoord);

    // Accumulate center sample, multiplying it with its gaussian weight:
    float4 colorBlurred = colorM;
    colorBlurred.rgb *= 0.382;

    // Calculate the step that we will use to fetch the surrounding pixels,
    // where "step" is:
    //     step = sssStrength * gaussianWidth * pixelSize * dir
    // The closer the pixel, the stronger the effect needs to be, hence
    // the factor 1.0 / depthM.
    float2 finalStep = colorM.a * step / depthM;

    // Accumulate the other samples:
    [unroll]
    for (int i = 0; i < 6; i++) {
        // Fetch color and depth for current sample:
        float2 offset = input.texcoord + o[i] * finalStep;
        float3 color = colorTex.SampleLevel(LinearSampler, offset, 0).rgb;
        float depth = depthTex.SampleLevel(PointSampler, offset, 0);

        // If the difference in depth is huge, we lerp color back to "colorM":
        float s = min(0.0125 * correction * abs(depthM - depth), 1.0);
        color = lerp(color, colorM.rgb, s);

        // Accumulate:
        colorBlurred.rgb += w[i] * color;
    }

    // The result will be alpha blended with current buffer by using specific 
    // RGB weights. For more details, I refer you to the GPU Pro chapter :)
    return colorBlurred;
}
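
A quick note on how BlurPS is meant to be driven (this also explains the six executions): the skin diffusion profile is approximated with a small sum of Gaussians, three of them here judging by the six passes, and each Gaussian is applied as a separable horizontal + vertical blur whose result is alpha-blended into the final buffer with per-channel weights, as the last comment hints. The snippet below is only a rough sketch of that chaining under my own naming assumptions; SkinBlurPassPS, gaussianWidths, sssStrength and pixelSize are illustrative placeholders rather than code from the GPU Pro chapter, and colorTex, depthTex, the samplers and the correction uniform used by BlurPS are assumed to be declared elsewhere in the effect.

// Illustrative sketch only (my naming, not the chapter's code): driving the
// six BlurPS executions as three separable Gaussian blurs.

float  sssStrength;        // global SSS strength, set by the application
float2 pixelSize;          // 1.0 / render-target dimensions, set by the application
float  gaussianWidths[3];  // width of each Gaussian, set by the application
// (The per-Gaussian RGB blend weights used when compositing each result into
// the final buffer would also come from the application; omitted here.)

float4 SkinBlurPassPS(PassV2P input,
                      uniform int i,            // which Gaussian: 0, 1 or 2
                      uniform bool horizontal)  // horizontal or vertical pass
        : SV_TARGET {
    // Direction of this 1D pass:
    float2 dir = horizontal ? float2(1.0, 0.0) : float2(0.0, 1.0);

    // Build the "step" uniform exactly as described in the BlurPS comments:
    //     step = sssStrength * gaussianWidth * pixelSize * dir
    float2 step = sssStrength * gaussianWidths[i] * pixelSize * dir;

    // The application ping-pongs between render targets: the horizontal pass
    // feeds the vertical one, and after each vertical pass the result is
    // alpha-blended into the final buffer with that Gaussian's RGB weights.
    return BlurPS(input, step);
}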

The crusade to render photorealistic skin does not end here for me. We have been researching the rendering of very fine skin properties, including translucency, wrinkles, and even color changes due to emotions and other conditions, work that was presented at ACM SIGGRAPH Asia 2010. I will be commenting on each of these projects, so stay tuned!

I truly believe we are getting closer and closer to overcoming the uncanny valley. In the near future, I think we will see astonishing skin renderings in games, which will make characters more human. This will lead to stronger connections between their emotions and those of the players, allowing storytellers to reach gamers’ feelings and, ultimately, their hearts.

Project Page
  1. sb:

    Nice article!
    Just wondering how well (or not) that would work on snow and ice?
    Also, I believe this technique wouldn’t work to simulate backscattering on an earlobe, for example?
    In that case, would you just suggest faking it with some sort of back diffuse lighting combined with a texture mask?
    Thanks!

  2. Jorge Jimenez:

    Hi, I’m glad you liked it!

    For snow and ice, it’s probably more important to simulate backscattering, as you pointed out. For simulating backscattering, we developed this follow-up technique:
    http://www.iryoku.com/translucency/

    It’s actually back diffuse lighting; the key difference is that it makes some assumptions that enable the use of the accurate diffusion theory for transmittance calculations.

  3. Mike:

    Hey Jorge,

    I have been really inspired by your article ‘Screen-Space Perceptual Rendering of Human Skin’

    I am in the process of attempting to create a custom SSS skin shader of my own. I have reached the point where it is time to begin the collection of empirical data.

    You mention that each of the participants filled out both a consent form and a questionnaire?

    I understand if you can’t share the test results, but would it be possible for you to maybe point me in the right direction as to what questions I should ask, please?

    Thanks for your time,

    Mike

  4. Jorge Jimenez:

    Mike, for this article we worked with a very talented psychophysics expert, Veronica Sundstedt; I’ll send her a message with you on CC. Cheers!

  5. dasen:

    I guess there will be a time when seeing doesn’t necessarily mean believing…

  6. a:

    nice one

  7. Vectrotek:

    Fantastic!

  8. Can I somehow implement that inside Unity?

  9. Jorge Jimenez:

    Totally, I think it has already been implemented by some devs (even if perhaps not publicly available).

  10. Al:

    Hi Jorge,

    In the function SeparableSSS::gaussian(float variance, float r) there’s this line that evaluates the gaussian:

    exp((-(rr * rr)) / (2.0f * variance)) / (2.0f * 3.14f * variance);

    Shouldn’t the denominator be sqrt(2.0f * 3.14f * variance)?

    I was comparing our own importance-sampled kernel to your (excellent as always) reference implementation and our kernel was resulting in a much softer blur, which seems to come down to the difference in that line of code.

    Curious if I’m just missing something.
    Thanks,
    Al

  11. Jorge Jimenez:

    We initially used the 2D Gaussian definition, so we employed 2.0 * 3.14f * variance. We later switched to 2.0 * 3.14f * sqrt(variance), which is what we used in the separable SSS paper.

    Both produced very similar results (as the kernel is normalized anyway), but the fitting parameters change. For the parameters shown at http://iryoku.com/separable-sss/, the 1D Gaussian version should be used (2.0 * 3.14f * sqrt(variance)).
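
    For reference, the standard 1D and 2D Gaussian normalizations (textbook definitions, independent of the code above) are:

    G_{1D}(r) = \frac{1}{\sqrt{2 \pi v}} \, e^{-r^2 / (2v)}, \qquad G_{2D}(r) = \frac{1}{2 \pi v} \, e^{-r^2 / (2v)}

    Because the two definitions differ by a factor that depends on the variance, the relative contribution of each Gaussian in the sum-of-Gaussians profile changes, which is why the fitted parameters are tied to one specific normalization and why a mismatched normalization can show up as a softer or sharper blur.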

  12. Mike:

    Great article!

    Is there an example somewhere of the `BlurPS()` function being called in a post-processing shader? I’m attempting to recreate this effect, but I’m a bit confused about its intended use, namely how to calculate “step” and “gaussianWidth”, and how to call “BlurPS()”.

    I became confused after reading “in the end, it’s reduced to executing this simple post-processing shader only six times (three horizontally and three vertically)”. As far as I can tell, yes, you’re blurring the color with 6 samples, but it’s not 3 vertically and 3 horizontally. Is that happening in the main function where we intend to call BlurPS?

    Thanks

  13. Jorge Jimenez:

    Thanks! I would recommend using the newer version of this, which is also easier to use:
    http://iryoku.com/separable-sss/
    https://github.com/iryoku/separable-sss