Next Generation Post Processing in Call of Duty: Advanced Warfare

Proud and super thrilled to announce that the slides for our talk “Next Generation Post Processing in Call of Duty: Advanced Warfare”, presented in the SIGGRAPH 2014 Advances in Real-Time Rendering in Games course, are finally online. You can also download them via the link below.

Post effects temporal stability, filter quality and accuracy are, in my opinion, among the most striking differences between games and film. Call of Duty: Advanced Warfare’s art direction aimed for photorealism, and generally speaking, post effects are a much sought-after feature for achieving natural-looking, photorealistic images. This talk describes the post processing techniques developed for this game, which aim to narrow the gap between film and games post FX quality and to help build a more cinematic experience. This is, as you can imagine, a real challenge given our very limited time budget (16.6 ms for a 60 fps game, which is orders of magnitude less than what is typically available in film).

In particular, the talk describes how scatter-as-you-gather approaches can be leveraged to approximate ground truth algorithms, including the challenges that we had to overcome in order for them to work in a robust and accurate way. Typical gather depth of field and motion blur algorithms only deal with color information, while our approaches also explicitly consider transparency. The core idea is based on the observation that ground truth motion blur and depth of field algorithms (like stochastic rasterization) can be summarized as:

  • Extending color information, according to changes in time (motion blur) and lens position (depth of field).
  • Creating an alpha mask that allows the reconstruction of accurate growing and shrinking gradients on the object silhouettes.

This explicit handling of transparency allows for more realistic depth of field focusing effects, and for more convincing and natural-looking motion blur.
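To make this a bit more concrete, here is a very rough, hypothetical sketch of a scatter-as-you-gather loop (SpreadCmp and SampleAlpha are names used in the slides, but the structure and the other helpers are illustrative assumptions, not the shipped code):

    // Very rough, hypothetical sketch (not the actual slide code): each neighborhood
    // sample "scatters" into the current pixel if its blur reaches it, and an alpha
    // accumulator reconstructs the silhouette coverage.
    float4 color = 0.0; // rgb = weighted color sum, w = total weight
    float  alpha = 0.0; // reconstructed coverage on the silhouette
    for (int i = 0; i < NUM_SAMPLES; ++i)
    {
        float3 sampleColor = FetchColor(i); // assumed helper
        float  sampleCoC   = FetchCoC(i);   // assumed helper
        float  spread      = SpreadCmp(DistanceToSample(i), sampleCoC); // does sample i reach this pixel?
        color += float4(sampleColor, 1.0) * spread;
        alpha += SampleAlpha(sampleCoC) * spread; // per-sample coverage, roughly 1 / blur area
    }
    color.rgb /= max(color.w, 1e-5); // renormalize by the total accumulated weight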

In the slides you can also find our approaches to SSS and bloom, and as a bonus, our take on shadows. I don’t want to spoil the slides, but for SSS we are using separable subsurface scattering, for bloom a pyramidal filter hierarchy that improves temporal stability and robustness, and for shadow mapping an 8-tap filter with a special per-pixel noise, A.K.A. “Interleaved Gradient Noise”, which, together with a spiral-like sampling pattern, increases the temporal stability (like dither approaches) while still generating a rich number of penumbra steps (like random approaches).
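For reference, the interleaved gradient noise mentioned above is commonly quoted from these slides in the following form (a minimal HLSL sketch; the input is the pixel position in screen space):

    float InterleavedGradientNoise(float2 pixelPosition)
    {
        // Constants as commonly quoted from the slides; the result is a per-pixel
        // value in [0, 1) with a dither-like, temporally stable pattern.
        const float3 magic = float3(0.06711056, 0.00583715, 52.9829189);
        return frac(magic.z * frac(dot(pixelPosition, magic.xy)));
    }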

During the actual talk at SIGGRAPH, I didn’t have time to cover everything, but as promised every single detail is in the online slides. Note that there are many hidden slides, and a bunch of notes as well; you might miss them if you only view the deck in slide show mode.

Hope you like them!

PowerPoint [407.8 MB]
  1. Stephanus:

    This is amazing work!

  2. Pingback: Interesting Links - David's Web Corner

  3. Erik Faye-Lund:

    Nice work, thanks a lot! But I have one question – slide 90 says “Background reconstruction – See online slides”, “How to use bilinear filtering? – See online slides” and “Half-res rendering – See online slides”. Slide 38 also contains a similar remark. Which slides do those bullets refer to? I can’t seem to find any course notes or similar…

  4. Jorge Jimenez:

    Thanks for the comments!

    That “See online slides” remark was meant for the actual presentation at SIGGRAPH; what you downloaded are the online slides. This extra information is in hidden slides, so be sure to not trigger the slideshow mode or you might miss them!

  5. sizwk:

    This is great work! But I have two questions.

    About function DepthCmp2(float depth, float tileMaxDepth) in slide 99, is the tileMaxDepth correct?
    I think I should use tileMinDepth instead of tileMaxDepth.

    And is function SampleAlpha(float sampleCoc) in slide 99 correct?
    I think the following equation is correct.
    return rcp(PI * max(sampleCoC * sampleCoC, DOF_SINGLE_PIXEL_RADIUS * DOF_SINGLE_PIXEL_RADIUS));

    Sorry if I’m wrong.

  6. Jorge Jimenez:

    Thanks!

    In your case it might be tileMinDepth; it will depend on how the depth values are set up. It should be the closest depth to the camera. I’ve updated the slides, renaming that variable to closestDepth.

    Regarding the alpha calculation, I think they both yield the same results, but probably yours is more legible!

  7. sizwk:

    Thank you for the answer!
    I understand the first answer.

    But I’m still a little confused about the alpha calculation.
    I think the two yield different results.
    (yours) = min(rcp(PI * sampleCoC * sampleCoC),
                  PI * DOF_SINGLE_PIXEL_RADIUS * DOF_SINGLE_PIXEL_RADIUS)
    (mine)  = rcp(PI * max(sampleCoC * sampleCoC,
                           DOF_SINGLE_PIXEL_RADIUS * DOF_SINGLE_PIXEL_RADIUS))
            = rcp(max(PI * sampleCoC * sampleCoC,
                      PI * DOF_SINGLE_PIXEL_RADIUS * DOF_SINGLE_PIXEL_RADIUS))
            = min(rcp(PI * sampleCoC * sampleCoC),
                  rcp(PI * DOF_SINGLE_PIXEL_RADIUS * DOF_SINGLE_PIXEL_RADIUS))

    Why didn’t you use rcp() for (PI*DOF_SINGLE_PIXEL_RADIUS*DOF_SINGLE_PIXEL_RADIUS)?

  8. Jorge Jimenez:

    I see what you mean and you are right! It should be:
    min(
    rcp(PI * sampleCoC * sampleCoC),
    rcp(PI * DOF_SINGLE_PIXEL_RADIUS * DOF_SINGLE_PIXEL_RADIUS))

    The difference is going from 1.57 to 0.63. Fortunately, it is not a terrible bug visually (the really important thing is to avoid the division by zero).

    I’ll update the slides, thanks a lot for pointing it out!
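    Putting the fix above into function form, the corrected version would read roughly as follows (PI and DOF_SINGLE_PIXEL_RADIUS as used in the slides):

    float SampleAlpha(float sampleCoC)
    {
        // Per-sample coverage: reciprocal of the sample's blur area, clamped so a
        // near-zero circle of confusion cannot cause a division by zero.
        return min(rcp(PI * sampleCoC * sampleCoC),
                   rcp(PI * DOF_SINGLE_PIXEL_RADIUS * DOF_SINGLE_PIXEL_RADIUS));
    }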

  9. MagicMike:

    Thank you for posting the presentation. It’s great to be able to learn from production techniques!

    I’m a little confused about how the background/foreground categorization works. Looking at the background image under “Big Picture”, all the pixels appear to be categorized as background. Considering the character is standing in front of a flat wall, shouldn’t all those pixels have been considered foreground, but with a very small circle of confusion?

    The foreground image does make sense, as tiles with high depth disparity have missing pixels as they contribute from the background.

  10. Jorge Jimenez:

    Sorry, these screenshots were incorrectly captured! I’ve fixed them and uploaded new slides. However, even the new captures might still look confusing, so I’ve added some explanations to slide 96:

    Whereas it is possible to have no foreground accumulation (and hence have black pixels), it’s much harder to find the same situation for the background (not impossible but almost).

    The reason is that big DepthCmp2 deltas turn the pixel into background completely (see slide 100), which explains the black tiles in this foreground image.

    On the other hand, any pixel that is just a bit further away from the closest depth will be classified as foreground, but it will still have a small background weight. And because the background color accumulation is renormalized at the end of the loop, the color will be recovered in those areas (foreground and background colors are normalized by dividing by their respective total accumulated weights). In other words, unless all samples have exactly the same depth as the tile’s closest depth (and hence the background weight is exactly 0.0), there will be some “leak” into the background.

    Note that this is not an issue in practice, given that the alpha values take care of properly eliminating the background where needed.
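    As a minimal sketch of the renormalization described above (assuming float4 accumulators whose .w holds the total accumulated weight):

    // Assumed accumulator layout: rgb = weighted color sum, w = weight sum.
    background.rgb /= max(background.w, 1e-5); // recovers the color even where the weight is small
    foreground.rgb /= max(foreground.w, 1e-5);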

  11. Giacomo:

    Congratulations on the amazing work! So close to photorealism. Did you contribute to the new COD:AW graphics? If so, it’s a movie-like experience like no other.

  12. Jorge Jimenez:

    Thanks for the nice comments! My contribution to the game was more or less what I presented in SIGGRAPH, which is contained in these slides!

  13. caq:

    Great Work!
    I have one question on the bloom part.
    As mentioned in the doc, there is no threshold as input, so how did you control the non-bright parts of the scene image? If we don’t care about them, does it cause the whole image to be blurred? Thanks!

  14. Jorge Jimenez:

    Thanks!

    When using PBR the dynamic range is usually very high, so you don’t need to threshold. The blurred bloom layer is blended in with a low weight (say 0.04), which means that only very bright pixels will bloom noticeably. That said, the whole image will receive some softness, which can be good or bad, depending on the artistic direction. But in general, it leads to more photorealistic results.
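    As an illustration of the thresholdless composite described above (a hedged sketch; the blend style and variable names are assumptions, not the shipped code):

    // Hypothetical composite: the blurred bloom chain is blended in with a small
    // weight, so only very bright HDR pixels bloom visibly while the rest of the
    // image only picks up a subtle softness.
    float3 bloomed = lerp(sceneColor, bloomColor, 0.04); // == sceneColor + 0.04 * (bloomColor - sceneColor)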

  15. caq:

    Hi,

    Tons of thanks for your reply. One more question:

    A fix for fireflies is mentioned in the doc: weight = 1 / (1 + luma) is used when downsampling from mip0 to mip1. I tried to follow this, but the color channels’ range shrinks to 0~1 if I choose the max color channel as luma; after that, there are no very bright pixels, and the bloom effect is very weak. I tried to apply the inverse function weight = 1 / (1 – luma) to the bloom layer when adding the bloom layer to the original lit scene image, but there is no obvious effect, the bloom is still weak. Is there anything I missed?

    Thanks!

  16. caq:

    Thanks for the reply! It’s really helpful!
    Another question on bloom:
    In the doc, a 3×3 tent filter is mentioned for the upsample. I want to know how many passes of the tent filter are needed to converge to a Gaussian? Only one pass? And if a scaled radius is used (this is what controls the kernel size, right?), how do you solve the holes?

    Thanks in advance!

  17. Jorge Jimenez:

    Regarding the fireflies:

    Sorry, that slide was possibly not too clear. You need to renormalize afterwards, it’s a weighted average:

    float4 sum = 0.0;
    // Weight each sample by 1 / (1 + luma) and keep the total weight in .w.
    for each sample: sum += (1.0 / (1.0 + luma)) * float4(sampleColor.rgb, 1.0);
    // Renormalize: divide by the total accumulated weight.
    sum.rgb /= sum.w;

  18. Jorge Jimenez:

    Regarding the tent filter:

    You need a few iterations to converge to a Gaussian, but given that the lower mips (the ones that need more filtering) will be successively upscaled (say mip 6 will be scaled to mip 5 size, then to mip 4 size, and so on), they will pretty much look like a Gaussian in the end.

    If you push the radius too much you might find undersampling issues due to the holes, as you mentioned. But the downsample is already blurring the input (that is, removing high frequencies to a substantial degree), which means that it is hard to find undersampling problems in practice even when using such a small 3×3 kernel size.

  19. caq:

    Great!
    I think I finally understand how to fix the fireflies. For each block of 4 samples, do the sum with the weights, right?

    For the tent filter, I should use a different radius on different mip levels (mip6 is large, mip5 is smaller, mip4 is smaller…), right? On mip0, I have to choose a very small radius?

    Thanks for your great help! I think I am close to the target!

  20. Xiangming:

    Hi Jorge,
    I don’t understand the following part of your slides.
    E’ = blur(E,b5)
    D’ = blur(D,b4) + E’
    C’ = blur(C,b3) + D’
    B’ = blur(B,b2) + C’
    A’ = blur(A,b1) + B’
    Could you please give more explanation about it?
    What do b1,b2,b3,b4,b5 and A,B,C,D,E represent?
    Thank you very much,

  21. Jorge Jimenez:

    @caq: that is correct, we apply the weighted average on each 2×2 block. We use a custom per-mip radius for the tent filter, but by default it uses the same radius in UV space for each level (as opposed to the same radius in pixels), which works great. That is, a fixed scale in texture coordinates.

    @Xiangming: that is from the Unreal 4 slides (The Technology Behind the Elemental Demo). A, B, …, E are different mip levels (where E is the lowest-resolution one). You first blur E into E’ using the radius b5, then you blur D using radius b4 and add E’, and so on.
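    A minimal sketch of the per-2×2-block weighted average mentioned above (Luma() is an assumed helper, e.g. the maximum channel or a perceptual luminance):

    // Karis-style average of one 2x2 block during the mip0 -> mip1 downsample:
    // bright samples are down-weighted so a single firefly cannot dominate the block.
    float3 KarisAverage(float3 c0, float3 c1, float3 c2, float3 c3)
    {
        float w0 = 1.0 / (1.0 + Luma(c0));
        float w1 = 1.0 / (1.0 + Luma(c1));
        float w2 = 1.0 / (1.0 + Luma(c2));
        float w3 = 1.0 / (1.0 + Luma(c3));
        return (c0 * w0 + c1 * w1 + c2 * w2 + c3 * w3) / (w0 + w1 + w2 + w3);
    }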

  22. MagicMike:

    One thing I still don’t understand about the DoF technique is how the final blend between the half res buffer and full res buffer is computed.

    As I understand it, the “Background Factor” derived from the full res CoC means: identify the pixels where the in-focus pixels should sit sharply on top, i.e. in-focus pixels will have a sharp silhouette against the blurry background.

    However, the “Foreground Factor” I’m confused about. Obviously we want the blurry foreground to partially obscure the in focus pixels. How can you compute this from the tile max CoC?

    Thanks again

  23. Jorge Jimenez:

    In a sharp pixel, the maximum tile circle of confusion gives you an intuition about whether there might be other pixels bleeding into it.

    However, using the maximum tile circle of confusion alone would lead to blocky foreground areas on top of sharp areas (given the tiled nature of the max tile circle of confusion).

    With the UpscaledAlpha() lerp, this blockiness goes away. Also, this lerp makes the maximum tile circle of confusion only affect foreground pixels (turning it into a foreground factor, in a sense).

  24. J.M.:

    Hi Jorge. I don’t know how to contact you, so I’ll post a comment here and hope for you to see this.
    Some guy just released this mod of yours on the Nexus:
    http://www.nexusmods.com/skyrim/mods/71535/?
    Honestly, I doubt this was done with your permission. If it actually was, I apologize for the inconvenience.

    Gr8 Work, mate! Keep going !

  25. Jorge Jimenez:

    Sorry for the delay, I’ve just seen this message; as long as they comply with the license, it should be fine!

  26. mag:

    Hi Jorge, I have a question about how you incorporated translucency into your DOF results. Was the effect applied post composite using opaque depth values, or perhaps nearest(opaque depth,trans depth)? Or did you run a separate translucency DOF pass and composite after the fact?

  27. Jorge Jimenez:

    That is still a problem. We had a checkbox that allowed selecting when to render a transparent object: before or after post effects. But this was really a workaround.

    The ideal solution is to run a separate translucency DOF pass as you pointed out, but that was too expensive for us.

    Another idea could be to render transparent objects after DOF, and then do in-shader DOF blurring while rendering them by reading multiple times from the texture maps (or using mip mapping for faster results).
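    As a rough illustration of the mip mapping idea above (an assumption, not something that shipped), the transparent object’s texture could be sampled at a coarser mip level derived from its circle of confusion:

    // Hypothetical: cocInPixels, colorMap and linearSampler are assumed inputs.
    float4 c = colorMap.SampleLevel(linearSampler, uv, log2(max(cocInPixels, 1.0)));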

  28. bruce:

    How do you deal with neighboring objects that have orthogonal velocity vectors (but similar magnitudes)? The one that wins the max tile will look great, but the other ends up blurring the wrong way until you pop into its dominant tile.

    McGuire’s alternating tile/center vel helps, but it’s still pretty noticeable. I feel like I must be missing something…

  29. leo:

    Hi Jorge,
    Your great presentation is very useful to me (I am plagued by fireflies).
    I have one question about bloom:
    Should the order of exposure, bloom and tonemapping be as below?
    1. Expose the original picture
    2. Do bloom on the exposed picture
    3. Combine the exposed picture and the bloomed picture
    4. Tonemap the combined picture

  30. maxest:

    Just a short comment regarding the:
    E’ = blur(E,b5)
    D’ = blur(D,b4) + E’
    C’ = blur(C,b3) + D’
    B’ = blur(B,b2) + C’
    A’ = blur(A,b1) + B’
    part.
    We got more stable and faster results doing it this way:
    E’ = E
    D’ = D + blur(E’)
    C’ = C + blur(D’)
    B’ = B + blur(C’)
    A’ = A + blur(B’)

  31. Jorge Jimenez:

    @bruce: we remove camera rotation motion blur, so this didn’t happen often for us in a first person shooter, but I can see it being problematic in some cases.

    @leo: the order you wrote above sounds good to me. We expose after bloom, but it is equivalent to your proposal.

    @maxest: this is what we actually do for both the downsample and the upsample (filtering on the fly), but the slides are totally misleading right now. For some reason, I understood the [Mittring2012] approach as also doing the blur on the fly, but looking at what you wrote above, they blur each mip first and afterwards perform the upsample. I’ll fix the slides, thanks for the comment!
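    For readers implementing this, a minimal sketch of one filter-on-the-fly upsample step might look as follows (a 3×3 tent whose offsets use a per-level radius in UV space; the function and texture names are assumptions, not the shipped code). The chain is then evaluated bottom-up, e.g. mip5’ = mip5 + UpsampleTent(mip6’), mip4’ = mip4 + UpsampleTent(mip5’), and so on.

    float3 UpsampleTent(Texture2D lowerMip, SamplerState smp, float2 uv, float2 radius)
    {
        // 3x3 tent kernel (weights 1 2 1 / 2 4 2 / 1 2 1, normalized by 16),
        // sampled from the lower-resolution mip with UV-space offsets.
        float3 c  = lowerMip.SampleLevel(smp, uv, 0.0).rgb * 4.0;
        c += lowerMip.SampleLevel(smp, uv + float2(-radius.x, 0.0), 0.0).rgb * 2.0;
        c += lowerMip.SampleLevel(smp, uv + float2( radius.x, 0.0), 0.0).rgb * 2.0;
        c += lowerMip.SampleLevel(smp, uv + float2(0.0, -radius.y), 0.0).rgb * 2.0;
        c += lowerMip.SampleLevel(smp, uv + float2(0.0,  radius.y), 0.0).rgb * 2.0;
        c += lowerMip.SampleLevel(smp, uv + float2(-radius.x, -radius.y), 0.0).rgb;
        c += lowerMip.SampleLevel(smp, uv + float2( radius.x, -radius.y), 0.0).rgb;
        c += lowerMip.SampleLevel(smp, uv + float2(-radius.x,  radius.y), 0.0).rgb;
        c += lowerMip.SampleLevel(smp, uv + float2( radius.x,  radius.y), 0.0).rgb;
        return c / 16.0;
    }
    // Per pass: output = currentMip + UpsampleTent(previousResult, smp, uv, radius);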

  32. Yaro:

    Hi Jorge,

    Sorry for digging out an old project but I can’t find a way to properly test your incredible DoF algorithm in OpenGL.
    Firstly, I’ve tried looping/sampling using the tile_max_CoC as well as current_pixel_CoC and neither seems to work as intended:
    1) when using tile_max_CoC the Background becomes too blurry,
    2) using the current_pixel_CoC gives better results but the foreground isn’t bleeding onto focused background (the foreground object’s blurry edge “shrinks down” when background becomes focused).

    I’m certain I need to loop over the tile_max_CoC since it contains data about the current pixel’s neighborhood but the background is always too blurry (or isn’t reconstructed properly). I figure this has something to do with the SpreadCmp function which is supposed to throw away samples that aren’t meant to be used for reconstructing the background but I can’t seem to find proper values to feed it – simply using the distance between the current sample and the main pixel as well as the sample’s CoC value doesn’t do anything specific, I can just as well return a “1.” from this function and it will look exactly the same. Does this have something to do with the fact that my CoC values are in view space (0..1) instead of pixels? I’ve tried changing this as well but the results weren’t good either (a bunch of other artefacts on both layers).

    Or maybe I’m thinking about it wrong and the background shouldn’t be reconstructed but the alpha should start fading from the point where background still has some real data and move outwards, thus revealing more of the foreground.. BUT the SpreadCmp function doesn’t allow me to achieve this either.

    I’ve created a test scene similar to what you’ve used in your slides (although much less complex) so here’s my input data and results:

    INPUT:
    Tile Max CoC (normalized for viewing) – http://oi67.tinypic.com/mtkwas.jpg
    Current Pixel CoC (normalized for viewing) – http://oi64.tinypic.com/2dulab7.jpg
    Background + Foreground Classification – http://oi67.tinypic.com/m8fznn.jpg
    Color Input – http://oi64.tinypic.com/2pysw0k.jpg

    LAYERS:
    When sampling using the current pixel CoC:
    Background – http://oi67.tinypic.com/vmysqs.jpg
    Foreground – http://oi66.tinypic.com/11mc01z.jpg
    Alpha – http://oi66.tinypic.com/dc39ft.jpg
    Final – http://oi63.tinypic.com/2mzzbif.jpg

    When sampling using the tile max CoC:
    Background – http://oi68.tinypic.com/21b3tkj.jpg
    Foreground – http://oi63.tinypic.com/2lutht0.jpg
    Alpha – http://oi64.tinypic.com/ckhlz.jpg
    Final – http://oi63.tinypic.com/2zr3qso.jpg

    Help please!

  33. Pingback: Motion BLUR & ORDER INDEPENDENT TRANSPARENCY | kosmonaut games

  34. brazzjazz:

    Thanks for sharing all this with us! I love in-depth computer graphics stuff (I’m a gamer).

  35. brazzjazz:

    P.S. Fooling around with ReShade a lot though.
    https://sfx.thelazy.net/users/u/brazzjazz/

  36. Puscasu Robert:

    Can someone post the source code that implements this bloom? I’ve tried for the last 2 months to replicate this, and I have multiple problems.

    The way I understood the algorithm was like this:
    First you render the scene to a framebuffer. Then you downsample 6 times (you render the scene each time to a framebuffer that is 2 times smaller in width and height, and at each render, for every pixel, you apply the linear combination of neighbours found in the slides). Then, you upsample 6 times (you render to a framebuffer 2 times larger in width and height than the current buffer. The formula for each pixel is the tent filter made out of 9 elements, found in the slides. You also multiply this result by a radius, meaning that you multiply by a radius each time you upsample).
    After each upsampling, you apply a box filter.

    My problem is when I move the camera in a way that the bloom source is partially out of view. When I do this, the bloom curves in a weird way. You can see this here:

    https://youtu.be/JBhpJZ5_RLU

    Did I misunderstand the algorithm presented here?

  37. Pingback: Анализ графики Red Dead Redemption 2 / Хабр

  38. Pingback: Graphics Study: Red Dead Redemption 2 — FLMarket

  39. Pingback: Rendering in Real Time with Spatiotemporal Blue Noise Textures, Part 2 | NVIDIA Developer Blog

  40. Alexander:

    Hey if you are still reading comments in here – do you have a list of references used in the slides?

  41. Thanks for sharing these slides. I was wondering, there are references such as [McGuire2010] and [Karis2013], but there doesn’t appear to be a bibliography of any sort. I’ve read your earlier comments about hidden slides and I’m looking at the slides with notes, but still nothing. I would really like to read more about this Karis averaging used in bloom. Thank you.

  42. Jorge Jimenez:

    Hi, the references were right before the bonus slides. I have now moved them to the end of the slides to make them easier to find.