Attention
Unity has discontinued selling and supporting Deep Compositing effective February 7, 2024.
About Deep Compositing
How was it developed?
Two Siggraph papers and a Sci-Tech submission trace the evolution of deep alpha compositing:
Duff (Siggraph 1985) described deep compositing using z-buffers, which store a single depth value per pixel. This works only if each element contains a single object at a well-defined depth. If a pixel covers multiple objects at different depths, the z-buffer can't represent them all, and the pixel is said to be "confused". As a result, z-buffers for different elements cannot be merged accurately. (This wouldn't have worked on Avatar, with its multiple layers of dense motion-blurred foliage!)
Lokovic and Veach (Siggraph 2000) addressed the confused-pixel issue with deep shadow maps, using Pixar's dtex file format. These files can represent the depth and opacity of arbitrarily many objects that intersect each pixel.
Hillman (Sci-Tech 2010) extended deep shadow maps with deep alpha maps, which store lists of alpha samples at increasing depths for all objects a pixel covers. With deep alpha maps, you can composite all of a pixel's samples, back to front, to recover a regular alpha channel. The renderer creates a separate deep alpha map for each pass (alongside the regular RGBA image).
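The back-to-front flattening step can be sketched in a few lines. This is an illustrative example only, not the ODZ or dtex implementation: it assumes each deep sample is a `(depth, alpha)` pair and accumulates them far-to-near with the standard "over" operator. The function name `flatten_alpha` is hypothetical.

```python
def flatten_alpha(samples):
    """Composite one pixel's deep alpha samples into a single alpha value.

    samples: list of (depth, alpha) tuples for a single pixel.
    Illustrative sketch; real deep formats store more per sample.
    """
    # Sort far-to-near so accumulation proceeds back to front.
    ordered = sorted(samples, key=lambda s: s[0], reverse=True)
    accumulated = 0.0
    for _depth, alpha in ordered:
        # "Over": a nearer sample covers a fraction of what lies behind it.
        accumulated = alpha + (1.0 - alpha) * accumulated
    return accumulated

# Three half-opaque samples at increasing depth flatten to 0.875,
# i.e. 1 - (1 - 0.5)^3:
flatten_alpha([(1.0, 0.5), (2.0, 0.5), (3.0, 0.5)])
```

Because opacities combine multiplicatively, the flattened alpha equals one minus the product of each sample's transparency, which is why the per-pixel sample lists can be merged across elements without the "confused pixel" problem of single-depth z-buffers.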
At Wētā:
- Colin Doncaster and Areito Echevarria developed a prototype solution for computing holdouts from deep shadow maps inside Shake for The Day the Earth Stood Still.
- We used this solution for the initial compositing in Avatar.
- Then Matt Welford and Peter Hillman developed deep alpha maps and the ODZ file format, slightly changing the internal representation of opacity to make it easier to handle in compositing and to enable volumetric compositing.
- Matt and Peter also implemented a full deep compositing workflow, first in Shake (for Avatar), then in Nuke.