This post also appeared as part of a longer article on fxguide.

Christophe took us on a journey through the history of Renderman: from the basics of RSL (Renderman Shading Language) with its empirical shading, through RSL 2.0 and the first steps toward physically plausible shading, to the new RIS path tracing engine that allows physically accurate image creation.

We started with RSL 1.2, which was essentially state of the art in big budget productions up to 2005. It had the hallmarks of traditional computer graphics: light sources without any size or falloff (in fact, ILM usually didn’t use falloff on lights at all, even though an option for realistic light falloff existed) and, of course, idealized materials that had lots of dials for creative control without any regard to physical accuracy or energy conservation. One could easily create a surface that was brighter than any of the lights hitting it, for example.
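To make the energy conservation point concrete, here is a toy sketch in Python (not RSL, and not Pixar's actual shading code): an unconstrained gain dial can reflect more light than arrives at the surface, while a physically plausible material clamps its albedo so it never can.

```python
def shade_empirical(incoming_light, diffuse_gain):
    # Classic empirical shading: the gain dial is unconstrained,
    # so a gain > 1 returns more energy than the light delivered.
    return incoming_light * diffuse_gain

def shade_plausible(incoming_light, albedo):
    # Energy-conserving diffuse: albedo is clamped to [0, 1],
    # so the surface can never be brighter than the light hitting it.
    albedo = max(0.0, min(1.0, albedo))
    return incoming_light * albedo

light = 1.0
print(shade_empirical(light, 3.5))  # 3.5 -- brighter than the light itself
print(shade_plausible(light, 3.5))  # 1.0 -- clamped to conserve energy
```

The function names and the single-channel simplification are illustrative only; the underlying rule (reflected energy ≤ incident energy) is what RSL 2.0's plausible shading enforced.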

That creative freedom comes with a lot of complexity, though, when you need a result that can be considered “correct” in terms of realism, a required goalpost when you want to integrate your renderings into filmed footage.

To get to an acceptable result quicker and with fewer complications, Pixar started to implement physically plausible rendering, which was first used on Iron Man and later in Monsters University to great effect. This proved that both realistic and stylized results were possible with the physically more accurate model, despite it having fewer dials.

While RSL 2.0 was quite successful, it still had a few problems: it slowed down in complex raytracing setups because ray hit shading took a lot of time, and memory usage exploded with complex scenes. This led Pixar to develop a path tracing approach and a new render engine called RIS. For an in-depth overview, see last year’s article on RIS.

RIS allows for even simpler controls while rendering more accurately, faster, and with less memory than the RSL approach. Shaders are now fully integrated but very modular, which allows for easy debugging or switching out whole blocks to quickly get different results.

Finding Dory, which is currently in production at Pixar, is the first movie to use RIS. In fact, Dory is the first use of what Christophe calls the RUKUS pipeline: RIS, USD (a new interchange format developed at Pixar, similar to Alembic) and Katana. Christophe joked that he still hasn’t figured out what U and S stand for.

Since RIS has the ability to do progressive rendering, look development and final rendering have accelerated quite a bit. An artist can set up a complex shot, put it on the farm, and then “check in” after, say, 20 minutes to get a good idea of what the final images are going to look like, since the whole image is already visible, albeit very noisy. They can then just let it “cook” on the farm until an acceptable noise level is reached. The remaining noise residue is filtered away as a last step, using denoising technology that Pixar was able to adopt from Hyperion, Disney’s new path tracing engine, and that will be part of a future public Renderman release.
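The “check in, then let it cook” workflow can be sketched as a loop that keeps accumulating samples per pixel until an estimated noise level drops below a target, then hands the image off to a denoising filter. This is a hypothetical Python illustration, not Pixar's farm code; the sample function, the fixed per-sample noise, and the thresholds are all stand-ins.

```python
import random

def render_sample(true_value):
    # Stand-in for one path-traced sample: the true pixel value
    # plus zero-mean Monte Carlo noise.
    return true_value + random.gauss(0.0, 0.1)

def render_progressive(pixels, target_noise, max_samples=10_000):
    # Accumulate samples; the image is viewable (noisily) after every pass.
    sums = [0.0] * len(pixels)
    for n in range(1, max_samples + 1):
        for i, p in enumerate(pixels):
            sums[i] += render_sample(p)
        # Monte Carlo error shrinks roughly as 1/sqrt(samples),
        # so estimate the current noise from the sample count.
        noise = 0.1 / n ** 0.5
        if noise <= target_noise:
            break
    return [s / n for s in sums], noise

image, residual = render_progressive([0.2, 0.5, 0.8], target_noise=0.01)
```

After one pass the whole (very noisy) image already exists, which is what lets an artist “check in” early; the final denoising filter would then remove the small residual that remains when the loop stops.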

This means less (expensive) human time is needed to set up a shot, as there is much less trial and error. Artists always have a really good idea of what the final result is going to look like.

Sadly, Christophe wasn’t able to show us final imagery from Dory, as production isn’t that far along yet, but we did see a few shots of Dory and Nemo in their new look, and they look much improved over their old selves. We saw them being shaded, lit, and look dev’ed in real time in Katana, and the experience looked very smooth and seamless.