Woodville Walkthrough

The fifth Pixar Art Challenge was about “Woodville” – a tree village. Pixar provided assets of the kind that traditionally come out of the Pixar Undergraduate Program (PUP). Content, look and feel were largely up to you, but you had to open a work-in-progress thread in the forum and share your progress in images and in writing.
The jurors followed the forum and commented on the entries, which meant that feedback and help came directly from Pixar.

The further specifications: the final rendering had to be done with RenderMan, and the provided asset, modified or unchanged, had to be staged in the image. Additional 3D objects were allowed, but they had to contribute to the picture.
Furthermore: a 16:9 format in landscape or portrait (letterboxing allowed), maximum 4K, only color correction and grading in post, and of course no photobashing. Background images were allowed as long as you held the copyright to them.
The scoring consisted of 20% concept (main idea and its realization), 40% artistic execution (realization of the main idea, color correction and composition) and 40% technical execution (texturing, shading, lighting, rendering and compositing). Besides the creative challenge and the fame and glory, Pixar also offered attractive prizes.

Since I could only actively participate in the evenings and weekends, the Challenge accompanied me (and my family) from the end of August until mid-November.


Concept

Usually I have something in mind and then refine it step by step as I go. Not really standard, not really efficient, but for personal projects it had mostly been sufficient so far. Out of habit I did the same for this project, and although 20% of the score was tied to the concept, I took that risk.

First Step: Moodboard on Pinterest

First I invested some time searching for mood images on Pinterest, which proved to be a real pool of inspiration and ideas for me. I created a board and diligently, without much second-guessing, collected ideas as pictures. I roughly considered colors and moods, then filtered everything down to about 10 images and built a mood board from them. Later I moved away from the mood board a bit, especially concerning the idea of the rocks, and got additional inspiration, and a plausibility check, from the famous rock formations of Zhangjiajie National Park in China and the Elbe Sandstone Mountains in Germany.

Second Step: Concept in Houdini

From that point on, I continued the concept creation, or blocking, entirely in Houdini. The lesson learned here: I wouldn’t do it that way again. Next time I would finalize the concept the classic way, in 2D, beforehand.
Several drafts were created around the story. In the beginning I tried to follow a traveller who climbs to great heights and arrives at a dubious hotel or passes a gloomy tollbooth. In the end I abandoned the depiction of a character entirely and found it more interesting to focus on the house. Someone lives up there in this most adventurous of locations and is apparently so important that people take on the dangers of the ascent because a visit is worth it.

Visual Storytelling

Just producing a nice picture would not be enough, as could be seen from the comments Leif Pedersen of Pixar left on my WIP images and those of my fellow participants. The eye needed to be guided, the asset had to play a central role, and composition, light and all the different elements of the picture had to work together and tell the viewer a story. In the following sections I will try to explain some decisions with this in mind.

Camera

I changed the perspective a few (many) times during the competition, mainly because I stayed in the blocking phase too long at the start and the story was still taking shape. I was faced with the challenge of wanting to make both breadth and depth readable at the same time. I made many attempts to realize the picture in landscape format, which was also Leif’s wish. But either there was too much unimportant environment in the frame, which contributed nothing to the story and was therefore wasted space, or there was no depth, only width, perceptible. In the end I therefore decided on the portrait format. That may have cost me a few points, but I stuck with the decision.

Software Considerations

For more than four years I have been working intensively with Houdini, and I try to use it in my daily work as often as possible. At home I have an Indie license, so it was clear relatively quickly that most of the work would be done in Houdini. Blender was also briefly on the list, but I had hardly worked with it, 2.8 had not yet been released, and I didn’t want to take any risks. I reserved the right to use it for sculpting or other supporting tasks, but in the end everything on the 3D side stayed in Houdini. As the texturing tool, Substance Painter was at the top of the list because I liked the license model, and Mari seemed too complex for the task. For the post work I used Nuke in the 30-day trial version; here I (and probably many others) still miss an Indie option.

Rocks

From the beginning I wanted full flexibility and fast iterations. For the rocks, a procedural approach was the obvious choice, since I didn’t want to do any sculpting and already had fairly precise ideas in my head. Some of the technical approaches were based on Saber Jlassi’s talk at SIGGRAPH 2017 and Hugo Beyer’s rock butte tutorial on Gumroad (see the links at the end).
This is how I proceeded: first, the rough shape is controlled by a ramp and then filled up with randomly placed boxes, spheres and tubes, always with the option to add my own geometry manually, e.g. to create certain plateaus or incisions.
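As a rough sketch of that first step (not my exact setup; the parameter names are made up), a point wrangle on the scatter points before the Copy to Points could drive the size of the copied shapes from a ramp over the normalized height:

// Point Wrangle on the scattered template points, before the Copy to Points.
vector rel = relbbox(0, @P);                            // relative position in the bounding box
float  h   = rel.y;                                     // 0 at the bottom, 1 at the top
f@pscale   = chramp("profile", h) * chf("max_scale");   // rough silhouette of the rock
f@pscale  *= fit01(rand(@ptnum), 0.7, 1.3);             // break up the uniformity a little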

Next, after Voronoi-fracturing the geometry, I deleted some pieces manually (though that could easily be automated) to cut into the rock. Then I converted the whole thing to a VDB for the first time.

A little reshaping and smoothing, and now it was time for the really exciting part.

Several successive Noise VOPs created the actual rock look. In principle you can stack them as often as you like to create the desired look, but it can become relatively computationally intensive, depending on how far you push it or how often you repeat the operations. If you promote the parameters up to the VOP node, you have them all conveniently at hand. After a bit of trial and error you get a feeling for your tool and reach the desired results very quickly. Then it was time to cache.
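For readers who prefer wrangles over VOPs, the same stacked-noise idea could look roughly like this (a sketch, not my exact network; it assumes point normals exist, e.g. from a Normal SOP, and the frequencies and amplitudes are placeholders):

// Point Wrangle: displace the rock surface along its normals with a few
// stacked noise layers of decreasing scale and amplitude.
float freqs[] = { 0.05, 0.2, 0.8 };
float amps[]  = { 6.0, 1.5, 0.3 };
float d = 0;
for (int i = 0; i < len(freqs); i++)
{
    float n = noise(@P * freqs[i]);   // value noise, roughly 0..1
    d += (n - 0.5) * amps[i];
}
@P += normalize(@N) * d;

Promoting the frequencies and amplitudes to parameters gives the same hands-on control as the VOP version.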

The last step was to create the horizontal and vertical blocks. Here I used Voronoi Fracture again, but this time to create cut blocks: once over the whole geo and then a second time in a loop over each block created before.

This result was then transferred onto the incoming VDB with a VDB Fracture and converted back to polygons. Now the edges were highlighted, the geometry was cleaned up a bit and converted back to a VDB, quickly reshaped, converted back to polygons and “reduced” one last time. And with that, the rocks were done. After a second round of caching I subdivided the front rocks, left the ones in the middle ground as they were, and reduced the distant ones even further to save as much memory as possible.

Hanging Bridges Asset

From the very beginning it was clear to me that suspension bridges would play a prominent role in the story. I quickly realized that, because I kept moving the rocks around early on, I needed an asset that could handle all of that without being a pain in the ass.
I wanted a tool that was easy to use. There are suspension-bridge setups all over the internet, but I wanted something of my own. Here is a short overview: you only need three curves to build a bridge. The main curve describes the walkway itself, and two more above it serve as support curves that create the horizontal and vertical suspension cables.

The asset has a preview mode that shows only the curve skeleton, which is much faster. Basically I build a curve cage, which is meshed afterwards.

I wanted the ropes to be real mesh, so I took a closer look at the mathematics of circles packed inside a circle and put it together in a VEX wrangle.
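The core of that math is simple: for n strands that touch each other and the inside of a rope cross-section of radius R, each strand has radius r = R * sin(pi/n) / (1 + sin(pi/n)), and the strand centers sit on a ring of radius R - r (for n = 6 a seventh strand of the same radius even fits exactly in the middle). A minimal sketch with made-up channel names:

// Detail Wrangle: centers and radius for n strand circles packed inside a
// rope cross-section of radius R.
float pi = 3.14159265;
int   n  = chi("strands");                        // e.g. 6
float R  = chf("rope_radius");
float r  = R * sin(pi / n) / (1 + sin(pi / n));   // radius of each strand
for (int i = 0; i < n; i++)
{
    float  a = 2 * pi * i / n;
    vector c = set(cos(a), sin(a), 0) * (R - r);  // center in the cross-section plane
    int   pt = addpoint(0, c);
    setpointattrib(0, "pscale", pt, r);           // swept/copied into the strand tube later
}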

The wooden planks were in principle fully procedural, but in the end had relatively long cook times. That was too long for my taste, and you couldn’t really see the difference anyway. So I simply cached 25 different planks and distributed them randomly. Only the plank-randomness feature no longer worked, because the planks were no longer created procedurally. In the end I added fake knots (which could have been made nicer), cable lugs and holes in the planks.
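Picking one of the cached variants per plank position is then a one-liner; a sketch with an arbitrary seed channel:

// Point Wrangle on the plank positions along the bridge: pick one of the
// 25 cached plank variants per point.
i@variant = int(rand(@ptnum + chi("seed")) * 25);      // 0..24
f@pscale  = fit01(rand(@ptnum * 7 + 13), 0.97, 1.03);  // subtle size jitter

The variant attribute can then drive whatever copy or instancing setup you use to fetch the matching cached plank.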

Of course it could have been extended and refined further and further, but for the purposes of the Challenge it was perfectly adequate. If you want to know more: at the end of the article you can download everything about the UVs and the suspension bridges. And if you ever come to the Houdini user group meetup in Munich (MUCHUG), I can show it to you in person, just like the UV tool, which we’ll get to in a moment.

Environment Layout

I prepared my static assets in a separate Houdini file: the purchased trees, moss and other plants, as well as the balloon, which I also bought. There the geometries were stripped down where necessary, brought to the same scale and centered at the origin, material names were created as a “shop_materialpath” or a name attribute, and everything was exported as Alembic. The terrain consists of Height Fields, which I used in a fairly classic way: I created a Height Field and layered different Height Field Noises. I used the camera frustum as a cutout (though I couldn’t verify whether that actually helped with memory), then added detail with the Height Field Erode node and cached the whole thing.

Then several masks were added together, and the rocks that were placed later were masked out as well. This mask controls the scattering of the trees: it went into a Height Field Scatter node, which was fed by different Alembics. The individual File nodes got different class attributes to ensure randomness. In total I distributed nearly 180,000 trees.
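Combining those masks can be done directly on the heightfield; a sketch, assuming layers named “mask”, “slopemask” and “rockmask” exist:

// HeightField Wrangle (a Volume Wrangle running over the heightfield):
// add the scatter masks together and subtract the rock footprint.
@mask = clamp(@mask + @slopemask - @rockmask, 0, 1);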

I actually repositioned the rocks several times over the course of the project (really very often). When I was satisfied, I wanted to add vegetation to the rocks. Again, everything should be as efficient as possible. First I fetched the respective rock and generated UVs from the camera. After that, I used a VEX wrangle to delete everything outside the camera frustum (the UVs had been generated from the camera perspective). In the next wrangle I also deleted all faces that were hidden behind a first ray hit, using the intersect function. Now I only had what is actually visible in the picture.
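A rough sketch of those two culling steps in a single primitive wrangle; here the frustum test uses toNDC() instead of the camera-projected UVs I actually used, and the camera path and the small ray offset are placeholders:

// Primitive Wrangle: frustum cull, then occlusion cull.
string cam = "/obj/render_cam";

// 1) Frustum cull: in a Primitive Wrangle @P is the prim centroid.
vector ndc = toNDC(cam, @P);
if (ndc.x < 0 || ndc.x > 1 || ndc.y < 0 || ndc.y > 1 || ndc.z > 0)
{
    removeprim(0, @primnum, 1);
}
else
{
    // 2) Occlusion cull: shoot a ray from the prim towards the camera and
    //    drop the prim if any other geometry is hit first.
    matrix camxform = optransform(cam);
    vector campos   = {0, 0, 0} * camxform;
    vector dir      = campos - @P;
    vector hitP, hitUV;
    int hitprim = intersect(0, @P + normalize(dir) * 0.01, dir, hitP, hitUV);
    if (hitprim >= 0 && hitprim != @primnum)
        removeprim(0, @primnum, 1);
}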

For the actual distribution of the vegetation I removed everything that does not point upwards or does not cover a certain area. I assigned a density attribute to both streams, which then controlled the Scatter node. The generated points also received a random orient value. In another VEX wrangle I then checked whether these vectors met within a certain radius, and so on. On some rocks I deleted points based on a ramp working from top to bottom. Onto these points I again copied Alembic files, as with the terrain above.
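A sketch of the facing-and-area filter feeding the scatter density (the thresholds and the ramp are my own parameters):

// Primitive Wrangle: keep only faces that point upwards and cover a
// certain area, and turn that into a scatter density.
vector up     = {0, 1, 0};
vector n      = prim_normal(0, @primnum, 0.5, 0.5);
float  facing = dot(normalize(n), up);
float  area   = primintrinsic(0, "measuredarea", @primnum);
if (facing < chf("min_facing") || area < chf("min_area"))
{
    removeprim(0, @primnum, 1);
}
else
{
    vector rel = relbbox(0, @P);
    f@density = facing * chramp("height_falloff", rel.y);   // read by the Scatter SOP
}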

House-Asset

The hero element, if you want to call it that, was provided by Pixar, but without textures, with more geometry than I actually needed and, above all (painfully), without UVs. So housekeeping was the first order of business.

I had little interest in UVs (I kept pushing the problem down the task list), which meant that for a relatively long time I posted WIP images without textures, which certainly bored the audience out there. But back to the topic.
First of all, everything that wasn’t needed was deleted. Then came the UVs: on the one hand they should be grouped by material (or component), on the other hand they should share the same relative scale and, above all, be more or less straightened out.

My luck was that the Alembic had a reasonably clean naming structure and hierarchy, which I could make use of. My cleaned-up geometry consisted of 63,496 poly soups for which I would have had to create UVs.
No fun job.
So I put everything into a For-Each loop, where each piece was first put into a group. Then its orientation was captured with a Bound node (oriented bounding box). The Bound node writes the xform attribute, which was inverted and used as the transform-by attribute in the next step. That gave me the geometry at the origin and fairly well straightened, at least sufficiently for my purposes. Then I converted the poly soups to polygons, and then came the unwrap. Afterwards I transformed everything back to its starting position and deleted the bounding-box geo that was still contained. This took about 30 seconds on my machine. Packed into a Compiled Block, the loop took just 4.6 seconds to apply UVs to the whole asset. Nice!
Then the prims received a Name, a TopName and a LowLevel attribute by cutting specific parts out of the path attribute (i.e. the naming structure). In a subsequent loop, driven by “name” as the piece attribute, the matching meshes were then laid out in UV space. This can actually take a few minutes or more, so afterwards it was definitely a good idea to cache the geo.
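Cutting those attributes out of the path is a small wrangle; a sketch, assuming the Alembic path attribute is called “path” (which parts you take depends on the actual hierarchy, so the example path and indices are placeholders):

// Primitive Wrangle: derive naming attributes from the Alembic path,
// e.g. "/house/roof/shingle_012/mesh".
string parts[] = split(s@path, "/");
s@TopName  = parts[0];                  // top-level component
s@name     = parts[len(parts) - 2];     // the piece used for UV layout
s@LowLevel = parts[len(parts) - 1];     // the leaf shape node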

Now the materials were still ahead of me. First I built the “shop_materialpath” in another loop from groups whose names contained the two previously generated TopName and Name attributes, and then put them together in a wrangle. After that I had to create 94 shaders. Python was used for this, and a colleague helped me with the coding. The script created the shaders and linked the paths of the textures created in Substance Painter to the respective file nodes. The rest consisted of exporting the individual geometries for Substance Painter, various material overrides, and late clean-up of things I hadn’t noticed at first.

Way-UP-Asset

Up to this point I was already quite satisfied with the results. But Leif made a very valid remark regarding the visual storytelling: in the lower half of the picture there was actually not much story taking place. I had already run into the same problem when looking for a camera angle in landscape format. Over a beer with a colleague and friend, I had the idea that there had to be another way up besides the suspension bridges, a much more difficult and dangerous one. The spiral path was born.
First the hero rock was “reduced” and converted into a VDB and back. Using the SideFX Labs tool “sop_spiral” and a Ray node, I projected a spiral onto the rock.
After cleaning up the ray result using the ray-hit group and adding normals, I created the cut-outs for the short suspension-bridge elements with several Carve nodes.

In the next two loops the prims were re-divided and then connected with a Skin node to form the path, and in a third loop the individual boards were created. All of it quite straightforward, really. Afterwards the automatic UV tool was applied again. A separate line branched off from the top to create the railings, which were kept quite simple. The lower border was simply created from an offset curve with my sweep tool. Then I copied the vertical railing posts onto the curve, which had been resampled again beforehand. I had added an extra point at the top of the post source geometry, where I later instanced the lights. The ropes came from the same curve as above, which I re-divided once more and then let sag with a little VEX. Last but not least, I distributed lamps above the small suspension bridges to make the crossing feel secured, put up some signs and copied lamp geometry onto the railing posts.
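The sag itself can be as simple as a parabola per span; a sketch that assumes the curve is resampled to a fixed point count between posts (both channels are made up):

// Point Wrangle on the resampled railing curve: a simple parabolic sag
// between evenly spaced posts.
int   span = max(chi("points_per_span"), 2);
float sag  = chf("sag_amount");
float t    = float(@ptnum % span) / (span - 1);   // 0..1 within one span
@P.y -= 4 * t * (1 - t) * sag;                    // deepest in the middle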

Texturing in Painter and Shading

I had never used Substance Painter before, but I was very quickly impressed by the workflow. It all feels a bit like Photoshop. You may or may not like that, but the handling is simple and the possibilities are endless. You find your way around quickly, which is something I had feared would not be the case with Mari.
I started with the most important areas such as the roof shingles, the different woods and the tree trunk, and found the ability to save and reuse finished materials incredibly efficient. That way I had the entire house asset textured in about 4 hours.

As mentioned before, I had a Python script that linked the nearly 500 textures from Painter with the PxrTexture nodes in the 94 shaders. The shaders themselves were all PxrSurface shaders, except for the volumes.
The PxrVary node also proved its worth: controlled by an attribute, it can randomly vary individual parameters such as the HSV values of a PxrSurface; you only specify the weighting for the individual values. This is a quick and easy way to get variance into the look, for example of the wooden planks or the vegetation.
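On the geometry side that just means exporting one random value per piece; a sketch, assuming a Connectivity SOP has written a per-piece class attribute beforehand (the attribute and channel names are my own, and how exactly you wire it into the shader-side variation depends on your setup):

// Primitive Wrangle: one random value per plank or plant, exported as a
// primvar for the shader-side variation to read.
f@variation = rand(i@class + chi("seed"));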

Waterfall

I saved the waterfall for last, as I wanted to keep the option open of leaving it out if I didn’t finish other, more important things in time. The fallback plan was to cover the water wheel with plants instead and thus take it out of operation. But I had the time, and a little extra atmosphere couldn’t hurt.
There were two emitters, one coming out of a fictitious pipe and one a bit further down just above the wheel, simply to add some more water. I aligned the normals of the emitter spheres in the desired direction with some simple VEX and gave each an initial velocity. The pipe, the water wheel and the rock were added as VDB colliders. Everything else was actually fairly straight out of the box.
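That alignment and initial push is only a couple of lines; a sketch with made-up channels:

// Point Wrangle on the emitter spheres: align the normals and give the FLIP
// source an initial velocity.
vector dir = normalize(chv("dir"));   // e.g. out of the pipe, slightly downwards
@N  = dir;
v@v = dir * chf("speed");             // picked up as the initial particle velocity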

I enabled reseeding and droplet detection and also set volume limits in the solver.
In the final run I used a particle separation of 0.2.
First the particles were cached. The sim didn’t take long, which surprised me. I experimented with lower particle separations but concluded that the improvement wasn’t worth the extra cost.
So I kept the initial values and converted the result into geometry with the Particle Fluid Surface node. Except for the voxel scale, everything was left at its defaults again. I shrank the geo slightly with a Peak node set to -0.01. Then I cached again, and that was it. If only real plumbing were that easy!

Additional Props and Plant Covering of Bridges

Something essential was still missing in relation to the story, especially because I didn’t want to use a recognizable character (in contrast to my first ideas described above). The uncanny valley, my non-existent character skills and the time factor all played a big role here.
That’s when the idea of the balloon came to me. At first it was still tied to the hotel / toll station idea, where it was meant to hang off the rock, unused and dilapidated, but then I came up with Aunt Betty, who would never dream of taking on the arduous ascent. She uses other means of transport and prefers a more comfortable journey.
To be honest, I was a bit too lazy to model it myself, and the project evenings were limited anyway alongside work and family. So I decided to buy one and found a nice balloon for 15 US dollars. Quickly given material attributes in my prep scene, stripped down a bit, cleaned and prepared for export, it soon found its place in the scene. A few props were still missing, such as suitcases, the board from the balloon to the balcony, ropes and brooms (I dropped the chairs, tables and other terrace decoration I had originally considered, for the sake of a clear image).
I also wanted to work a little more on the age of the bridges and found a great Houdini Digital Asset on Gumroad called Ivy Taming, which makes it easy to grow ivy and the like.

Environment Fog / Clouds

The image needed a little more atmosphere, so: fog and clouds. For the clouds, a deformed sphere served as the base, onto which I copied several other random-looking spheres. I converted them to VDB and back, then repeated the process with smaller spheres as fill. I converted the result to a fog VDB and ran it through several Volume VOPs, similar to the rocks, applying different noises to the volume. That gave me a result that was sufficient and interesting enough for me. I had considered the Houdini cloud tool / rig at the beginning, but whatever I did, it refused to render with RenderMan. Since the bug hunting would have taken too long, I went the route described above. The fog was basically one large volume, built from a large, flat box with the same technique.
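The noise pass on such a fog VDB boils down to remapping the density with a couple of noise layers; a sketch, assuming the density volume is bound as “density” (frequencies and remap range are placeholders):

// Volume Wrangle on the fog/cloud VDB: carve the smooth blob with two
// noise layers.
float n1 = noise(@P * chf("freq1"));
float n2 = noise(@P * chf("freq2") + {123.4, 57.8, 12.3});
float carve = fit(0.7 * n1 + 0.3 * n2, chf("low"), chf("high"), 0, 1);
@density *= clamp(carve, 0, 1);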

Lighting

Initially I wanted to light a twilight or sunset scene. I tried a few HDRIs first, but in the end found moonlight to be the best solution. A PxrDistantLight, tinted slightly blue, supported the HDRI as the key light. In addition, a number of PxrRectLights and PxrDiskLights were used as fill or rim lights in the environment. Special places meant to lead the eye, such as the rock areas on the left, the bridges in the lower part of the image and the tree in the foreground, were also lit separately.
Additionally, I instanced about 200 small PxrSphereLights along the railing of the spiral path. The house got lights on the verandas and the spiral staircase. I also placed support lights where light shone out of the windows. The balloon’s flame got its own light as well, to extend its effect.

Rendering

It was actually my first time with RenderMan; I come from the V-Ray corner. I didn’t worry too much, though, and it really wasn’t a problem. The documentation was good enough, and I was able to clear the other small hurdles with the help of various forums or short internet searches.
A bigger bug that hit me was that the Cryptomatte assignment came out completely different after each new render, which meant I had to keep the old Cryptomatte in Nuke and find a workaround. Otherwise I didn’t turn it into rocket science: I rendered the foreground tree, the waterfall and the volumes separately, and beyond that kept the beauty with the usual AOVs. I used the checkpoint rendering feature, which proved very useful, especially for checking the intermediate state during batch rendering. The beauty rendered for about 11 hours at a little over 4K. That sounds quite long, but I have to say that RenderMan is very robust considering the scene, and I really like its look.

Finishing

As mentioned at the beginning, one of the Challenge’s requirements was no retouching. The post work was therefore limited to the creative assembly and adjustment of the AOVs. I used Nuke and basically restricted myself to Grade and ColorCorrect nodes; only at the end did I add subtle chromatic aberration and a little glow to the lights.

That’s it – a tough run.

The Big Help

A big thank you goes to my family and all my friends and colleagues who had to endure weeks of ideas, strategies, my mental and physical absence and many requests for feedback, especially my wife and daughter, Oliver Koch, Oliver Kortemeier, Oliver Markowski, Michael Meier, Klaus Scherwinski and Christian Schnellhammer.

And the End?

Unfortunately it wasn’t enough for the win, only for an “Honorable Mention” – click here for all placements: https://renderman.pixar.com/news/renderman-woodville-art-challenge-final-results. And the next challenge? That remains to be seen.

Concept by Vasylina Holod.
Model by Alex Shilt © Disney / Pixar –
RenderMan “Woodville” Art Challenge

Procedural Rock Formations for UE4 | Saber Jlassi | Houdini HIVE at SIGGRAPH 2017
https://vimeo.com/228238370

Hugo Beyer and his Houdini tools on Gumroad
https://gumroad.com/hugobeyer

The ivy tool (Ivy Taming) on Gumroad
https://gumroad.com/l/ivyTaming

And the tools and assets for this article, available for download:

http://bit.ly/woodville_assets