Introduction to the Houdini Interface


This introduction to the Houdini interface will demystify Houdini for the new user. Most importantly, we will examine what makes Houdini’s interface so powerful. Much of this power comes from the fact that the interface operates much like a file system, which lets us easily move data through the network. First, we will cover the basics of navigation and the scene view. After that, we will look at points, vertices, faces, and edges. Then we will look at how to set display preferences.

This is a follow-up to a lecture for a class I teach in Houdini, so the students can recap the in-class lecture. I apologize for any lack of polish; hopefully there is some good information in here.

You can download a free version of Houdini here

Other Houdini Tutorials:

Houdini Terrains In Unreal Engine

Terrain Generation with Houdini Heightfields

Since Houdini 16, we can do our Unreal terrain generation with Houdini heightfields. Heightfields are 2D volumes commonly used in modern game engines like Unreal Engine and Unity for terrain creation. I use the terrain tools in Houdini both to create a realistic terrain and to embed splat masks. As an example, I can use the curvature of the surface, as in the image above. There are multiple ways of masking heightfields in Houdini; I can even use geometry to drive shapes in the terrain.

Watch a tutorial on terrain generation in Houdini here

Anyone can download a free learning edition of Houdini on the SideFX website

Houdini Terrains in Unreal Engine

Terrain Shader

Using Houdini terrains in Unreal Engine, I can embed masks that I can access in a material. We can use the Landscape Layer Blend node for this, and we can use any mask created with the Erode node or with Mask by Feature. The masks come into Unreal with the correct naming; as an example, the Erode node gives us debris, water, bedrock, and so on.
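
As a quick sanity check before exporting, you can list the heightfield layers from Houdini’s Python shell, since each layer is a named 2D volume primitive. A minimal sketch, assuming a heightfield network at /obj/terrain/OUT (a hypothetical path; swap in your own):

import hou

# Heightfield layers are volume primitives carrying a "name" attribute.
geo = hou.node("/obj/terrain/OUT").geometry()
for prim in geo.prims():
    if isinstance(prim, hou.Volume):
        print(prim.attribValue("name"))  # e.g. height, mask, debris, water

Whatever prints here is what you should see as paintable layers on the Unreal side.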

Houdini Terrains in Unreal Engine

Adaptive Tessellation

For the shader, I created an adaptive tessellation shader. As a result, the model increases in resolution the closer the viewing camera gets to the surface.
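
In Unreal this lives in the material graph rather than in code, but the underlying falloff is simple. Here is a minimal sketch of the idea in Python; the near/far distances and the maximum multiplier are placeholder values, not numbers from my actual shader:

def tess_multiplier(cam_dist, near=200.0, far=5000.0, max_tess=8.0):
    # Fade linearly from max_tess at `near` down to 1.0 at `far`.
    t = max(0.0, min(1.0, (far - cam_dist) / (far - near)))
    return 1.0 + t * (max_tess - 1.0)

In the material, the same math is just the camera-to-surface distance fed through a clamped linear interpolation into the tessellation multiplier.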

Houdini Terrains in Unreal Engine

Procedural Foliage Spawner

Any mask that I create using Houdini terrains is automatically available in Unreal Engine, so I can create and prototype quickly. As an example, I am using masks to drive the procedural foliage. All of the terrain and foliage generation is handled procedurally through Houdini’s heightfields.

Other Houdini Tutorials:

Opening Multiple Instances of Adobe After Effects


I am a long-time After Effects user from back in the CoSA days, before it was bought out by Adobe. This is a little trick that makes the jaws of even some power users drop in surprise: yes, you can open and run multiple instances of After Effects on the same computer at the same time. The trick is almost embarrassingly easy to set up. It is crazy that After Effects does not do this out of the box and that you have to set a flag.

after effects properties window

Set-Up

All you need to do is go into the properties window by right-clicking on the icon and choosing Properties (Get Info on Mac OS X). After the command that opens After Effects, you can see that you can enter your own text. In this area, append a -m; this is the flag for multiple. You will now be able to open up several After Effects projects all at once.
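
On Windows, the shortcut’s Target field ends up looking something like this (the install path is only an example; point it at wherever your AfterFX.exe actually lives):

"C:\Program Files\Adobe\Adobe After Effects CC\Support Files\AfterFX.exe" -m

Launch the shortcut once for each project you want open.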

CoSA After Effects

One of the funny things that I encounter is that most people think that Adobe has always owned After Effects. I started using After Effects somewhere around 1995. At the time, Photoshop was at version 2.5 and did not yet have a layering system. You could save selection sets, but once an image was deselected, that was that. I had been doing all these animations for CD-ROM games in Photoshop, which was laborious. Around that time someone showed me CoSA After Effects, which was like “Photoshop with a timeline.” It took me a few months to get up to speed.

CoSA After Effects

Other Houdini Tutorials:

Filming Stan Lee with Stereo Red Weapons in 8k


Last week I had the amazing experience of filming Stan Lee with stereo Red Weapons in 8K for a virtual reality interview with Kevin Smith. This was a stereo shoot and took place in Stan’s small office in his home above Sunset Plaza. The shoot was for Legion Entertainment, who wanted to document important people in the comic book industry so that we can view them in virtual reality in the future, which is why we went with the new Red Weapons in 8K. Stan was there with his wife and daughter and was very engaging and friendly with everyone.

red dragon 8k stereo

Behind the Scenes Filming Stan Lee

Derrin Turner, our resident VR Director and Head of Production, frames the camera.

On set with Stan Lee
on set with Stan Lee and his wife

Technical Details

As the Technical Director for VR Playhouse, I designed the camera system that we used. For this shoot, we used the new Red Weapon 8K. We shot the cameras as stereo pairs on an offset nodal rig. We made sure to keep all the action in one shot, which is why we had to use such a wide-angle lens: this way we could shoot a clean plate, then the whole interview, from one position. After the interview was finished, we shot the remaining angles of the office.

Stitching

This way we were able to blend between the interview footage and the rig-removed clean plates. In the end, we shot eight wedges at 8K resolution. We were able to get decent stereo stitches out of Cara VR for Nuke. It will be challenging to rectify the lighting between the final wedges, and it will probably require some finessing.

Red Weapon Stereo Arri 8mm Primes
Stereo Red Stan Lee

Other Houdini Tutorials:

Volumetric Video for Virtual Reality


Volumetric video for virtual reality using Google Jump, Houdini, and Nuke. I love the visual aesthetic of point clouds and their ability to represent and recreate three-dimensional space. I used to bake out final gather points from Mental Ray and combine them to re-create a static visual cue of the entire animation sequence. At the time, I would also use them as combined elements for lighting calculations. With Houdini, working with point clouds is just so much fun. Currently, point clouds are everywhere; we use them for everything from storing values to geometric reconstruction and volumetric video. I can also just enjoy their pure visual beauty.

Google Jump Point Clouds in Post Production

Depth Maps

We can now get depth maps from the Google Jump Assembler in resolutions up to 8K. This allows us to implement some more advanced film techniques in the world of virtual reality production, and we can begin to see how new production workflows will take shape. In the near future there will be tools for handling volumetric video, which we will use for 3D reconstruction in order to create more immersive experiences. While this is not “light field” (it will not represent correct view-dependent lighting changes), it will allow for greater depth and parallax, and it allows for better integration with CGI in the world of stereo virtual reality, which is very tricky and inherently has problems. In these examples, I am using The Foundry’s NukeX, Cara VR, and Ocula on the front end, and Houdini from Side Effects on the back end.
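
To make the spherically projected depth idea concrete, here is a minimal numpy sketch that turns an equirectangular (lat-long) depth map into a point cloud by pushing each pixel’s view direction out to its depth. This illustrates the projection math only; it is not the Jump Assembler’s actual code, and the depth values are assumed to be radial distances:

import numpy as np

def latlong_depth_to_points(depth):
    """depth: (H, W) array of radial distances, one per lat-long pixel."""
    h, w = depth.shape
    # Pixel centers -> longitude [-pi, pi) and latitude [-pi/2, pi/2).
    lon = (np.arange(w) + 0.5) / w * 2 * np.pi - np.pi
    lat = (np.arange(h) + 0.5) / h * np.pi - np.pi / 2
    lon, lat = np.meshgrid(lon, lat)
    # Unit view direction per pixel, scaled by that pixel's depth.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1) * depth[..., None]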

Google Jump Point Clouds in Post Production

Point Clouds

This image shows the scene from the perspective of the viewing camera, with the point clouds from Nuke and Houdini merged using deep compositing and viewed through Nuke’s viewer. The 3D tree is rendered through Mantra with a deep pass, using the point cloud generated out of Nuke as reference geometry. This is important for correct spatial alignment of depth, as the depth in this case comes from the Google Jump stitcher: it is not a spatially accurate representation of real-world depth, but rather a spherically projected approximation. At this time, the biggest caveat is the 8-bit banding. This could be solved with a plug-in that incorporates a bilateral filter, processing the points while preserving edge detail.
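
For reference, the bilateral idea itself is simple, even if a real plug-in would be far better optimized. A minimal, brute-force numpy sketch with made-up sigma values, smoothing banding in a normalized depth map while keeping edges intact:

import numpy as np

def bilateral_depth(depth, radius=3, sigma_space=2.0, sigma_depth=0.02):
    """Slow reference bilateral filter over a float depth map in [0, 1]."""
    h, w = depth.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_space**2))
    padded = np.pad(depth, radius, mode="edge")
    out = np.empty_like(depth)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Down-weight neighbors whose depth differs: this preserves edges.
            weights = spatial * np.exp(-(patch - depth[y, x])**2 / (2 * sigma_depth**2))
            out[y, x] = (patch * weights).sum() / weights.sum()
    return out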

Google Jump Point Clouds in Post Production

Photogrammetry

Reality Capture is my current go-to program for photogrammetry. I am just getting the hang of how it functions, but it is incredibly fast, especially if you compare it to Agisoft. These photos were a test using my studio space, with only 377 photographs. I can see where I need more camera positions to capture the whole space and will shoot it again soon. The maximum number of images that I can use with my version of Reality Capture is 2,500. I am not sure that I need that many, but I would like to test with around 1,500.

Other Relevant Projects:

Experiments with Photogrammetry

These are some of my initial tests with photogrammetry, testing Agisoft PhotoScan. Through my testing, we eventually settled on Reality Capture, a subscription-based service with an indie option.

These are some landscape tests that I shot out at Vasquez Rocks.

Tracking and Stabilizing in a Spherical World for Virtual Reality


Recently, I have had the need to track and stabilize footage in a spherical world for virtual reality. As a result, I started building some tools and methodologies for dealing with stitched spherical footage coming from a variety of GoPro rigs. In addition, there is a need for 3D development and integration with 360 stitched spherical lat-long footage. In this case, all shot footage usually comes from a six-, ten-, or sixteen-camera rig, with Autopano doing the initial stitching.

There is surprisingly little information out there on this subject, but there are some great videos by Frank Reuter that got me on my way, as well as a few posts on The Foundry’s forum.

The idea is pretty straightforward. We want to convert our spherical stitch into a rectilinear cube map, or six-pack. In Nuke, this is done with the SphericalTransform node. By rotating along Y we can get the four sides, and with 90 and -90 degree rotations along X we can get the top and the bottom.
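
If you are curious what the SphericalTransform node is doing under the hood, here is a minimal numpy sketch that samples one 90-degree cube face out of a lat-long image. It is nearest-neighbor only and handles just a yaw rotation; Nuke’s node does full rotations and proper filtering:

import numpy as np

def latlong_to_cube_face(img, face_size=1024, yaw=0.0):
    """Extract one rectilinear 90-degree-FOV face from a lat-long image."""
    h, w = img.shape[:2]
    # An image plane at z=1 spanning [-1, 1] in x and y gives a 90-degree FOV.
    uv = (np.arange(face_size) + 0.5) / face_size * 2.0 - 1.0
    x, y = np.meshgrid(uv, -uv)
    z = np.ones_like(x)
    # Rotate the view directions about Y by `yaw` (radians).
    xr = x * np.cos(yaw) + z * np.sin(yaw)
    zr = -x * np.sin(yaw) + z * np.cos(yaw)
    # Direction -> longitude/latitude -> lat-long pixel coordinates.
    lon = np.arctan2(xr, zr)
    lat = np.arctan2(y, np.hypot(xr, zr))
    px = ((lon + np.pi) / (2 * np.pi) * w).astype(int) % w
    py = ((np.pi / 2 - lat) / np.pi * h).clip(0, h - 1).astype(int)
    return img[py, px]

Calling this with yaw values of 0, 90, 180, and 270 degrees (converted to radians) gives the four sides; the top and bottom need the extra X rotations described above.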

Now that we have the six rectilinear images, we can pipe each one into a Project3D node, which in turn gets piped into its respective camera. Create one camera with a 90-degree horizontal and vertical field of view (45 degrees to either side of the axis), duplicate it so that you have six cameras, one for each axis, and rotate them into position. I then project those cameras onto a sphere, re-creating my initial image perfectly. I then choose the one sequence out of the six that is best for performing a 3D track. Once the footage has an acceptable track, I link the tracking data to an Axis node that drives my camera rig.

I then create my render camera, pipe in the same tracking data, and invert the rotational curves for a nodal setup.

Other Houdini Tutorials:

Vertex Colors in Maya From a Houdini Alembic File


Since I have been using Redshift as my need-for-speed, go-to render engine when I just won’t have time for Mantra, I have been beefing up my Houdini-to-Maya Alembic pipeline. While straightforward, there are a few caveats. In this particular case, I wanted to get my point/vertex colors from Houdini into Maya, which you need if you want to render geometry with the baked-in colors from Houdini. A few notes: you will need a Cd attribute in Houdini of type vertex. In Houdini, Cd will usually be a point attribute, so a simple Attribute Promote will do the trick here. Just promote your Cd attribute from point to vertex and you should be good to go.
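
If you would rather wire this up in code, here is a minimal Houdini Python sketch that drops an Attribute Promote SOP after your output node and promotes Cd from point to vertex. The /obj/myGeo path and OUT node name are hypothetical, and the parameter tokens are worth double-checking in your Houdini build:

import hou

geo = hou.node("/obj/myGeo")            # hypothetical geometry container
out = geo.node("OUT")                   # the SOP you export from
promote = geo.createNode("attribpromote")
promote.setFirstInput(out)
promote.parm("inname").set("Cd")        # attribute to promote
promote.parm("inclass").set("point")    # original class
promote.parm("outclass").set("vertex")  # new class
promote.setDisplayFlag(True)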

Importing into Maya

Unfortunately, the GUI Maya Alembic importer does not do the trick, so we need to import via the Script Editor with some very simple MEL. All you need to do is make sure you have the AbcImport plug-in loaded and type the following MEL command:

AbcImport -mode import -rcs "myPathToMyAlembic\myAlembicFile.abc";

Obviously, you will substitute your own path and file name. You should now see your Alembic file in Maya with the vertex colors in the viewport.
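
If you do this often, the same call is easy to script from Python; a small sketch, with a placeholder path:

import maya.cmds as cmds
import maya.mel as mel

# Make sure the Alembic import plug-in is loaded, then run the importer.
cmds.loadPlugin("AbcImport", quiet=True)
mel.eval('AbcImport -mode import -rcs "C:/path/to/myAlembicFile.abc"')
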
Now that we have our geometry imported correctly and we can see the vertex colors, we need to set up our Redshift shader. Though we are using Redshift in this example, this method also works for Mental Ray and V-Ray. In the Modeling context, under Mesh Display, you can find the Color Set Editor, which you can open to find the name of your color set. In our case, coming from Houdini, it is just Cd.

Next, just create a redshiftVertexColor node, and under General, in the box labeled Vertex Set, add the name of our color set; in this case, Cd. Now you just connect the Out Color to whatever slot you want, and hit the render button.

Other Houdini Tutorials: