Filming Stan Lee | Stereo Red Weapons in 8K

Last week I had the amazing experience of being on location while we filmed Kevin Smith interviewing Stan Lee and his wife over several hours. As the Technical Director for VR Playhouse, I do not always need to be onsite during shoots, and I often have so much on my plate that I skip them, knowing that DJ, our Head of Production, will take care of everything. This shoot was different on many levels. First off, it was Stan Lee being interviewed in his home office, and second, we technically needed an extra set of eyes. For the shoot we used the very new Red Weapon 8K as stereo pairs on an offset nodal rig, with 8mm Arri Primes.
Red Dragon 8K stereo: Derrin Turner, our resident VR Director and Head of Production, frames the camera.
On set with Stan Lee
Stan Lee looks at Kevin Smith
We shot clean plates before the shoot and then locked off the camera for the interview. After the interview we did another batch of clean plates in which we removed the lighting from each 45-degree wedge and used a general fill. This way we hope to be able to blend between the interview footage and the rig-removed clean plates. With eight wedges at 8K resolution, stitching is going to be interesting. In our tests we were able to get decent stereo stitches out of Cara VR for Nuke, but rectifying the lighting between the final wedges will be challenging and will probably require some finessing. Then we move on to the stereo integration of the interview plate…
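As a rough illustration of that blend step, here is a minimal Nuke Python sketch; the file paths, wedge naming, and node layout are placeholders, and in practice each blend region would be hand-rotoed per wedge:

```python
import nuke

# Hypothetical per-wedge blend: rig-removed clean plate comped over the
# interview plate, limited to a hand-drawn roto mask. Paths are placeholders.
for i in range(8):
    interview = nuke.nodes.Read(file="plates/interview_wedge%d.%%04d.exr" % i)
    clean = nuke.nodes.Read(file="plates/clean_wedge%d.%%04d.exr" % i)
    mask = nuke.nodes.Roto()                     # blend region, drawn per wedge
    blend = nuke.nodes.Merge2(operation="over")  # A over B, restricted to the mask
    blend.setInput(0, interview)                 # B: interview footage
    blend.setInput(1, clean)                     # A: rig-removed clean plate
    blend.setInput(2, mask)                      # mask input
```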
Red Weapon stereo with Arri 8mm Primes
Stereo Red Stan Lee

Volumetric Video | Google Jump | Nuke | Houdini | Photogrammetry

I love the visual aesthetic of point clouds and their ability to represent and recreate three-dimensional space. I used to bake final gather points out of Mental Ray and combine them to re-create a static visual cue of an entire animation sequence, or use them as combined elements for lighting calculations. With Houdini, working with point clouds is just so much fun. Now point clouds are everywhere, and we use them for everything from storing values and geometric reconstruction to volumetric video and pure visual beauty.
Google Jump Point Clouds in Post Production
With the ability to get depth maps from the Google Jump Assembler at resolutions up to 8K, we can now bring some more advanced film techniques into the world of virtual reality production. We can also begin to see how we will build production workflows that, in the near future, will handle volumetric video for 3D reconstruction and create a more immersive experience. While this is not “lightfield”, in that it won’t reproduce view-dependent changes in lighting, it does allow for greater depth and parallax. It also allows for better integration with CGI in the world of stereo virtual reality, which is otherwise tricky and inherently problematic. In these examples I am using the Foundry’s NukeX, Cara VR, and Ocula on the front end, and Houdini from Side Effects on the back end.
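Stripped of the Nuke and Houdini plumbing, the reconstruction itself is just a spherical projection: each pixel of the lat-long depth map defines a direction, and the stitched depth pushes it out to a position. A minimal numpy sketch, assuming the depth image stores radial distance:

```python
import numpy as np

def latlong_depth_to_points(depth):
    """Project an equirectangular (lat-long) depth map into a 3D point cloud.

    depth: an HxW array of radial distances, as approximated by the Jump
    stitcher. Returns an (H*W, 3) array of XYZ positions, Y up.
    """
    h, w = depth.shape
    # Map pixel centers to longitude [-pi, pi] and latitude [pi/2, -pi/2].
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)
    # Unit direction per pixel, scaled by the stitched depth.
    x = np.cos(lat) * np.sin(lon) * depth
    y = np.sin(lat) * depth
    z = np.cos(lat) * np.cos(lon) * depth
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```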
Google Jump Point Clouds in Post Production
This image shows the scene from the viewing camera’s perspective, with the point clouds from Nuke and Houdini merged using deep compositing and viewed through Nuke’s viewer. The 3D tree is rendered through Mantra with a deep pass, using the point cloud generated out of Nuke as reference geometry. This is important for correct spatial alignment, as the depth in this case comes from the Google Jump Stitcher: it is not a spatially accurate representation of real-world depth, but rather a spherically projected approximation. At this time, the biggest caveat is the 8-bit banding. This could be solved with a plug-in that incorporates a bilateral filter to process the points while preserving edge detail.
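I have not built that plug-in, but the filtering step itself is standard. As a rough sketch of the idea, using OpenCV rather than a Nuke plug-in, with a placeholder file name and parameters that would need tuning:

```python
import cv2
import numpy as np

# Hypothetical smoothing pass: filter the quantized depth before projecting it
# to points, so the 8-bit banding is softened while depth edges are preserved.
depth_8bit = cv2.imread("jump_depth.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
depth = depth_8bit.astype(np.float32) / 255.0

# Arguments: neighborhood diameter, sigma in the value domain (how far apart
# depth values still blend), sigma in the spatial domain. Values are guesses.
smoothed = cv2.bilateralFilter(depth, 9, 0.05, 5)
```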
Google Jump Point Clouds in Post Production
Reality Capture (www.capturingreality.com) is my current go-to program for photogrammetry. I am just getting the hang of how it functions, but it is incredibly fast, especially if you compare it to Agisoft. These images are a test using my studio space, with only 377 photographs. I can see where I need more camera positions to capture the whole space and will shoot it again soon. The maximum number of images I can process with my version of Reality Capture is 2,500; I am not sure I need that many, but I would like to test with around 1,500.

Tracking and Stabilizing in a Spherical World for Virtual Reality.

Recently I have had the need to start building some tool sets and methodologies for dealing with stitched spherical footage coming off a variety of GoPro rigs, as well as for 3D development and integration with 360 stitched lat-long footage. In this case, all of the shot footage comes from a six- or ten-camera rig, with Autopano doing the stitching.
There is surprisingly little information out there on this subject, but there are some great videos done by Frank Reuter that got me on my way, as well as a few posts on the Foundry’s forum.
The idea is pretty straightforward. We want to convert our spherical stitch into a rectilinear cube map, or six-pack. In Nuke this is done with the Spherical Transform node: rotating around Y gives the four sides, and 90 and -90 degree rotations around X give the top and the bottom.
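Outside of Nuke, the same extraction is easy to verify with a few lines of numpy. This is a minimal sketch of one 90-degree face (the function and conventions are mine, not a Foundry API); the other five faces come from swapping in different rotation matrices, which is exactly what the Spherical Transform rotations are doing:

```python
import numpy as np

def latlong_to_face(latlong, face_res, rotation=np.eye(3)):
    """Resample a lat-long image into one 90-degree rectilinear cube face.

    latlong: HxWx3 source image. rotation: 3x3 matrix orienting the face
    (identity looks down +Z); the other five faces use different rotations.
    """
    h, w = latlong.shape[:2]
    # Pixel grid on the face plane at z=1, spanning -1..1 (a 90-degree FOV).
    u = (np.arange(face_res) + 0.5) / face_res * 2.0 - 1.0
    xs, ys = np.meshgrid(u, -u)
    dirs = np.stack([xs, ys, np.ones_like(xs)], axis=-1)
    dirs = dirs @ rotation.T
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Direction -> lat-long pixel coordinates.
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])   # -pi..pi
    lat = np.arcsin(dirs[..., 1])                  # -pi/2..pi/2
    px = ((lon + np.pi) / (2.0 * np.pi) * w).astype(int) % w
    py = ((np.pi / 2.0 - lat) / np.pi * h).clip(0, h - 1).astype(int)
    return latlong[py, px]
```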
Now that we have the six rectilinear images, we can pipe each one into a Project3D node, each of which is fed by its respective camera. Create one camera that covers a 90-degree horizontal and vertical field of view (for example, a 45mm focal length with 90mm horizontal and vertical apertures). Duplicate that camera so there is one for each of the six directions and rotate them into position. I then project through those cameras onto a sphere, re-creating my initial image perfectly. Next I choose the one sequence out of the six that is best suited for a 3D track. Once the footage has an acceptable track, I link the tracking data to an Axis node that drives my camera rig.
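For reference, here is a hedged Nuke Python sketch of that six-camera setup. The yaw and pitch values assume a particular orientation convention, so the signs may need flipping to match a given stitch:

```python
import nuke

# One camera per cube face. A 45mm focal length with 90mm horizontal and
# vertical apertures gives a 90-degree by 90-degree field of view.
face_rotations = {
    "front":  (0.0,    0.0, 0.0),
    "right":  (0.0,  -90.0, 0.0),
    "back":   (0.0,  180.0, 0.0),
    "left":   (0.0,   90.0, 0.0),
    "top":    (90.0,   0.0, 0.0),
    "bottom": (-90.0,  0.0, 0.0),
}

for face, rot in face_rotations.items():
    cam = nuke.nodes.Camera2(name="cam_" + face)
    cam["focal"].setValue(45)
    cam["haperture"].setValue(90)
    cam["vaperture"].setValue(90)
    cam["rotate"].setValue(list(rot))  # rotation in degrees (x, y, z)
```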
I then create my render camera, pipe in the same tracking data, and invert the rotation curves for a nodal setup.
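What that inversion buys us is a nodal, stabilized render: the render camera counter-rotates against the tracked motion while staying at the origin. A hypothetical Nuke Python sketch, assuming the track lives on an Axis node named TrackAxis and the render camera is named RenderCam (for an exact inverse the rotation order should also be reversed):

```python
import nuke

# Hypothetical node names: "TrackAxis" holds the 3D tracking data and
# "RenderCam" is the nodal render camera.
cam = nuke.toNode("RenderCam")

# Negate each rotation channel of the tracked axis on the render camera.
for channel, axis in enumerate(("x", "y", "z")):
    cam["rotate"].setExpression("-TrackAxis.rotate." + axis, channel)

# Keep the setup purely nodal: no translation on the render camera.
cam["translate"].setValue([0, 0, 0])
```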