Volumetric Video for Virtual Reality

Volumetric video for virtual reality using Google Jump, Houdini, and Nuke. I love the visual aesthetic of point clouds and their ability to represent and recreate three-dimensional space. I used to bake out final gather points from Mental Ray and combine them to create a static visual cue of an entire animation sequence; at the time, I also used the combined points for lighting calculations. With Houdini, working with point clouds is just plain fun. Point clouds are now everywhere: we use them for everything from storing values to geometric reconstruction and volumetric video, and I can still enjoy their pure visual beauty.

Point Clouds in Nuke from a Google Jump Camera

Depth Maps

We can now get depth maps from the Google Jump Assembler at resolutions up to 8K. Depth maps let us bring more advanced film techniques into virtual reality production, and they hint at the new production workflows to come. Soon there will be dedicated tools for handling volumetric video, which we will use for 3D reconstruction to create more immersive experiences. Stereo VR is not light field: stereo panoramas cannot represent correct view-dependent lighting changes, but they do allow for greater depth and parallax. Stereo VR also allows for better integration with computer-generated imagery, although rectifying shot stereo with CGI is tricky and has inherent problems. In these examples, I am using The Foundry's NukeX, Cara VR, and Ocula on the front end, and Houdini from SideFX on the back end.

Disparity Map Generated from Depth from Cara VR in Nuke
Point Clouds Generated from Depth with Depth to Points in Nuke
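Conceptually, turning a Jump-style equirectangular depth map into a point cloud means mapping each pixel to a direction on the sphere and scaling that direction by the stored depth. Below is a minimal numpy sketch of that spherical projection; the function name and the assumption that depth is a single-channel array in world units are mine, not part of the Cara VR toolset.

```python
import numpy as np

def equirect_depth_to_points(depth, max_depth=100.0):
    """Project an equirectangular depth map to a 3D point cloud.

    Each pixel maps to a direction on the unit sphere
    (longitude across the width, latitude down the height);
    the depth value scales that direction to a world-space point.
    """
    h, w = depth.shape
    # Pixel centers: longitude spans [-pi, pi), latitude spans [pi/2, -pi/2].
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Unit direction vector for every pixel (Y up, Z forward).
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    dirs = np.stack([x, y, z], axis=-1)

    # Scale directions by (clamped) depth to get an N x 3 point array.
    d = np.clip(depth, 0.0, max_depth)
    return (dirs * d[..., None]).reshape(-1, 3)
```

Because the points are built from a spherical projection, they inherit the same caveat as the Jump depth itself: distances are measured along view rays from the camera center, not in a rectified world space.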

The image above shows the perspective of the viewing camera, with point clouds from both Nuke and Houdini merged using deep compositing and viewed through Nuke's viewer. Mantra rendered the 3D tree with a deep pass, using the point cloud generated out of Nuke as reference geometry. This is important for correct spatial alignment of depth because the depth, in this case, comes from the Google Jump Stitcher: the depth representation is not spatially accurate to real-world depth but rather a spherically projected approximation. The most significant caveat is the 8-bit banding, which could be helped by a plug-in that applies a bilateral filter to the points while preserving edge detail.
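The bilateral filter mentioned above is the standard edge-preserving remedy for that kind of banding: a spatial Gaussian averages nearby depth samples while a range Gaussian down-weights neighbors whose depth differs strongly, so quantization steps get smoothed without blurring real depth discontinuities. Here is a brute-force numpy sketch (the function name and defaults are hypothetical, not an existing plug-in API):

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=2.0, sigma_r=0.05):
    """Edge-preserving smoothing for a banded depth map.

    A spatial Gaussian weights nearby pixels; a range Gaussian
    suppresses neighbors whose depth differs strongly, so 8-bit
    banding is smoothed while depth discontinuities survive.
    """
    out = np.zeros_like(depth)
    norm = np.zeros_like(depth)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            # np.roll wraps around; horizontal wrap actually matches
            # equirectangular longitude, though it is approximate at the poles.
            shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
            w_s = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
            w_r = np.exp(-((shifted - depth) ** 2) / (2.0 * sigma_r ** 2))
            weight = w_s * w_r
            out += weight * shifted
            norm += weight
    return out / norm
```

Tuning note: `sigma_r` should sit just above the banding step size (for 8-bit depth normalized to 0..1, a step is 1/255, so values around 0.01 to 0.05 smooth bands while leaving larger edges intact).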