Volumetric Video for Virtual Reality Using Google Jump, Houdini, and Nuke
I love the visual aesthetic of point clouds and their ability to represent and recreate three-dimensional space. I used to bake final gather points out of Mental Ray and combine them to re-create a static visual cue of an entire animation sequence; at the time, I would also use the combined points for lighting calculations. With Houdini, working with point clouds is just so much fun. Point clouds are now everywhere: we use them for storing values, for geometric reconstruction, and for volumetric video. And I can still enjoy their pure visual beauty.
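Houdini exposes point clouds directly through its Python API. As a minimal sketch (the node path and the presence of a Cd attribute are hypothetical), here is how values stored on a cloud can be read back out for downstream work:

```python
# A minimal sketch, assuming a Houdini scene with a point cloud SOP
# at the hypothetical path /obj/scan/OUT_points that carries a "Cd"
# color attribute on its points.
import hou

geo = hou.node("/obj/scan/OUT_points").geometry()

# Read the stored per-point values back out of the cloud, e.g. to
# inspect them or drive lighting and reconstruction downstream.
for point in geo.points():
    pos = point.position()           # hou.Vector3 world-space position
    color = point.attribValue("Cd")  # tuple of stored RGB values
    print(pos, color)
```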
Point Clouds Generated from Depth with Depth to Points in Nuke
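As a minimal sketch of this step in Nuke's Python API (file path hypothetical, input ordering assumed), a depth channel can be turned into a viewable point cloud by chaining the standard NukeX DepthToPosition and PositionToPoints nodes:

```python
# A minimal sketch: depth channel -> position pass -> point cloud.
import nuke

# Hypothetical plate; any EXR carrying a depth channel works.
read = nuke.nodes.Read(file="plate_with_depth.####.exr")
cam = nuke.nodes.Camera2()

# DepthToPosition projects the depth channel through the camera
# into a world-space position pass (camera input index assumed).
d2p = nuke.nodes.DepthToPosition()
d2p.setInput(0, read)
d2p.setInput(1, cam)

# PositionToPoints displays that position pass as a point cloud in
# the 3D viewer, using the image's RGB for point color.
p2p = nuke.nodes.PositionToPoints()
p2p.setInput(0, d2p)
```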
Creating Depth Maps from Disparity in Nuke
We can now get depth maps from the Google Jump Assembler at resolutions up to 8K. These depth maps let us bring more advanced film techniques into virtual reality production, and they hint at the new production workflows we will start to build. Soon there will be tools for handling volumetric video, which we will use for 3D reconstruction to create more immersive experiences. Stereo VR is not light field: stereo panoramas cannot reproduce correct view-dependent lighting changes, but they do allow for greater depth and parallax than mono panoramas. Stereo VR also allows for better integration with computer-generated imagery, although rectifying live-action stereo with CGI is tricky and inherently problematic. In these examples, I am using the Foundry's NukeX, Cara VR, and Ocula on the front end, and Houdini from Side Effects on the back end.
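As a minimal sketch of one such front-end pass (file paths hypothetical, input wiring assumed, Ocula installed), a disparity-based depth map can be wired up in Nuke's Python API like this:

```python
# A minimal sketch: stereo views -> disparity -> depth.
import nuke

left = nuke.nodes.Read(file="jump_left.####.exr")
right = nuke.nodes.Read(file="jump_right.####.exr")

# Join the two eyes into one stream so downstream Ocula nodes see
# both views (view-to-input mapping follows the project's view setup).
views = nuke.nodes.JoinViews()
views.setInput(0, left)
views.setInput(1, right)

# Compute per-pixel disparity between the left and right eyes.
disparity = nuke.nodes.O_DisparityGenerator()
disparity.setInput(0, views)

# Convert disparity to a depth channel; in practice the node needs
# the rig's camera settings (focal length, interaxial) dialed in.
depth = nuke.nodes.O_DisparityToDepth()
depth.setInput(0, disparity)
```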