Volumetric Video for Virtual Reality
Volumetric video for virtual reality using Google Jump, Houdini, and Nuke. I love the visual aesthetic of point clouds and their ability to represent and recreate three-dimensional space. I used to bake final gather points out of Mental Ray and combine them to re-create a static visual cue of an entire animation sequence. At the time, I would also use them as combined elements for lighting calculations. With Houdini, working with point clouds is just so much fun. Point clouds are everywhere now: we use them for everything from storing values to geometric reconstruction and volumetric video. I can also just enjoy their pure visual beauty.
We can now get depth maps from the Google Jump Assembler in resolutions up to 8K. This allows us to bring more advanced film techniques into the world of virtual reality production, and we can begin to see how new production workflows will take shape. In the near future there will be tools for handling volumetric video, and we will use them for 3D reconstruction to create more immersive experiences. While this isn't "light-field" in the sense that it won't represent view-dependent light changes, it does allow for greater depth and parallax, and it allows for better integration with CGI in the world of stereo virtual reality. This is very tricky and inherently has problems. In these examples, I am using The Foundry's NukeX, Cara VR, and Ocula on the front end, and Houdini from Side Effects on the back end.
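As a rough illustration of how those depth maps become geometry, here is a minimal Python sketch that projects an equirectangular (lat-long) depth map into a point cloud. This is not the actual Cara VR or Houdini setup; the radial distance encoding, resolution, and output format are assumptions made purely for the example.

```python
# Minimal sketch: turn an equirectangular (lat-long) depth map into a point cloud
# by spherical projection. Assumes each pixel stores radial distance from the
# rig origin; the encoding and file format here are hypothetical.
import numpy as np

def latlong_depth_to_points(depth, min_dist=0.1):
    """depth: 2D float array (H x W) of radial distances per pixel."""
    h, w = depth.shape
    # Pixel centers mapped to spherical angles: longitude spans -pi..pi across
    # the width, latitude spans pi/2..-pi/2 down the height.
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi
    lon, lat = np.meshgrid(lon, lat)

    # Unit direction vector for each pixel, scaled by the stored distance.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    pts = np.stack([x, y, z], axis=-1) * depth[..., None]

    # Drop degenerate samples (zero or near-zero depth).
    return pts[depth > min_dist]

if __name__ == "__main__":
    # Stand-in data; in practice this would come from the Jump Assembler depth pass.
    depth = np.random.uniform(1.0, 10.0, (1024, 2048)).astype(np.float32)
    points = latlong_depth_to_points(depth)
    # Write a simple .xyz file that Houdini can read as points.
    np.savetxt("pointcloud.xyz", points.reshape(-1, 3), fmt="%.4f")
```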
This image shows the perspective of the viewing camera, with the point clouds from Nuke and Houdini merged using deep compositing and viewed through Nuke's viewer. The 3D tree is rendered through Mantra with a deep pass, using the point cloud generated out of Nuke as reference geometry. This is important for correct spatial alignment of depth, because the depth in this case comes from the Google Jump Stitcher: it is not a spatially accurate representation of real-world depth, but rather a spherically projected approximation. At this time, the biggest caveat is 8-bit banding. This could be solved with a plug-in incorporating a bilateral filter that processes the points while preserving edge detail.
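That bilateral-filter idea can be prototyped outside of a plug-in. Below is a hedged Python sketch using OpenCV's bilateral filter to smooth the quantization steps of an 8-bit depth pass while keeping depth edges intact; the filter settings and the linear near/far remapping are assumptions, not tuned production values.

```python
# Sketch of the de-banding idea: edge-preserving bilateral filtering of a
# quantized depth map before it is projected into points.
import cv2
import numpy as np

def deband_depth(depth_8bit, near=0.5, far=50.0):
    """depth_8bit: 2D uint8 array from an 8-bit depth pass."""
    # Promote to float so the filter works on continuous values.
    depth = depth_8bit.astype(np.float32) / 255.0

    # Bilateral filter: spatial smoothing that falls off across large value
    # jumps, so banding steps blur away while real depth edges stay sharp.
    smoothed = cv2.bilateralFilter(depth, d=9, sigmaColor=0.05, sigmaSpace=7)

    # Remap normalized values back to scene distances (assumed linear encoding).
    return near + smoothed * (far - near)

if __name__ == "__main__":
    # Stand-in banded data: a horizontal gradient quantized to 256 levels.
    banded = np.linspace(0, 255, 4096, dtype=np.uint8)[None, :].repeat(2048, axis=0)
    print(deband_depth(banded).shape)
```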
Reality Capture is my current go-to program for photogrammetry. I am just getting the hang of how it functions, but it is incredibly fast, especially if you compare it to Agisoft. These photos were a test using my studio space, with only 377 photographs. I can see where I need more camera positions to capture the whole space and will shoot it again soon. The maximum number of images I can use with the version of Reality Capture that I have is 2500. I am not sure that I need that many, but I would like to test with around 1500.