Photogrammetry and Volumetric Capture

Volumetric Video | Point Clouds | Deep Compositing

I have been fascinated with volumetric reconstruction since the early days of photogrammetry. I have developed tools and workflows for dealing with large-scale data sets from lidar, photogrammetry, and real-time solutions like the Zed Camera or Kinect. I was one of the first people to work with Lytro’s light-field camera, and I worked with Google on depth-based volumetric reconstruction for virtual reality with the Google Jump camera. This work culminated in a SIGGRAPH publication and masterclass in 2017 titled “Video for Virtual Reality”.
Recently I have been able to do some testing with the Ouster real-time lidar unit; I have the first write-up here.

Point Clouds as Volumetric Capture

Using techniques that I developed, along with some amazing footage that Ian Forester captured in Africa, we partnered with Nuralize and Atom View, a real-time point-cloud player. Point clouds were initially processed in Nuke and then filtered in Houdini. Below I demonstrate how the point clouds can be used as collision surfaces for simulations.
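The idea of filtering a cloud and then colliding particles against it can be sketched in a few lines. This is an illustrative toy, not the production Nuke/Houdini setup: it thins the cloud with a simple voxel filter, then pushes each simulated particle out of a minimum-distance shell around its nearest cloud point. The function names and the brute-force nearest-neighbor search are my own simplifications.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Thin a dense scan by keeping one point per voxel (a simple filter pass)."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(idx)]

def resolve_collisions(particles, cloud, radius):
    """Push each particle outside a sphere of `radius` around its nearest cloud point."""
    out = particles.astype(float).copy()
    for i, p in enumerate(out):
        offsets = cloud - p
        dists = np.linalg.norm(offsets, axis=1)
        j = np.argmin(dists)
        if dists[j] < radius:
            # Move the particle onto the shell surface along the contact normal.
            normal = -offsets[j] / (dists[j] + 1e-9)
            out[i] = cloud[j] + normal * radius
    return out
```

A real solver would use a spatial hash or k-d tree instead of the O(n²) search, and would derive normals from the scan rather than from point offsets, but the structure is the same: filter, query nearest point, project out of the collision shell.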

Architectural Details

I use photogrammetry to capture architectural details for use as both high-resolution models and decimated versions suitable for modern game engines. I also extract orthographic textures that get run through a proprietary toolset created in SideFX Houdini.
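The decimation step can be illustrated with grid-based vertex clustering, one common way to reduce photogrammetry meshes to game-engine budgets. This is a minimal sketch under my own assumptions, not the proprietary Houdini toolset: vertices in the same grid cell collapse to their centroid, and faces that degenerate in the process are dropped.

```python
import numpy as np

def cluster_decimate(vertices, faces, cell_size):
    """Collapse all vertices inside each grid cell to the cell centroid,
    then drop faces left with fewer than three distinct vertices."""
    keys = np.floor(vertices / cell_size).astype(np.int64)
    uniq, remap = np.unique(keys, axis=0, return_inverse=True)
    # Average the positions that fall in each occupied cell.
    counts = np.bincount(remap, minlength=len(uniq)).astype(float)
    new_verts = np.zeros((len(uniq), 3))
    for axis in range(3):
        new_verts[:, axis] = np.bincount(remap, weights=vertices[:, axis]) / counts
    new_faces = remap[np.asarray(faces)]
    keep = (
        (new_faces[:, 0] != new_faces[:, 1])
        & (new_faces[:, 1] != new_faces[:, 2])
        & (new_faces[:, 0] != new_faces[:, 2])
    )
    return new_verts, new_faces[keep]
```

Production decimators (quadric error metrics, for instance) preserve silhouettes far better, but vertex clustering shows the core trade-off: the cell size directly sets the model's on-screen resolution budget.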

Other Relevant Projects: