

Real-Time Point Clouds with the Zed Mini

Lately, I have been experimenting with real-time point clouds using the Zed Mini from Stereolabs. I am a big believer in real-time point clouds as a viable solution for co-located virtual reality experiences. I am also interested in how this technology is developing, and in how we use artificial intelligence and machine learning to examine the world we live in. The Zed Mini functions much the same way as the Kinect Azure. The big difference is that the Kinect is more plug-and-play, while the Zed needs external libraries. The other big difference is image and point-cloud quality, where the Zed is far superior. For some people, the fact that some of the tools need to be built with CMake and Visual Studio will be a deal-breaker. Stereolabs provides resources in their GitHub repository.

The Zed Tools

After building the tools from source, I have access to several utilities in samples/bin. The most immediately useful is ZED_SVO_Recording.exe, which writes an SVO file to disk. SVO is a proprietary format from Stereolabs that records all of the data from the Zed camera into a compressed file, which lets me reconstruct the point clouds after shooting. These tools are run from the command line. For ZED_SVO_Recording, the only argument I need is the path where I want the SVO file saved. Assuming the shell is in the correct directory, I would just write:

ZED_SVO_Recording.exe C:/SVO_save_folder/mySVOFile.svo

The prompt should start scrolling frame numbers, and Ctrl-C will stop the recording. Once the SVO is saved, you can use some of the other tools that Stereolabs provides.

Zed in Touchdesigner

I am working with a friend, Shaoyu Su, to get TensorFlow and YOLO working with the Zed and Touchdesigner for object recognition and tracking.

You can download a free version of Touchdesigner here:

Other Houdini Tutorials:


Real-Time Lidar with Touchdesigner and Ouster

I have been wanting to experiment with real-time lidar using the newer, smaller Ouster units and Touchdesigner. The last time I did any experiments with real-time lidar was with a Velodyne 16. Using Touchdesigner, we can now easily work with lidar running in real time. Ever since I first got to play with that Velodyne 16, I have been an advocate of the point cloud as a means of displaying geometry. Points are light in memory and can carry their own RGB information. Because of this, it is far easier to display real-time, evolving data as points than it would be to try to mesh, UV, and texture it. Even as a post-process, meshing will always be subject to artifacting and resolution problems.

Lidar Compared to Depth-Based Techniques

Currently, real-time lidar solutions lag behind depth-based solutions like the Kinect Azure or the Zed camera in terms of resolution. We also do not get any RGB data; to get RGB we would need an additional device, such as a small virtual reality camera. However, we do get an intensity value, which acts like a black-and-white image.
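As a rough sketch of what that intensity channel gives you, here is a minimal example of normalizing a frame of raw returns into a 0-1 grayscale image, the same kind of remap you would do before displaying it. The array of raw returns and the value range are made up for illustration; a real frame would come from the sensor.

```python
import numpy as np

def normalize_intensity(intensity, lo_pct=1.0, hi_pct=99.0):
    """Remap raw lidar intensity returns to a 0-1 grayscale image.

    Percentile clipping keeps a few hot retroreflective returns from
    crushing the rest of the frame toward black.
    """
    lo, hi = np.percentile(intensity, [lo_pct, hi_pct])
    out = (intensity.astype(np.float32) - lo) / max(hi - lo, 1e-6)
    return np.clip(out, 0.0, 1.0)

# Hypothetical frame: 32 beams x 1024 azimuth steps of raw returns.
frame = np.random.default_rng(0).integers(0, 4096, size=(32, 1024))
gray = normalize_intensity(frame)
```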


One of my students was able to get an OS2, a $16,000 unit, on loan for use in his Master's thesis at the USC School of Cinematic Arts. We ran the Ouster through Touchdesigner, which has an Ouster TOP and CHOP that let us pull in all the data. The sensor line includes the following:

  • The OS0 lidar sensor: an ultra-wide field of view sensor with 128 lines of resolution
  • Two new 32 channel sensors: both an OS0 and OS2 version
  • New beam configuration options on 32 and 64 beam sensors
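To make it concrete, here is a simplified sketch of how a spinning lidar's range image becomes the point cloud Touchdesigner displays: each beam has a fixed altitude angle, each column a fixed azimuth, and range converts to Cartesian XYZ. The beam angles and ranges below are made up, and the real Ouster tooling ships a per-beam calibrated lookup table rather than this idealized model.

```python
import numpy as np

def range_image_to_xyz(ranges, altitudes_deg):
    """Convert a (beams x azimuth) range image to an (N, 3) point cloud.

    Assumes an idealized sensor: evenly spaced azimuth angles over a
    full 360-degree rotation and one fixed altitude angle per beam.
    """
    n_beams, n_az = ranges.shape
    az = np.linspace(0.0, 2.0 * np.pi, n_az, endpoint=False)   # (n_az,)
    alt = np.deg2rad(np.asarray(altitudes_deg))[:, None]       # (n_beams, 1)
    x = ranges * np.cos(alt) * np.cos(az)
    y = ranges * np.cos(alt) * np.sin(az)
    z = ranges * np.sin(alt)
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Hypothetical 32-beam frame, beams fanned from -22.5 to +22.5 degrees,
# with every return at exactly 10 m.
alts = np.linspace(-22.5, 22.5, 32)
ranges = np.full((32, 1024), 10.0)
pts = range_image_to_xyz(ranges, alts)
```

Because every range in this toy frame is 10 m, every output point sits 10 m from the sensor origin, which is an easy sanity check on the projection.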

Volumetric Video

You can see some of my research with Google and the Foundry here. I believe this is how we will eventually represent large, realistic datasets for both virtual and augmented reality. Self-driving cars and machine learning have brought us smaller, better, and faster lidar, and I am seeing the technology trickle down into interesting alternative use cases.

Intensity Pass

The images below show the full image range of a normalized “intensity” pass.


Volumetric Video for Virtual Reality


Volumetric video for virtual reality using Google Jump, Houdini, and Nuke. I love the visual aesthetic of point clouds and their ability to represent and recreate three-dimensional space. I used to bake out final gather points from Mental Ray and combine them to re-create a static visual cue of the entire animation sequence. At the time, I would also use them as combined elements for lighting calculations. With Houdini, working with point clouds is just so much fun. Currently, point clouds are everywhere. We use them for everything from storing values to geometric reconstruction to volumetric video. I can also just enjoy their pure visual beauty.

Google Jump Point Clouds in Post Production

Depth Maps

We can now get depth maps from the Google Jump Assembler at resolutions up to 8K. This lets us bring more advanced film techniques into the world of virtual reality production, and we can begin to see how new production workflows will take shape. In the near future there will be tools for handling volumetric video, which we will use for 3D reconstruction to create more immersive experiences. While not “light field,” in that it will not reproduce view-dependent lighting changes, it will allow for greater depth and parallax, and for better integration with CGI in stereo virtual reality, which is tricky and inherently problematic. In these examples, I am using the Foundry’s NukeX, Cara VR, and Ocula on the front end, and Houdini from Side Effects on the back end.
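As a sketch of the kind of spherical reconstruction this enables, a monoscopic equirectangular depth map can be back-projected into a point cloud by treating each pixel as a direction on the sphere scaled by its depth. The resolution and depth values below are hypothetical, and the actual Jump assembler's projection conventions may differ from this idealized lat/long mapping.

```python
import numpy as np

def equirect_depth_to_points(depth):
    """Back-project an equirectangular depth map (H x W, meters) into
    an (H*W, 3) point cloud centered on the camera.

    Each pixel maps to a longitude/latitude direction on the unit
    sphere, then gets scaled by its depth value.
    """
    h, w = depth.shape
    lon = (np.arange(w) + 0.5) / w * 2.0 * np.pi - np.pi    # -pi .. pi
    lat = np.pi / 2.0 - (np.arange(h) + 0.5) / h * np.pi    # +pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)
    dirs = np.stack([np.cos(lat) * np.sin(lon),             # x
                     np.sin(lat),                           # y (up)
                     np.cos(lat) * np.cos(lon)], axis=-1)   # z (forward)
    return (dirs * depth[..., None]).reshape(-1, 3)

# Hypothetical 512 x 1024 depth map with everything 3 m away,
# which should reconstruct a sphere of radius 3.
pts = equirect_depth_to_points(np.full((512, 1024), 3.0))
```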


Point Clouds

This image shows the scene from the viewing camera’s perspective, with the point clouds from Nuke and Houdini merged using deep compositing and viewed through Nuke’s viewer. The 3D tree is rendered in Mantra with a deep pass, using the point cloud generated out of Nuke as reference geometry. This is important for correct spatial alignment of depth, because the depth in this case comes from the Google Jump Stitcher: it is not a spatially accurate representation of real-world depth, but a spherically projected approximation. At this time, the biggest caveat is the 8-bit banding. This could be solved with a plug-in incorporating a bilateral filter that processes the points while preserving edge detail.
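The bilateral-filter idea can be sketched like this: weights fall off with both spatial distance and depth difference, so the quantized steps of 8-bit banding smooth out in flat regions while true depth edges survive. This is a toy, brute-force version on a small depth image with made-up parameters; a production plug-in would work on the point data and be far faster.

```python
import numpy as np

def bilateral_filter(depth, radius=2, sigma_s=2.0, sigma_r=0.05):
    """Brute-force bilateral filter for a small depth image.

    sigma_s controls the spatial falloff (pixels); sigma_r controls
    the range falloff (depth units), which is what preserves edges.
    """
    h, w = depth.shape
    out = np.empty_like(depth, dtype=np.float64)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(i - radius, 0), min(i + radius + 1, h)
            j0, j1 = max(j - radius, 0), min(j + radius + 1, w)
            win = depth[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            w_s = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s ** 2))
            w_r = np.exp(-((win - depth[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = w_s * w_r
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out

# A ramp quantized into 0.02 m bands (fake 8-bit banding), next to a
# hard 2 m depth edge at column 16 that the filter must not blur.
depth_row = np.round(np.linspace(1.0, 1.2, 32) * 50) / 50
img = np.tile(depth_row, (16, 1))
img[:, 16:] += 2.0
smooth = bilateral_filter(img)
```

Because the 2 m step is far outside sigma_r, the pixels on either side of the edge barely influence each other, while the 0.02 m bands fall well inside it and get averaged away.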



Reality Capture is my current go-to program for photogrammetry. I am just getting the hang of how it functions, but it is incredibly fast, especially compared to Agisoft. These photos were a test using my studio space, with only 377 photographs. I can see where I need more camera positions to capture the whole space and will shoot it again soon. The maximum number of images I can process with my version of Reality Capture is 2,500. I am not sure I need that many, but I would like to test with around 1,500.

Other Relevant Projects: