Real-Time Lidar with Touchdesigner and Ouster

Ouster Lidar Point Clouds

I have been wanting to experiment with real-time lidar using the newer, smaller Ouster units and Touchdesigner. The last time I experimented with real-time lidar was with a Velodyne 16. With Touchdesigner we can now work with lidar easily in real time. Ever since I first got to play with that Velodyne 16, I have been an advocate of the point cloud as a means of displaying geometry. Points are light in memory and can carry their own RGB information, so it is far easier to display real-time, evolving data as points than it would be to mesh, UV, and texture it. Even as a post-process, meshing will always be subject to artifacting and resolution problems.
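To make the "light in memory" point concrete, here is a rough back-of-the-envelope sketch in Python/NumPy. The point count and the XYZ-plus-RGB layout are illustrative only, not tied to any specific sensor:

```python
# A minimal sketch of why point clouds are light in memory:
# each point is just a position plus a color.
import numpy as np

NUM_POINTS = 1_000_000  # hypothetical frame size for illustration

# XYZ as 32-bit floats, RGB as 8-bit channels per point.
positions = np.zeros((NUM_POINTS, 3), dtype=np.float32)  # 12 bytes per point
colors = np.zeros((NUM_POINTS, 3), dtype=np.uint8)       # 3 bytes per point

bytes_per_point = positions.itemsize * 3 + colors.itemsize * 3
total_mb = NUM_POINTS * bytes_per_point / 1e6
print(f"{bytes_per_point} bytes per point, ~{total_mb:.0f} MB per million points")
```

At roughly 15 bytes per point, even a million points per frame stays well within what a GPU can push around in real time, with no meshing, UVs, or texture baking involved.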

Lidar Compared to Depth-Based Techniques

Currently, real-time lidar solutions lag behind depth-based solutions like the Kinect Azure or the Zed Camera in terms of resolution. We also do not get any RGB data; to capture RGB we would need an additional device, such as a small virtual reality camera. We do, however, get an intensity value for each return, which acts like a black-and-white image.
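As an illustration of treating intensity as a black-and-white image, here is a minimal NumPy sketch that normalizes per-return intensity into an 8-bit grayscale frame. The intensity array is synthetic, and the 128 x 1024 resolution is only an assumption for the example:

```python
# Normalize lidar intensity returns into an 8-bit grayscale image.
import numpy as np

LINES, COLUMNS = 128, 1024  # assumed beam/azimuth resolution for illustration
intensity = np.random.rand(LINES, COLUMNS).astype(np.float32)  # stand-in for sensor data

# Scale to 0-255 so the intensity channel can be displayed as a grayscale frame.
lo, hi = intensity.min(), intensity.max()
grayscale = ((intensity - lo) / max(hi - lo, 1e-6) * 255).astype(np.uint8)
print(grayscale.shape, grayscale.dtype)
```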

Testing

One of my students was able to borrow an OS2, a $16,000 unit, for his Master's thesis at the USC School of Cinematic Arts. We ran the Ouster lidar through Touchdesigner, which has an Ouster TOP and CHOP that let us pull in all of the data (see the sketch after the list below). Ouster's sensor lineup includes:

  • The OS0 lidar sensor: an ultra-wide field of view sensor with 128 lines of resolution
  • Two new 32 channel sensors: both an OS0 and OS2 version
  • New beam configuration options on 32 and 64 beam sensors
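Below is a hedged Touchdesigner sketch of pulling the Ouster TOP's output into NumPy from an Execute DAT. The operator name 'ouster1' and the assumption that XYZ positions sit in the first three channels are illustrative only; the actual Ouster TOP parameters and channel layout should be checked against the Touchdesigner documentation for your build.

```python
# Place in an Execute DAT with the Frame Start callback enabled.
import numpy as np

def onFrameStart(frame):
    ouster_top = op('ouster1')  # assumed name of the Ouster TOP
    if ouster_top is None:
        return

    # numpyArray() returns the TOP's pixels as a float32 array, shape (h, w, channels).
    img = ouster_top.numpyArray()

    # Assumption for this sketch: the first three channels encode XYZ per sample.
    points = img[..., :3].reshape(-1, 3)

    # Drop zero returns so only valid points are kept.
    valid = points[np.any(points != 0, axis=1)]
    debug(f'{len(valid)} valid points this frame')
    return
```

From there the same data can feed a Script SOP or an instancing setup to render the points, which is the workflow we used for the tests above.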

Volumetric Video

You can see some of my research with Google and the Foundry here. I believe this is how we will eventually represent large, realistic datasets for both virtual and augmented reality. Self-driving cars and machine learning have brought us smaller, better, and faster lidar, and I am seeing the technology trickle down into interesting alternative use cases.
