Exploring Volumetric Video Capture — Early Steps

Screenshot of project screen with Randy Rode
October 29, 2020

Volumetric video capture is one of the latest areas of investigation, led by Blended Reality team members Farid Abdul and Randy Rode. What is volumetric video? Imagine walking into a studio and creating a 360-degree, volumetric video of yourself. That video could be livestreamed to an audience watching you with VR headsets, or recorded for use in an immersive media project. Major movie projects have had this capability for a number of years, with multi-million-dollar studios hosting racks of high-powered computers. With recent technology advances, several companies now offer cameras and software that bring this capability within the reach of an individual user running a standard VR-ready computer.

Farid Abdul testing a two-camera capture setup

A significant component of the Blended Reality project is monitoring new technologies and identifying tools that can be used in our research and our support of Yale project teams. Two core principles of our project work are 1) use case before technology and 2) identify repeatable workflows. Essentially, we don't want to end up with closets full of outdated 'cool' technology that nobody ever used, and we want to avoid custom, one-off projects that fail to inform and support the work of other project teams. Those principles have been front and center as we explore the current state of volumetric capture technologies and identify how to match those to the interests of our project researchers.

Azure Kinect cameras in sync!

Team accomplishments to date:

  • Assembled and tested the required hardware: four Azure Kinect cameras, HP ZBook laptops, and assorted USB 3 cables and tripods. We are testing both a four-camera setup driven by two synced laptops and a single-camera/single-computer setup (see the sync sketch after this list).
  • Reviewing and evaluating software platforms: EF-Eve Volcap and Creator software for the four-camera setup, and Depthkit for the single-camera setup.
  • Defining use cases: The team presented its work at a Blended Reality community meeting and recently hosted a brainstorming session with faculty and students to start defining use cases and key requirements for whatever technology we settle on.
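
To give a flavor of the multi-camera work, here is a minimal sketch of starting two Azure Kinect devices in a wired master/subordinate configuration using the Azure Kinect Sensor SDK's C API. This is illustrative rather than our finished capture pipeline: the device indices, resolution, and frame rate are assumptions, and the 160-microsecond subordinate delay is the offset Microsoft's documentation suggests for keeping the two depth cameras' time-of-flight lasers from interfering.

```c
// Minimal sketch: two Azure Kinects in wired master/subordinate sync.
// Assumes the devices are daisy-chained with a 3.5 mm sync cable and
// that device index 0 is the master, index 1 the subordinate.
#include <k4a/k4a.h>
#include <stdio.h>

int main(void)
{
    k4a_device_t master = NULL, subordinate = NULL;

    if (k4a_device_open(0, &master) != K4A_RESULT_SUCCEEDED ||
        k4a_device_open(1, &subordinate) != K4A_RESULT_SUCCEEDED) {
        fprintf(stderr, "failed to open both devices\n");
        return 1;
    }

    k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
    config.color_format     = K4A_IMAGE_FORMAT_COLOR_BGRA32;
    config.color_resolution = K4A_COLOR_RESOLUTION_720P;
    config.depth_mode       = K4A_DEPTH_MODE_NFOV_UNBINNED;
    config.camera_fps       = K4A_FRAMES_PER_SECOND_30;

    // Start the subordinate first so it is already waiting on the
    // sync-in pulse when the master begins emitting it.
    config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;
    // Offset the subordinate's capture slightly so the two depth
    // cameras' lasers do not interfere (assumed value from MS docs).
    config.subordinate_delay_off_master_usec = 160;
    if (k4a_device_start_cameras(subordinate, &config) != K4A_RESULT_SUCCEEDED) {
        fprintf(stderr, "failed to start subordinate cameras\n");
        return 1;
    }

    // The master drives the sync pulse; its delay must be zero.
    config.wired_sync_mode = K4A_WIRED_SYNC_MODE_MASTER;
    config.subordinate_delay_off_master_usec = 0;
    if (k4a_device_start_cameras(master, &config) != K4A_RESULT_SUCCEEDED) {
        fprintf(stderr, "failed to start master cameras\n");
        return 1;
    }

    // ... capture loop with k4a_device_get_capture() would go here ...

    k4a_device_stop_cameras(master);
    k4a_device_stop_cameras(subordinate);
    k4a_device_close(master);
    k4a_device_close(subordinate);
    return 0;
}
```

The ordering matters: subordinates are started before the master so no sync pulse is missed, and Microsoft's documentation notes that the master's color camera must be running for the sync-out signal to be driven.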

First pass at use case development

Next steps:

  • Continue technical work to understand and document the workflow for bringing EF-Eve captures into a Unity 3D project
  • Assemble a working group to explore workflows for single-camera captures with Depthkit
  • Evaluate the notes from the recent brainstorming session and pull them into a draft set of use cases and requirements for further discussion

The team is open to collaboration with other Yale faculty/student researchers, and is always interested in exchanging experiences with other university researchers. Please contact randall.rode@yale.edu to learn more about our work.
