Generalized Scene Reconstruction

GSR is a method for the automated virtualization of scenes, in which a scene model is created that represents both a generalized light field and a relightable matter field.

Ten Reasons Why it’s Hard to Make GSR Easy

GSR requires more than incremental changes to existing approaches; it is challenging for ten reasons. Quidient has reimagined a solution from top to bottom with these reasons in mind.


Three Key Technologies

Quidient meets the challenges of GSR using three key technologies:
plenoptic fields, 5D databases, and AI / Machine Learning.


Plenoptic Fields

Most 3D reconstruction approaches today locate features and surfaces in space, but don’t represent light. By contrast, Quidient reconstructs the flow of light throughout a scene (a generalized light field). Simultaneously, Quidient creates a relightable matter field, which includes information about how materials at each location emit, absorb, reflect, scatter, or transmit light. Therefore, objects can be placed in any new lighting environment, and their reflections and refractions will update correctly. The generalized light field can also be reused to light new scenes.
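The separation described above can be illustrated with a minimal, hypothetical sketch. The `Material` class and `radiance` function below are illustrative inventions, not Quidient's actual model: the point is only that when matter (how a point interacts with light) is stored separately from light (how much arrives at that point), relighting reduces to re-evaluating the same matter field under a new light field.

```python
from dataclasses import dataclass

@dataclass
class Material:
    """One matter-field sample (hypothetical, simplified model)."""
    emission: float      # light the material emits on its own
    absorption: float    # fraction of incident light absorbed
    reflectance: float   # fraction of incident light reflected to the viewer

def radiance(material: Material, incident: float) -> float:
    """Outgoing radiance at a point: emitted light plus the reflected
    portion of whatever the light field delivers to that point."""
    return material.emission + material.reflectance * incident

# Because matter and light are stored separately, relighting an object
# means reusing its matter field with a different incident light field.
chalk = Material(emission=0.0, absorption=0.5, reflectance=0.5)
indoors = radiance(chalk, incident=10.0)    # dim lighting
outdoors = radiance(chalk, incident=100.0)  # bright lighting
print(indoors, outdoors)  # 5.0 50.0
```

A real relightable matter field would replace the scalar reflectance with a full directional scattering description, but the division of labor is the same: the matter field stays fixed while the light field is swapped out.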



AI / Machine Learning

Traditional “black box” AI/ML using conventional images requires massive training data sets and has not yielded adequate accuracy for the kinds of transformational applications that Quidient will enable. Quidient engines make extensive use of AI, including a novel ML approach called Physics-based Machine Learning. Within the engine, AI makes scene reconstruction more efficient. Within engine-based apps, it creates a highly accurate source of data for higher-level AI tasks by feeding a 5D model, rather than a conventional image, into a network for training.
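A rough way to see the difference in training input, sketched below under assumed conventions (the shapes and fields are illustrative, not Quidient's actual format): a conventional image sample carries only per-pixel color, while a 5D scene sample pairs a 3D position and a 2D direction with the radiance observed along that ray, so the network sees geometry and light flow rather than color alone.

```python
import numpy as np

# Conventional training input: a pixel grid with color only (H x W x 3).
image = np.zeros((64, 64, 3))

# Hypothetical 5D training input: each sample is a 3D position plus a
# 2D direction, with the radiance carried along that ray.
positions  = np.zeros((1000, 3))  # x, y, z
directions = np.zeros((1000, 2))  # theta, phi
radiance   = np.zeros((1000, 1))  # observed radiance along the ray

samples = np.hstack([positions, directions, radiance])
print(image.reshape(-1, 3).shape)  # (4096, 3) -- color only
print(samples.shape)               # (1000, 6) -- geometry + light
```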


5D Databases

Quidient’s voxel-based 5D Database represents a major shift in underlying architecture. It provides a novel means for separately but simultaneously encoding a matter field (as 3D voxels) and a light field (as 2D solid-angle elements, or “Saels”). Both are stored in octrees. This spatially sorted, hierarchical approach makes scenes randomly accessible and searchable, with exceptionally fast subscene insertion and extraction. That speed meets critical requirements for representing scenes with virtually unlimited levels of detail, such as an interactive city map.
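A toy octree conveys why this layout supports random access and fast subscene operations. The sketch below is a generic textbook octree, not Quidient's implementation: each node covers a cubic region, may hold a matter-field voxel alongside light-field elements, and subdivides into eight octants, so reaching any point is a short root-to-leaf walk.

```python
class OctreeNode:
    """One cell of a spatially sorted scene octree (illustrative sketch)."""

    def __init__(self):
        self.voxel = None      # matter-field sample stored in this cell, if any
        self.saels = []        # light-field solid-angle elements crossing the cell
        self.children = {}     # octant index (0-7) -> child OctreeNode

    def _octant(self, p, center):
        # Encode which side of the cell center the point lies on, per axis.
        return (p[0] > center[0]) | ((p[1] > center[1]) << 1) | ((p[2] > center[2]) << 2)

    def insert(self, p, voxel, center=(0.5, 0.5, 0.5), half=0.5, depth=4):
        """Descend to the leaf cell containing point p and store the voxel there."""
        if depth == 0:
            self.voxel = voxel
            return
        i = self._octant(p, center)
        child = self.children.setdefault(i, OctreeNode())
        offset = half / 2
        new_center = tuple(c + (offset if p[k] > c else -offset)
                           for k, c in enumerate(center))
        child.insert(p, voxel, new_center, half / 2, depth - 1)

    def find(self, p, center=(0.5, 0.5, 0.5), half=0.5):
        """Random access: walk from the root to the cell containing p."""
        if not self.children:
            return self.voxel
        i = self._octant(p, center)
        if i not in self.children:
            return None
        offset = half / 2
        new_center = tuple(c + (offset if p[k] > c else -offset)
                           for k, c in enumerate(center))
        return self.children[i].find(p, new_center, half / 2)

root = OctreeNode()
root.insert((0.1, 0.2, 0.9), voxel="brick")
print(root.find((0.1, 0.2, 0.9)))  # brick
```

Because every subtree corresponds to a contiguous spatial region, inserting or extracting a subscene amounts to attaching or detaching a single subtree, which is why this kind of structure scales to very deep levels of detail.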

