How It Works
This note describes how 3D models are processed and streamed by the Umbra Composit platform.
The Umbra Composit platform supports 3D models in multiple input formats. When models are uploaded, they are converted to Umbra's internal format; for example, your FBX file is not stored in the cloud as-is.
In addition to the model's 3D geometry, its diffuse color textures and normal maps are also uploaded.
The models can be uploaded using Umbra's command line interface (CLI) tools.
Once the input model has been uploaded, its volume is split into a hierarchy of axis-aligned blocks forming an octree structure. Umbra generates a unique mesh for each node of the octree, with the most detailed geometry in the leaf nodes and the least detailed in the root node. When an application wants to stream geometry from the Umbra Composit platform, it first receives this block hierarchy, and then decides which of its nodes to download and render based on their positions relative to the camera, their depth in the tree, and the desired quality. This is done automatically by the runtime library.
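The node-selection logic can be pictured with a minimal sketch (this is not the Umbra API): recurse toward the leaves only where a block's apparent size from the camera still exceeds the desired quality. The Node layout and the size-over-distance error metric are illustrative assumptions.

```python
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    center: tuple                 # block center in world units
    size: float                   # edge length of the axis-aligned block
    children: list = field(default_factory=list)

def select_nodes(node, camera, quality, out):
    """Append nodes coarse enough for the given camera; recurse otherwise."""
    dist = max(1e-6, math.dist(node.center, camera))
    if not node.children or node.size / dist <= quality:
        out.append(node)          # detailed enough from this distance
    else:
        for child in node.children:
            select_nodes(child, camera, quality, out)
```

From far away the root mesh alone is selected; moving the camera closer makes the recursion descend into smaller, more detailed blocks.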
The contents of the blocks are optimized in parallel, which makes it possible to optimize very large models.
For each block, the optimization creates a new mesh that re-creates input details larger than a fixed fraction of the block's size, while merging smaller features. This detail size can be controlled separately for texture and geometry. Since each block attempts to maintain a detail size proportional to the block's size and the blocks form an octree with larger blocks near the root, the blocks are also a hierarchical level of detail (LOD) representation. The detail size of the most detailed blocks is controlled by the "resolution" parameter that must be set when importing a model.
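Because each block keeps detail proportional to its size and block sizes double at each octree level toward the root, the detail size at any level follows from the leaf resolution alone. A hedged sketch of that relationship (the doubling rule is inferred from the octree structure described above, not a documented Umbra formula):

```python
def detail_size(resolution, depth, leaf_depth):
    """Detail size kept at a given octree depth, assuming it doubles at
    each level toward the root just as the block size does.
    `resolution` is the detail size of the most detailed (leaf) blocks."""
    return resolution * (2 ** (leaf_depth - depth))
```

With a leaf resolution of 1 unit and leaves at depth 3, the root block (depth 0) would merge any feature smaller than 8 units.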
When optimizing, the blocks are first split into small volumes (voxels) that are classified according to the input geometry that intersects them. These voxels are then used to construct a mesh that appears similar to the input, preserving larger features while merging smaller ones. The mesh created from the voxels has a roughly fixed triangle density, wasting triangles in areas that do not contain any actual detail. These excess triangles are removed using a mesh decimation step.
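The voxel classification step can be illustrated with a simplified sketch that marks a voxel occupied when any surface sample falls inside it. The real pipeline classifies voxels by intersecting them with input geometry; the point samples and grid layout here are simplifying assumptions.

```python
def voxelize(samples, block_min, block_size, grid_n):
    """Return the set of (i, j, k) voxel indices touched by surface samples
    inside a block subdivided into grid_n voxels per axis."""
    cell = block_size / grid_n
    occupied = set()
    for p in samples:
        idx = tuple(int((p[i] - block_min[i]) / cell) for i in range(3))
        if all(0 <= c < grid_n for c in idx):
            occupied.add(idx)
    return occupied
```

The occupied voxels would then drive mesh extraction, after which decimation removes the excess triangles in featureless areas.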
Since re-creating the mesh loses all texturing data, new textures are created for the decimated mesh by sampling the input mesh and its textures. This includes raytracing for diffuse and normal maps, with normal maps accounting for both the differences between the input and output meshes and any input normal maps.
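Conceptually, each output texel looks up the corresponding point on the input surface and copies its color. The nearest-sample lookup below is a crude stand-in for the actual ray cast; the function name and lookup strategy are illustrative assumptions.

```python
import math

def bake_texel(texel_pos, input_samples):
    """Copy the color of the closest input surface sample to this texel.
    input_samples is a list of (position, color) pairs; a real baker would
    instead trace a ray from the output surface to the input surface."""
    pos, color = min(input_samples, key=lambda s: math.dist(texel_pos, s[0]))
    return color
```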
Once the mesh has been textured, it is packed into a compressed format for efficient streaming and stored in the Umbra cloud, from where the runtime can access it.
You can also inspect the full model online and use the wireframe view to see how the mesh updates when the camera position changes.
The Umbra Composit runtime is used by the application to fetch the optimized 3D models. The runtime consists of data streaming, asset loading and rendering control systems.
The application controls rendering by providing the runtime with parameters such as camera position, heading, and the desired model quality. The runtime then prioritizes streaming based on the given parameters, and returns a list of asset identifiers that need to be rendered. The identifiers are application-provided, and therefore the runtime has no access to the renderable asset itself, allowing integration with almost any rendering engine.
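Because the identifiers are opaque to the runtime, engine integration reduces to keeping a map from those identifiers to engine-side handles. A minimal sketch under assumed names (this is not the Umbra API):

```python
class AppAssetRegistry:
    """Application-side bookkeeping: the runtime only ever sees the integer
    identifiers; the engine handles stay private to the application."""
    def __init__(self):
        self._assets = {}
        self._next_id = 0

    def register(self, engine_handle):
        """Store an engine asset and mint the id handed back to the runtime."""
        asset_id = self._next_id
        self._next_id += 1
        self._assets[asset_id] = engine_handle
        return asset_id

    def resolve(self, visible_ids):
        """Turn the runtime's render list back into engine handles."""
        return [self._assets[i] for i in visible_ids]
```

This indirection is what allows integration with almost any rendering engine: the runtime never touches the renderable asset itself.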
Data is streamed from the Umbra Composit platform by the main Runtime class. The runtime controls streaming to prioritize the assets needed to maintain the quality settings given by the user.
Once data has been streamed to system memory, it's handed over to the application through an asynchronous job system. Jobs decompress the downloaded assets and let the application convert them to its own internal representation. The application provides the runtime with an identifier for the final renderable asset on job completion. Since asset decompression and loading can be somewhat slow, the job system is designed for easy application-controlled multithreading.
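Under this design, the application can drain the job queue from as many threads as it likes. A sketch with an assumed job shape of (asset id, compressed payload); zlib here is a stand-in for Umbra's actual compression format, not a statement about it.

```python
import queue
import threading
import zlib

def worker(jobs, results):
    """Decompress downloaded assets off the main thread. A real integration
    would convert each result to the engine's own representation and report
    the final asset identifier back to the runtime."""
    while True:
        item = jobs.get()
        if item is None:          # sentinel: shut this worker down
            break
        asset_id, payload = item
        results.put((asset_id, zlib.decompress(payload)))
```

Starting several `threading.Thread(target=worker, ...)` workers over one shared queue gives the application-controlled multithreading mentioned above.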
The runtime is organized around three main classes: Runtime, Scene and View. Runtime provides the main entry point and asset loading, a Scene represents an optimized dataset, and a View is the rendering control system. Each application should have a single Runtime with one or more Scenes and Views.
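The organization above can be sketched structurally as follows; the constructors and method names are assumptions for illustration, not the actual SDK interface.

```python
class Scene:
    """Represents one optimized dataset in the cloud."""
    def __init__(self, dataset_id):
        self.dataset_id = dataset_id

class View:
    """Rendering control: holds camera state against one scene."""
    def __init__(self, scene):
        self.scene = scene
        self.camera = None

class Runtime:
    """Single per-application entry point owning scenes and views."""
    def __init__(self):
        self.scenes = []
        self.views = []

    def create_scene(self, dataset_id):
        scene = Scene(dataset_id)
        self.scenes.append(scene)
        return scene

    def create_view(self, scene):
        view = View(scene)
        self.views.append(view)
        return view
```

One Runtime can serve several Views (for example, split-screen cameras) over the same Scene without duplicating streamed data.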
See the Runtime Loop example for a concrete example.