How We Stream Billions of Points in Real Time
A deep dive into the streaming architecture behind SkyGIS — how we render massive point clouds without downloads or plugins.
The Challenge
Point clouds from modern LiDAR scanners can contain billions of individual points, producing files that are tens or even hundreds of gigabytes. Traditional approaches require downloading the entire dataset before visualization can begin — an unacceptable bottleneck for cloud-native workflows.
Our Approach: Progressive Octree Streaming
At the core of SkyGIS's rendering engine is a progressive octree data structure. When a point cloud is uploaded, we process it into a hierarchical spatial index that enables level-of-detail (LOD) rendering.
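To make the idea concrete, here is a minimal sketch of what an octree node for LOD point streaming might look like. All names (`OctreeNode`, `childBounds`, the `id` scheme) are illustrative assumptions, not SkyGIS's actual API:

```typescript
// Illustrative octree node for hierarchical point-cloud streaming.
// Names and fields are hypothetical, not SkyGIS's real data model.

interface BoundingBox {
  min: [number, number, number];
  max: [number, number, number];
}

interface OctreeNode {
  id: string;                       // path from the root, one octant digit per level
  bounds: BoundingBox;              // cubic region of space this node covers
  level: number;                    // depth in the tree (0 = coarse overview)
  pointCount: number;               // points stored at this LOD level
  children: (OctreeNode | null)[];  // up to 8 child octants
}

// Subdivide a cubic region into one of its 8 child octants.
// Bit i of `octant` selects the upper half along axis i.
function childBounds(parent: BoundingBox, octant: number): BoundingBox {
  const mid = parent.min.map((m, i) => (m + parent.max[i]) / 2) as [number, number, number];
  const min = parent.min.map((m, i) => (octant & (1 << i)) ? mid[i] : m) as [number, number, number];
  const max = parent.max.map((M, i) => (octant & (1 << i)) ? M : mid[i]) as [number, number, number];
  return { min, max };
}
```

Each level of subdivision halves the edge length of the region, which is what lets a coarse parent node stand in for its eight children when the region is far from the camera.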
How it works:
1. Spatial indexing — The point cloud is partitioned into an octree, where each node represents a cubic region of space
2. LOD generation — Each level of the tree contains progressively more detail, from a coarse overview down to individual points
3. View-dependent loading — Only the nodes visible in the current viewport are fetched, at the appropriate level of detail
4. GPU-accelerated rendering — Points are rendered using WebGL with custom shaders optimized for large datasets
Performance Results
With this architecture, we achieve:
- Initial render in under 2 seconds, regardless of dataset size
- 60 FPS navigation even on datasets containing billions of points
- Bandwidth-efficient streaming — typically 10–50 MB for an initial viewport, regardless of total file size
Compression & Transfer
We use a custom binary format for point data transfer that achieves 3–5x compression over raw coordinates. Color data is quantized and delta-encoded for additional savings.
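The delta-encoding idea mentioned above can be illustrated in a few lines: quantize values to a fixed grid, then store the first value followed by differences between neighbors, which stay small (and compress well) when adjacent points are similar. The functions below are a sketch of the general technique, not SkyGIS's actual wire format:

```typescript
// Quantize coordinates to a fixed step (e.g. millimeter precision),
// trading exactness for small integers that delta-encode well.
function quantize(coords: Float64Array, step: number): Int32Array {
  return Int32Array.from(coords, c => Math.round(c / step));
}

// Store the first value, then successive differences.
function deltaEncode(values: Int32Array): Int32Array {
  const out = new Int32Array(values.length);
  out[0] = values[0];
  for (let i = 1; i < values.length; i++) out[i] = values[i] - values[i - 1];
  return out;
}

// Invert the encoding by running-summing the deltas.
function deltaDecode(deltas: Int32Array): Int32Array {
  const out = new Int32Array(deltas.length);
  out[0] = deltas[0];
  for (let i = 1; i < deltas.length; i++) out[i] = out[i - 1] + deltas[i];
  return out;
}
```

A general-purpose compressor applied after this transform then sees long runs of small numbers instead of near-random 64-bit floats, which is where most of the size reduction comes from.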
What's Ahead
We're currently experimenting with predictive prefetching based on user navigation patterns, and WebGPU for next-generation rendering performance. Stay tuned for benchmarks.