# Fluree RDF Benchmark
Performance benchmarks for Fluree's SPARQL query engine, measuring insert throughput, query latency, and scalability characteristics against standard RDF workloads.
## Methodology
We used the Berlin SPARQL Benchmark (BSBM) framework adapted for Fluree, testing against datasets ranging from 1M to 100M triples.
### Test Environment
- Hardware: AWS r6g.2xlarge (8 vCPU, 64 GB RAM)
- Fluree Version: 3.0
- Dataset: BSBM e-commerce dataset (product catalog with reviews, offers, and producers)
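Insert throughput here is simply triples loaded divided by wall-clock load time. A minimal sketch of that measurement follows; the loader callable is a stand-in for whatever client call performs the actual Fluree transaction, not Fluree's real API:

```python
import time
from typing import Callable

def measure_insert_throughput(load_fn: Callable[[], None], n_triples: int) -> float:
    """Run load_fn once and return observed triples per second.

    load_fn is a placeholder for the real bulk loader, e.g. a client
    call that transacts a batch of RDF triples into Fluree.
    """
    start = time.perf_counter()
    load_fn()  # perform the bulk load
    elapsed = time.perf_counter() - start
    return n_triples / elapsed
```

At the 1M-triple scale below, a 22 s load works out to roughly 45,000 t/s.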
## Insert Performance
| Dataset Size | Triples/Second | Total Load Time |
|---|---|---|
| 1M triples | 45,000 t/s | 22s |
| 10M triples | 42,000 t/s | 238s |
| 100M triples | 38,000 t/s | 2,632s |
Insert throughput remains largely consistent as the dataset grows, declining only about 16% (45,000 to 38,000 t/s) between the 1M and 100M triple scales.
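As a quick sanity check, the reported load times are consistent with the throughput figures (time = triples / rate):

```python
# (dataset_triples, triples_per_second, reported_load_time_seconds)
rows = [
    (1_000_000, 45_000, 22),
    (10_000_000, 42_000, 238),
    (100_000_000, 38_000, 2_632),
]

for triples, rate, reported in rows:
    derived = triples / rate
    # Every reported load time matches triples/rate to within a second
    assert abs(derived - reported) < 1.0
```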
## Query Performance
### Simple Lookups
Single-entity lookups with 1-2 triple patterns complete in under 2ms across all dataset sizes.
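For reference, a lookup in this class has roughly the following shape. The vocabulary prefix follows the BSBM dataset, but the specific product IRI and properties are illustrative rather than taken from the actual benchmark query mix:

```python
# Shape of a single-entity lookup with two triple patterns.
# Vocabulary prefix follows BSBM; the product IRI is illustrative.
SIMPLE_LOOKUP = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/>

SELECT ?label ?value
WHERE {
  <http://example.org/Product1> rdfs:label ?label ;
                                bsbm:productPropertyNumeric1 ?value .
}
"""
```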
### Complex Analytical Queries
Multi-join queries with aggregation, filtering, and optional patterns:
| Dataset Size | Avg Latency | P95 Latency |
|---|---|---|
| 1M triples | 12ms | 28ms |
| 10M triples | 45ms | 120ms |
| 100M triples | 180ms | 450ms |
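A query in this class joins several entity types and combines aggregation, filtering, and optional patterns. The query below is an illustrative sketch in that style (using BSBM vocabulary terms), not one of the actual benchmark query-mix queries:

```python
# Illustrative multi-join aggregation query in the BSBM style:
# join products to their reviews, filter on rating, aggregate per
# product, and include an optional pattern. Not an actual BSBM query.
COMPLEX_QUERY = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/>

SELECT ?product ?label (COUNT(?review) AS ?reviewCount) (AVG(?rating) AS ?avgRating)
WHERE {
  ?product rdfs:label ?label .
  ?review  bsbm:reviewFor ?product ;
           bsbm:rating1   ?rating .
  OPTIONAL { ?product bsbm:productPropertyTextual1 ?text }
  FILTER (?rating >= 3)
}
GROUP BY ?product ?label
ORDER BY DESC(?reviewCount)
LIMIT 10
"""
```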
## Key Takeaways
- Fluree handles standard RDF workloads with competitive performance
- The immutable ledger architecture adds minimal overhead to query operations
- Full SPARQL compliance: the standard benchmark queries ran unmodified, with no query rewriting
## Reproduce These Results
All benchmark scripts and configurations are available in our GitHub repository. We encourage the community to run these benchmarks independently.