FlureeLabs

Fluree RDF Benchmark

Performance benchmarks for Fluree's SPARQL query engine — measuring insert throughput, query latency, and scalability across standard RDF workloads.


Methodology

We used the Berlin SPARQL Benchmark (BSBM) framework adapted for Fluree, testing against datasets ranging from 1M to 100M triples.

Test Environment

  • Hardware: AWS r6g.2xlarge (8 vCPU, 64 GB RAM)
  • Fluree Version: 3.0
  • Dataset: BSBM e-commerce dataset (product catalog with reviews, offers, and producers)
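To give a feel for the shape of the data being loaded, here is a minimal sketch that emits BSBM-style product triples in N-Triples form. The instance namespace and the exact property names are illustrative assumptions; the real benchmark uses BSBM's own data generator.

```python
# Sketch: generate a tiny BSBM-style product dataset as N-Triples lines.
# The bsbm: namespace matches the published BSBM vocabulary URI; the
# instance namespace and property choices are simplified illustrations.

BSBM = "http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/"
INST = "http://example.org/bsbm-instances/"  # assumed instance namespace
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"
XSD_INT = "http://www.w3.org/2001/XMLSchema#integer"

def product_triples(product_id: int, producer_id: int, label: str) -> list[str]:
    """Return the N-Triples lines describing one product."""
    s = f"<{INST}Product{product_id}>"
    return [
        f"{s} <{BSBM}producer> <{INST}Producer{producer_id}> .",
        f'{s} <{RDFS_LABEL}> "{label}" .',
        f'{s} <{BSBM}productPropertyNumeric1> "{product_id % 500}"^^<{XSD_INT}> .',
    ]

triples = []
for pid in range(1, 101):
    triples.extend(product_triples(pid, producer_id=pid % 10, label=f"Product {pid}"))

print(len(triples))  # 3 triples per product x 100 products
```

Scaling the loop bound up is how a dataset of a given triple count would be produced for a load test.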

Insert Performance

Dataset Size     Triples/Second   Total Load Time
1M triples       45,000 t/s       22s
10M triples      42,000 t/s       238s
100M triples     38,000 t/s       2,632s

Insert throughput remains consistent as dataset size grows, declining only about 16% (from 45,000 to 38,000 t/s) between the 1M and 100M triple scales.
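The throughput column follows directly from total triples divided by wall-clock load time; a quick sanity check of the figures above:

```python
# Verify the reported throughput figures: triples / load seconds,
# rounded to the nearest thousand as in the table.

runs = {
    "1M":   (1_000_000,   22),
    "10M":  (10_000_000,  238),
    "100M": (100_000_000, 2_632),
}

for name, (triples, seconds) in runs.items():
    throughput = triples / seconds
    print(f"{name}: {round(throughput, -3):,.0f} t/s")
# -> 1M: 45,000 t/s | 10M: 42,000 t/s | 100M: 38,000 t/s
```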

Query Performance

Simple Lookups

Single-entity lookups with 1-2 triple patterns complete in under 2ms across all dataset sizes.
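A representative query in this class might look like the following. The vocabulary prefix matches the published BSBM vocabulary URI, but the specific product IRI is a hypothetical illustration:

```sparql
PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Fetch the label and producer of one product:
# two triple patterns, one bound subject.
SELECT ?label ?producer
WHERE {
  <http://example.org/bsbm-instances/Product42> rdfs:label ?label ;
                                                bsbm:producer ?producer .
}
```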

Complex Analytical Queries

Multi-join queries with aggregation, filtering, and optional patterns:

Dataset Size     Avg Latency   P95 Latency
1M triples       12ms          28ms
10M triples      45ms          120ms
100M triples     180ms         450ms
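As an illustration of this query class, here is a BSBM-flavored query combining multiple joins, aggregation, a filter, and an optional pattern. The shape and property names are illustrative, not one of the official BSBM query templates:

```sparql
PREFIX bsbm: <http://www4.wiwiss.fu-berlin.de/bizer/bsbm/v01/vocabulary/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

# Average review rating per producer for products above a numeric
# threshold; the review comment is optional, so COUNT(?text) tallies
# only reviews that include one.
SELECT ?producer (AVG(?rating) AS ?avgRating) (COUNT(?text) AS ?reviewTexts)
WHERE {
  ?product bsbm:producer ?producer ;
           bsbm:productPropertyNumeric1 ?value .
  FILTER(?value > 100)
  ?review bsbm:reviewFor ?product ;
          bsbm:rating1 ?rating .
  OPTIONAL { ?review rdfs:comment ?text }
}
GROUP BY ?producer
ORDER BY DESC(?avgRating)
LIMIT 10
```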

Key Takeaways

  1. Fluree handles standard RDF workloads with competitive performance
  2. The immutable ledger architecture adds minimal overhead to query operations
  3. SPARQL compliance is complete: standard benchmark queries run unmodified, with no query rewriting required

Reproduce These Results

All benchmark scripts and configurations are available in our GitHub repository. We encourage the community to run these benchmarks independently.