FlowerDocs 2025.3 : Performance Benchmarks
Uxopian Software Engineering Team · 7 min read · Gatling Framework
At a glance: what the numbers say
FlowerDocs 2025.3 was tested under sustained load across six real-world document management scenarios. Three numbers stand out: document reads at 5,786 req/s with a 17ms mean, metadata creation and updates at roughly 3,000 req/s around 30ms, and search averaging 1,143ms on a corpus of more than one million documents.
These figures were measured with 100 concurrent users over a sustained 10-minute window per scenario. All six scenarios ran back-to-back on the same platform instance, which makes memory-leak detection possible alongside throughput measurement.
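As a quick sanity check on the tables below, total requests, throughput, and run duration are linked by simple arithmetic. A minimal sketch (the helper name is ours, not part of any FlowerDocs tooling), using the scenario S3 figures from the results table:

```python
# Sanity check: for a steady-state run, total_requests ≈ rps * duration_s,
# so the implied wall-clock duration of a scenario can be recovered from the table.
def implied_duration_s(total_requests: int, rps: float) -> float:
    """Return the implied duration of a scenario in seconds."""
    return total_requests / rps

# Figures for scenario S3 (document reads) from the results table below.
s3_duration = implied_duration_s(3_471_850, 5786.42)
print(f"S3 implied duration: {s3_duration:.0f} s")  # ≈ 600 s, i.e. the 10-minute window
```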
Architecture and test configuration
The test infrastructure runs on a 4-core pod configuration with Redis caching and OpenSearch indexing. Object storage relies on an S3-compatible layer. The Gatling load testing framework generates the traffic, and every result is compared against the previous release to catch regressions before they ship.
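The release-over-release comparison can be sketched as a simple gate over per-scenario p95 latencies. The 10% tolerance, the dict layout, and the function name below are illustrative assumptions, not the team's actual tooling:

```python
# A minimal sketch of a regression gate over per-scenario p95 latencies.
# The 10% tolerance and the data layout are hypothetical, for illustration only.
def find_regressions(previous: dict, current: dict, tolerance: float = 0.10) -> list:
    """Return scenarios whose p95 latency grew by more than `tolerance`."""
    regressions = []
    for scenario, prev_p95 in previous.items():
        cur_p95 = current.get(scenario)
        if cur_p95 is not None and cur_p95 > prev_p95 * (1 + tolerance):
            regressions.append(scenario)
    return regressions

previous = {"S1": 48, "S3": 34, "S4": 49}   # p95 (ms) from an earlier release
current = {"S1": 48, "S3": 60, "S4": 49}    # S3 here grew beyond the tolerance
print(find_regressions(previous, current))  # → ['S3']
```

A gate like this fails the build before a latency regression ships, which is the point of benchmarking every version against the previous one.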
What we tested: six production scenarios
Each scenario reproduces an everyday workflow of a document management platform, and each was run under two configurations: with and without automated, event-driven back-end technical handlers running in the background.
Results: raw benchmark data
Without event-driven handlers:

| Scenario | Total requests | RPS | Min (ms) | Max (ms) | Mean (ms) | Std Dev (ms) | p50 (ms) | p75 (ms) | p95 (ms) | p99 (ms) |
|---|---|---|---|---|---|---|---|---|---|---|
| S1 | 190,735 | 3,173.94 | 10 | 3,870 | 31 | 28 | 29 | 35 | 48 | 79 |
| S2 ⚠ | 75,516 | 122.19 | 28 | 148,908 | 797 | 2,093 | 300 | 705 | 3,077 | 8,018 |
| S3 | 3,471,850 | 5,786.42 | 4 | 4,615 | 17 | 20 | 15 | 23 | 34 | 47 |
| S4 | 1,856,702 | 3,089.35 | 11 | 4,173 | 32 | 27 | 29 | 36 | 49 | 89 |
| S5 | 107,180 | 178.34 | 11 | 3,272 | 559 | 435 | 453 | 780 | 1,435 | 1,972 |
| S6 | 52,495 | 87.2 | 220 | 3,294 | 1,143 | 565 | 1,002 | 1,415 | 2,300 | 2,647 |
⚠ S2: the maximum response time of 148.9s indicates pressure on the S3-compatible object storage layer under simultaneous large-file ingestion. The FlowerDocs core continued handling requests throughout.
With event-driven handlers active:

| Scenario | Total requests | RPS | Min (ms) | Max (ms) | Mean (ms) | Std Dev (ms) | p50 (ms) | p75 (ms) | p95 (ms) | p99 (ms) |
|---|---|---|---|---|---|---|---|---|---|---|
| S1 | 219,344 | 3,651.15 | 4 | 4,298 | 27 | 17 | 23 | 35 | 57 | 79 |
| S2 ⚠ | 27,578 | 45.21 | 23 | 46,239 | 2,187 | 4,181 | 962 | 1,744 | 10,795 | 22,191 |
| S3 | 2,878,084 | 4,788.83 | 1 | 4,013 | 21 | 20 | 14 | 28 | 63 | 84 |
| S4 | 1,906,640 | 3,172.45 | 4 | 3,797 | 31 | 21 | 24 | 42 | 71 | 98 |
| S5 | — | — | — | — | — | — | — | — | — | — |
| S6 | 31,306 | 527.96 | 29 | 1,062 | 189 | 96 | 174 | 244 | 369 | 435 |
S5 was not measured in this configuration. The S2 bimodal distribution (p50 = 962ms vs p99 = 22,191ms) points to pressure on the S3-compatible object storage layer under extreme concurrent load, not to FlowerDocs core behavior.
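One way to quantify the heavy tail behind that observation is the p99/p50 ratio: a tight, unimodal latency profile keeps the ratio in the single digits, while storage-bound ingestion blows it up. The cutoff of 20 below is an illustrative heuristic of ours, not a FlowerDocs metric:

```python
def tail_ratio(p50: float, p99: float) -> float:
    """p99/p50 ratio; a high value signals a heavy or bimodal latency tail."""
    return p99 / p50

# S2 (creation with content, handlers active) vs S3 (pure reads), from the table above.
assert tail_ratio(962, 22_191) > 20   # storage-bound ingestion: extreme tail (≈ 23x)
assert tail_ratio(14, 84) == 6.0      # cached reads: tight, well-behaved profile
```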
Analysis: what the data tells us
With handlers disabled, the results reveal a robust infrastructure: backed by the 4-core pod configuration and Redis, the platform handles pure metadata operations with remarkable efficiency.
Document reading (S3) dominates at 5,786 req/s with 17ms mean latency, confirming the cache is doing its job. Metadata creation and updates remain highly stable at around 30ms for roughly 3,000 req/s.
Document creation with content (S2) reveals an identified infrastructure constraint: under simultaneous large-file ingestion, throughput drops to 122 req/s and the maximum response time reaches 148.9 seconds, pointing to pressure on the S3-compatible storage layer. FlowerDocs continued to handle requests correctly throughout.
Search (S6) at a 1,143ms average over 1M+ documents remains acceptable, with OpenSearch and the S3-compatible storage layer identified as the primary bottlenecks for complex operations.
With event-driven handlers active, the infrastructure remains robust overall but shows specific stress during complex creation operations.
Pure metadata scenarios (S1, S3, S4) still show excellent results: creation at 3,651 req/s with a 27ms mean and updates at 3,172 req/s, confirming that the cache and OpenSearch indexing stay efficient despite the added technical load.
Document creation with content (S2) shows the same identified infrastructure constraint under simultaneous large-file ingestion. At 45 req/s, equivalent to roughly 2,700 documents with content created per minute, the platform still comfortably absorbs real-world usage peaks. The bimodal distribution (p50 = 962ms vs p99 = 22,191ms) points to pressure on the S3-compatible storage layer, not to FlowerDocs core behavior.
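The per-minute figure is plain unit conversion; the per-day projection below is a naive extrapolation of ours that assumes the peak rate is sustained around the clock:

```python
rps = 45.21                  # S2 throughput with handlers active, from the table above
per_minute = rps * 60
per_day = rps * 86_400       # naive extrapolation; assumes sustained peak load

print(f"{per_minute:,.0f} docs/min")  # ≈ 2,713/min, i.e. the ~2,700 quoted above
print(f"{per_day:,.0f} docs/day")
```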
Search (S6) remains acceptable at 189ms average, with virtual folder optimization helping maintain a smooth experience at this scale.
What this means for you
- Everyday operations are fast and reliable. Reading, creating metadata, and updating documents all respond in under 50ms at the 95th percentile, across thousands of concurrent requests.
- The caching layer works exactly as intended. Redis delivers sub-20ms average read latency at nearly 6,000 req/s, a clear validation of the architecture choices made in this release cycle.
- Document creation with content: an identified infrastructure constraint. Under simultaneous large-file ingestion, response times are driven by pressure on the S3-compatible storage layer, not by FlowerDocs itself. This constraint is being tracked closely for the next release cycle.
- Continuous testing prevents regressions. Every version is benchmarked against the previous one, so past optimizations are never silently broken by new code.
Want to go deeper into the FlowerDocs architecture?
Talk to our team about deployment configuration, sizing, and what these benchmarks mean for your specific usage volumes.