docs: add benchmarking blog posts and performance reference page #254

SamBarker wants to merge 3 commits into
Conversation
Covers methodology, test environment, passthrough proxy results, encryption latency and throughput ceiling, the per-connection scaling insight, and sizing guidance. Includes a TODO placeholder for the connection sweep results before publication.

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Sam Barker <sam@quadrocket.co.uk>
Covers why we chose OMB over Kafka's own tools, the benchmark harness we built (Helm chart, orchestration scripts, JBang result processors), workload design rationale, CPU flamegraphs with embedded interactive iframes, the per-connection ceiling discovery, bugs found in our own tooling, and the cluster recovery incident.

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Sam Barker <sam@quadrocket.co.uk>
Adds /performance/ as a dedicated quick-reference page with headline benchmark numbers, comparison tables, and sizing guidance, linked from both blog posts. Updates the existing Performance section in overview.markdown with the key headline numbers and a link to the full reference page.

Assisted-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Signed-off-by: Sam Barker <sam@quadrocket.co.uk>
| Kroxylicious proxy | 1.4% |
| GC | 0.1% |
The proxy is overwhelmingly I/O-bound. 59% of CPU is in `send`/`recv` syscalls — the inherent cost of maintaining two TCP connections (client→proxy, proxy→Kafka) with data flowing through the JVM. The proxy itself accounts for 1.4%. It really is a TCP relay with protocol awareness.
I wonder how much that's down to the decode predicate thing -- basically we know the filter chain, and what each filter in it wants to intercept, and I think we avoid doing the request/response decoding when we know nothing is interested. That was code that was in there from the beginning, but I don't actually know how relevant it is -- maybe some of the internal filters mean we're decoding requests and responses always, in which case 1.4% is impressive. Or maybe we're acting more like an L4 proxy most of the time, in which case 1.4% is not quite as impressive.
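For readers outside the project, a rough sketch of the idea being discussed (hypothetical names, not the actual Kroxylicious internals): forward raw frames unless some filter in the chain has declared interest in the frame's ApiKey.

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Toy relay handler: only pay for request/response decoding when a filter
// has registered interest in this ApiKey; otherwise forward the raw bytes,
// which keeps the steady-state cost close to a plain L4 proxy.
public class SelectiveDecodeHandler extends ChannelInboundHandlerAdapter {

    interface FilterChain {
        boolean wantsToIntercept(short apiKey); // hypothetical predicate
        ByteBuf intercept(ByteBuf frame);       // hypothetical decode/filter hook
    }

    private final FilterChain filters;
    private final Channel upstream; // the proxy -> Kafka connection

    public SelectiveDecodeHandler(FilterChain filters, Channel upstream) {
        this.filters = filters;
        this.upstream = upstream;
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ByteBuf frame = (ByteBuf) msg;
        // In the Kafka protocol the request header starts with an int16 ApiKey
        // (assuming an upstream frame decoder has already stripped the length field).
        short apiKey = frame.getShort(frame.readerIndex());
        if (filters.wantsToIntercept(apiKey)) {
            upstream.writeAndFlush(filters.intercept(frame)); // decode path
        } else {
            upstream.writeAndFlush(frame); // raw relay, no decode cost
        }
    }
}
```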
The direct crypto cost is 13.3% (11.3% AES-GCM + 2.0% Kroxylicious filter logic). But encryption adds indirect costs too:
- **Buffer management (+5.8%)**: encrypted records need to be read into buffers, encrypted, and written to new buffers — more allocation, more copying
Did we ever figure out how to reuse the buffers more? I think that was a TODO at one point.
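One common shape for that reuse, sketched under assumptions (a Netty pooled allocator, AES-GCM per record; this is not how the record-encryption filter manages buffers today): encrypt straight into a pooled direct buffer sized up front, so backing memory is recycled across batches instead of reallocated.

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

import javax.crypto.Cipher;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

// Sketch: encrypt a record into a pooled destination buffer rather than
// allocating a fresh byte[] per record. Releasing the ByteBuf returns the
// backing memory to the pool for reuse by later batches.
public final class PooledEncrypt {

    private static final PooledByteBufAllocator ALLOC = PooledByteBufAllocator.DEFAULT;
    private static final int GCM_TAG_BITS = 128;
    private static final int IV_LEN = 12;

    static ByteBuf encrypt(byte[] keyBytes, ByteBuf plaintext) throws Exception {
        byte[] iv = new byte[IV_LEN];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE,
                new SecretKeySpec(keyBytes, "AES"),
                new GCMParameterSpec(GCM_TAG_BITS, iv));

        // Size the pooled buffer for IV + ciphertext + GCM tag up front.
        int outLen = IV_LEN + cipher.getOutputSize(plaintext.readableBytes());
        ByteBuf out = ALLOC.directBuffer(outLen);
        out.writeBytes(iv);

        // nioBuffer views let the cipher write into the pooled memory
        // directly, avoiding an intermediate byte[] copy; the writer index
        // must be advanced manually afterwards.
        int written = cipher.doFinal(plaintext.nioBuffer(),
                out.nioBuffer(out.writerIndex(), out.writableBytes()));
        out.writerIndex(out.writerIndex() + written);
        return out; // caller must release() back to the pool
    }
}
```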
Fix: `kubectl uncordon worker0 worker1 worker2`. Once uncordoned, pods scheduled, operators recovered, and the upgrade completed.
Not a Kroxylicious bug, but it cost several hours of cluster recovery time during an active benchmark campaign. Worth knowing about if you're running OCP on Fyre.
Given Fyre is an IBM-internal thing, this is not terribly useful to most readers. Can we generalise it to OpenShift more broadly?
Summary
- `/performance/` reference page summarising key numbers and linking to both posts
- `overview.markdown` updated with headline performance figures and a link to the reference page

Status
Draft — the posts are first drafts. Known open items:
Test plan
- Run `./run.sh` and verify the site renders at `http://127.0.0.1:4000/`
- `/performance/` page renders with correct tables
- Links to `/performance/` work

🤖 Generated with Claude Code