Give your AI Agent a senior engineer's debugging brain. Runbook-driven incident investigation via MCP — evidence collection, decision rules, structured reports. No more hallucinated root causes.
Updated Apr 10, 2026 · HTML
Prompt and response tracing for LLM workflows
Forkline is a replay-first tracing and diffing library for agentic AI workflows that lets you deterministically reproduce, fork, and compare agent runs to find exactly where behavior diverged.
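The "find exactly where behavior diverged" idea can be illustrated with a first-divergence diff over two recorded runs. This is a minimal, generic sketch of the concept, not Forkline's actual API; the event tuples and function name are hypothetical:

```python
def first_divergence(run_a, run_b):
    """Return (index, step_a, step_b) at the first differing step of two
    recorded agent runs, or None if the compared prefix is identical."""
    for i, (a, b) in enumerate(zip(run_a, run_b)):
        if a != b:
            return i, a, b
    return None

# Two runs that share the same tool call but got different tool results.
run_a = [("tool_call", "search", "weather berlin"),
         ("tool_result", "search", "12C, rain"),
         ("assistant", "It is rainy in Berlin.")]
run_b = [("tool_call", "search", "weather berlin"),
         ("tool_result", "search", "12C, sunny"),
         ("assistant", "It is sunny in Berlin.")]
```

Pinpointing the first divergent step (here, the tool result at index 1) tells you whether the fork came from the environment or from the model's own sampling.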
Zero-intrusion guard for LLM calls in dev: dedupe, cache, and protect AI requests across Node, browser, and Vite.
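A dedupe-and-cache guard of this kind can be sketched in a few lines of generic Python: fingerprint each request, return the cached response on a repeat, and only call the model on a miss. All names here (`LLMGuard`, `complete`) are hypothetical illustrations, not the project's interface:

```python
import hashlib
import json

class LLMGuard:
    """Cache LLM responses by request fingerprint so identical dev-time
    calls hit the model only once (generic sketch, not a real library)."""

    def __init__(self, call_model):
        self.call_model = call_model  # the underlying LLM call to protect
        self.cache = {}
        self.hits = 0

    def _key(self, request):
        # Stable fingerprint of model + prompt + params.
        blob = json.dumps(request, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def complete(self, request):
        key = self._key(request)
        if key in self.cache:
            self.hits += 1  # deduped: no network call made
            return self.cache[key]
        response = self.call_model(request)
        self.cache[key] = response
        return response

# Stand-in for a real provider call, recording how often it is invoked.
calls = []
def fake_model(req):
    calls.append(req)
    return "echo: " + req["prompt"]

guard = LLMGuard(fake_model)
r1 = guard.complete({"model": "dev", "prompt": "hi"})
r2 = guard.complete({"model": "dev", "prompt": "hi"})
```

The second identical request is served from cache, so the wrapped model function runs exactly once.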
OKI TRACE: Local LLM observability. See step-by-step, layer-by-layer what your AI thinks. Logit Lens & Attention for HuggingFace models.
Root cause analysis for AI agents. Detects agent loops, retry storms, and optimization opportunities in LangSmith, Langfuse, and OpenTelemetry traces.
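Loop and retry-storm detection can be approximated by counting repeated identical steps in a trace. The event shape and threshold below are assumptions for illustration, a toy heuristic rather than what these tools actually implement:

```python
from collections import Counter

def detect_loops(events, threshold=3):
    """Flag (tool, args) pairs repeated `threshold`+ times in a trace --
    a crude proxy for agent loops and retry storms."""
    counts = Counter((e["tool"], e["args"]) for e in events)
    return [step for step, n in counts.items() if n >= threshold]

# A trace where the agent hammers the same endpoint three times.
trace = [
    {"tool": "http_get", "args": "/health"},
    {"tool": "http_get", "args": "/health"},
    {"tool": "http_get", "args": "/health"},
    {"tool": "db_query", "args": "SELECT 1"},
]
```

Real analyzers work over LangSmith, Langfuse, or OpenTelemetry span trees and consider timing, but the core signal is the same: the identical step recurring where progress was expected.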
LLM Harness for developing, debugging, and evaluating custom agents, tools, MCP servers, prompts, and memory constructs using Anthropic, OpenAI, or a local model.
Stop guessing why your MLX model outputs garbage. Triage in 30 seconds — no model load required.