Serverless observability
Debug route handlers, scheduled runs, and async jobs with shared logs, traces, execution history, and latency views across serverless entry points.
How it works
Zero-config, full visibility
Invoke
Run your function from the UI, CLI, API Gateway, or pipeline. Every run is captured automatically.
Trace
Every invocation produces a detailed trace: timing, logs, output, and a stack trace if the run failed.
Analyze
Aggregate metrics over time. Emerging helpers can spotlight anomalies and slow functions—treat them as hints and validate against your own baselines.
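The Invoke → Trace → Analyze flow above can be sketched as a small aggregation over run history. The run-record shape and the `slowest_functions` helper here are illustrative assumptions, not the platform's actual API:

```python
from collections import defaultdict

def slowest_functions(runs, top=3):
    """Average duration per function over a run history, slowest first.

    `runs` is assumed to be a list of trace records like the platform's
    execution-trace payload: {"functionName": ..., "durationMs": ...}.
    """
    totals = defaultdict(lambda: [0, 0])  # name -> [sum_ms, count]
    for run in runs:
        bucket = totals[run["functionName"]]
        bucket[0] += run["durationMs"]
        bucket[1] += 1
    averages = {name: s / c for name, (s, c) in totals.items()}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:top]

# Hypothetical run history, as it might be exported from the dashboard
runs = [
    {"functionName": "ai-summarizer", "durationMs": 312},
    {"functionName": "ai-summarizer", "durationMs": 290},
    {"functionName": "thumbnailer", "durationMs": 45},
]
ranking = slowest_functions(runs)  # slowest function first
```

This is the kind of baseline worth keeping on your side when validating any automated anomaly hints.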
Features
Trace every invocation
Execution traces
Full trace per invocation: steps, log lines, timing, input/output, and error details.
Live streaming
Metrics pushed in real time over WebSocket — no polling, no refresh needed.
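A push-based client only needs to fold each incoming frame into its current view of the metrics. The frame schema below is a hypothetical illustration, not the platform's actual wire format:

```python
import json

def handle_frame(frame: str, latest: dict) -> dict:
    """Fold one pushed metrics frame (JSON text) into a latest-metrics view.

    Assumed frame shape (illustrative only):
    {"functionName": ..., "p95Ms": ..., "invocations": ...}
    """
    msg = json.loads(frame)
    latest[msg["functionName"]] = {
        "p95Ms": msg["p95Ms"],
        "invocations": msg["invocations"],
    }
    return latest

# Frames as they might arrive over the WebSocket, newest state wins
latest: dict = {}
handle_frame('{"functionName": "ai-summarizer", "p95Ms": 410, "invocations": 1286}', latest)
handle_frame('{"functionName": "ai-summarizer", "p95Ms": 405, "invocations": 1287}', latest)
```

Because each frame carries the full per-function snapshot, a dropped frame costs nothing: the next frame overwrites the entry.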
Latency percentiles
p50 / p95 / p99 latency updated after every run. Spot regressions before users do.
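For reference, p50/p95/p99 can be computed from raw duration samples with the nearest-rank method. This is a minimal sketch of the statistic itself, not the platform's implementation:

```python
import math

def percentile(durations, p):
    """Nearest-rank percentile over a list of latency samples (ms)."""
    if not durations:
        raise ValueError("no samples")
    ranked = sorted(durations)
    # nearest rank: smallest sample such that p% of samples are <= it
    idx = max(0, math.ceil(p / 100 * len(ranked)) - 1)
    return ranked[idx]

# Hypothetical durationMs samples from ten runs
samples = [120, 95, 310, 101, 99, 480, 110, 105, 98, 102]
p50, p95, p99 = (percentile(samples, p) for p in (50, 95, 99))
```

Note how one 480 ms outlier dominates p95 and p99 while leaving p50 untouched; that gap is exactly the regression signal percentiles exist to surface.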
Run insights
Heuristics over run history can suggest where latency clusters—pair with your own reviews before treating output as authoritative.
Example
Execution trace payload
{
  "runId": "run_01hw3k...",
  "functionName": "ai-summarizer",
  "status": "SUCCEEDED",
  "durationMs": 312,
  "startedAt": "2025-03-23T10:41:22.100Z",
  "logs": [
    { "level": "INFO", "message": "Fetching URL..." },
    { "level": "INFO", "message": "Calling OpenAI API..." },
    { "level": "INFO", "message": "Summary generated (824 words)" }
  ],
  "output": {
    "summary": "AI funding surges as...",
    "wordCount": 824
  }
}
Get started free
Deploy your first function in minutes. No credit card required.