
Serverless observability

Debug route handlers, scheduled runs, and async jobs with shared logs, traces, execution history, and latency views across serverless entry points.

p50 38 ms · p95 91 ms · p99 187 ms · success 99.4% · live

Zero-config, full visibility

1. Invoke

Run your function from the UI, CLI, API Gateway, or pipeline. Every run is captured automatically.

2. Trace

Every invocation produces a detailed trace: timing, logs, output, and the error stack, if any.

3. Analyze

Aggregate metrics over time. Emerging helpers can spotlight anomalies and slow functions; treat them as hints and validate against your own baselines.
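The invoke step can be sketched as a plain HTTP call. The endpoint URL and request body below are illustrative assumptions, not a documented API:

```python
import json
import urllib.request

# Hypothetical invocation endpoint -- substitute your deployment's URL.
ENDPOINT = "https://api.example.com/v1/functions/ai-summarizer/invoke"

def build_invoke_request(payload: dict) -> urllib.request.Request:
    """Build a POST request that triggers one function run; the run is
    then captured automatically on the platform side."""
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_invoke_request({"url": "https://example.com/article"})
# urllib.request.urlopen(req) would send it; omitted here to stay offline.
```

Swap `ENDPOINT` for whatever URL your deployment exposes; the same request shape works from a CI pipeline or a cron job.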

Trace every invocation

Execution traces

Full trace per invocation: steps, log lines, timing, input/output, and error details.

Live streaming

Metrics pushed in real time over WebSocket — no polling, no refresh needed.
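A minimal sketch of the consuming side, assuming an illustrative JSON message shape (the field names are not a documented schema); with a real WebSocket client, only the source of `stream` changes:

```python
import json

def apply_metric_update(state: dict, raw_message: str) -> dict:
    """Fold one pushed metrics message into local dashboard state.
    The message shape ({"functionName": ..., "p95Ms": ...}) is an
    assumption for illustration."""
    update = json.loads(raw_message)
    state[update["functionName"]] = {
        "p95Ms": update["p95Ms"],
        "successRate": update["successRate"],
    }
    return state

# Simulated stream: a real client would read each message from the
# WebSocket connection instead of a list.
stream = [
    '{"functionName": "ai-summarizer", "p95Ms": 91, "successRate": 0.994}',
]
state: dict = {}
for message in stream:
    state = apply_metric_update(state, message)
```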

Latency percentiles

p50 / p95 / p99 latency updated after every run. Spot regressions before users do.
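The p50 / p95 / p99 figures can be reproduced from raw run durations with a nearest-rank percentile; a minimal sketch over made-up durations:

```python
import math

def percentile(durations_ms: list[float], p: float) -> float:
    """Nearest-rank percentile over a list of run durations (ms)."""
    ordered = sorted(durations_ms)
    # Index of the smallest value covering fraction p of the runs.
    rank = max(0, math.ceil(p / 100 * len(ordered)) - 1)
    return ordered[rank]

runs = [22, 31, 38, 40, 45, 52, 88, 91, 120, 187]
summary = {p: percentile(runs, p) for p in (50, 95, 99)}
```

Recomputing this after every run is cheap at these sample sizes; for long histories a streaming estimator would replace the sort.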

Run insights

Heuristics over run history can suggest where latency clusters; pair them with your own reviews before treating the output as authoritative.
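One such heuristic, sketched below: flag functions whose latest run is far slower than their own baseline. The 2x factor and the sample data are made up; calibrate against your own history:

```python
from statistics import mean

def flag_slow_functions(history: dict[str, list[float]],
                        factor: float = 2.0) -> list[str]:
    """Flag functions whose latest run exceeds `factor` times their
    historical mean latency. The 2x threshold is an arbitrary choice."""
    flagged = []
    for name, durations in history.items():
        if len(durations) < 2:
            continue  # not enough history for a baseline
        baseline = mean(durations[:-1])
        if durations[-1] > factor * baseline:
            flagged.append(name)
    return flagged

history = {
    "ai-summarizer": [35, 40, 38, 190],   # latest run is a clear outlier
    "image-resizer": [60, 62, 58, 61],
}
```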

Execution trace payload

trace.json
{
  "runId": "run_01hw3k...",
  "functionName": "ai-summarizer",
  "status": "SUCCEEDED",
  "durationMs": 312,
  "startedAt": "2025-03-23T10:41:22.100Z",
  "logs": [
    { "level": "INFO",  "message": "Fetching URL..." },
    { "level": "INFO",  "message": "Calling OpenAI API..." },
    { "level": "INFO",  "message": "Summary generated (824 words)" }
  ],
  "output": {
    "summary": "AI funding surges as...",
    "wordCount": 824
  }
}
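A payload like this is plain JSON, so downstream tooling can consume it directly. A small sketch that reduces a trace to a one-line summary, with field names taken from the example above:

```python
import json

def summarize_trace(trace: dict) -> str:
    """One-line summary of a trace payload (fields as in trace.json)."""
    return (f'{trace["functionName"]}: {trace["status"]}, '
            f'{trace["durationMs"]} ms, logs={len(trace.get("logs", []))}')

# Abridged version of the trace.json example above.
trace = json.loads("""
{
  "runId": "run_01hw3k...",
  "functionName": "ai-summarizer",
  "status": "SUCCEEDED",
  "durationMs": 312,
  "logs": [
    {"level": "INFO", "message": "Summary generated (824 words)"}
  ],
  "output": {"wordCount": 824}
}
""")
line = summarize_trace(trace)
```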

Get started free

Deploy your first function in minutes. No credit card required.

Inquir Compute

The simplest way to run AI agents and backend jobs without managing infrastructure.

Contact info@inquir.org

© 2025 Inquir Compute. All rights reserved.