Inquir Compute
Pipelines

Serverless pipelines, cron jobs, and background work

Model serverless pipelines for cron jobs, webhook-triggered workflows, and multi-step background jobs: staged work, retries, and fan-out, all while reusing the same functions as your live HTTP API.

How serverless pipelines work

1. Define steps

Pick functions as pipeline steps. Configure inputs, outputs, and dependencies between them.

2. Connect and schedule

Link steps into a DAG or sequential chain. Add cron triggers or webhook listeners to start runs automatically.

3. Monitor in real time

Watch each step's status, logs, and output live. Replay failed steps without re-running the whole pipeline.

Built for real workflows

DAG execution

Steps run in parallel when dependencies allow. Full dependency resolution out of the box.
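As a sketch of fan-out in the same node/edge format as the DAG example on this page (the function IDs here are hypothetical, and the assumption that a node with multiple incoming edges waits for all of them is not confirmed by the schema shown): both lambdas depend only on the trigger, so they can run in parallel before the respond node fires.

```json
{
  "schemaVersion": 1,
  "nodes": [
    { "id": "t1", "kind": "manualTrigger", "name": "Start", "position": { "x": 0, "y": 0 }, "config": {} },
    { "id": "n1", "kind": "lambda", "name": "Resize images", "position": { "x": 240, "y": -80 }, "config": { "functionId": "image-resizer", "onError": "failPipeline" } },
    { "id": "n2", "kind": "lambda", "name": "Extract metadata", "position": { "x": 240, "y": 80 }, "config": { "functionId": "metadata-extractor", "onError": "failPipeline" } },
    { "id": "r1", "kind": "respond", "name": "Done", "position": { "x": 480, "y": 0 }, "config": { "statusCode": 200, "bodyMapping": { "ok": true } } }
  ],
  "edges": [
    { "id": "e1", "sourceNodeId": "t1", "targetNodeId": "n1" },
    { "id": "e2", "sourceNodeId": "t1", "targetNodeId": "n2" },
    { "id": "e3", "sourceNodeId": "n1", "targetNodeId": "r1", "sourceHandle": "success" },
    { "id": "e4", "sourceNodeId": "n2", "targetNodeId": "r1", "sourceHandle": "success" }
  ]
}
```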

Webhook triggers

Trigger a pipeline via API or UI and pass structured payload data to the first step.
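A minimal sketch of a webhook-started pipeline, reusing the node/edge schema from the DAG example on this page. The `webhookTrigger` node kind is an assumption for illustration (only `manualTrigger` appears in the examples here), as is the idea that the trigger's payload is passed as input to the first connected step.

```json
{
  "schemaVersion": 1,
  "nodes": [
    { "id": "t1", "kind": "webhookTrigger", "name": "On ticket created", "position": { "x": 0, "y": 0 }, "config": {} },
    { "id": "n1", "kind": "lambda", "name": "Enrich ticket", "position": { "x": 240, "y": 0 }, "config": { "functionId": "ticket-enricher", "onError": "failPipeline" } }
  ],
  "edges": [
    { "id": "e1", "sourceNodeId": "t1", "targetNodeId": "n1" }
  ]
}
```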

Manual approval

Pause execution and require a human sign-off before proceeding to the next step.
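A sketch of an approval gate, assuming a hypothetical `approval` node kind and an `approved` source handle (neither is confirmed by the schema shown on this page): the run pauses at `a1` until a human signs off, then continues to the publish step.

```json
{
  "nodes": [
    { "id": "n1", "kind": "lambda", "name": "Draft report", "position": { "x": 0, "y": 0 }, "config": { "functionId": "report-drafter", "onError": "failPipeline" } },
    { "id": "a1", "kind": "approval", "name": "Manager sign-off", "position": { "x": 240, "y": 0 }, "config": {} },
    { "id": "n2", "kind": "lambda", "name": "Publish report", "position": { "x": 480, "y": 0 }, "config": { "functionId": "report-publisher", "onError": "failPipeline" } }
  ],
  "edges": [
    { "id": "e1", "sourceNodeId": "n1", "targetNodeId": "a1", "sourceHandle": "success" },
    { "id": "e2", "sourceNodeId": "a1", "targetNodeId": "n2", "sourceHandle": "approved" }
  ]
}
```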

Cron scheduling

Run pipelines on a cron schedule — daily reports, batch jobs, cleanup tasks.

Pipelines reuse the same functions you expose through the gateway.

Pipelines vs direct functions vs async jobs

Use pipelines

When cron jobs, webhook-triggered workflows, or background jobs need retries, branching, fan-out, or staged orchestration across functions.

Use direct functions

When a single HTTP request can complete quickly in one gateway handler, with no need for cron, webhooks, or multi-step state.

Use async jobs

When callers should return immediately and you only need a queued background job—not a full DAG across many steps.

DAG pipeline (manual trigger)

customer-support-pipeline.json
{
  "schemaVersion": 1,
  "nodes": [
    { "id": "t1", "kind": "manualTrigger", "name": "Start", "position": { "x": 0, "y": 0 }, "config": {} },
    { "id": "n1", "kind": "lambda", "name": "Classify intent", "position": { "x": 240, "y": 0 }, "config": { "functionId": "intent-classifier", "onError": "failPipeline" } },
    { "id": "n2", "kind": "lambda", "name": "Billing agent", "position": { "x": 480, "y": 0 }, "config": { "functionId": "billing-agent", "onError": "failPipeline" } },
    { "id": "r1", "kind": "respond", "name": "Done", "position": { "x": 720, "y": 0 }, "config": { "statusCode": 200, "bodyMapping": { "ok": true } } }
  ],
  "edges": [
    { "id": "e1", "sourceNodeId": "t1", "targetNodeId": "n1" },
    { "id": "e2", "sourceNodeId": "n1", "targetNodeId": "n2", "sourceHandle": "success" },
    { "id": "e3", "sourceNodeId": "n2", "targetNodeId": "r1", "sourceHandle": "success" }
  ]
}

Sequential example (nightly processing)

nightly-sequential-pipeline.json
{
  "executionMode": "sequential",
  "trigger": { "type": "schedule", "cron": "0 2 * * *" },
  "steps": [
    { "id": "extract", "functionId": "nightly-extract" },
    { "id": "transform", "functionId": "nightly-transform", "dependsOn": ["extract"] },
    { "id": "publish", "functionId": "nightly-publish", "dependsOn": ["transform"] }
  ]
}

Get started free

Deploy your first function in minutes. No credit card required.

Inquir Compute

The simplest way to run AI agents and backend jobs without infrastructure.

Contact info@inquir.org

© 2025 Inquir Compute. All rights reserved.