Serverless async jobs without a separate worker server
Queue async work from any trigger—HTTP request, webhook, cron, or another job—and run it in isolated serverless containers. No separate queue infrastructure, no worker deployment, one place to observe all async executions.
Last updated: 2026-04-20
Answer first
Direct answer
In Inquir, an async job is a pipeline: a function, triggered by an HTTP handler, webhook, or schedule, that runs outside the originating request window. The platform handles queuing, retries, and execution logging; you write only the handler logic.
When it fits
- Work that should not block an HTTP or webhook response
- Fan-out: one event triggering N parallel processing steps
Tradeoffs
- Fire-and-forget work inside an HTTP handler (`setTimeout`, background `Promise` chains) dies on process restart, with no retry, observability, or durability
- Invoking another Lambda asynchronously hides the queue inside that service's limits and billing, and you still design retries yourself
Workload and what breaks
The hidden cost of async queues
Every async job architecture eventually needs three things: a queue (RabbitMQ, BullMQ, SQS), a worker process to consume it, and a retry + dead-letter strategy. Each of those is a separate system to deploy, monitor, and scale—before the first line of business logic.
The worker process is the fragile link: it needs to stay running, handle crashes, pick up from where it left off, and not duplicate work when it restarts. Most teams end up writing the same boilerplate for every new job type.
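That boilerplate has roughly the same shape everywhere. Here is a minimal in-memory sketch of the three pieces every hand-rolled job system repeats (not production code: real setups swap in Redis or SQS plus a separately deployed worker process):

```javascript
// The three pieces of a hand-rolled async-job setup:
// a queue, a worker loop, and retry + dead-letter handling.
const queue = [];
const deadLetter = [];

function enqueue(jobType, payload, attempts = 0) {
  queue.push({ jobType, payload, attempts });
}

// One worker iteration: take a job, run its handler, retry on failure,
// dead-letter after maxAttempts. A real worker runs this in a loop in a
// separate process that must stay alive and not duplicate work.
async function workerTick(handlers, maxAttempts = 3) {
  const job = queue.shift();
  if (!job) return;
  try {
    await handlers[job.jobType](job.payload);
  } catch (err) {
    if (job.attempts + 1 >= maxAttempts) {
      deadLetter.push({ ...job, error: String(err) }); // give up, park for inspection
    } else {
      enqueue(job.jobType, job.payload, job.attempts + 1); // retry later
    }
  }
}
```

Everything here (queue storage, the worker loop, retry counting, the dead-letter list) is infrastructure you own before any business logic runs.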
Tradeoffs
Why embedded async in HTTP servers fails
Fire-and-forget `setTimeout` or background `Promise` chains inside HTTP handlers get killed when the process restarts. There is no retry, no observability, and no durability guarantee.
Calling another Lambda/function asynchronously hides the queue inside another service's limits and billing—and you still need to design retries manually.
How Inquir helps
Async jobs as first-class primitives
In Inquir, an async job is a pipeline: a function triggered by an HTTP handler, webhook, or schedule that runs outside the originating request window. The platform handles queuing, retry scheduling, and execution logging—you write handler logic.
All async jobs share workspace secrets, observability, and the same function deployment model as HTTP routes. No queue service to provision, no worker server to manage.
What you get
Async job trigger patterns
HTTP → async job
Accept request, return 202, trigger pipeline in background. Pattern: user uploads file → HTTP acks → pipeline processes.
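A minimal sketch of that ack-then-process shape, assuming the `startNew` trigger described in this guide. The in-memory stub and the pipeline name 'process-upload' are stand-ins so the sketch runs outside the platform:

```javascript
// HTTP → async job: ack with 202, hand slow work to a pipeline.
// Local stub for the platform's durable client, for illustration only.
global.durable = global.durable || {
  startNew: async (name, instanceId, payload) => ({ name, instanceId, payload }),
};

async function handler(event) {
  const { fileId } = JSON.parse(event.body || '{}');
  // Kick off processing outside this request window ('process-upload' is hypothetical)
  await global.durable.startNew('process-upload', undefined, { fileId });
  // Ack immediately; the client polls or receives a callback when processing finishes
  return { statusCode: 202, body: JSON.stringify({ accepted: true, fileId }) };
}
```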
Webhook → async job
Verify webhook signature, ack fast, continue processing in pipeline step. No provider timeout pressure.
Cron → async job
Schedule triggers a pipeline at fixed intervals—nightly ETL, hourly sync, weekly cleanup.
Job → job chaining
One pipeline step triggers another pipeline—fan-out, conditional branching, approval waits.
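The fan-out case can be sketched as one step starting N follow-up pipelines in parallel. The pipeline names and the in-memory `durable` stub are hypothetical; only the `startNew` call mirrors the platform trigger:

```javascript
// Job → job fan-out: one step triggers N follow-up pipelines.
// In-memory stand-in for the platform's trigger client.
const durable = {
  started: [],
  async startNew(name, instanceId, payload) {
    durable.started.push({ name, payload }); // record instead of enqueueing
  },
};

async function fanOut(event) {
  const followUps = ['resize-image', 'extract-text', 'notify-user']; // hypothetical steps
  // Each follow-up pipeline runs independently, with its own retries and logs
  await Promise.all(
    followUps.map((name) => durable.startNew(name, undefined, { eventId: event.id }))
  );
  return { triggered: followUps.length };
}
```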
What to do next
How to enqueue and run async jobs on Inquir
Write the job handler
Same handler contract as HTTP functions. Receive the payload from `event.payload`; return structured output.
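Under that contract, a minimal job handler might look like this; the payload fields are illustrative, not a fixed schema:

```javascript
// Minimal job handler: input arrives on event.payload, output is a
// structured object that later steps and the execution log can consume.
async function handler(event) {
  const { eventId, type } = event.payload; // payload set by the trigger call
  // ...slow work goes here: DB writes, third-party API calls, file processing...
  return { eventId, type, ok: true, processedAt: new Date().toISOString() };
}
```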
Trigger from any entry point
Call `global.durable.startNew(name, undefined, payload)` from an HTTP handler, webhook processor, or cron job.
Observe and alert
Job execution history shows duration, retries, step outputs. Set alerts on failure rates.
Code example
Async job from webhook trigger
Webhook verifies and acks in under 1 second; slow work runs in a separate pipeline step with full retry and observability.
```javascript
export async function handler(event) {
  // Verify signature first — see /serverless-webhook-processor for full pattern
  const payload = JSON.parse(event.body || '{}');
  await db.upsertEvent(payload.id, payload.type); // idempotency key

  // Trigger async job — runs outside this request window
  await global.durable.startNew('process-event', undefined, {
    eventId: payload.id,
    type: payload.type,
  });

  return { statusCode: 200, body: 'accepted' };
}
```
When it fits
Good fit for serverless async jobs
When this works
- Any work that should not block the HTTP or webhook response
- Fan-out patterns: one event triggers N parallel processing steps
When to skip it
- Work under 1 second that is safe to do synchronously
FAQ
Is there a built-in job queue?
Yes. Async invocations are queued without external Redis or worker processes. Call `global.durable.startNew()` or `POST /functions/:id/invoke-async` from your handler; the platform manages scheduling and delivery.
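For the HTTP trigger path, here is a sketch that only constructs the request, so the endpoint shape is visible. The base URL and bearer-token auth header are assumptions, not documented API details:

```javascript
// Build (but do not send) a request to the async-invoke endpoint.
// Only the path /functions/:id/invoke-async comes from the docs;
// base URL and auth scheme are assumed for illustration.
function buildInvokeRequest(baseUrl, functionId, payload, apiKey) {
  return {
    url: `${baseUrl}/functions/${functionId}/invoke-async`,
    options: {
      method: 'POST',
      headers: {
        'content-type': 'application/json',
        authorization: `Bearer ${apiKey}`, // assumed auth scheme
      },
      body: JSON.stringify(payload),
    },
  };
}
// Usage: const { url, options } = buildInvokeRequest(...); await fetch(url, options);
```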
How do I chain async jobs?
Trigger another pipeline from within a step, or configure multi-step pipelines with `dependsOn` for sequential or parallel execution.
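A multi-step pipeline with mixed sequential and parallel steps might be declared like this. Only `dependsOn` appears in the FAQ answer; the surrounding schema (step `id` and `handler` fields) is assumed for illustration:

```javascript
// Hypothetical multi-step pipeline definition showing dependsOn wiring.
const pipeline = {
  name: 'process-event',
  steps: [
    { id: 'fetch', handler: 'fetch-event' },                                  // runs first
    { id: 'enrich', handler: 'enrich-event', dependsOn: ['fetch'] },          // after fetch
    { id: 'index', handler: 'index-event', dependsOn: ['fetch'] },            // parallel with enrich
    { id: 'notify', handler: 'notify-user', dependsOn: ['enrich', 'index'] }, // join step
  ],
};
```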