Serverless functions without HTTP timeout limits
Standard HTTP serverless functions time out in seconds or minutes. Pipeline-backed serverless functions run outside the request window with step-level timeouts—so a 3-hour ETL, a batch ML job, or a multi-stage data pipeline runs to completion without timeout hacks.
Last updated: 2026-04-20
Answer first
Direct answer
Inquir separates HTTP request handling from async execution. HTTP handlers are fast—they validate, accept, and trigger. Pipelines are async—they run to completion with per-step timeouts, retries, and execution history.
When it fits
- Work consistently taking longer than your HTTP platform limit
- Multi-step jobs where step-level retries are essential
Tradeoffs
- Recursive self-invocation loses context between calls and double-writes on failure.
- Chunked fan-out eventually forces you to rebuild a workflow engine without the tooling.
Workload and what breaks
Timeout limits in common serverless platforms
- Vercel Serverless Functions: 60s (Pro), 300s (Enterprise)
- AWS Lambda: 900s maximum
- Cloudflare Workers: 30s CPU time
- Supabase Edge Functions: 60s
Every platform has a ceiling. For stateless HTTP handlers this is fine—requests should be fast. But when you need to process a large CSV, run multi-model inference, sync 100k records, or generate a complex report, these limits force awkward workarounds that fail silently.
Tradeoffs
Why workarounds fail
Recursive invocations (calling yourself to continue work) lose context between calls, double-write on failure, and are hard to observe.
Chunking work into smaller Lambda calls works until the fan-out gets complex and you need to aggregate results—at which point you have rebuilt a workflow engine without the tooling.
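To make the failure mode concrete, here is a minimal sketch of the self-invocation workaround in plain JavaScript. `invokeSelf`, `BATCH`, and the payload shape are illustrative stand-ins, not a real platform API: the point is that every piece of progress must be re-serialized into each hop, and a crash after the work but before the next invocation reprocesses the same batch on retry.

```javascript
// Sketch of the recursive-invocation workaround. All state (items, cursor,
// running totals) must ride along in the payload of every hop; anything
// forgotten here is simply lost between invocations.
const BATCH = 100;

async function processChunk(payload, invokeSelf) {
  const { items, cursor = 0, processed = 0 } = payload;
  const end = Math.min(cursor + BATCH, items.length);
  // ...real work on items.slice(cursor, end) would happen here...
  if (end < items.length) {
    // Re-invoke ourselves to continue; a crash between the work above and
    // this call means the next retry repeats (double-writes) this batch.
    return invokeSelf({ items, cursor: end, processed: processed + (end - cursor) });
  }
  return { processed: processed + (end - cursor) };
}

// Local stand-in for "invoke this function again" (e.g. a Lambda self-call).
const invokeSelf = (payload) => processChunk(payload, invokeSelf);
```

There is also no built-in place to observe progress: the chain of calls is only visible if you thread your own job IDs through every payload.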
How Inquir helps
Pipelines run outside the HTTP window
Inquir separates HTTP request handling from async execution. HTTP handlers are fast—they validate, accept, and trigger. Pipelines are async—they run to completion with per-step timeouts, retries, and execution history.
Because pipelines are managed serverless steps rather than persistent processes, you get the no-ops benefits of serverless without the HTTP timeout ceiling.
What you get
What no-timeout async execution enables
Large file processing
CSV/Excel with 500k rows, PDF rendering, video thumbnail extraction—no time pressure from the HTTP gateway.
Multi-model ML pipelines
Embed → classify → summarize → notify in sequential steps. Each step retries independently.
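The per-step retry behavior can be sketched with a tiny runner in plain JavaScript. `runStep`, `runPipeline`, and the retry count are assumptions for illustration, not Inquir's API—the sketch only shows why retrying one step is cheaper than retrying the whole chain.

```javascript
// Retry a single step up to `retries` extra times before failing it.
async function runStep(fn, input, retries = 2) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn(input); // step succeeded: move on
    } catch (err) {
      if (attempt >= retries) throw err; // retry budget is per step, not per pipeline
    }
  }
}

// Run steps sequentially, piping each step's output into the next.
async function runPipeline(steps, input) {
  let out = input;
  for (const fn of steps) out = await runStep(fn, out);
  return out;
}
```

With `runPipeline([embed, classify, summarize, notify], doc)`, a transient failure inside `classify` re-runs only `classify`—the embedding work is not repeated.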
Bulk API sync
Paginate through external APIs, write to database, handle rate limits—without holding a connection open.
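A single long-running step can drive the whole sync loop. This sketch assumes hypothetical `fetchPage` and `upsert` callbacks and a simple sleep-on-rate-limit policy—the shape of the external API is invented for illustration.

```javascript
// Paginate an external API to exhaustion inside one long-running step.
// `fetchPage(cursor)` is assumed to return { records, nextCursor, rateLimited? }.
async function syncAll(fetchPage, upsert, sleep = (ms) => new Promise((r) => setTimeout(r, ms))) {
  let cursor = null;
  let total = 0;
  do {
    const page = await fetchPage(cursor);    // one page from the external API
    await upsert(page.records);              // write the batch to the database
    total += page.records.length;
    if (page.rateLimited) await sleep(1000); // back off when the API asks
    cursor = page.nextCursor;                // null/undefined when exhausted
  } while (cursor);
  return total;
}
```

Because the step is not tied to an open HTTP connection, the backoff sleeps cost nothing but wall-clock time.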
Nightly ETL with audit trail
Scheduled pipelines run for hours, log every step, and alert on anomalies—without a VPS cron job.
What to do next
Pattern: HTTP accepts, pipeline runs
HTTP handler validates and enqueues
Parse input, validate, call global.durable.startNew(), return 202.
Pipeline steps run to completion
Each step runs as an isolated function with its own timeout. Chain steps with dependsOn.
Notify when done
Final step posts webhook or updates status in database—client polls or receives callback.
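The three-stage pattern above might be declared roughly like this. Field names such as `timeoutSeconds` are assumptions for illustration—only the `dependsOn` chaining and the `process-csv` pipeline name come from this page, so check the docs for the actual definition shape.

```javascript
// Hypothetical pipeline definition: three chained steps, each with its own
// timeout budget. The HTTP handler only triggers it via startNew().
const processCsv = {
  name: 'process-csv',
  steps: [
    { id: 'parse', timeoutSeconds: 900 },                        // download + parse the CSV
    { id: 'load', dependsOn: ['parse'], timeoutSeconds: 1800 },  // batched upserts
    { id: 'notify', dependsOn: ['load'] },                       // webhook or status update
  ],
};
```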
Code example
No-timeout CSV processing pipeline
HTTP accepts the upload URL and returns 202. Pipeline step reads, transforms, and stores—taking as long as needed with step-level retries.
```javascript
// HTTP handler: validate, enqueue the pipeline, return 202 immediately
export async function handler(event) {
  const { fileUrl, jobId } = JSON.parse(event.body || '{}');
  if (!fileUrl) {
    return { statusCode: 400, body: JSON.stringify({ error: 'fileUrl required' }) };
  }
  await global.durable.startNew('process-csv', undefined, { fileUrl, jobId });
  return { statusCode: 202, body: JSON.stringify({ jobId, status: 'processing' }) };
}
```

```javascript
// Pipeline step: process the CSV with no HTTP deadline
export async function handler(event) {
  const { fileUrl, jobId } = event.payload ?? {};
  const rows = await downloadAndParseCSV(fileUrl); // may take minutes for large files
  let processed = 0;
  for (const batch of chunk(rows, 1000)) {
    await db.upsertBatch(batch);
    processed += batch.length;
  }
  await notifyUser(jobId, { processed, total: rows.length });
  return { jobId, processed, total: rows.length };
}
```
When it fits
When you need no-timeout execution
When this works
- Work consistently taking longer than your HTTP platform limit
- Multi-step jobs where step-level retries are essential
When to skip it
- Fast synchronous work under 5 seconds—keep it in the HTTP handler
FAQ
Are pipeline steps literally unlimited in time?
Each step has a configurable per-step timeout—longer than HTTP request windows, but not infinite. Chain steps for multi-hour work. See docs for current per-step limits.
How does this differ from Lambda's 900s limit?
Lambda caps the entire execution at 900s regardless of what you are doing. Inquir pipelines chain steps, each with its own timeout budget, so complex jobs compose naturally.