Inquir Compute

BullMQ alternative: serverless background jobs without Redis workers

BullMQ gives you a robust Redis-backed job queue with workers, concurrency control, and retries. Inquir pipelines offer the same reliability semantics—retries, execution history, step-level failure isolation—without provisioning Redis, maintaining a worker process, or managing queue drain on deploys.

Last updated: 2026-04-20

Direct answer

Inquir pipelines are a serverless alternative to BullMQ: they are triggered from HTTP handlers (or cron, webhooks, or other pipelines) and run as managed function invocations. The platform handles scheduling, retry delivery, and execution records, with no Redis instance, worker process, or queue-drain logic to maintain.

When it fits

  • You want reliable background job execution without managing Redis and a worker process
  • Your team is on serverless/cloud functions and wants background jobs to fit the same model

Tradeoffs

  • Redis becomes a hard dependency: a Redis restart or network partition stalls every background job until the queue recovers.
  • Worker processes need graceful shutdown logic, concurrency tuning, and stalled-job recovery, all of which require configuration and ongoing maintenance.

What BullMQ requires to run in production

  • Redis instance: provisioned, monitored, backed up, and connected to workers
  • Worker process: a persistent Node.js process consuming the queue—needs its own deploy, scaling, and crash recovery
  • Queue drain on deploy: workers must finish in-flight jobs before restart or accept duplicate delivery
  • Dead-letter queue: manual DLQ setup for jobs that exhaust retries
  • Dashboard: Bull Board or Arena for queue visibility—another service to deploy

BullMQ is a mature, battle-tested queue library. For teams that already run Redis and want fine-grained queue semantics, it is excellent. For teams that want reliable background job execution without managing queue infrastructure, the operational overhead is significant.
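The dead-letter step in the list above is typically wired by hand. A minimal sketch of the decision a BullMQ `failed` listener has to make (the `dlqQueue` name and the wiring in the comment are illustrative, not a fixed convention):

```javascript
// Decide whether a failed BullMQ job has exhausted its retries and
// should be moved to a dead-letter queue. The check is extracted into
// a pure function; `job.attemptsMade` and `job.opts.attempts` are the
// real BullMQ job properties.
function shouldDeadLetter(job) {
  const maxAttempts = job.opts?.attempts ?? 1;
  return job.attemptsMade >= maxAttempts;
}

// Illustrative wiring (requires a running Redis and a Worker instance):
// worker.on('failed', async (job, err) => {
//   if (shouldDeadLetter(job)) {
//     await dlqQueue.add('dead-letter', { original: job.data, error: err.message });
//   }
// });
```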

Why BullMQ adds operational surface area

Every BullMQ deployment needs a Redis instance close to the worker process. Redis availability becomes a dependency for every background job in the system—a Redis restart or network partition stalls job processing.

Worker processes need graceful shutdown logic, concurrency tuning, and stalled-job recovery. These are solved problems in BullMQ, but they require configuration and ongoing maintenance that a managed serverless pipeline handles automatically.
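The queue-drain part of that maintenance looks roughly like this in a BullMQ worker process. `worker.close()` is the real BullMQ call that waits for in-flight jobs before resolving; the signal wiring is factored into a function with an injectable `proc` so the sketch can be exercised without Redis:

```javascript
// Attach graceful-shutdown handling to a BullMQ worker: on SIGTERM,
// stop taking new jobs, wait for in-flight ones, then exit cleanly.
// `proc` defaults to the real process but is injectable for testing.
function registerGracefulShutdown(worker, proc = process) {
  proc.on('SIGTERM', async () => {
    await worker.close(); // waits for active jobs to complete
    proc.exit(0);
  });
}

// Usage with a real worker (requires Redis):
// const worker = new Worker('email-queue', processor, { connection });
// registerGracefulShutdown(worker);
```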

Pipeline semantics without queue infrastructure

Inquir pipelines are triggered from HTTP handlers (or cron, webhook, or another pipeline) and run as managed serverless function invocations. The platform handles scheduling, retry delivery, and execution records—no Redis, no worker process, no queue drain logic.

Execution history gives the same visibility as Bull Board: every job run has a record with input, output, duration, retry count, and failure reason, accessible from the console without deploying a separate dashboard.
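As an illustration of what such a record carries, here is a hypothetical failed run. The field names are assumptions chosen to match the attributes listed above, not the platform's exact schema:

```javascript
// Hypothetical execution record for a failed run. Field names are
// illustrative; they mirror the attributes execution history exposes
// (input, output, duration, retry count, failure reason).
const executionRecord = {
  pipeline: 'send-welcome',
  status: 'failed',
  input: { userId: 42, email: 'user@example.com' },
  output: null,
  durationMs: 1830,
  retryCount: 3,
  failureReason: 'SMTP connection timed out',
};
```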

BullMQ vs Inquir pipelines

  • Queue backend: BullMQ uses Redis (self-managed or hosted); Inquir is a managed platform with no queue infrastructure to provision.
  • Worker process: BullMQ needs a persistent Node.js worker consuming the queue; Inquir runs serverless invocations with no persistent worker.
  • Retries: BullMQ offers configurable retry count, backoff, and a DLQ; Inquir offers configurable retry count and delay per pipeline step.
  • Execution visibility: BullMQ uses a Bull Board or Arena dashboard (a separate deploy); Inquir builds execution history into the platform console.

Migrating from BullMQ to Inquir pipelines

1. Convert the job processor to a pipeline function. BullMQ job processors receive a Job object; Inquir pipeline steps receive event.payload. Move the processor logic into a handler function.

2. Replace queue.add() with global.durable.startNew(). Where you call queue.add(name, data, opts), call global.durable.startNew(name, undefined, data) instead. The platform handles scheduling and retry delivery.

3. Set retry config on the pipeline step. Configure retry count and delay per step, the equivalent of BullMQ's attempts and backoff job options.
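As a sketch of that correspondence: BullMQ's `attempts` counts total tries, so `attempts: 3` means one initial try plus two retries. The `retries` and `delayMs` field names below are assumptions for the step-level config, since this page does not show Inquir's exact schema:

```javascript
// Map BullMQ job options to an equivalent pipeline-step retry config.
// BullMQ's `attempts` is the total try count; `backoff` may be a plain
// millisecond number or an object like { type: 'exponential', delay }.
// The returned field names (`retries`, `delayMs`) are hypothetical.
function toStepRetryConfig({ attempts = 1, backoff = 0 } = {}) {
  const delayMs = typeof backoff === 'number' ? backoff : (backoff.delay ?? 0);
  return { retries: Math.max(0, attempts - 1), delayMs };
}
```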

BullMQ → Inquir pipeline migration

BullMQ: define a processor and add jobs to the queue. Inquir: export a handler and trigger a pipeline—same semantics, no Redis.

Before: BullMQ
import { Queue, Worker } from 'bullmq';
const queue = new Queue('email-queue', { connection: redisConnection });

// Producer (HTTP handler)
await queue.add('send-welcome', { userId, email }, { attempts: 3, backoff: 5000 });

// Consumer (worker process)
const worker = new Worker('email-queue', async (job) => {
  await sendEmail(job.data.userId, job.data.email);
}, { connection: redisConnection });

After: Inquir pipeline
// Producer (HTTP handler — no Redis, no worker process)
await global.durable.startNew('send-welcome', undefined, { userId, email });
return { statusCode: 202, body: JSON.stringify({ queued: true }) };

// Consumer (pipeline function — jobs/send-welcome.mjs)
export async function handler(event) {
  const { userId, email } = event.payload ?? {};
  await sendEmail(userId, email);
  return { sent: true };
}

Choose Inquir pipelines over BullMQ when

When this works

  • You want reliable background job execution without managing Redis and a worker process
  • Your team is on serverless/cloud functions and wants background jobs to fit the same model

When to skip it

  • You need sub-second job latency and fine-grained queue prioritization where BullMQ's Redis-backed model has proven the right fit

FAQ

Does Inquir support job priorities?

Pipeline triggers are first-in first-scheduled. For strict priority queues with multiple lanes, BullMQ remains the right tool. Inquir fits well when you need reliable async execution without queue infrastructure overhead.
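For reference, BullMQ priorities are numeric, with lower numbers dispatched first; that ordering is the semantics a first-in first-scheduled trigger does not replicate. A minimal illustration (job names are made up):

```javascript
// BullMQ priority semantics: lower number = higher priority.
// Sorting pending jobs by priority reproduces the dispatch order.
const pending = [
  { name: 'digest-email', priority: 10 },
  { name: 'password-reset', priority: 1 },
  { name: 'welcome-email', priority: 5 },
];
const dispatchOrder = [...pending]
  .sort((a, b) => a.priority - b.priority)
  .map((j) => j.name);
// dispatchOrder: ['password-reset', 'welcome-email', 'digest-email']
```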

How do I migrate gradually from BullMQ?

Start with new job types as Inquir pipelines. Keep existing BullMQ workers running until new types are stable. Migrate high-value job types first—those with the most operational overhead from Redis and worker management.
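One way to sketch that cutover is a small dispatcher that routes migrated job types to a pipeline and everything else to the existing queue. The `migrated` set and the injected sender functions are illustrative, not part of either API:

```javascript
// Route a job by type during a gradual migration: migrated types go
// to an Inquir pipeline, everything else stays on the BullMQ queue.
// `startNew` and `addToQueue` are injected so the routing is testable.
function dispatchJob(name, data, { migrated, startNew, addToQueue }) {
  if (migrated.has(name)) {
    return startNew(name, undefined, data); // Inquir pipeline trigger
  }
  return addToQueue(name, data); // existing BullMQ path
}
```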

Inquir Compute

The simplest way to run AI agents and backend jobs without infrastructure.

Contact info@inquir.org

© 2025 Inquir Compute. All rights reserved.