Inquir Compute · cron

Serverless scheduled jobs with run history, retries, and logs

Schedule recurring jobs with cron expressions validated at save time, execution history beside HTTP invocations, configurable retries, and the same secrets and logging model as your API functions—no VPS crontab, no silent failures.

Last updated: 2026-04-20

Direct answer

Serverless scheduled jobs with run history, retries, and logs. Each scheduled job is a pipeline with a cron trigger. The platform validates cron expressions at save time, tracks next-run-at per pipeline, and creates invocation records for every execution—queryable without SSH.

When it fits

  • Nightly ETL, hourly sync, weekly reports, certificate rotation
  • Any recurring task that needs visible run history and retries

Tradeoffs

  • Timer resolution may not support sub-minute intervals; validate before relying on fine-grained schedules
  • Job state such as sync cursors must live in an external store or an updated workspace secret, not on a local disk

Silent failures in traditional scheduled jobs

  • VPS crontab: output routed to root mail nobody reads; no retry on failure
  • systemd timers: better reliability, but logs scattered, secrets manual, no history
  • Kubernetes CronJob: correct primitives but cluster overhead for small teams

Scheduled jobs fail silently more often than any other backend primitive. The job runs on a VPS, the script exits non-zero, the error goes to a mail spool, and nobody knows until data is 3 days stale.

Why crontab-on-a-server doesn't scale past one engineer

When job state lives on a single host, every deploy might reset the schedule, every SSH session is a potential foot-gun, and every rotation of the VPS means re-reading a 2-year-old runbook.

Secrets stored in `.env` files on the server are audit nightmares; crontab entries are not version-controlled with your application code.

Scheduled pipelines as first-class serverless

Each scheduled job is a pipeline with a cron trigger. The platform validates cron expressions at save time, tracks next-run-at per pipeline, and creates invocation records for every execution—queryable without SSH.

Scheduled jobs share workspace secrets and the same observability stack as HTTP routes. Nightly ETL, weekly reports, hourly sync—all visible in execution history next to webhook and API invocations.

Scheduled job features

Cron expression validation

Expressions are validated when you save the pipeline—malformed entries fail immediately, not silently on the next tick.
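As a rough illustration of what save-time validation catches, here is a minimal checker for standard 5-field cron expressions. The `validateCron` helper is purely illustrative; the platform's actual validator is not shown here.

```javascript
// Minimal sketch of save-time cron validation for standard 5-field
// expressions. Accepts *, single values, ranges, steps, and comma lists.
const FIELDS = [
  { name: 'minute', min: 0, max: 59 },
  { name: 'hour', min: 0, max: 23 },
  { name: 'day-of-month', min: 1, max: 31 },
  { name: 'month', min: 1, max: 12 },
  { name: 'day-of-week', min: 0, max: 6 },
];

function validateCron(expr) {
  const parts = expr.trim().split(/\s+/);
  if (parts.length !== 5) return { ok: false, error: 'expected 5 fields' };
  for (let i = 0; i < 5; i++) {
    const { name, min, max } = FIELDS[i];
    for (const token of parts[i].split(',')) {
      // Match *, N, N-M, each optionally followed by /step.
      const m = token.match(/^(\*|\d+)(-(\d+))?(\/\d+)?$/);
      if (!m) return { ok: false, error: `malformed ${name} field: ${token}` };
      // Range-check any numeric bounds in the token.
      for (const v of [m[1], m[3]].filter((x) => x !== undefined && x !== '*')) {
        const n = Number(v);
        if (n < min || n > max) return { ok: false, error: `${name} out of range: ${v}` };
      }
    }
  }
  return { ok: true };
}
```

Failing at save time rather than on the next tick is the whole point: a typo like a 4-field expression is rejected before it can silently never fire.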

Run history and logs

Every scheduled execution creates an invocation record: start time, duration, exit status, step outputs. Queryable without SSH.

Retries and overlap guards

Configure retry count and delay per step. Add idempotency keys for overlap protection when long jobs run past their interval.
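A minimal sketch of the per-step retry behavior described above, assuming `retryCount` and `retryDelayMs` as illustrative names for the configurable knobs (not the platform's exact schema):

```javascript
// Runs a step, retrying on failure up to retryCount times with a fixed
// delay between attempts. Throws the last error if all attempts fail.
async function runWithRetries(step, { retryCount = 2, retryDelayMs = 1000 } = {}) {
  let lastError;
  for (let attempt = 0; attempt <= retryCount; attempt++) {
    try {
      return await step();
    } catch (err) {
      lastError = err;
      if (attempt < retryCount) {
        await new Promise((resolve) => setTimeout(resolve, retryDelayMs));
      }
    }
  }
  throw lastError;
}
```

Idempotency keys play the complementary role: if the same step fires twice, whether from a retry or from an overlapping run, the downstream write is deduplicated by key instead of applied twice.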

Shared secrets

Scheduled jobs use the same workspace secrets as HTTP routes—no parallel `.env` files on a server.

How to create serverless scheduled jobs on Inquir

1. Write the job handler

Standard serverless function. Use environment variables for secrets; return structured output per step.

2. Create pipeline with cron trigger

Set trigger type to schedule, enter a cron expression. Platform validates and schedules the first run.
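A hypothetical pipeline definition with a schedule trigger might look like the following; field names here are illustrative, not the platform's exact schema:

```json
{
  "name": "nightly-sync (UTC)",
  "trigger": { "type": "schedule", "cron": "0 3 * * *" },
  "steps": [
    { "function": "jobs/nightly-sync", "retryCount": 2, "retryDelayMs": 30000 }
  ]
}
```

Note the timezone assumption documented in the name, per the FAQ below: `0 3 * * *` fires at 03:00 UTC daily.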

3. Monitor run history

Open execution history to see every run, duration, and output without SSH access.

Nightly sync job with watermark

Scheduled handler reads cursor from environment, fetches incremental updates, upserts idempotently, and returns new cursor for the next run.

jobs/nightly-sync.mjs
// `source` and `destination` are your own API clients; the local module
// path below is illustrative.
import { source, destination } from './clients.mjs';

export async function handler(event) {
  // event.trigger?.type === 'schedule' when fired by a cronTrigger node
  const since = process.env.SYNC_CURSOR ?? new Date(Date.now() - 86_400_000).toISOString();
  const records = await source.fetchUpdatedSince(since);
  if (records.length === 0) return { synced: 0, cursor: since };
  await destination.upsertBatch(records); // idempotent by record ID
  const newCursor = records.at(-1)?.updatedAt ?? since;
  // Store cursor for next run (update env var or external store)
  return { synced: records.length, cursor: newCursor };
}

Use serverless scheduled jobs for…

When this works

  • Nightly ETL, hourly sync, weekly reports, certificate rotation
  • Any recurring task that needs visible run history and retries

When to skip it

  • Sub-minute scheduling—validate platform timer resolution before relying on fine-grained intervals

FAQ

Can I run the same function on a schedule and via HTTP?

Yes. Reference the same function ID in a pipeline schedule trigger and in a gateway HTTP route. Use `event.pipeline` to distinguish invocation context in the handler if needed.
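A minimal sketch of such a dual-use handler, branching on the trigger type; the event property names follow the fields referenced elsewhere on this page and should be treated as assumptions:

```javascript
// One handler serving both a cron trigger and an HTTP route. Scheduled
// runs do a full pass; HTTP calls sync only the requested scope.
async function handler(event) {
  if (event?.trigger?.type === 'schedule') {
    return { mode: 'schedule', full: true };
  }
  const scope = event?.query?.scope ?? 'recent';
  return { mode: 'http', scope };
}
```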

What timezone do cron expressions use?

Use UTC for production schedules unless your requirements are strictly wall-clock business hours. Document the timezone assumption in the pipeline name.


The simplest way to run AI agents and backend jobs without infrastructure.

Contact info@inquir.org

© 2025 Inquir Compute. All rights reserved.