Inquir Compute

Serverless functions backed by real containers, not edge isolates

Every Inquir function runs in an isolated container (Docker/Firecracker)—not a V8 edge isolate. That means native modules, real Node.js 22 / Python 3.12 / Go 1.22 environments, and no isolate restrictions—while keeping the serverless model: deploy code, get an HTTP endpoint, no cluster to manage.

Last updated: 2026-04-20

  • Per-function container isolation — dependencies never clash across functions
  • Native modules via layers: sharp, bcrypt, numpy, pandas, cgo — no isolate limits
  • Node.js 22, Python 3.12, Go 1.22 behind one gateway and pipeline
  • No cluster to manage — push code, get an endpoint

Direct answer

Inquir runs serverless functions in real containers, not edge isolates. Each function gets its own isolated container, so native modules attached via layers behave as they would on a server: heavy Python libraries, native Node.js addons, and Go CGO packages all run as expected, while you still just deploy code and get an HTTP endpoint.

When it fits

  • Your function needs native modules, subprocesses, or private network access that edge isolates forbid.
  • You want Node.js, Python, and Go in one gateway without splitting vendors per language.

Tradeoffs

  • Container cold starts are slower than V8 isolate cold starts; functions that need steady low latency should enable hot containers (warm pools).
  • Not edge-distributed: tiny, pure-JavaScript functions that must run at hundreds of edge POPs for global latency are better served by isolates.

Why edge isolates are not enough for heavier workloads

V8 isolates are fast and globally distributed—ideal for tiny logic at the edge. But they ban native addons, restrict filesystem access, cap memory, and block access to private networks. Sharp for image resizing, bcrypt for password hashing, numpy for data processing, and any C-extension Python library need a real runtime environment.

Functions backed by real containers fill the gap: you still deploy code and get HTTP endpoints without managing a cluster, but the execution environment is a full OS-level runtime instead of a sandboxed JavaScript engine.

Why VMs and Kubernetes are too heavy for most teams

Provisioning a VM or a Kubernetes Deployment for each function means owning OS updates, scaling policies, ingress certificates, and rolling deploys before writing any business logic.

Managed container services like ECS or Cloud Run help, but each still requires wiring load balancers, health checks, and task definitions—more ceremony than most backends need.

Container isolation without cluster operations

Each function runs in its own isolated container. Native modules attached via layers work because the runtime is a real container environment, not an isolate shim. Heavy Python libraries, native Node.js addons, and Go CGO packages all run as expected.

The same container boundary serves gateway HTTP invocations, pipeline steps, cron triggers, and async jobs—one deployment model for all your backend workloads.

What container-backed functions unlock

Native modules via layers

Attach sharp, bcrypt, canvas, grpc, numpy, pandas, or any C-extension library as a layer. Runs correctly because the runtime is a real container—not a V8 isolate.

Full standard library access

Node.js fs, child_process, crypto; Python subprocess, os, ctypes; Go os/exec and CGO—all available with no sandbox restrictions.
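As a quick illustration, a minimal handler sketch that leans on exactly the APIs isolates forbid: it spawns a subprocess, writes to the local filesystem, and uses the full crypto module. The handler signature mirrors the sharp example below; everything else is plain Node.js standard library.

```javascript
import { execFileSync } from 'node:child_process';
import { createHash } from 'node:crypto';
import { writeFileSync, readFileSync } from 'node:fs';
import os from 'node:os';
import path from 'node:path';

export function handler() {
  // Spawn a real subprocess -- blocked in V8 isolates.
  const uname = execFileSync('uname', ['-s']).toString().trim();

  // Write to the container's local filesystem.
  const tmp = path.join(os.tmpdir(), 'probe.txt');
  writeFileSync(tmp, uname);

  // Full crypto module, not a WebCrypto subset.
  const digest = createHash('sha256').update(readFileSync(tmp)).digest('hex');

  return { statusCode: 200, body: JSON.stringify({ uname, digest }) };
}
```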

Warm containers for steady traffic

Enable hot containers to keep a pool of pre-warmed instances ready. Useful for functions with consistent traffic that need low p95 latency.

Polyglot in one gateway

Mix Node.js, Python, and Go functions behind one API gateway, one secrets model, and one execution history—no separate infrastructure per language.

How container-backed functions work on Inquir

1. Choose a runtime

Node.js 22, Python 3.12, or Go 1.22. Declare dependencies in package.json, requirements.txt, or go.mod—no Dockerfile needed.

2. Attach layers for native dependencies

Add platform-provided or custom layers for native modules. The platform mounts them at runtime inside the container.

3. Wire to gateway, cron, or pipeline

Attach the function to an HTTP route, a scheduled trigger, or a background job—the same container handles all invocation types.
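Putting the three steps together, one handler can branch on how it was invoked. The event shape below (`trigger`, `query`, `payload` fields) is illustrative only, not a documented Inquir API:

```javascript
// Hypothetical event shape: `trigger` and its values are illustrative,
// showing one handler serving gateway, cron, and async-job invocations.
export function handler(event) {
  switch (event.trigger) {
    case 'http': // gateway invocation: respond to the HTTP request
      return { statusCode: 200, body: `hello ${event.query?.name ?? 'world'}` };
    case 'cron': // scheduled trigger: no response body needed
      return { statusCode: 204 };
    case 'job': // async job: process the payload
      return { statusCode: 200, body: JSON.stringify({ processed: event.payload }) };
    default:
      return { statusCode: 400, body: 'unknown trigger' };
  }
}
```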

Native module example: sharp image processing

Sharp uses native C++ bindings—runs fine in Inquir containers, fails in most edge isolates. The function receives image bytes via HTTP body or pipeline payload.

images/thumbnail.mjs
import sharp from 'sharp';

export async function handler(event) {
  // Body is base64 for binary payloads routed through the gateway
  const input = Buffer.from(event.body ?? '', 'base64');
  const thumbnail = await sharp(input)
    .resize(320, 240)
    .jpeg({ quality: 82 })
    .toBuffer();
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'image/jpeg' },
    body: thumbnail.toString('base64'),
    isBase64Encoded: true,
  };
}

Use serverless containers when…

When this works

  • Your function needs native modules, subprocesses, or private network access that edge isolates forbid.
  • You want Node.js, Python, and Go in one gateway without splitting vendors per language.

When to skip it

  • Your function is tiny, pure JavaScript, and must run at hundreds of edge POPs for global latency—V8 isolates are better for that.

FAQ

Do I need to write a Dockerfile?

No. Declare dependencies in package.json, requirements.txt, or go.mod and the platform builds the container from a managed base image for the chosen runtime. Attach layers for native modules—no custom Dockerfile required.
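For a Node.js function using sharp, that means nothing beyond an ordinary npm manifest (package name and versions here are illustrative):

```json
{
  "name": "images",
  "type": "module",
  "dependencies": {
    "sharp": "^0.33.0"
  }
}
```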

How do containers compare to Cloudflare Workers?

Workers run in V8 isolates optimized for edge latency; Inquir containers run full Node/Python/Go images optimized for native modules, private network calls, and heavier dependencies. Use Workers at the edge for caching and fan-out; use Inquir containers for origin logic.

Is cold start slower than edge isolates?

Container cold starts are slower than V8 isolate cold starts. Use hot containers (warm pools) to pre-warm functions that need steady low latency. Measure p95/p99 with realistic traffic before committing to a cold-only model.
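Measuring the tail is a few lines of code. A minimal sketch using the nearest-rank percentile method over collected latency samples:

```javascript
// Compute a latency percentile from collected samples (nearest-rank method).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// 18 warm invocations plus 2 cold starts (example numbers, in milliseconds).
const latenciesMs = [
  12, 13, 14, 15, 16, 12, 13, 14, 15, 16,
  12, 13, 14, 15, 16, 12, 13, 14, 250, 900,
];
console.log('p95:', percentile(latenciesMs, 95), 'ms'); // p95: 250 ms
console.log('p99:', percentile(latenciesMs, 99), 'ms'); // p99: 900 ms
```

With cold starts in the sample, the median stays low while p95/p99 expose the real cost, which is exactly what warm pools are meant to flatten.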

Can I use Python ML libraries like numpy and pandas?

Yes. Python 3.12 containers support the full PyPI ecosystem including numpy, pandas, scikit-learn, and other libraries with C extensions—pip install them like any other dependency.
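A typical requirements.txt for such a function is just standard pip syntax (version pins illustrative):

```
numpy>=1.26
pandas>=2.2
scikit-learn>=1.4
```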

Inquir Compute

The simplest way to run AI agents and backend jobs without infrastructure.

Contact info@inquir.org

© 2025 Inquir Compute. All rights reserved.