Offload work to background jobs for email sending, webhook delivery, credit resets, and RAG indexing. The job adapter is selected via the `JOB_PROVIDER` environment variable.
## Providers
| Provider | Env Value | Best For |
|---|---|---|
| In-memory queue | memory (default) | Development, single-instance |
| BullMQ (Redis) | bullmq | Production, multi-instance |
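To make the adapter split concrete, here is an illustrative sketch of provider selection. The real adapter interface in `@/lib/jobs` is not shown in these docs, so the type names and method signatures below are assumptions.

```ts
// Illustrative sketch only: the actual adapter interface in @/lib/jobs
// may differ from these assumed types.
type JobHandler = (payload: unknown) => void | Promise<void>;

interface JobAdapter {
  enqueue(name: string, payload: unknown): Promise<void>;
}

// Minimal in-memory adapter: handlers run in the same process, which is
// why pending jobs are lost on restart.
class MemoryAdapter implements JobAdapter {
  constructor(private handlers: Map<string, JobHandler>) {}

  async enqueue(name: string, payload: unknown): Promise<void> {
    const handler = this.handlers.get(name);
    if (!handler) throw new Error(`No handler for job "${name}"`);
    await handler(payload);
  }
}

// Pick an adapter from JOB_PROVIDER, defaulting to memory.
function selectAdapter(handlers: Map<string, JobHandler>): JobAdapter {
  const provider = process.env.JOB_PROVIDER ?? 'memory';
  if (provider === 'bullmq') {
    // A real BullMQ adapter would connect to REDIS_URL here.
    throw new Error('BullMQ adapter not sketched here');
  }
  return new MemoryAdapter(handlers);
}
```

Application code calls the same `enqueue` API either way; only the adapter behind it changes.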
## Enqueuing Jobs

```ts
import { enqueue, enqueueEmail } from '@/lib/jobs';

// Generic job
await enqueue('webhook-retry', { webhookId: 'wh_123', attempt: 1 });

// Shorthand for emails
await enqueueEmail('[email protected]', 'Welcome!', emailHtml);
```
## Built-in Jobs

| Job Name | Description |
|---|---|
| `send-email` | Sends transactional email via Resend |
| `webhook-retry` | Retries failed webhook deliveries with exponential backoff |
| `credit-reset` | Resets monthly AI/usage credits for all organizations |
| `rag-index` | Indexes documents for the AI RAG pipeline |
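The table above says `webhook-retry` uses exponential backoff. As a rough illustration, a backoff schedule typically looks like the sketch below; the base delay and cap are assumptions, not values from the codebase.

```ts
// Hypothetical backoff schedule for webhook-retry. The docs only state
// "exponential backoff"; baseMs and capMs here are illustrative defaults.
function retryDelayMs(
  attempt: number,
  baseMs: number = 1_000,
  capMs: number = 3_600_000,
): number {
  // attempt 1 → 1s, attempt 2 → 2s, attempt 3 → 4s, ..., capped at 1h
  return Math.min(baseMs * 2 ** (attempt - 1), capMs);
}
```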
## Cron Jobs
| Schedule | Task |
|---|---|
| Daily | Session cleanup (remove expired sessions) |
| Monthly | Credit reset (reset usage quotas) |
Cron jobs are registered automatically when the app starts and run through the same job adapter as regular jobs.
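The two documented schedules map naturally onto standard five-field cron expressions. The expressions below are an illustration of that mapping, not the exact values the app uses.

```ts
// Illustrative mapping of the documented schedules to standard cron
// expressions; the app's actual expressions are assumptions here.
type Schedule = 'daily' | 'monthly';

const CRON_EXPRESSIONS: Record<Schedule, string> = {
  daily: '0 0 * * *',   // midnight every day (session cleanup)
  monthly: '0 0 1 * *', // midnight on the 1st (credit reset)
};

function cronFor(schedule: Schedule): string {
  return CRON_EXPRESSIONS[schedule];
}
```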
## Switching to BullMQ

For production, use BullMQ with Redis for durable, concurrent job processing:

```bash
JOB_PROVIDER="bullmq"
REDIS_URL="redis://localhost:6379"
JOB_QUEUE_NAME="my-app"  # optional, defaults to "codapult"
```
### Worker Process
In production, run the worker as a separate process alongside your Next.js server. The worker picks up jobs from Redis and processes them independently.
For Kubernetes deployments, the Helm chart includes a dedicated worker Deployment. For Docker, add a separate service in docker-compose.yml.
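For the Docker case, a separate worker service might look like the following sketch. The service names, worker entrypoint (`worker.js`), and image setup are assumptions; adapt them to your actual build.

```yaml
# Illustrative docker-compose.yml fragment; entrypoint and service
# names are assumptions, not taken from the project.
services:
  worker:
    build: .
    command: node worker.js   # assumed worker entrypoint
    environment:
      JOB_PROVIDER: bullmq
      REDIS_URL: redis://redis:6379
    depends_on:
      - redis
  redis:
    image: redis:7-alpine
```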
**Important:** The `memory` adapter processes jobs in-process and loses pending jobs on restart. Always use `bullmq` in production.
## Environment Variables

| Variable | Required | Description |
|---|---|---|
| `JOB_PROVIDER` | No | `"memory"` (default) or `"bullmq"` |
| `REDIS_URL` | Yes\* | Redis connection URL, e.g. `redis://localhost:6379` |
| `JOB_QUEUE_NAME` | No | BullMQ queue name. Defaults to `"codapult"` |

\* Required when `JOB_PROVIDER=bullmq`.