Async Processing Architecture

Background Jobs & Task Queues

Queues, Cron Triggers, and async processing patterns. Handle heavy work without blocking requests.

📖 12 min read · January 24, 2026

Workers have a 30-second CPU time limit. That's plenty for most requests, but not for sending emails, processing images, syncing data, or generating reports. You need background jobs.

Here's how we handle async work across our production systems.

Async Job Flow

📥 Request → 📤 Queue → ⚙️ Process → ✅ Done

Pattern 1: Cloudflare Queues

queue-producer.ts
interface EmailJob {
  type: 'email';
  to: string;
  template: string;
  data: Record<string, any>;
}

interface ReportJob {
  type: 'report';
  reportId: string;
  userId: string;
  dateRange: { start: string; end: string };
}

type Job = EmailJob | ReportJob;

// Send a job to the queue from an HTTP handler
export default {
  async fetch(request: Request, env: Env) {
    const lead = await request.json<any>();

    // Save lead to database
    await saveLead(lead, env);

    // Queue the notification email (don't wait for delivery)
    await env.JOB_QUEUE.send({
      type: 'email',
      to: lead.email,
      template: 'lead-confirmation',
      data: { name: lead.name, property: lead.property }
    });

    // Return immediately
    return Response.json({ success: true, id: lead.id });
  }
};
queue-consumer.ts
// Process jobs from the queue
export default {
  async queue(batch: MessageBatch<Job>, env: Env) {
    for (const message of batch.messages) {
      const job = message.body;
      try {
        switch (job.type) {
          case 'email':
            await sendEmail(job, env);
            break;
          case 'report':
            await generateReport(job, env);
            break;
          default:
            console.log('Unknown job type', job);
        }
        message.ack();
      } catch (error) {
        console.error('Job failed', error);
        if (message.attempts < 3) {
          // Exponential backoff: 60s, 120s, 240s, ...
          message.retry({ delaySeconds: 60 * 2 ** (message.attempts - 1) });
        } else {
          // Move to the dead letter queue with error context
          await env.DLQ.send({
            job,
            error: error instanceof Error ? error.message : String(error)
          });
          message.ack();
        }
      }
    }
  }
};
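
Queues need bindings on both sides. A minimal wrangler.toml sketch, assuming the queue is named job-queue and the dead letter queue job-dlq (the binding names match the code above):

wrangler.toml
[[queues.producers]]
binding = "JOB_QUEUE"
queue = "job-queue"

[[queues.producers]]
binding = "DLQ"
queue = "job-dlq"

[[queues.consumers]]
queue = "job-queue"
max_batch_size = 10        # messages delivered per queue() invocation
max_batch_timeout = 30     # seconds to wait while filling a batch
max_retries = 3            # then messages move to the dead letter queue
dead_letter_queue = "job-dlq"

With dead_letter_queue configured, messages that exhaust max_retries are moved over automatically; the explicit env.DLQ.send in the consumer just attaches error context to the payload.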

Pattern 2: Cron Triggers

cron-handler.ts
// wrangler.toml
// [triggers]
// crons = ["0 * * * *", "0 0 * * *"]  # hourly and daily

export default {
  async scheduled(
    event: ScheduledController,
    env: Env,
    ctx: ExecutionContext
  ) {
    const cronPattern = event.cron;
    switch (cronPattern) {
      case '0 * * * *': // every hour
        await syncLeadScores(env);
        await cleanupExpiredSessions(env);
        break;
      case '0 0 * * *': // daily at midnight UTC
        await generateDailyReport(env);
        await refreshMarketData(env);
        await sendDigestEmails(env);
        break;
    }
  }
};

async function syncLeadScores(env: Env) {
  // Get leads updated in the last hour
  const leads = await env.DB.prepare(`
    SELECT * FROM leads
    WHERE updated_at > datetime('now', '-1 hour')
  `).all();

  // Recalculate scores
  for (const lead of leads.results) {
    const score = await calculateLeadScore(lead, env);
    await env.DB.prepare(
      'UPDATE leads SET score = ? WHERE id = ?'
    ).bind(score, lead.id).run();
  }
}
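
If an hourly sweep grows past what one invocation should handle, the cron job can fan out to the queue from Pattern 1 instead of processing inline. A sketch, assuming the JOB_QUEUE binding from above and a hypothetical 'score' job type the consumer would need a matching case for:

cron-fanout.ts
// Sketch: the hourly cron enqueues one message per lead instead of
// scoring inline, so each lead gets its own consumer invocation,
// retry behavior, and CPU budget.
async function syncLeadScoresViaQueue(env: Env) {
  const leads = await env.DB.prepare(`
    SELECT id FROM leads
    WHERE updated_at > datetime('now', '-1 hour')
  `).all();

  const messages = leads.results.map((lead) => ({
    body: { type: 'score', leadId: lead.id }
  }));

  // sendBatch accepts at most 100 messages per call
  for (let i = 0; i < messages.length; i += 100) {
    await env.JOB_QUEUE.sendBatch(messages.slice(i, i + 100));
  }
}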

Pattern 3: waitUntil for Fire-and-Forget

waituntil-pattern.ts
export default {
  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
    const start = Date.now();
    const response = await handleRequest(request, env);

    // Fire-and-forget tasks (don't block the response)
    ctx.waitUntil(Promise.all([
      // Log the request
      logRequest({
        method: request.method,
        path: new URL(request.url).pathname,
        duration: Date.now() - start,
        status: response.status
      }, env),
      // Update analytics
      trackPageview(request, env),
      // Sync to an external service
      syncToAnalytics(request, response, env)
    ]));

    // Response is sent immediately
    return response;
  }
};
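
One caveat: a promise that rejects inside waitUntil fails silently, since the response has already been sent. A small wrapper that at least logs the failure, as a sketch:

waituntil-safe.ts
// Sketch: wrap fire-and-forget work so a rejected promise is logged
// instead of disappearing after the response goes out.
function fireAndForget(ctx: ExecutionContext, label: string, task: Promise<unknown>) {
  ctx.waitUntil(
    task.catch((error) => {
      console.error(`background task failed: ${label}`, error);
    })
  );
}

// Usage inside fetch():
//   fireAndForget(ctx, 'track-pageview', trackPageview(request, env));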
When to Use What
Use waitUntil for logging, analytics, and quick side effects that finish within about 30 seconds. Use Queues for reliable job processing with retries. Use Cron Triggers for scheduled recurring tasks like reports and cleanup.

Background Job Checklist

  • Use Queues for reliable async job processing
  • Implement retries with exponential backoff
  • Move failed jobs to a dead letter queue after max retries
  • Use Cron Triggers for scheduled recurring tasks
  • Use waitUntil for non-critical fire-and-forget tasks
  • Monitor queue depth and processing latency
  • Break long jobs into smaller chunks
  • Store job progress for resumability (see the sketch after this list)
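
The last two items pair up. A sketch of a chunked, resumable job, assuming a hypothetical job_progress(job_id TEXT PRIMARY KEY, cursor INTEGER) table in D1 and a per-row exportLead helper:

chunked-job.ts
// Sketch: process a large table in fixed-size chunks, committing a
// cursor after each chunk so a retried job resumes where it left off.
const CHUNK_SIZE = 500;

async function processChunked(jobId: string, env: Env) {
  // Resume from the last committed cursor (0 on the first attempt)
  const row = await env.DB.prepare(
    'SELECT cursor FROM job_progress WHERE job_id = ?'
  ).bind(jobId).first<{ cursor: number }>();
  let cursor = row?.cursor ?? 0;

  while (true) {
    const chunk = await env.DB.prepare(
      'SELECT * FROM leads WHERE id > ? ORDER BY id LIMIT ?'
    ).bind(cursor, CHUNK_SIZE).all();
    if (chunk.results.length === 0) break;

    for (const lead of chunk.results) {
      await exportLead(lead, env);
      cursor = lead.id as number;
    }

    // Commit progress so a crash or retry resumes from this point
    await env.DB.prepare(`
      INSERT INTO job_progress (job_id, cursor) VALUES (?, ?)
      ON CONFLICT(job_id) DO UPDATE SET cursor = excluded.cursor
    `).bind(jobId, cursor).run();
  }
}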

Background jobs should be idempotent: running the same job twice should produce the same result. This makes retries safe and debugging easier.
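
One way to get there is an idempotency key: record each message id before performing the side effect and skip duplicates. A sketch, assuming a hypothetical processed_jobs(message_id TEXT PRIMARY KEY) table:

idempotent-consumer.ts
// Sketch: record each message id before doing the side effect so a
// redelivered message is acknowledged and skipped.
async function handleOnce(message: Message<Job>, env: Env) {
  const result = await env.DB.prepare(
    'INSERT OR IGNORE INTO processed_jobs (message_id) VALUES (?)'
  ).bind(message.id).run();

  // INSERT OR IGNORE writes zero rows if the id was already recorded
  if (result.meta.changes === 0) {
    message.ack(); // duplicate delivery: already handled
    return;
  }

  await processJob(message.body, env); // the switch from queue-consumer.ts, factored out
  message.ack();
}

Recording before the side effect guarantees no duplicate emails but can drop one if the Worker crashes mid-send; recording after flips that tradeoff. Choose per job type.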

Related Articles

Webhook Processing at Scale
Real-Time Data Pipelines
Error Handling Patterns

Need Async Processing?

We build job systems that scale reliably.

→ Get Started