The Complete Cloudflare Workers Stack: 28 Workers Powering a Real Business
Real architecture. Real response times. Real code. How we built our entire backend on the edge.
This isn't a tutorial. It's a production architecture guide from someone running 28 Cloudflare Workers in production right now. Sub-3ms response times. Zero server management. Global distribution.
I'm going to show you exactly how we built it, why we made specific decisions, and share real performance metrics from our dashboard.
The Architecture: Why 28 Workers, Not 1
My first attempt was a monolith. One giant worker doing everything: leads, valuations, notifications, data processing. 10,000+ lines of code. It worked, until it didn't.
Debugging was hell. One bug could take down everything. Deployments were scary. So I rebuilt everything with a microservices mindset:
"Each worker does one thing well. If it breaks, only that function breaks. If it needs scaling, only that function scales."
The Worker Inventory (Production)
Here's how the 28 workers break down by category:
🤖 AI & Chatbots
📊 Data & Analytics
🔗 Integrations & Webhooks
🛠️ Utilities & Proxies
Code Patterns That Work
Here are the patterns we use across all 28 workers:
1. The Standard Worker Structure
// Shared CORS headers, used by every response below
const corsHeaders = {
  'Access-Control-Allow-Origin': '*',
  'Access-Control-Allow-Methods': 'GET, POST, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type'
};

export default {
  async fetch(request, env, ctx) {
    // CORS preflight handling
    if (request.method === 'OPTIONS') {
      return new Response(null, { headers: corsHeaders });
    }
    try {
      const url = new URL(request.url);
      // Route to handlers
      if (url.pathname === '/api/valuation') {
        return handleValuation(request, env);
      }
      return new Response('Not Found', { status: 404 });
    } catch (error) {
      return new Response(JSON.stringify({ error: error.message }), {
        status: 500,
        headers: { 'Content-Type': 'application/json', ...corsHeaders }
      });
    }
  }
};
2. KV Caching Pattern
async function getWithCache(key, fetchFn, env, ctx, ttl = 3600) {
  // Try cache first
  const cached = await env.CACHE_KV.get(key, 'json');
  if (cached) return cached;
  // Fetch fresh data
  const fresh = await fetchFn();
  // Store in cache without blocking the response (ctx must be passed
  // in from the fetch handler so waitUntil is available here)
  ctx.waitUntil(
    env.CACHE_KV.put(key, JSON.stringify(fresh), { expirationTtl: ttl })
  );
  return fresh;
}
3. Lead Processing Flow
async function processLead(lead, env) {
  // 1. Validate
  const validation = validateLead(lead);
  if (!validation.valid) return { error: validation.errors };

  // 2. Enrich with property data via a service binding. The URL must
  //    be absolute, but the hostname is ignored for service bindings.
  const valuationResponse = await env.VALUATION.fetch('https://valuation/api/quick', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ address: lead.address })
  });
  const propertyData = await valuationResponse.json();

  // 3. Score the lead
  const score = calculateLeadScore(lead, propertyData);

  // 4. Route to the appropriate channel
  await env.SLACK_INTEL.fetch('https://slack-intel/api/notify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      channel: score > 80 ? '#hot-leads' : '#leads-incoming',
      lead: { ...lead, score, propertyData }
    })
  });

  return { success: true, score };
}
Cloudflare Services We Use
☁️ Workers
Serverless compute at 200+ edge locations. All 28 of our backend services run here. Free tier is generous (100K requests/day).
🗄️ D1 Database
SQLite at the edge. We use this for lead storage, property data, and analytics. Automatic replication, zero config.
⚡ KV Storage
Key-value store for caching: API responses, session data, rate limiting (see the sketch after this section). Eventually consistent but blazing fast.
📄 Pages
Static site hosting with edge functions. Our main site, instant-offer calculator, and playbook all run on Pages.
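As a concrete example of the rate-limiting use mentioned under KV Storage, here's a minimal fixed-window limiter. The RATE_KV binding and the limits are assumptions, and because KV is eventually consistent the counts are best-effort, not exact:

// Best-effort fixed-window rate limiter on KV. RATE_KV and the
// limits are hypothetical; KV's eventual consistency means counts
// are approximate, which is fine for abuse protection.
async function isRateLimited(ip, env, limit = 60) {
  const window = Math.floor(Date.now() / 60000); // 1-minute window
  const key = `rate:${ip}:${window}`;
  const count = parseInt((await env.RATE_KV.get(key)) ?? '0', 10);
  if (count >= limit) return true;
  // TTL outlives the window so stale keys clean themselves up
  await env.RATE_KV.put(key, String(count + 1), { expirationTtl: 120 });
  return false;
}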
Lessons Learned
1. Start Small, Split Early
Don't wait until your worker is 10,000 lines to split it. If a function could be its own service, make it one. The overhead is minimal, the benefits are massive.
2. Use Service Bindings
Workers can call other workers directly, with no public HTTP round trip. We use this to let lead-bot call valuation, valuation call market-intelligence, and so on.
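A minimal setup sketch; the binding and service names are placeholders:

// In the calling worker's wrangler.toml (names are placeholders):
//
//   [[services]]
//   binding = "VALUATION"
//   service = "valuation-worker"
//
// The bound worker then appears on env. The URL must be absolute,
// but the hostname is ignored for service bindings.
async function getQuickValuation(address, env) {
  const response = await env.VALUATION.fetch('https://valuation/api/quick', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ address })
  });
  return response.json();
}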
3. Cache Aggressively
KV is cheap. API calls aren't. We cache everything that doesn't need to be real-time: property data, market stats, even AI responses for common queries.
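Caching AI responses requires a stable cache key. One way to get one (a sketch using the Web Crypto API, which Workers expose) is to hash the normalized query:

// Derive a stable KV key from a normalized user query so common
// questions hit the cache instead of the AI provider
async function aiCacheKey(query) {
  const normalized = query.trim().toLowerCase();
  const digest = await crypto.subtle.digest(
    'SHA-256',
    new TextEncoder().encode(normalized)
  );
  return 'ai:' + [...new Uint8Array(digest)]
    .map(b => b.toString(16).padStart(2, '0'))
    .join('');
}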
4. Monitor Everything
Cloudflare's analytics are good, but we also log to our Slack-intel worker. Every error, every slow response, every anomaly gets flagged immediately.
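A minimal version of that pattern, reusing the SLACK_INTEL service binding from the lead flow above (the helper itself is a sketch):

// Fire-and-forget error reporting; ctx.waitUntil keeps the POST
// alive after the response has been returned to the client
function reportError(error, request, env, ctx) {
  ctx.waitUntil(
    env.SLACK_INTEL.fetch('https://slack-intel/api/notify', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        channel: '#alerts',
        text: `${request.method} ${new URL(request.url).pathname}: ${error.message}`
      })
    })
  );
}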
FAQ
How fast are Cloudflare Workers really?
Response times across our 28 production workers range from 0.3ms to 2.5ms. The fastest (playbook-api) runs at 0.3ms. All are sub-3ms. This is real data from our dashboard, not marketing claims.
Should I use one big worker or multiple small workers?
Multiple small workers. We learned this the hard way with a 10,000-line monolith. Now each worker does one thing well. Easier to debug, deploy, and scale independently.
What can you build with Cloudflare Workers?
APIs, chatbots, webhooks, CRMs, dashboards, content systems, proxy services, real-time data processing: anything. We run our entire backend on Workers. It works for any industry, not just real estate.
Want Us to Build This for You?
We build custom Cloudflare Workers architectures for any business.
Start Your Project →
Founder & CEO of PropTechUSA.ai. Running 28 Cloudflare Workers in production. Building AI-powered software for any industry.