# Database Patterns for Serverless
Connection pooling, query optimization, and data access patterns for Cloudflare Workers. Stop fighting your database.
Serverless and databases don't naturally play well together. Traditional databases expect persistent connections. Serverless functions spin up and down constantly. This mismatch causes connection exhaustion, cold start latency, and unpredictable performance.
Here's how we handle data access across 28 Workers without melting our databases.
## The Serverless Database Problem
Traditional database access looks like this:
- Server starts → opens connection pool
- Request comes in → grabs connection from pool
- Query executes → returns connection to pool
- Server stays running → pool persists
Serverless breaks this model. Each invocation might be a new instance. Connection pools don't persist. You can exhaust database connections within minutes under load.
## Pattern 1: D1 for Edge-Native Data
Cloudflare D1 is built for Workers. No connection management needed:
```typescript
interface Property {
  id: string;
  address: string;
  city: string;
  price: number;
  status: string;
}

class PropertyRepository {
  constructor(private db: D1Database) {}

  async findById(id: string): Promise<Property | null> {
    const result = await this.db
      .prepare('SELECT * FROM properties WHERE id = ?')
      .bind(id)
      .first();
    return result as Property | null;
  }

  async findByCity(city: string, limit = 20): Promise<Property[]> {
    const { results } = await this.db
      .prepare(`
        SELECT * FROM properties
        WHERE city = ? AND status = 'active'
        ORDER BY created_at DESC
        LIMIT ?
      `)
      .bind(city, limit)
      .all();
    return results as Property[];
  }

  // Batch insert for efficiency: one round trip, executed atomically
  async createMany(properties: Property[]): Promise<void> {
    const stmt = this.db.prepare(`
      INSERT INTO properties (id, address, city, price, status)
      VALUES (?, ?, ?, ?, ?)
    `);
    const batch = properties.map(p =>
      stmt.bind(p.id, p.address, p.city, p.price, p.status)
    );
    await this.db.batch(batch);
  }
}
```
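The repository above assumes a D1 binding on the Worker's environment. A minimal `wrangler.toml` sketch (the binding and database names here are placeholders, not from the original setup):

```toml
[[d1_databases]]
binding = "DB"                  # exposed to the Worker as env.DB
database_name = "properties-db"
database_id = "<your-database-id>"
```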
## Pattern 2: Hyperdrive for External Databases
When you need Postgres, use Hyperdrive for connection pooling:
```typescript
import { Client } from 'pg';

async function getClient(env: Env): Promise<Client> {
  // Hyperdrive provides a connection string that routes through its pool
  const client = new Client({
    connectionString: env.HYPERDRIVE.connectionString
  });
  await client.connect();
  return client;
}

// Use in handler
export default {
  async fetch(request: Request, env: Env) {
    const userId = new URL(request.url).searchParams.get('id');
    const client = await getClient(env);
    try {
      const result = await client.query(
        'SELECT * FROM users WHERE id = $1',
        [userId]
      );
      return Response.json(result.rows[0]);
    } finally {
      // Always close - Hyperdrive keeps the underlying connection pooled
      await client.end();
    }
  }
};
```
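Hyperdrive is also configured as a binding, and the `pg` driver needs Node.js compatibility enabled. A sketch of the relevant `wrangler.toml` entries (the binding name and config id are placeholders):

```toml
compatibility_flags = ["nodejs_compat"]  # required for the pg driver

[[hyperdrive]]
binding = "HYPERDRIVE"                   # exposed as env.HYPERDRIVE
id = "<your-hyperdrive-config-id>"
```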
| Aspect | D1 | Hyperdrive + Postgres |
|---|---|---|
| Connection Management | Automatic | Pooled via Hyperdrive |
| Latency | ~5-20ms | ~30-100ms |
| Max Database Size | 10GB | Unlimited |
| Query Complexity | Basic SQL | Full Postgres |
| Best For | Read-heavy, simple queries | Complex queries, large data |
## Pattern 3: Read-Through Cache
Cache database results in KV for frequent reads:
```typescript
class CachedPropertyRepository {
  constructor(
    private db: D1Database,
    private kv: KVNamespace
  ) {}

  async findById(id: string): Promise<Property | null> {
    const cacheKey = `property:${id}`;

    // Try cache first
    const cached = await this.kv.get(cacheKey, 'json');
    if (cached) return cached as Property;

    // Cache miss - query database
    const result = await this.db
      .prepare('SELECT * FROM properties WHERE id = ?')
      .bind(id)
      .first();

    if (result) {
      // Cache for 5 minutes
      await this.kv.put(cacheKey, JSON.stringify(result), {
        expirationTtl: 300
      });
    }
    return result as Property | null;
  }

  async update(id: string, data: Pick<Property, 'price' | 'status'>): Promise<void> {
    // Update database
    await this.db
      .prepare('UPDATE properties SET price = ?, status = ? WHERE id = ?')
      .bind(data.price, data.status, id)
      .run();

    // Invalidate cache so the next read repopulates it
    await this.kv.delete(`property:${id}`);
  }
}
```
## Pattern 4: Avoiding N+1 Queries
```typescript
// BAD: N+1 queries
async function getPropertiesWithOwnersBad(ids: string[]) {
  const properties = await getProperties(ids);
  for (const prop of properties) {
    prop.owner = await getOwner(prop.owner_id); // N queries!
  }
  return properties;
}

// GOOD: single query with a JOIN
async function getPropertiesWithOwnersGood(ids: string[]) {
  // Safe to interpolate: only '?' placeholders are built here, values are bound
  const placeholders = ids.map(() => '?').join(',');
  const { results } = await db
    .prepare(`
      SELECT p.*, o.name AS owner_name, o.email AS owner_email
      FROM properties p
      JOIN owners o ON p.owner_id = o.id
      WHERE p.id IN (${placeholders})
    `)
    .bind(...ids)
    .all();
  return results;
}

// ALTERNATIVE: batch loading
async function batchLoadOwners(ownerIds: string[]): Promise<Map<string, Owner>> {
  const unique = [...new Set(ownerIds)];
  const placeholders = unique.map(() => '?').join(',');
  const { results } = await db
    .prepare(`SELECT * FROM owners WHERE id IN (${placeholders})`)
    .bind(...unique)
    .all();
  return new Map(results.map(r => [r.id as string, r as Owner]));
}
```
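To see the difference concretely, here is a self-contained sketch that counts round trips against a stubbed owners table (the `StubDb` class and function names are illustrative, not the D1 API):

```typescript
interface Owner { id: string; name: string }

// Stub "database": each call to query() counts as one round trip
class StubDb {
  queries = 0;
  constructor(private owners: Map<string, Owner>) {}
  async query(ids: string[]): Promise<Owner[]> {
    this.queries++;
    return ids.flatMap(id => this.owners.get(id) ?? []);
  }
}

// N+1 style: one query per owner id, duplicates included
async function loadOneByOne(db: StubDb, ids: string[]): Promise<Owner[]> {
  const out: Owner[] = [];
  for (const id of ids) out.push(...await db.query([id]));
  return out;
}

// Batched: deduplicate ids, one query total
async function loadBatched(db: StubDb, ids: string[]): Promise<Map<string, Owner>> {
  const unique = [...new Set(ids)];
  const rows = await db.query(unique);
  return new Map(rows.map(r => [r.id, r]));
}
```

With three property rows sharing two owners, the loop version issues three queries while the batched version issues one; the gap only widens with page size.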
## Database Pattern Checklist
- Use D1 for simple, read-heavy edge workloads
- Use Hyperdrive when you need Postgres features
- Cache frequently accessed data in KV
- Invalidate cache on writes
- Avoid N+1 queries with JOINs or batch loading
- Use prepared statements to prevent SQL injection
- Index columns used in WHERE and ORDER BY
- Monitor query performance and optimize hot paths
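On the indexing point: the Pattern 1 query filters on `city` and `status` and sorts by `created_at`, so a composite index covering all three lets SQLite (which backs D1) satisfy both the WHERE clause and the ORDER BY from the index. A sketch (the index name is illustrative):

```sql
-- Supports: WHERE city = ? AND status = 'active' ORDER BY created_at DESC
CREATE INDEX idx_properties_city_status_created
  ON properties (city, status, created_at DESC);

-- Confirm the planner uses it
EXPLAIN QUERY PLAN
  SELECT * FROM properties
  WHERE city = ? AND status = 'active'
  ORDER BY created_at DESC LIMIT 20;
```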
The best database query is the one you don't make. Cache aggressively, batch everything, and design your data model for your access patterns.