The AI startup graveyard is filling up fast. Most of the startups in it share the same fatal architecture: they built AI as the product instead of building a business powered by AI.
This isn't a branding problem. It's a systems design problem. And it's solvable.
The Architecture Problem
When AI is your product, you're selling a feature that any competitor can replicate the moment a better model drops. When AI powers your business, you're selling outcomes that compound over time.
When AI is the product:
- Value prop = the AI itself
- Moat = model access (no moat)
- Pricing = per API call
- Risk = model commoditization
- Data = flows through, not captured
- Switching cost = zero

When AI powers the business:
- Value prop = business outcome
- Moat = domain logic + data
- Pricing = value delivered
- Risk = distributed across stack
- Data = compounds internally
- Switching cost = high
The architectural question isn't "how do we use AI?" It's "where does AI sit in the stack, and what layers above it create defensibility?"
The Correct Stack
Here's how AI should fit into a defensible business architecture:
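One way to sketch that stack, numbering the layers from the top down (the exact labels are illustrative):
- Layer 1: Outcome layer (the result the customer actually buys)
- Layer 2: Domain logic (encoded expertise and business rules)
- Layer 3: Data flywheel (proprietary interaction data and feedback loops)
- Layer 4: AI models (swappable providers behind an abstraction)
- Layer 5: Commodity compute (hosting, storage, inference infrastructure)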
Notice where AI sits: Layer 4. Important infrastructure, but sandwiched between commodity compute below and defensible business logic above. The AI is never the top layer.
Design Pattern: Model Abstraction
The first architectural principle: never couple business logic directly to a specific model. Abstract the AI layer so models become swappable components.
// Abstract interface - business logic codes against this
interface AIProvider {
  complete(prompt: string, options?: CompletionOptions): Promise<string>
  embed(text: string): Promise<number[]>
}

// Concrete implementations - swappable
class ClaudeProvider implements AIProvider { ... }
class OpenAIProvider implements AIProvider { ... }
class GeminiProvider implements AIProvider { ... }

// Business logic never knows which model runs underneath
const ai = getProvider(env.AI_PROVIDER) // Switch via config
This isn't premature abstraction; it's survival architecture. When GPT-5 drops and undercuts Claude pricing by 40%, you should be able to switch in an afternoon, not a quarter.
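A minimal sketch of what that getProvider factory could look like, assuming the provider classes above (the config values are illustrative):

// Illustrative factory: resolves a config value to a concrete provider
function getProvider(name: string): AIProvider {
  switch (name) {
    case 'claude': return new ClaudeProvider()
    case 'openai': return new OpenAIProvider()
    case 'gemini': return new GeminiProvider()
    default: throw new Error(`Unknown AI provider: ${name}`)
  }
}

Because every call site depends only on AIProvider, a vendor's repricing or deprecation becomes a one-line config change rather than a refactor.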
Design Pattern: Domain Logic Encoding
The defensible moat isn't the AI. It's the domain logic that sits above it. This logic should be:
- Encoded in prompts: System prompts carry domain expertise
- Implemented in code: Business rules that shape AI output
- Accumulated in data: Feedback loops that improve over time
Domain Logic: Offers below 70% ARV with <7-day close + assignment clause = Grade D
AI Role: Extract terms from unstructured offer documents, apply grading rules, generate explanation
The AI is interchangeable. The grading logic is the product. A competitor could use the same model and get completely different (worse) results because they lack the domain expertise encoded in the rules.
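Concretely, a sketch of what that encoding can look like; the type, thresholds, and the extractTerms helper are illustrative, built around the grading rule above:

// Sketch: the AI turns unstructured documents into structured terms;
// plain code owns the grading rule
interface OfferTerms {
  priceToARV: number           // offer price as a fraction of after-repair value
  daysToClose: number
  hasAssignmentClause: boolean
}

function gradeOffer(terms: OfferTerms): string {
  // The rule from above: below 70% ARV, <7-day close, assignment clause => Grade D
  if (terms.priceToARV < 0.7 && terms.daysToClose < 7 && terms.hasAssignmentClause) {
    return 'D'
  }
  // ...the rest of the rule set accumulates here; this is the product
  return 'ungraded' // placeholder default, purely illustrative
}

async function gradeDocument(doc: string): Promise<string> {
  const terms = await extractTerms(doc) // hypothetical AI-backed extraction behind AIProvider
  return gradeOffer(terms)
}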
Design Pattern: Data Flywheel
The third layer of defense: proprietary data that compounds. Every interaction should make the system smarter.
// Every AI interaction feeds back into the system
async function processInteraction(input, output, feedback) {
  // Store for analysis
  await db.insert('interactions', { input, output, feedback })

  // Update domain rules if a pattern emerges
  if (feedback.negative && detectPattern(input)) {
    await flagForRuleUpdate(input, output, feedback)
  }

  // Retrain embeddings periodically
  await queue.add('retrain-check', { threshold: 1000 })
}
After 10,000 interactions, you have 10,000 data points a competitor doesn't have. After 100,000, you have a moat that can't be replicated by switching to a better model.
Design Pattern: Outcome-Based Pricing
If you're pricing per API call or per token, you've architected yourself into a race to the bottom. Price on the outcome layer, not the infrastructure layer.
Wrong: a per-token or per-call fee that tracks your infrastructure costs.
Right: $999 for a landing page that converts at 47%.
The AI cost is a line item in your COGS, not your pricing model. Customers pay for outcomes. Your margin comes from efficiency.
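Back-of-the-envelope, with everything except the $999 being an assumed figure:

// Illustrative unit economics: the model spend is a COGS line item
const pricePerOutcome = 999    // the outcome the customer pays for
const aiSpendPerOutcome = 4    // assumed token/API cost to produce it
const otherCogsPerOutcome = 46 // assumed hosting, review, and delivery cost

const grossMargin = (pricePerOutcome - aiSpendPerOutcome - otherCogsPerOutcome) / pricePerOutcome
// roughly 0.95; a 40% swing in model pricing barely moves it, and it never touches the price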
Anti-Patterns to Avoid
1. Model Lock-in
Building features that only work with one provider's API. When that provider changes pricing or deprecates endpoints, you're stuck.
2. AI-First Features
Adding AI because it's trendy rather than because it solves a problem better than alternatives. "AI-powered" isn't a value proposition.
3. Thin Wrappers
Putting a UI on top of an API without adding domain logic. Zero defensibility. A weekend project can replicate it.
4. Leaking Infrastructure
Exposing AI limitations to customers. They shouldn't know or care that there's an LLM underneath. Abstract failures gracefully.
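A minimal sketch of that last point, assuming the AIProvider abstraction from earlier; the fallback provider and templated path are illustrative:

// The customer sees a result or a graceful fallback, never a raw model error
async function completeWithFallback(prompt: string): Promise<string> {
  try {
    return await ai.complete(prompt)
  } catch {
    // Assumed secondary provider configured alongside the primary
    const backup = getProvider(env.AI_FALLBACK_PROVIDER)
    try {
      return await backup.complete(prompt)
    } catch {
      // Deterministic, non-AI path as the last resort (hypothetical helper)
      return renderTemplatedFallback(prompt)
    }
  }
}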
Implementation Checklist
- AI provider abstracted behind interface
- Model switchable via configuration
- Domain logic encoded separately from AI calls
- Feedback loop capturing every interaction
- Pricing tied to outcome, not usage
- Graceful degradation when AI fails
- No "powered by [Model]" in customer-facing copy
The Bottom Line
The businesses that survive the AI commoditization wave won't be the ones with the best models. They'll be the ones with the best architecture, where AI is essential infrastructure but never the product itself.
Build the outcome layer. Build the domain logic layer. Build the data flywheel. Then plug in whatever AI makes sense today, knowing you can swap it tomorrow.
The model is rented. The architecture is owned. Build what you can own.