## Introduction
So you have a working stack — React frontend, FastAPI backend, PostgreSQL (with PostGIS perhaps), all Dockerized — and now you want to ship it to production. You've heard AWS and Cloudflare are a powerful combo, and you want to add authentication (Clerk) and subscription billing (Stripe) on top. Where do you even start?
This tutorial walks through a production-grade architecture designed for exactly this scenario. The goal is not textbook perfection — it is a setup you can actually deploy today, evolve over time, and hand off to an AI agent or a new teammate without confusion.
The core principle is simple:
> Cloudflare handles "fast, secure, cheap traffic." AWS handles "stable, scalable compute."
## Prerequisites
Before following this guide, you should have:
- A working React app (Vite or CRA) with `@clerk/clerk-react` installed
- A FastAPI app containerized with Docker
- PostgreSQL (optionally with PostGIS) running locally via Docker Compose
- Accounts on: AWS, Cloudflare, Clerk, and Stripe
- Basic familiarity with the `wrangler` CLI (Cloudflare Workers CLI) and AWS ECS concepts
## Architecture Overview
Here is how the components map to services:
| Your Component | Deployed As |
|---|---|
| React frontend | Cloudflare Pages |
| FastAPI API | AWS ECS Fargate |
| Database (Docker) | AWS RDS PostgreSQL |
| GIS tiles / large files | Cloudflare R2 |
| API gateway + auth edge | Cloudflare Workers |
| Domain / HTTPS / WAF | Cloudflare |
The request flow looks like this:

```
Browser
 └─ Clerk.js (issues JWT)
     └─ Cloudflare Pages (React SPA)
         └─ Authorization: Bearer <Clerk JWT>
             └─ Cloudflare Worker (validates JWT, injects user context)
                 └─ X-User-Id / X-Org-Id / X-Plan headers
                     └─ AWS ALB
                         └─ AWS ECS (FastAPI)
                             └─ RDS / Aurora PostgreSQL
```

AWS resources are never exposed to the public internet. Only Cloudflare IPs can reach your ALB. This separation is the foundation of the whole design.
## Step-by-Step Guide

### Step 1: Deploy the React Frontend to Cloudflare Pages
Cloudflare Pages is the right place for your React app. It gives you global CDN, free SSL, automatic CI/CD from GitHub, and essentially zero cost at MVP scale.
Connect your repository in the Cloudflare dashboard under Pages, set your build command (`npm run build`) and output directory (`dist`), then configure environment variables per environment:

```
VITE_CLERK_PUBLISHABLE_KEY=pk_live_...
VITE_API_BASE_URL=https://api.yourdomain.com
VITE_STRIPE_PUBLISHABLE_KEY=pk_live_...
```

For your dev environment, use a separate Cloudflare Pages preview deployment (triggered automatically on non-main branches) with dev-specific values:

```
VITE_API_BASE_URL=https://api.dev.yourdomain.com
```

One critical rule: the frontend never makes permission decisions. It only collects the Clerk JWT and sends it upstream. All access control happens at the Worker layer.
Here is how to attach the token to every API request:

```tsx
import { useAuth } from "@clerk/clerk-react";

function useApiClient() {
  const { getToken } = useAuth();

  return async (path: string, options?: RequestInit) => {
    const token = await getToken();
    return fetch(`${import.meta.env.VITE_API_BASE_URL}${path}`, {
      ...options,
      headers: {
        ...options?.headers,
        Authorization: `Bearer ${token}`,
      },
    });
  };
}
```

### Step 2: Deploy FastAPI to AWS ECS Fargate
Since your app is already Dockerized, ECS Fargate is the right choice. You do not manage servers, scaling is straightforward, and your Docker Compose workflow translates directly to ECS task definitions.
A minimal task definition for FastAPI looks like this:
```json
{
  "family": "my-api",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789.dkr.ecr.ap-northeast-1.amazonaws.com/my-api:latest",
      "portMappings": [{ "containerPort": 8000 }],
      "environment": [
        { "name": "ENV", "value": "production" },
        { "name": "DB_HOST", "value": "my-db.cluster-xxxx.rds.amazonaws.com" }
      ],
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:..."
        }
      ]
    }
  ]
}
```

Run FastAPI with Gunicorn + Uvicorn workers in production:
```dockerfile
FROM python:3.12-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .

CMD ["gunicorn", "main:app", "-w", "4", "-k", "uvicorn.workers.UvicornWorker", "--bind", "0.0.0.0:8000"]
```

Place ECS inside a private VPC subnet. Configure the ALB security group to only allow inbound traffic from Cloudflare's published IP ranges.
For the dev environment, deploy a separate ECS service (my-api-dev) pointing to a dev RDS instance. It can run with a single task and no auto-scaling to keep costs low.
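The Cloudflare-only ingress rule mentioned above can be scripted rather than clicked together in the console. Here is a sketch that builds the EC2 `IpPermissions` payload; the two CIDRs are only an illustrative subset of Cloudflare's published list, and the security group ID in the comment is hypothetical:

```python
# Illustrative subset of Cloudflare's IPv4 ranges; fetch the authoritative,
# current list from https://www.cloudflare.com/ips-v4 before applying.
CLOUDFLARE_IPV4 = ["173.245.48.0/20", "103.21.244.0/22"]

def cloudflare_ingress_permissions(cidrs, port=443):
    """Build one HTTPS ingress rule per CIDR, in EC2 IpPermissions shape."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr, "Description": "Cloudflare"}],
        }
        for cidr in cidrs
    ]

perms = cloudflare_ingress_permissions(CLOUDFLARE_IPV4)
print(len(perms))  # one rule per CIDR
# Applying it requires boto3 and AWS credentials, e.g.:
#   boto3.client("ec2").authorize_security_group_ingress(
#       GroupId="sg-0123456789abcdef0", IpPermissions=perms)
```

Regenerate and re-apply the rules periodically, since Cloudflare occasionally updates its published ranges.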
### Step 3: Set Up Cloudflare Workers as the Trust Boundary
This is the most critical piece of the architecture. The Worker sits between the browser and AWS, and it is the only place that touches Clerk JWTs. Once a request passes the Worker, AWS never sees a JWT — only internal headers.
The Worker does five things:
- Validates the Clerk JWT (using Clerk's JWKS endpoint)
- Extracts user ID, organization ID, and plan tier
- Injects internal headers (`X-User-Id`, `X-Org-Id`, `X-Plan`)
- Applies rate limiting and plan-based gating
- Proxies the request to your AWS ALB
Here is a minimal Worker implementation:
```ts
// worker/src/index.ts
import { verifyToken } from "@clerk/backend";

// Bindings this Worker expects: the Clerk secret, the internal ALB
// hostname, and a KV namespace caching each user's plan.
interface Env {
  CLERK_SECRET_KEY: string;
  INTERNAL_API_HOST: string;
  PLAN_CACHE: KVNamespace;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const authHeader = request.headers.get("Authorization");
    if (!authHeader?.startsWith("Bearer ")) {
      return new Response("Unauthorized", { status: 401 });
    }
    const token = authHeader.slice(7);

    let payload;
    try {
      payload = await verifyToken(token, {
        secretKey: env.CLERK_SECRET_KEY,
      });
    } catch {
      return new Response("Invalid token", { status: 401 });
    }

    const userId = payload.sub;
    const orgId = payload.org_id ?? "";

    // Look up plan from KV cache (fallback: fetch from API)
    const plan = (await env.PLAN_CACHE.get(userId)) ?? "free";

    // Forward to AWS with internal headers
    const upstreamUrl = new URL(request.url);
    upstreamUrl.hostname = env.INTERNAL_API_HOST;

    const proxiedRequest = new Request(upstreamUrl.toString(), {
      method: request.method,
      headers: {
        ...Object.fromEntries(request.headers),
        "X-User-Id": userId,
        "X-Org-Id": orgId,
        "X-Plan": plan,
        "X-Request-Id": crypto.randomUUID(),
      },
      body: request.body,
    });

    return fetch(proxiedRequest);
  },
};
```

Configure separate Worker environments in `wrangler.toml`:
```toml
name = "my-api-worker"

[env.production]
name = "my-api-worker-prod"
vars = { ENV = "production", INTERNAL_API_HOST = "alb.prod.internal.example.com" }

[env.development]
name = "my-api-worker-dev"
vars = { ENV = "development", INTERNAL_API_HOST = "alb.dev.internal.example.com" }
```

Set secrets per environment:
```bash
wrangler secret put CLERK_SECRET_KEY --env production
wrangler secret put CLERK_SECRET_KEY --env development
```

### Step 4: Configure FastAPI to Trust Internal Headers
Because FastAPI sits behind the Worker, it never parses JWTs. It simply reads the headers the Worker already validated.
```python
from fastapi import Depends, HTTPException, Request

def get_current_user(request: Request):
    user_id = request.headers.get("x-user-id")
    org_id = request.headers.get("x-org-id")
    plan = request.headers.get("x-plan", "free")
    if not user_id:
        raise HTTPException(status_code=401, detail="Missing user context")
    return {"user_id": user_id, "org_id": org_id, "plan": plan}

@app.get("/api/analysis")
def run_analysis(user=Depends(get_current_user)):
    if user["plan"] == "free":
        # Check usage limits from DB
        pass
    # ... business logic
```

This keeps the entire backend focused on business logic. Swapping auth providers in the future means only updating the Worker, not touching FastAPI.
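Because FastAPI trusts these headers unconditionally, anything that reaches the ALB directly could forge them. The Cloudflare IP allowlist is the primary defense; as optional defense in depth, the Worker can attach a shared secret header that FastAPI verifies. The header name and secret handling below are illustrative assumptions, not part of the original stack:

```python
import hmac

# Hypothetical shared secret: set it as a Worker secret and inject it into
# the ECS task via Secrets Manager. "x-edge-secret" is an illustrative name.
EDGE_SECRET = "change-me"

def is_from_worker(headers: dict) -> bool:
    """Constant-time check that the request carries the edge secret."""
    supplied = headers.get("x-edge-secret", "")
    return hmac.compare_digest(supplied, EDGE_SECRET)

print(is_from_worker({"x-edge-secret": "change-me"}))  # True
print(is_from_worker({"x-edge-secret": "forged"}))     # False
```

`hmac.compare_digest` avoids leaking the secret through timing differences; a plain `==` comparison would short-circuit on the first mismatched byte.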
### Step 5: Handle Stripe Subscriptions via Webhooks
Never query Stripe in real time during API requests. The latency is unpredictable and a missed webhook can leave users locked out.
Instead, maintain a subscriptions table in your database and treat it as the source of truth:
```sql
CREATE TABLE subscriptions (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    subject_type VARCHAR(10) NOT NULL,  -- 'user' or 'org'
    subject_id TEXT NOT NULL,
    stripe_customer_id TEXT NOT NULL,
    stripe_subscription_id TEXT,
    plan VARCHAR(20) NOT NULL DEFAULT 'free',
    status VARCHAR(20) NOT NULL DEFAULT 'active',
    current_period_end TIMESTAMPTZ,
    usage_count INT NOT NULL DEFAULT 0,
    usage_limit INT NOT NULL DEFAULT 5,
    updated_at TIMESTAMPTZ DEFAULT NOW()
);
```

Receive Stripe events in FastAPI and update this table:
```python
import stripe
from fastapi import Request, HTTPException

@app.post("/webhook/stripe")
async def stripe_webhook(request: Request):
    payload = await request.body()
    sig_header = request.headers.get("stripe-signature")
    try:
        event = stripe.Webhook.construct_event(
            payload, sig_header, settings.STRIPE_WEBHOOK_SECRET
        )
    except stripe.error.SignatureVerificationError:
        raise HTTPException(status_code=400, detail="Invalid signature")

    if event["type"] == "customer.subscription.updated":
        sub = event["data"]["object"]
        customer_id = sub["customer"]
        new_plan = sub["items"]["data"][0]["price"]["lookup_key"]
        await db.execute(
            """
            UPDATE subscriptions
            SET plan = $1, status = $2, current_period_end = to_timestamp($3)
            WHERE stripe_customer_id = $4
            """,
            new_plan,
            sub["status"],
            sub["current_period_end"],
            customer_id,
        )
    return {"ok": True}
```

Store the `stripe_customer_id` in Clerk's user public metadata so you can cross-reference webhook events back to your internal user:
```json
{
  "stripe_customer_id": "cus_abc123"
}
```

### Step 6: Store Large Files and GIS Assets in Cloudflare R2
If your app generates exports (CSV, GeoJSON, map tiles), use Cloudflare R2 instead of S3. R2 has no egress fees, which matters a lot for data-heavy applications.
```python
import boto3

r2_client = boto3.client(
    "s3",
    endpoint_url=f"https://{settings.CF_ACCOUNT_ID}.r2.cloudflarestorage.com",
    aws_access_key_id=settings.R2_ACCESS_KEY_ID,
    aws_secret_access_key=settings.R2_SECRET_ACCESS_KEY,
    region_name="auto",
)

def upload_export(file_path: str, object_key: str):
    r2_client.upload_file(file_path, "my-exports-bucket", object_key)
    return f"https://exports.yourdomain.com/{object_key}"
```

### Step 7: Parallel Dev and Production Environments
The key mental model: dev and prod are not different .env files — they are separate copies of every resource.
| Resource | Production | Development |
|---|---|---|
| Cloudflare Pages | `app.yourdomain.com` | Preview deployments (auto) |
| Cloudflare Worker | `my-api-worker-prod` | `my-api-worker-dev` |
| AWS ECS Service | `my-api-prod` (2+ tasks) | `my-api-dev` (1 task) |
| RDS Instance | `db-prod.cluster-xxx.rds.amazonaws.com` | `db-dev.cluster-xxx.rds.amazonaws.com` |
| Clerk Application | Production app (live keys) | Development app (test keys) |
| Stripe | Live mode | Test mode |
| Cloudflare R2 | `my-exports-prod` | `my-exports-dev` |
For FastAPI, use AWS Secrets Manager or SSM Parameter Store — separate namespaces per environment:
```
/my-app/production/DB_URL
/my-app/production/STRIPE_WEBHOOK_SECRET
/my-app/development/DB_URL
/my-app/development/STRIPE_WEBHOOK_SECRET
```

For local development, a plain `.env` file is fine. Never commit it to the repository:
```
# .env.local (git-ignored)
ENV=local
DB_URL=postgresql://postgres:password@localhost:5432/mydb
STRIPE_WEBHOOK_SECRET=whsec_test_...
```

## Permission Model Summary
The three-layer model that ties everything together:
```
Auth  (Clerk)     → Who are you?
Plan  (Stripe/DB) → How much can you use?
Usage (DB)        → How much have you used?
```

The Worker can enforce plan gates before the request even reaches AWS — rejecting over-quota requests at the edge with no backend cost.
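The edge gate itself boils down to a small piece of pure logic (the real check would live in the Worker, in TypeScript). The route prefixes and plan names below are illustrative assumptions, not part of the original setup:

```python
# Map gated route prefixes to the plans allowed to call them.
# These prefixes and plan names are hypothetical examples.
PLAN_GATES = {
    "/api/advanced": {"pro", "enterprise"},
    "/api/bulk-export": {"pro", "enterprise"},
}

def is_allowed(path: str, plan: str) -> bool:
    """Reject gated paths for plans that are not entitled to them."""
    for prefix, allowed in PLAN_GATES.items():
        if path.startswith(prefix):
            return plan in allowed
    return True  # ungated paths are open to every authenticated plan

print(is_allowed("/api/advanced/run", "free"))  # False: rejected at the edge
print(is_allowed("/api/analysis", "free"))      # True: backend applies usage limits
```

Keeping this table in the Worker means over-entitled requests never consume ECS or database capacity.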
Example for a usage-based feature:
```python
@app.post("/api/analysis/run")
async def run_analysis(user=Depends(get_current_user)):
    sub = await db.fetchrow(
        "SELECT usage_count, usage_limit FROM subscriptions WHERE subject_id = $1",
        user["user_id"]
    )
    if sub["usage_count"] >= sub["usage_limit"]:
        raise HTTPException(status_code=429, detail="Usage limit reached")
    # Run analysis...
    # Note: this check-then-increment is not atomic. Under concurrent
    # requests, prefer a single UPDATE with a
    # "usage_count < usage_limit" condition and RETURNING.
    await db.execute(
        "UPDATE subscriptions SET usage_count = usage_count + 1 WHERE subject_id = $1",
        user["user_id"]
    )
```

## Common Pitfalls
**Do not do permission checks in the frontend.** Any logic controlled by JavaScript can be bypassed. The frontend is display only.

**Do not query Stripe on every API request.** Your database subscription table is the source of truth. Stripe webhooks keep it up to date asynchronously.

**Do not parse Clerk JWTs in FastAPI.** That is the Worker's job. If FastAPI also tries to validate JWTs, you end up with two permission models and confusing bugs.

**Do not run a Docker database in production.** Docker databases are fine for local development. In production, use RDS. Managed databases handle backups, failover, and upgrades — your Docker container does not.
## Cost Estimate (MVP Scale)
| Service | Monthly Cost |
|---|---|
| Cloudflare Pages | Free |
| Cloudflare Workers (10M req/month) | Free |
| Cloudflare R2 (10GB) | ~$0.15 |
| AWS ECS Fargate (0.5 vCPU / 1GB, 24/7) | ~$15-25 |
| AWS RDS PostgreSQL (db.t4g.micro) | ~$15-25 |
Total for a functional MVP: roughly $30-50/month. Dev environment adds another ~$15-20 if kept running 24/7 (consider stopping it overnight to save cost).
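The Fargate line item is easy to sanity-check. The rates below are assumed from the us-east-1 Linux/x86 on-demand prices at the time of writing; verify them against current AWS pricing for your region:

```python
# Assumed Fargate on-demand rates (us-east-1, Linux/x86); check current
# AWS pricing before relying on these numbers.
VCPU_RATE = 0.04048   # USD per vCPU-hour
MEM_RATE = 0.004445   # USD per GB-hour
HOURS = 730           # roughly one month, running 24/7

# The task definition above requests 0.5 vCPU and 1 GB of memory.
monthly = 0.5 * VCPU_RATE * HOURS + 1.0 * MEM_RATE * HOURS
print(f"${monthly:.2f}/month")  # ≈ $18.02
```

That lands inside the ~$15-25 range in the table; a dev copy of the same task roughly doubles it unless you stop it outside working hours.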
## Summary
This architecture gives you a production-ready foundation that scales from solo project to multi-tenant SaaS without fundamental redesign:
- Cloudflare Pages — static hosting with zero ops overhead
- Cloudflare Workers — the single trust boundary for all authentication and routing
- AWS ECS Fargate — containerized backend with no server management
- AWS RDS — managed PostgreSQL (add PostGIS for geospatial workloads)
- Cloudflare R2 — large file and export storage with no egress fees
- Clerk — authentication handled entirely at the edge, never in the backend
- Stripe — billing via webhook-driven DB synchronization, never queried in real time
The architecture is designed to be swappable at each layer. If you later want to move from Clerk to Auth0, or from ECS to a different compute platform, the interfaces do not change — only the implementation behind them.
Start with this as your baseline, validate your product, and scale the pieces that actually need it.
