# Rate Limits
IndepAI uses rate limiting to ensure fair usage and protect the API from abuse. This page explains how rate limits work and how to handle them in your application.
## Rate Limit Tiers

Rate limits are based on your subscription tier:
| Tier | Daily Limit | AI Coach Limit | Burst Limit |
|---|---|---|---|
| Starter | 1,000 requests | 50 requests | 60 req/minute |
| Pro | 10,000 requests | 500 requests | 300 req/minute |
## Anonymous Users

Unauthenticated requests are rate-limited by IP address:
- 100 requests per day across all public endpoints
- 10 requests per minute burst limit
## Rate Limit Headers

Every API response includes rate limit headers:
```text
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1705312800
X-RateLimit-Resource: api
```

| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum requests allowed in the period |
| `X-RateLimit-Remaining` | Requests remaining in the current period |
| `X-RateLimit-Reset` | Unix timestamp when the limit resets |
| `X-RateLimit-Resource` | The resource being rate-limited |
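These headers can be read into a typed object so the rest of your client code doesn't work with raw strings. A minimal sketch; the `RateLimitInfo` shape and `parseRateLimitHeaders` helper are illustrative, not part of an official SDK:

```typescript
interface RateLimitInfo {
  limit: number;
  remaining: number;
  reset: number; // Unix timestamp, in seconds
  resource: string;
}

function parseRateLimitHeaders(headers: Headers): RateLimitInfo {
  return {
    limit: Number(headers.get("X-RateLimit-Limit") ?? 0),
    remaining: Number(headers.get("X-RateLimit-Remaining") ?? 0),
    reset: Number(headers.get("X-RateLimit-Reset") ?? 0),
    resource: headers.get("X-RateLimit-Resource") ?? "unknown",
  };
}
```

The `?? 0` fallbacks keep the function total when a header is missing, e.g. on responses from non-rate-limited endpoints.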
## Handling Rate Limits

When you exceed the rate limit, the API returns:
```text
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 3600
```

```json
{
  "success": false,
  "error": "Rate limit exceeded",
  "code": "RATE_LIMITED",
  "message": "Rate limit exceeded. Upgrade to increase your API limit.",
  "retryAfter": 3600
}
```

### Retry-After Header

The `Retry-After` header tells you how many seconds to wait:
```ts
async function makeRequest() {
  const response = await fetch("/api/v1/calculator", {
    method: "POST",
    body: JSON.stringify(data),
  });

  if (response.status === 429) {
    // Retry-After is given in seconds
    const retryAfter = Number(response.headers.get("Retry-After") ?? 60);
    console.log(`Rate limited. Retry in ${retryAfter} seconds`);

    // Wait and retry
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
    return makeRequest();
  }

  return response.json();
}
```

### Exponential Backoff

For production applications, use exponential backoff:
```ts
async function fetchWithRetry(url: string, options: RequestInit, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status !== 429) {
      return response;
    }

    // Retry-After is in seconds; wait at least that long,
    // growing the delay exponentially with each attempt
    const retryAfter = parseInt(response.headers.get("Retry-After") || "60", 10);
    const backoff = Math.max(retryAfter * 1000, Math.pow(2, attempt) * 1000);

    console.log(`Rate limited. Retrying in ${backoff}ms (attempt ${attempt + 1})`);
    await new Promise((resolve) => setTimeout(resolve, backoff));
  }

  throw new Error("Max retries exceeded");
}
```

## Best Practices

### 1. Monitor Your Usage

Track your remaining requests and plan accordingly:
```ts
let remainingRequests = 100;

async function makeAPICall() {
  const response = await fetch("/api/v1/calculator", { method: "POST", body });

  // Update tracking
  remainingRequests = parseInt(
    response.headers.get("X-RateLimit-Remaining") || "0",
    10
  );

  if (remainingRequests < 10) {
    console.warn(`Low on API requests: ${remainingRequests} remaining`);
  }

  return response.json();
}
```

### 2. Cache Responses

Avoid unnecessary API calls by caching:
```ts
const cache = new Map<string, { data: any; timestamp: number }>();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function getCachedResult(key: string, fetcher: () => Promise<any>) {
  const cached = cache.get(key);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const data = await fetcher();
  cache.set(key, { data, timestamp: Date.now() });
  return data;
}
```

### 3. Batch Requests

Where possible, batch multiple operations:
```ts
// Instead of multiple individual requests
const results = await Promise.all(
  assets.map((a) =>
    fetch("/api/v1/assets", {
      method: "POST",
      body: JSON.stringify(a),
    })
  )
);

// Consider using batch endpoints when available
const result = await fetch("/api/v1/assets/batch", {
  method: "POST",
  body: JSON.stringify({ assets }),
});
```

## Endpoint-Specific Limits

Some endpoints have additional restrictions:
| Endpoint | Additional Limit |
|---|---|
| `/api/v1/ai/*` | 5-500/day by tier |
| `/api/v1/geo/recommendations` | 20/hour |
| `/api/v1/tax/calculate` | 50/hour |
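For hourly caps like these, a small client-side limiter can keep you under the ceiling before the server has to reject anything. A sketch assuming a sliding one-hour window; the `SlidingWindowLimiter` class is illustrative, not part of the API:

```typescript
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(
    private maxRequests: number,
    private windowMs: number
  ) {}

  // Returns true if a request may be sent now, and records it;
  // `now` is injectable for testing.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps that have aged out of the window
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxRequests) {
      return false;
    }
    this.timestamps.push(now);
    return true;
  }
}

// e.g. for /api/v1/geo/recommendations (20/hour)
const geoLimiter = new SlidingWindowLimiter(20, 60 * 60 * 1000);
```

Call `geoLimiter.tryAcquire()` before each request and queue or skip the call when it returns `false`. Note the server remains authoritative: still handle 429 responses as described above.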
## Testing Rate Limits

In development, you can test rate limit handling:
```sh
# Make 101 requests quickly to trigger the limit
for i in {1..101}; do
  curl -s -o /dev/null -w "%{http_code}\n" \
    -X POST http://localhost:3000/api/v1/calculator \
    -H "Content-Type: application/json" \
    -d '{"currentAge":30,"annualIncome":80000,"annualExpenses":40000,"currentSavings":100000}'
done
```

## Upgrading Your Tier

If you need higher limits, upgrade your plan:
- Starter - Standard API access for personal use
- Pro - Higher limits for power users and integrations
See current pricing and upgrade at indepai.app/dashboard/settings.
## FAQ

### What counts as a request?

Each HTTP request to an API endpoint counts as one request, regardless of the response status.
### Do failed requests count?

Yes, all requests count, including 4xx and 5xx errors.
### When do limits reset?

Daily limits reset at midnight UTC. The exact reset time is in the `X-RateLimit-Reset` header.
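If you want to wait for the reset programmatically, the header value (a Unix timestamp in seconds) can be converted into a delay. A minimal sketch with an illustrative helper name:

```typescript
// Milliseconds until the rate limit window resets; never negative.
// `nowMs` is injectable for testing.
function msUntilReset(resetUnixSeconds: number, nowMs: number = Date.now()): number {
  return Math.max(0, resetUnixSeconds * 1000 - nowMs);
}
```

This pairs naturally with the retry helpers above, e.g. `setTimeout(retry, msUntilReset(reset))` once the daily quota is exhausted.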
### Can I get a temporary limit increase?

Contact support@indepai.app to discuss options.