
Rate Limiting

Understand rate limits, monitor your usage with response headers, and implement proper backoff strategies.

Rate Limit Tiers

Rate limits vary by plan and endpoint. Higher-tier plans get more generous limits and concurrent task allowances.

| Plan | createTask | getTaskResult | getBalance | Concurrent | Burst |
| --- | --- | --- | --- | --- | --- |
| Free | 10 req/min | 30 req/min | 10 req/min | 5 | 15 |
| Starter | 60 req/min | 120 req/min | 30 req/min | 25 | 100 |
| Pro | 300 req/min | 600 req/min | 60 req/min | 100 | 500 |
| Enterprise | Custom | Custom | Custom | Unlimited | Unlimited |

Response Headers

Every API response includes rate limit headers. Use these to monitor your usage and implement proactive throttling.

| Header | Example | Description |
| --- | --- | --- |
| `X-RateLimit-Limit` | `60` | Maximum number of requests allowed in the current window. |
| `X-RateLimit-Remaining` | `45` | Number of requests remaining in the current window. |
| `X-RateLimit-Reset` | `1711612800` | Unix timestamp (seconds) when the rate limit window resets. |
| `Retry-After` | `30` | Seconds to wait before retrying (only present in 429 responses). |
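These headers can be read in client code to throttle proactively, before a 429 ever occurs. Here is a minimal sketch; the `throttledFetch` wrapper and the 10% threshold are illustrative choices, not part of the API:

```javascript
// Parse the rate limit headers from a response. If a header is missing,
// parseInt returns NaN and the checks below simply skip throttling.
function parseRateLimit(headers) {
  return {
    limit: parseInt(headers.get('X-RateLimit-Limit'), 10),
    remaining: parseInt(headers.get('X-RateLimit-Remaining'), 10),
    reset: parseInt(headers.get('X-RateLimit-Reset'), 10), // Unix seconds
  };
}

// Fetch, then pause when the window is exhausted or nearly so.
async function throttledFetch(url, options) {
  const response = await fetch(url, options);
  const { limit, remaining, reset } = parseRateLimit(response.headers);

  if (remaining === 0) {
    // Window exhausted: wait until it resets.
    await new Promise(r => setTimeout(r, Math.max(0, reset * 1000 - Date.now())));
  } else if (remaining < limit * 0.1) {
    // Running low: spread the remaining requests over the rest of the window.
    const windowLeftMs = Math.max(0, reset * 1000 - Date.now());
    await new Promise(r => setTimeout(r, windowLeftMs / remaining));
  }
  return response;
}
```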

Example rate-limited response (HTTP 429):

```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1711612800
Retry-After: 30

{
  "errorId": 98,
  "errorCode": "ERROR_RATE_LIMITED",
  "errorDescription": "Too many requests. Please retry after 30 seconds."
}
```

Best Practices

Monitor headers proactively

Check X-RateLimit-Remaining before each request and slow down when running low.

Use webhooks instead of polling

Configure webhooks to receive results instead of polling getTaskResult repeatedly.

Batch requests when possible

Create multiple tasks in quick succession, then poll results in a single loop.
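The batch-then-poll pattern can be sketched as follows. The endpoint paths follow the examples in this guide, but the response field names (`taskId`, `status`) and the `processing` status value are assumptions; adjust them to the actual response shape:

```javascript
// Sketch: create every task up front, then poll all pending tasks in one loop.
// `taskId`, `status`, and 'processing' are assumed response fields/values.
async function runBatch(apiKey, taskPayloads, pollIntervalMs = 3000) {
  // Phase 1: create all tasks in quick succession.
  const taskIds = [];
  for (const task of taskPayloads) {
    const res = await fetch('/api/solver/createTask', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ clientKey: apiKey, task }),
    });
    taskIds.push((await res.json()).taskId);
  }

  // Phase 2: poll every unfinished task in a single loop.
  const results = new Map();
  while (results.size < taskIds.length) {
    await new Promise(r => setTimeout(r, pollIntervalMs));
    for (const taskId of taskIds) {
      if (results.has(taskId)) continue;
      const res = await fetch('/api/solver/getTaskResult', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ clientKey: apiKey, taskId }),
      });
      const body = await res.json();
      if (body.status !== 'processing') results.set(taskId, body);
    }
  }
  return results;
}
```

This spends the `createTask` budget first, then consumes the more generous `getTaskResult` budget at a steady interval instead of polling each task individually as it is created.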

Cache balance checks

Do not call getBalance before every task. Cache the balance and refresh periodically.

Implement circuit breakers

After receiving a 429, stop all requests until the reset time instead of hammering the API.

Queue requests client-side

Use a client-side queue with rate-aware dispatching to stay within limits.
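One way to implement such a queue is a sliding-window dispatcher: it records the send time of each request and delays dispatch once the window's budget is spent. This is a sketch, not an official client; the class name and window bookkeeping are illustrative:

```javascript
// Client-side queue that dispatches at most `limit` requests per `windowMs`.
class RateLimitedQueue {
  constructor(limit, windowMs) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = []; // send times inside the current window
    this.queue = [];
    this.running = false;
  }

  // Enqueue an async function; resolves with its result once dispatched.
  enqueue(fn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      this.drain();
    });
  }

  async drain() {
    if (this.running) return;
    this.running = true;
    while (this.queue.length > 0) {
      const now = Date.now();
      // Drop send times that have aged out of the window.
      this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
      if (this.timestamps.length >= this.limit) {
        // Budget spent: wait until the oldest send time ages out.
        const waitMs = this.windowMs - (now - this.timestamps[0]);
        await new Promise(r => setTimeout(r, waitMs));
        continue;
      }
      const { fn, resolve, reject } = this.queue.shift();
      this.timestamps.push(Date.now());
      fn().then(resolve, reject); // dispatch without awaiting; pacing is handled above
    }
    this.running = false;
  }
}
```

Usage: `queue.enqueue(() => fetch(url, options))` for each request; callers simply await the returned promise and never see a self-inflicted 429.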

Exponential Backoff

When you receive a 429 response, use exponential backoff with jitter to retry gracefully. Here is a production-ready implementation.

```javascript
/**
 * Fetch with automatic retry and exponential backoff.
 * Respects the Retry-After header when present.
 */
async function fetchWithBackoff(url, options, maxRetries = 5) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status !== 429) {
      return response;
    }

    if (attempt === maxRetries) {
      throw new Error('Max retries exceeded');
    }

    // Respect the Retry-After header if present
    const retryAfter = response.headers.get('Retry-After');
    let delay;

    if (retryAfter) {
      delay = parseInt(retryAfter, 10) * 1000;
    } else {
      // Exponential backoff with jitter: 1s, 2s, 4s, ... plus up to 1s of noise
      const base = Math.pow(2, attempt) * 1000;
      const jitter = Math.random() * 1000;
      delay = base + jitter;
    }

    console.log(`Rate limited. Retrying in ${Math.round(delay)}ms (attempt ${attempt + 1}/${maxRetries})`);
    await new Promise(resolve => setTimeout(resolve, delay));
  }
}

// Usage
const response = await fetchWithBackoff('/api/solver/createTask', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ clientKey: API_KEY, task: { ... } })
});
```
Pro Tip
Always add random jitter to your backoff delay to prevent the "thundering herd" problem where many clients retry at the exact same time.