

Overview

Rate limits protect the API and ensure fair usage across all clients. Limits vary by plan tier and are designed to accommodate everything from development to high-scale production. For a full breakdown of plan tiers, rate limits per plan, and pricing details, see Plans & Pricing.

Rate Limit Headers

API responses include headers to help you track your rate limit status:
x-ratelimit-limit: 100
x-ratelimit-remaining: 95
x-ratelimit-reset: 1698765432
Header                   Description
x-ratelimit-limit        Maximum requests per second for your plan
x-ratelimit-remaining    Requests remaining in the current window
x-ratelimit-reset        Unix timestamp when the limit resets
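These headers can be parsed into a small status object for monitoring. A minimal sketch, assuming the standard Headers API available in modern runtimes; parseRateLimit and RateLimitInfo are illustrative names, not part of the API:

```typescript
// Illustrative sketch: reading rate-limit status from response headers.
interface RateLimitInfo {
  limit: number;      // max requests per second for your plan
  remaining: number;  // requests left in the current window
  resetAt: Date;      // when the window resets
}

function parseRateLimit(headers: Headers): RateLimitInfo {
  return {
    limit: Number(headers.get('x-ratelimit-limit')),
    remaining: Number(headers.get('x-ratelimit-remaining')),
    // The reset header is a Unix timestamp in seconds; Date wants milliseconds.
    resetAt: new Date(Number(headers.get('x-ratelimit-reset')) * 1000),
  };
}

// With the example header values above:
const info = parseRateLimit(new Headers({
  'x-ratelimit-limit': '100',
  'x-ratelimit-remaining': '95',
  'x-ratelimit-reset': '1698765432',
}));
```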

Handling Rate Limits

When rate limited, you’ll receive a 429 Too Many Requests response:
{
  "statusCode": 429,
  "error": "Too Many Requests",
  "message": "Rate limit exceeded",
  "retryAfter": 30
}

Retry Strategy

Implement exponential backoff when encountering rate limits:
async function fetchWithRetry(
  url: string, 
  options: RequestInit, 
  maxRetries = 3
): Promise<Response> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);
    
    if (response.status === 429) {
      // Fall back to a 5s wait when the server omits Retry-After.
      const retryAfter = response.headers.get('Retry-After') || '5';
      const waitTime = parseInt(retryAfter, 10) * 1000 * Math.pow(2, attempt);
      
      console.log(`Rate limited. Retrying in ${waitTime / 1000}s...`);
      await new Promise(r => setTimeout(r, waitTime));
      continue;
    }
    
    return response;
  }
  
  throw new Error('Max retries exceeded');
}
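For a fixed Retry-After of 5 seconds, the loop above waits 5s, then 10s, then 20s before giving up. The delay schedule can be computed in isolation; backoffDelaysMs is an illustrative helper, not part of the API:

```typescript
// Computes the delays (in ms) the retry loop above would sleep for,
// doubling the server-suggested Retry-After on each successive attempt.
function backoffDelaysMs(retryAfterSec: number, maxRetries: number): number[] {
  const delays: number[] = [];
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    delays.push(retryAfterSec * 1000 * Math.pow(2, attempt));
  }
  return delays;
}

// backoffDelaysMs(5, 3) → [5000, 10000, 20000]
```

In production you may also want to add random jitter to these delays so that many clients rate-limited at the same moment do not all retry in lockstep.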

Best Practices

Cache responses: cache yield metadata, validators, and other stable data to reduce API calls.

Batch requests: use aggregate endpoints like /yields/balances instead of individual calls.

Implement backoff: use exponential backoff for retries to avoid cascading failures.

Monitor usage: track your rate limit usage via response headers.

Optimizing API Usage

Yield metadata (APY, TVL, validators) changes infrequently. Cache for 5–15 minutes to reduce calls:
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes
const cache = new Map();

async function getYields() {
  const cached = cache.get('yields');
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  
  const response = await fetch('https://api.yield.xyz/v1/yields');
  const data = await response.json();
  cache.set('yields', { data, timestamp: Date.now() });
  return data;
}
Instead of calling /yields/{yieldId}/balances for each yield, use the aggregate endpoint:
# Instead of multiple calls:
# GET /yields/eth-lido-staking/balances
# GET /yields/eth-rocketpool-staking/balances

# Use aggregate:
POST /yields/balances
{
  "addresses": ["0x..."],
  "yieldIds": ["eth-lido-staking", "eth-rocketpool-staking"]
}
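In TypeScript, the aggregate call might be assembled as below. This is a sketch: buildBalancesRequest is an illustrative helper, the base URL is taken from the earlier caching snippet, and the address placeholder from the example is kept as-is:

```typescript
// Sketch of constructing the aggregate balances request shown above.
const BASE_URL = 'https://api.yield.xyz/v1';

function buildBalancesRequest(addresses: string[], yieldIds: string[]) {
  return {
    url: `${BASE_URL}/yields/balances`,
    init: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ addresses, yieldIds }),
    },
  };
}

// One POST replaces N individual per-yield balance calls:
const req = buildBalancesRequest(
  ['0x...'],
  ['eth-lido-staking', 'eth-rocketpool-staking'],
);
// await fetch(req.url, req.init);
```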
Queue requests to stay within rate limits during high-traffic periods:
class RequestQueue {
  private queue: (() => Promise<any>)[] = [];
  private processing = false;
  private requestsPerSecond = 100;
  
  async add<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push(async () => {
        try {
          resolve(await fn());
        } catch (e) {
          reject(e);
        }
      });
      this.process();
    });
  }
  
  private async process() {
    if (this.processing) return;
    this.processing = true;
    
    while (this.queue.length > 0) {
      const fn = this.queue.shift()!;
      await fn();
      await new Promise(r => 
        setTimeout(r, 1000 / this.requestsPerSecond)
      );
    }
    
    this.processing = false;
  }
}

Upgrade Your Plan

Need higher rate limits?

Upgrade to Pro: 1,000+ requests/second for high-volume apps.

Enterprise Inquiry: custom rate limits for institutional scale.

Next Steps

Plans & Pricing: compare all plan features.

Compute Units: CU pricing and endpoint costs.