Overview
Enigma enforces rate limits to ensure fair usage and system stability. Rate limits are applied per user (the API key owner), not per API key: if you have multiple API keys, they all share the same rate limit.
Rate Limits by Endpoint
| Endpoint | Limit | Window | Notes |
|---|---|---|---|
| POST /start/start-session | 10 requests | per minute per user | Session creation |
| POST /start/send-message | 10 requests | per minute per user | All action types |
| POST /start/run-task | 10 requests | per minute per user | Single-task execution |
| POST /v1/chat/completions | 10 requests | per minute per user | OpenAI-compatible endpoint |
| GET /task/:sessionId/:taskId | No limit | - | Polling endpoint |
| GET /v1/models | No limit | - | Model listing |
| GET /start/health | No limit | - | Health check |
| GET /start/active-sockets | No limit | - | Active connections |
Rate Limit Details
Per-User Limits
All rate limits are tied to your user account, not individual API keys. This means:
- Creating multiple API keys does not increase your rate limit
- All requests from all your API keys count toward the same limit
- Limits are enforced over a rolling 60-second window, so capacity returns as older requests age out
Example:
```javascript
// These all count toward the same 10/minute limit:
fetch('https://connect.enigma.click/start/run-task', {
  headers: { 'Authorization': 'Bearer API_KEY_1' }
});

fetch('https://connect.enigma.click/start/run-task', {
  headers: { 'Authorization': 'Bearer API_KEY_2' }
});

fetch('https://connect.enigma.click/start/run-task', {
  headers: { 'Authorization': 'Bearer API_KEY_3' }
});

// Total: 3 requests toward your 10/minute limit
```
Per-Minute Window
Rate limits use a sliding window of 60 seconds:
- At any given moment, you can make up to 10 requests
- The window slides continuously (not fixed to clock minutes)
- Requests older than 60 seconds no longer count toward your limit
Example Timeline:
| Time | Action | Requests in last 60s |
|---|---|---|
| 00:00 | Make 10 requests | 10/10 (at limit) |
| 00:30 | Try another request | ❌ Rate limited |
| 01:00 | First request expired | 9/10 (space available) |
| 01:00 | Make new request | 10/10 (at limit) |
| 01:05 | 5 more requests expired | 5/10 (space available) |
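The windowing arithmetic in the timeline above can be sketched as a small local simulation. This is only an illustration of how a sliding-window counter behaves, not the actual server-side implementation:

```javascript
// Minimal sliding-window counter mirroring the timeline above.
// Local simulation for illustration — not Enigma's server code.
class SlidingWindow {
  constructor(limit = 10, windowMs = 60000) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if a request at time `nowMs` is allowed, and records it.
  tryRequest(nowMs) {
    // Drop timestamps that have aged out of the 60-second window
    this.timestamps = this.timestamps.filter(t => nowMs - t < this.windowMs);
    if (this.timestamps.length >= this.limit) return false;
    this.timestamps.push(nowMs);
    return true;
  }
}
```

Ten requests at 00:00 fill the window, a request at 00:30 is rejected, and by 01:00 the earliest request has aged out and a new one is allowed again.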
Unlimited Endpoints
Some endpoints have no rate limits:
- GET /task/:sessionId/:taskId - Poll as frequently as needed for task results
- GET /v1/models - Retrieve the model list anytime
- GET /start/health - Monitor system health without restrictions
- GET /start/active-sockets - Check active connections anytime
Use polling endpoints freely for monitoring long-running tasks without worrying about rate limits.
Rate Limit Headers
Enigma API responses include rate limit information in HTTP headers:
| Header | Description | Example |
|---|---|---|
| X-RateLimit-Limit | Maximum requests per window | 10 |
| X-RateLimit-Remaining | Requests remaining in current window | 7 |
| X-RateLimit-Reset | Unix timestamp when limit resets | 1704067260 |
| Retry-After | Seconds until you can retry (429 only) | 30 |
Example Response Headers:
```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 10
X-RateLimit-Remaining: 7
X-RateLimit-Reset: 1704067260
```
Example 429 Response Headers:
```http
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 10
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704067260
Retry-After: 30
```
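Note that X-RateLimit-Reset is a Unix timestamp in seconds, while JavaScript's `Date.now()` returns milliseconds. A small helper (hypothetical, shown here for illustration) avoids off-by-1000 errors when computing how long to wait:

```javascript
// Compute milliseconds to wait until the window resets.
// `resetHeader` is the X-RateLimit-Reset value (Unix timestamp in SECONDS);
// Date.now() is in milliseconds, hence the * 1000 conversion.
function msUntilReset(resetHeader, nowMs = Date.now()) {
  const resetMs = parseInt(resetHeader, 10) * 1000;
  return Math.max(0, resetMs - nowMs); // never negative
}
```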
Handling 429 Errors
When you exceed the rate limit, you’ll receive a 429 Too Many Requests response:
```json
{
  "success": false,
  "message": "Rate limit exceeded. Please try again later."
}
```
HTTP Headers:
```http
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 10
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1704067260
Retry-After: 30
```
Retry Strategy
Implement exponential backoff when handling rate limits:
```javascript
async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429) {
        const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
        const backoffTime = Math.min(retryAfter * 1000, 60000); // Max 60s
        console.log(`Rate limited. Retrying in ${retryAfter}s...`);
        await new Promise(resolve => setTimeout(resolve, backoffTime));
        continue;
      }

      if (!response.ok) {
        throw new Error(`HTTP ${response.status}: ${response.statusText}`);
      }

      return await response.json();
    } catch (error) {
      if (attempt === maxRetries - 1) throw error;
      // Exponential backoff for other errors
      const backoffTime = Math.min(1000 * Math.pow(2, attempt), 30000);
      await new Promise(resolve => setTimeout(resolve, backoffTime));
    }
  }
  throw new Error('Max retries exceeded');
}

// Usage
const result = await makeRequestWithRetry(
  'https://connect.enigma.click/start/run-task',
  {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': 'Bearer YOUR_API_KEY'
    },
    body: JSON.stringify({ taskDetails: 'Search Google for Anthropic' })
  }
);
```
Check Rate Limit Before Request
Proactively check remaining rate limit from headers:
```javascript
class RateLimitedClient {
  constructor(apiKey) {
    this.apiKey = apiKey;
    this.remaining = 10;
    this.resetTime = Date.now();
  }

  async request(endpoint, options = {}) {
    // Wait if rate limit is exhausted
    if (this.remaining <= 0 && Date.now() < this.resetTime) {
      const waitTime = this.resetTime - Date.now();
      console.log(`Rate limit exhausted. Waiting ${waitTime}ms...`);
      await new Promise(resolve => setTimeout(resolve, waitTime));
    }

    const response = await fetch(`https://connect.enigma.click${endpoint}`, {
      ...options,
      headers: {
        ...options.headers,
        'Authorization': `Bearer ${this.apiKey}`
      }
    });

    // Update rate limit info from headers
    this.remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '10', 10);
    // X-RateLimit-Reset is a Unix timestamp in seconds; convert to milliseconds
    const reset = response.headers.get('X-RateLimit-Reset');
    this.resetTime = reset ? parseInt(reset, 10) * 1000 : Date.now();

    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${response.statusText}`);
    }
    return await response.json();
  }
}

// Usage
const client = new RateLimitedClient('YOUR_API_KEY');
const result = await client.request('/start/run-task', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ taskDetails: 'Search Google' })
});
```
Best Practices
1. Batch Operations
Instead of making many small requests, combine operations into fewer requests:
❌ Bad - Multiple requests:
```javascript
// Each task is a separate request (10 requests)
for (let i = 0; i < 10; i++) {
  await fetch('https://connect.enigma.click/start/run-task', {
    method: 'POST',
    headers: { 'Authorization': 'Bearer YOUR_API_KEY', 'Content-Type': 'application/json' },
    body: JSON.stringify({ taskDetails: `Task ${i}` })
  });
}
// ❌ Rate limited after 10 requests
```
✅ Good - Use persistent sessions:
```javascript
// Create one session (1 request)
const session = await fetch('https://connect.enigma.click/start/start-session', {
  method: 'POST',
  headers: { 'Authorization': 'Bearer YOUR_API_KEY', 'Content-Type': 'application/json' },
  body: JSON.stringify({ taskDetails: 'Initial task' })
}).then(r => r.json());

// Run multiple tasks in the same session (10 requests)
for (let i = 0; i < 10; i++) {
  await fetch('https://connect.enigma.click/start/send-message', {
    method: 'POST',
    headers: { 'Authorization': 'Bearer YOUR_API_KEY', 'Content-Type': 'application/json' },
    body: JSON.stringify({
      sessionId: session.sessionId,
      message: { actionType: 'newTask', newState: 'start', taskDetails: `Task ${i}` }
    })
  });
}
// ✅ Total: 11 requests — spread them across two 60-second windows to stay under the 10/minute limit
```
2. Use Polling Efficiently
Polling endpoints have no rate limit, so use them instead of creating new requests:
❌ Bad:
```javascript
// Repeatedly calling run-task to check status (hits rate limit)
while (true) {
  const result = await fetch('https://connect.enigma.click/start/run-task', {
    method: 'POST',
    headers: { 'Authorization': 'Bearer YOUR_API_KEY', 'Content-Type': 'application/json' },
    body: JSON.stringify({ taskDetails: 'Search Google' })
  }).then(r => r.json());

  if (result.status === 'complete') break;
  await new Promise(r => setTimeout(r, 2000));
}
```
✅ Good:
```javascript
// Create the task once, then poll (unlimited)
const task = await fetch('https://connect.enigma.click/start/run-task', {
  method: 'POST',
  headers: { 'Authorization': 'Bearer YOUR_API_KEY', 'Content-Type': 'application/json' },
  body: JSON.stringify({ taskDetails: 'Search Google' })
}).then(r => r.json());

// Poll until complete (no rate limit)
let result;
while (true) {
  result = await fetch(task.pollUrl).then(r => r.json());
  if (!result.pending) break;
  await new Promise(r => setTimeout(r, 2000));
}
```
3. Implement Request Queuing
Queue requests to stay within rate limits:
```javascript
class RateLimitedQueue {
  constructor(apiKey, requestsPerMinute = 10) {
    this.apiKey = apiKey;
    this.requestsPerMinute = requestsPerMinute;
    this.queue = [];
    this.processing = false;
    this.requestTimes = [];
  }

  async enqueue(endpoint, options) {
    return new Promise((resolve, reject) => {
      this.queue.push({ endpoint, options, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    while (this.queue.length > 0) {
      // Remove requests older than 60 seconds
      const now = Date.now();
      this.requestTimes = this.requestTimes.filter(time => now - time < 60000);

      // Wait if at the rate limit
      if (this.requestTimes.length >= this.requestsPerMinute) {
        const oldestRequest = this.requestTimes[0];
        const waitTime = 60000 - (now - oldestRequest);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }

      // Process the next request
      const { endpoint, options, resolve, reject } = this.queue.shift();
      this.requestTimes.push(Date.now());

      try {
        const response = await fetch(`https://connect.enigma.click${endpoint}`, {
          ...options,
          headers: {
            ...options.headers,
            'Authorization': `Bearer ${this.apiKey}`
          }
        });
        if (!response.ok) {
          throw new Error(`HTTP ${response.status}: ${response.statusText}`);
        }
        resolve(await response.json());
      } catch (error) {
        reject(error);
      }
    }

    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue('YOUR_API_KEY');

// Enqueue 20 requests (automatically throttled)
const promises = [];
for (let i = 0; i < 20; i++) {
  promises.push(
    queue.enqueue('/start/run-task', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ taskDetails: `Task ${i}` })
    })
  );
}

const results = await Promise.all(promises);
console.log(`Completed ${results.length} tasks`);
```
4. Monitor Rate Limit Headers
Always check rate limit headers to avoid hitting limits:
```javascript
async function monitoredRequest(url, options) {
  const response = await fetch(url, options);

  const limit = response.headers.get('X-RateLimit-Limit');
  const remaining = response.headers.get('X-RateLimit-Remaining');
  const reset = response.headers.get('X-RateLimit-Reset');

  console.log(`Rate Limit: ${remaining}/${limit} remaining`);
  console.log(`Resets at: ${new Date(parseInt(reset, 10) * 1000).toISOString()}`);

  if (parseInt(remaining, 10) <= 2) {
    console.warn('⚠️ Approaching rate limit!');
  }

  return await response.json();
}
```
5. Use WebSocket for Real-Time Updates
WebSocket connections don’t count toward rate limits:
❌ Bad - Polling via REST:
```javascript
// Each poll is a rate-limited request
setInterval(async () => {
  const status = await fetch('https://connect.enigma.click/start/send-message', {
    method: 'POST',
    headers: { 'Authorization': 'Bearer YOUR_API_KEY', 'Content-Type': 'application/json' },
    body: JSON.stringify({
      sessionId: 'abc123',
      message: { actionType: 'state', newState: 'status' }
    })
  }).then(r => r.json());
  console.log(status);
}, 5000);
// ❌ Hits rate limit quickly
```
✅ Good - WebSocket events:
```javascript
import { io } from 'socket.io-client';

const socket = io('https://connect.enigma.click', {
  auth: { sessionId: 'abc123' }
});

socket.on('message', (data) => {
  console.log('Real-time update:', data);
});
// ✅ No rate limit, real-time updates
```
Common Rate Limit Scenarios
Scenario 1: Batch Processing
Problem: Need to process 100 tasks
Solution: Create sessions and spread requests over time
```javascript
const queue = new RateLimitedQueue('YOUR_API_KEY', 10);

// Create 10 sessions (takes about 1 minute at 10 requests/minute)
const sessionPromises = [];
for (let i = 0; i < 10; i++) {
  sessionPromises.push(
    queue.enqueue('/start/start-session', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({})
    })
  );
}
const sessions = await Promise.all(sessionPromises);

// Run 10 tasks per session (takes about 10 minutes total)
for (let i = 0; i < 100; i++) {
  const sessionIndex = i % 10;
  queue.enqueue('/start/send-message', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      sessionId: sessions[sessionIndex].sessionId,
      message: {
        actionType: 'newTask',
        newState: 'start',
        taskDetails: `Task ${i}`
      }
    })
  });
}
```
Scenario 2: Real-Time Monitoring
Problem: Need to monitor multiple sessions in real-time
Solution: Use WebSocket instead of REST polling
```javascript
import { io } from 'socket.io-client';

const sessions = ['session1', 'session2', 'session3'];

sessions.forEach(sessionId => {
  const socket = io('https://connect.enigma.click', {
    auth: { sessionId }
  });
  socket.on('message', (data) => {
    console.log(`[${sessionId}]`, data);
  });
});
// ✅ No rate limits, real-time updates for all sessions
```
Scenario 3: Burst Traffic
Problem: Occasional bursts of 20-30 requests
Solution: Implement request queuing with automatic throttling
```javascript
// Use the RateLimitedQueue from earlier
const queue = new RateLimitedQueue('YOUR_API_KEY', 10);

// Handle a burst (automatically throttled)
async function handleBurst(tasks) {
  const promises = tasks.map(task =>
    queue.enqueue('/start/run-task', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ taskDetails: task })
    })
  );
  return await Promise.all(promises);
}

// Process 30 tasks (automatically spread over ~3 minutes)
const tasks = Array.from({ length: 30 }, (_, i) => `Task ${i}`);
const results = await handleBurst(tasks);
```
FAQ
Can I increase my rate limit?
Currently, rate limits are fixed at 10 requests per minute per user. Enterprise plans with higher rate limits may be available in the future.
Do WebSocket connections count toward rate limits?
No. WebSocket connections and messages do not count toward rate limits. Only REST API requests are rate limited.
What happens if I hit the rate limit?
You’ll receive a 429 Too Many Requests response with a Retry-After header indicating how long to wait before retrying.
Are rate limits shared across API keys?
Yes. Rate limits are per user, not per API key. All your API keys share the same rate limit.
Can I check my current rate limit usage?
Yes. Check the X-RateLimit-Remaining header in any API response to see how many requests you have left in the current window.
Do failed requests count toward the rate limit?
Yes. All requests count toward the rate limit, including those that fail with 4xx or 5xx errors.
Next Steps