# Rate Limiting
Platform-level rate limiting protects the system from abuse with sliding window limits, queue dedup, endpoint debounce, and abuse detection.
OneLift applies multi-layer rate limiting to protect both the platform and your servers from accidental or intentional overload.
## Layers

### Request Flow
```
+------------------+     +-------------------+     +------------------+     +-----------+
| Endpoint         | --> | Job Rate Limit    | --> | Queue Capacity   | --> | Worker    |
| Debounce (3s)    |     | (sliding window)  |     | (size cap)       |     | Execute   |
+------------------+     +-------------------+     +------------------+     +-----------+
         |                         |                         |
    429 Too Many              429 Too Many              429 System
      Requests                  Requests                    Busy
```
### Layer 1: Endpoint Debounce
Prevents duplicate rapid-fire clicks. Every mutating request (POST/PUT/PATCH/DELETE) from the same user session is debounced with a 3-second window. The second identical request within 3 seconds returns 429.

- Uses Redis `SET NX EX` (atomic)
- Identifies users by JWT token hash (works before auth resolves)
- Internal callers (worker, webhooks) are bypassed automatically
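The debounce logic can be sketched as follows. This is a minimal illustration, not the actual implementation: it uses an in-memory `Map` as a stand-in for Redis `SET NX EX`, and the key format and function names are assumptions.

```typescript
const DEBOUNCE_TTL_MS = 3_000;
const seen = new Map<string, number>(); // key -> expiry timestamp (ms); stand-in for Redis

// Returns true if the request may proceed, false if it should get a 429.
function tryAcquireDebounce(tokenHash: string, method: string, path: string, now: number): boolean {
  const key = `debounce:${tokenHash}:${method}:${path}`;
  const expiry = seen.get(key);
  if (expiry !== undefined && expiry > now) return false; // duplicate within the 3s window
  seen.set(key, now + DEBOUNCE_TTL_MS); // equivalent of SET key 1 NX EX 3
  return true;
}
```

The atomicity of `SET NX EX` matters in the real system: check-then-set as two Redis calls would allow two concurrent requests to both pass.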
### Layer 2: Job Rate Limit (Sliding Window)
Per-user, per-category limits using Redis sorted sets. Actions are grouped into categories:
| Category | Actions | Free Tier | Pro Tier |
|---|---|---|---|
| install | marketplace_install, install_tools, reinstall, uninstall, reconfigure, update | 10/hr | 50/hr |
| backup | backup_trigger, backup_init | 5/hr | 25/hr |
| restore | backup_restore, snapshot_restore | 3/hr | 15/hr |
Actions not in any category (`server_status`, `server_restart`, etc.) are exempt from rate limiting.
Enterprise tier has unlimited rate limits.
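A sliding-window check over a Redis sorted set can be sketched like this. For self-containment the sorted set is modeled as an in-memory array of timestamps; the key format and return shape are illustrative assumptions.

```typescript
const windows = new Map<string, number[]>(); // key -> sorted action timestamps (ms)

function checkRateLimit(
  userId: string,
  category: string,
  limit: number,
  windowSec: number,
  now: number
): { allowed: boolean; current: number; resetSec: number } {
  const key = `rl:${userId}:${category}`;
  const cutoff = now - windowSec * 1000;
  // Drop entries older than the window (ZREMRANGEBYSCORE equivalent).
  const entries = (windows.get(key) ?? []).filter((t) => t > cutoff);
  if (entries.length >= limit) {
    // Oldest remaining entry determines when a slot frees up.
    const resetSec = Math.ceil((entries[0] + windowSec * 1000 - now) / 1000);
    windows.set(key, entries);
    return { allowed: false, current: entries.length, resetSec };
  }
  entries.push(now); // ZADD equivalent
  windows.set(key, entries);
  return { allowed: true, current: entries.length, resetSec: 0 };
}
```

Unlike a fixed hourly bucket, this sliding window never permits a burst of 2x the limit across a bucket boundary.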
### Layer 3: Queue Capacity
Prevents queue flooding even when individual rate limits are not exceeded:
| Queue | Max Size |
|---|---|
| fast | 500 |
| deploy | 100 |
| heavy | 50 |
| backup | 20 |
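The capacity check before enqueueing is straightforward; a sketch, with the table above hard-coded and the function name assumed:

```typescript
const QUEUE_CAPS: Record<string, number> = { fast: 500, deploy: 100, heavy: 50, backup: 20 };

// Returns true if the queue can accept one more job (429 System Busy otherwise).
function canEnqueue(queue: string, currentSize: number): boolean {
  const cap = QUEUE_CAPS[queue];
  return cap !== undefined && currentSize < cap;
}
```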
### Layer 4: Job Dedup
BullMQ's native `jobId` prevents the exact same operation from running twice. Pattern: `{action}:{userId}:{projectDocumentId}`. If a duplicate job is submitted, the API returns 409 Conflict.
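The dedup behavior can be modeled like this. In the real system BullMQ rejects the duplicate when a job with the same `jobId` already exists; here a `Set` stands in for the queue state, and the success status code (202) is an assumption.

```typescript
const activeJobs = new Set<string>(); // stand-in for BullMQ's jobId-keyed state

function trySubmit(action: string, userId: string, projectDocumentId: string): { ok: boolean; status: number } {
  const jobId = `${action}:${userId}:${projectDocumentId}`;
  if (activeJobs.has(jobId)) return { ok: false, status: 409 }; // duplicate -> 409 Conflict
  activeJobs.add(jobId);
  // The real worker would call activeJobs.delete(jobId) on completion,
  // allowing the same operation to be submitted again.
  return { ok: true, status: 202 };
}
```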
### Layer 5: Abuse Detection
Tracks rate limit hits per user. Escalates automatically:
| Hits in 1 hour | Action |
|---|---|
| 3+ | Soft throttle (60s cooldown) |
| 10+ | Hard throttle (1h cooldown) + alert to ops |
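The escalation table maps directly to a small decision function; a sketch with assumed names and return shape:

```typescript
type Escalation = { throttle: "none" | "soft" | "hard"; cooldownSec: number; alertOps: boolean };

function escalate(hitsInLastHour: number): Escalation {
  if (hitsInLastHour >= 10) return { throttle: "hard", cooldownSec: 3600, alertOps: true };
  if (hitsInLastHour >= 3) return { throttle: "soft", cooldownSec: 60, alertOps: false };
  return { throttle: "none", cooldownSec: 0, alertOps: false };
}
```

Checking the higher threshold first keeps the tiers mutually exclusive.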
## HTTP Responses

### 429 Too Many Requests
```json
{
  "error": { "message": "Rate limit exceeded for install operations." },
  "data": {
    "resetSec": 1847,
    "current": 10,
    "limit": 10,
    "upgradeMessage": "Upgrade to Pro for higher limits."
  }
}
```
Headers: `X-RateLimit-Limit`, `X-RateLimit-Remaining`, `Retry-After`
### 409 Conflict (Duplicate Job)
```json
{
  "error": { "message": "This operation is already running for this project." }
}
```
## CLI Behavior
When the Lift CLI receives a rate limit error, it prints:

```
Error: Rate limit: 10/10 used. Try again in 31 minutes. Upgrade to Pro for higher limits.
```
The CLI extracts rate limit info from both the response body and Retry-After header.
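One way a client could combine the two sources is sketched below. The function names and the preference order (header first, body as fallback) are illustrative assumptions, not the actual Lift CLI internals.

```typescript
interface RateLimitBody {
  data?: { resetSec?: number; current?: number; limit?: number; upgradeMessage?: string };
}

// Prefer the Retry-After header (delay in seconds); fall back to body.data.resetSec.
function retryDelaySec(body: RateLimitBody, retryAfterHeader?: string): number | undefined {
  const fromHeader = retryAfterHeader === undefined ? NaN : Number(retryAfterHeader);
  return Number.isFinite(fromHeader) ? fromHeader : body.data?.resetSec;
}

function formatRateLimitError(body: RateLimitBody, retryAfterHeader?: string): string {
  const d = body.data ?? {};
  const sec = retryDelaySec(body, retryAfterHeader) ?? 0;
  const minutes = Math.ceil(sec / 60); // round up so the user never retries too early
  let msg = `Rate limit: ${d.current}/${d.limit} used. Try again in ${minutes} minutes.`;
  if (d.upgradeMessage) msg += ` ${d.upgradeMessage}`;
  return msg;
}
```

With the 429 body shown above (`resetSec: 1847`), this yields the "Try again in 31 minutes" message from the example.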
## Fail-Open Design
All rate limiting is fail-open: if Redis is unavailable, requests proceed normally. This ensures platform availability is never blocked by the rate limiting subsystem.
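The fail-open contract reduces to a small wrapper; a sketch, with the function name assumed:

```typescript
// Run a rate limit check; if the limiter itself fails (e.g. Redis is
// unreachable and the check throws), allow the request through.
function withFailOpen(check: () => boolean): boolean {
  try {
    return check();
  } catch {
    return true; // fail open: availability over enforcement
  }
}
```

The trade-off is deliberate: a Redis outage temporarily disables rate limiting rather than turning every request into a 5xx.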
## Configuration
Rate limits are configured per package tier via the quotas.rateLimits field in the Package collection. To override defaults for a tier, set the JSON in the admin panel:
```json
{
  "rateLimits": {
    "install": { "limit": 30, "windowSec": 3600 },
    "backup": { "limit": 15, "windowSec": 3600 },
    "restore": { "limit": 10, "windowSec": 3600 }
  }
}
```
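Resolving a tier's effective limits could look like the sketch below. The defaults shown are the Free tier values from the category table; the merge helper and its name are assumptions about how overrides are applied, not the actual resolution code.

```typescript
type CategoryLimit = { limit: number; windowSec: number };

// Free-tier defaults from the category table above.
const DEFAULT_LIMITS: Record<string, CategoryLimit> = {
  install: { limit: 10, windowSec: 3600 },
  backup: { limit: 5, windowSec: 3600 },
  restore: { limit: 3, windowSec: 3600 },
};

// Per-tier quotas.rateLimits overrides replace whole categories;
// categories not mentioned keep their defaults.
function effectiveLimits(overrides?: Record<string, CategoryLimit>): Record<string, CategoryLimit> {
  return { ...DEFAULT_LIMITS, ...(overrides ?? {}) };
}
```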
## Related
- Security -- SSH, encryption, and Traefik rate limiting
- Troubleshooting -- Rate limit error solutions