lift scale

Scale your application up or down by setting the number of replicas. If no replica count is provided, the current count is shown.

Usage

lift scale <app-name> [replicas]

Options

Flag        Description                                         Default
--force     Skip capacity checks                                false
--service   Target a specific compose service                   auto-detected
--path      Override deploy directory (for marketplace tools)   /home/deploy/<name>

Rules

  • Replicas: 1-10 for both Docker and Compose modes.
  • Docker mode: Multi-replica requires a domain (Traefik load balancing).
  • Compose mode: a container_name directive in the compose file blocks scaling above 1, since Docker requires container names to be unique.
  • A pre-flight RAM capacity check runs before scaling up (use --force to skip).
  • Config is automatically updated after successful scaling.
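To illustrate the compose-mode rule, the snippet below shows the directive that prevents scaling. The service and image names are placeholders, not part of OneLift:

# Illustrative compose file; service and image names are placeholders.
services:
  web:
    image: myapp:latest
    # container_name pins the container to one fixed name, so only a
    # single replica can run. Remove this line to allow scaling:
    # container_name: myapp-web

With the directive removed, each replica gets an auto-generated unique name and scaling above 1 works as expected.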

Examples

# Show current replicas
$ lift scale myapp
myapp: 2 replica(s) running

# Scale up to 3 replicas
$ lift scale myapp 3
Scaled myapp to 3 replica(s)

# Scale down to 1
$ lift scale myapp 1
Scaled down to 1 replica(s)

# Scale with force (skip capacity check)
$ lift scale myapp 5 --force
Scaled myapp to 5 replica(s)

# Scale a specific compose service
$ lift scale myapp 2 --service worker
Scaled worker to 2 replica(s)

Scaling Marketplace Tools

Marketplace tools (PostgreSQL, n8n, Grafana, etc.) can also be scaled directly from the CLI using the --path flag, which points to the tool's install directory:

# Scale a marketplace tool to 2 replicas
lift scale postgresql 2 --path /opt/onelift/tools/postgresql

# Scale a specific service within a multi-service tool
lift scale strapi 3 --path /opt/onelift/tools/strapi --service app

# Show current replica count for a marketplace tool
lift scale postgresql --path /opt/onelift/tools/postgresql

You can also scale marketplace tools from the OneLift dashboard under the tool's settings panel — no CLI needed.

Load Balancing

When running multiple replicas, Traefik uses round-robin with sticky sessions. A cookie (onelift_instance) pins each visitor to a specific replica, preventing session loss for stateful apps. Different visitors are distributed across replicas.
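This behavior corresponds to Traefik's sticky-cookie service options. A rough sketch of the equivalent container labels is shown below; the service name and port are placeholders, and OneLift generates the real labels for you:

# Illustrative labels only; OneLift applies these automatically.
labels:
  - "traefik.http.services.myapp.loadbalancer.server.port=3000"
  # Round-robin with sticky sessions via the onelift_instance cookie
  - "traefik.http.services.myapp.loadbalancer.sticky.cookie=true"
  - "traefik.http.services.myapp.loadbalancer.sticky.cookie.name=onelift_instance"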

See Replicas & Scaling for details.