Cluster Mode
Scale your application across multiple CPU cores automatically
AzuraJS provides built-in cluster mode support to automatically scale your application across all available CPU cores with zero manual configuration.
Enable Cluster Mode
Simply enable cluster mode in your configuration file and AzuraJS handles everything automatically:
```typescript
import type { ConfigTypes } from "azurajs/config";

const config: ConfigTypes = {
  server: {
    port: 3000,
    cluster: true, // Enable cluster mode
  },
};

export default config;
```

That's it! When cluster: true is set, AzuraJS automatically:
- ✅ Detects the number of CPU cores available
- ✅ Spawns one worker process per CPU core
- ✅ Distributes incoming connections across workers using round-robin
- ✅ Automatically restarts crashed workers
- ✅ Handles graceful shutdown of all workers
- ✅ Manages inter-process communication
How It Works
No manual cluster code is needed. Your application code remains simple:
```typescript
import { AzuraClient, applyDecorators } from "azurajs";
import { HomeController } from "./controllers/HomeController";

const app = new AzuraClient();
applyDecorators(app, [HomeController]);

await app.listen();
```

AzuraJS internally handles all cluster logic based on your configuration. The framework will:
- Create a primary process that manages the workers
- Fork one worker process per CPU core
- Run your application independently in each worker
- Leave connection load balancing to the operating system
- Detect crashed workers and spawn replacements automatically
You don't need to write any cluster code yourself - AzuraJS manages everything behind the scenes.
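For intuition, the behavior described above is roughly what you would otherwise wire up yourself with Node's built-in cluster module. The sketch below only illustrates the general pattern (one worker per core, automatic restarts, a shared port); it is not AzuraJS's actual internals:

```typescript
import cluster from "node:cluster";
import http from "node:http";
import os from "node:os";

if (cluster.isPrimary) {
  // Primary process: fork one worker per CPU core.
  const cores = os.cpus().length;
  for (let i = 0; i < cores; i++) {
    cluster.fork();
  }

  // Replace any worker that crashes.
  cluster.on("exit", (worker, code) => {
    console.log(`Worker ${worker.process.pid} exited with code ${code}, restarting...`);
    cluster.fork();
  });
} else {
  // Worker process: every worker listens on the same port; the cluster
  // module distributes incoming connections between them.
  http
    .createServer((req, res) => res.end(`Handled by worker ${process.pid}`))
    .listen(3000);
}
```

With cluster: true, AzuraJS takes care of this wiring for you, so your entry point stays the plain AzuraClient code shown above.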
When to Use Cluster Mode
Use cluster mode when:
- ✅ Running in production environments
- ✅ Handling high traffic and concurrent requests
- ✅ Running on a multi-core server (2+ cores)
- ✅ Needing improved performance and reliability
- ✅ Wanting automatic process recovery
Don't use cluster mode when:
- ❌ Developing locally (a single process is easier to debug)
- ❌ Running on single-core systems (no benefit)
- ❌ Using container orchestration (Kubernetes, Docker Swarm)
- ❌ Debugging specific issues
- ❌ Running scheduled tasks or cron jobs
Environment-Based Configuration
Enable cluster mode only in production:
```typescript
import type { ConfigTypes } from "azurajs/config";

const isProduction = process.env.NODE_ENV === "production";

const config: ConfigTypes = {
  environment: isProduction ? "production" : "development",
  server: {
    port: 3000,
    cluster: isProduction, // Cluster only in production
  },
};

export default config;
```

Complete Configuration Example
```typescript
import type { ConfigTypes } from "azurajs/config";

const config: ConfigTypes = {
  environment: "production",
  server: {
    port: Number(process.env.PORT) || 3000, // coerce the PORT env var to a number
    cluster: true, // Enable cluster mode
    ipHost: false,
  },
  logging: {
    enabled: true,
    showDetails: true, // Shows worker process IDs in logs
  },
  plugins: {
    cors: {
      enabled: true,
      origins: ["*"],
      methods: ["GET", "POST", "PUT", "DELETE"],
    },
  },
};

export default config;
```

Shared State Considerations
Workers run in separate processes and don't share memory. Use external storage for shared state:
❌ Won't Work Across Workers
```typescript
// In-memory cache won't be shared between workers
const cache = new Map();

@Get("/data")
getData() {
  if (cache.has("key")) {
    return cache.get("key");
  }
  // This cache is per-worker, not shared!
}
```

✅ Use External Storage
```typescript
// Redis for shared cache across all workers
import Redis from "ioredis";

const redis = new Redis();

@Get("/data")
async getData() {
  const cached = await redis.get("key");
  if (cached) {
    return JSON.parse(cached);
  }

  const data = await fetchData();
  await redis.set("key", JSON.stringify(data));
  return data;
}
```

Recommended solutions for shared state (a small shared-counter sketch follows this list):
- Redis for caching and sessions
- PostgreSQL/MySQL for persistent data
- MongoDB for document storage
- External message queues (RabbitMQ, Kafka)
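As another small illustration that external storage is what keeps state consistent across workers, here is a shared counter continuing the ioredis setup from the example above. Whichever worker handles the request, the same Redis key is incremented:

```typescript
// Every worker increments the same Redis key, so the count is global
// across the whole cluster rather than per-process.
@Get("/visits")
async countVisit() {
  const visits = await redis.incr("visits:total");
  return { visits };
}
```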
Performance Benefits
Expected performance improvements with cluster mode:
| CPU Cores | Throughput Increase |
|---|---|
| 2 cores | ~1.8x |
| 4 cores | ~3.5x |
| 8 cores | ~6-7x |
| 16 cores | ~12-14x |
Actual gains depend on:
- I/O-bound vs CPU-bound operations (illustrated in the sketch after this list)
- Operating system
- Application architecture
- Network conditions
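To make the first point concrete, here is a hypothetical pair of routes in the same snippet style as the examples above (the @Get decorator usage and the db client are placeholders, not AzuraJS-specific APIs). A CPU-bound handler keeps its worker busy for the whole request, so extra workers scale throughput almost linearly; an I/O-bound handler spends most of its time waiting and can already interleave many requests in one process, so the gain is smaller:

```typescript
// CPU-bound: the worker is busy for the entire request, so each extra
// worker adds roughly one more request that can run in parallel.
@Get("/report")
buildReport() {
  let total = 0;
  for (let i = 0; i < 50_000_000; i++) {
    total += i % 7; // stand-in for heavy computation
  }
  return { total };
}

// I/O-bound: the worker mostly waits on the database and can interleave
// many requests on its own, so clustering helps less here.
@Get("/users")
async listUsers() {
  return db.query("SELECT * FROM users"); // `db` is a hypothetical client
}
```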
Docker and Kubernetes
When using container orchestration, disable cluster mode and let the orchestrator handle scaling:
```typescript
import type { ConfigTypes } from "azurajs/config";

const config: ConfigTypes = {
  server: {
    cluster: false, // Let orchestrator handle scaling
  },
};

export default config;
```

Scale containers instead:
```yaml
# docker-compose.yml
services:
  api:
    image: myapp
    deploy:
      replicas: 4 # Run 4 containers
```

```yaml
# Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azurajs-app
spec:
  replicas: 4 # Run 4 pods
  selector:
    matchLabels:
      app: azurajs-app
  template:
    metadata:
      labels:
        app: azurajs-app
    spec:
      containers:
        - name: app
          image: myapp
```

Monitoring and Logs
With logging.showDetails: true, AzuraJS logs show worker information:
```
[Worker 1] Server listening on port 3000 (PID: 12345)
[Worker 2] Server listening on port 3000 (PID: 12346)
[Worker 3] Server listening on port 3000 (PID: 12347)
[Worker 4] Server listening on port 3000 (PID: 12348)
```

When a worker crashes and restarts automatically:
```
[Primary] Worker 2 (PID: 12346) crashed
[Primary] Starting new worker...
[Worker 5] Server listening on port 3000 (PID: 12350)
```

Best Practices
- Enable in production only - Development is easier with a single process
- Use external storage - Redis, databases, or message queues for shared state
- Test thoroughly - Behavior may differ between single-process and cluster mode
- Monitor your workers - Track worker health and restart patterns in production
Troubleshooting
Workers Keep Crashing
Check your application logs to identify the error; a minimal crash-logging sketch follows the list below. Common issues:
- Uncaught exceptions
- Memory leaks
- Database connection issues
- Missing error handling
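If a crash leaves nothing useful in the logs, a common framework-agnostic approach is to log uncaught exceptions and unhandled promise rejections at the process level before the worker exits. A minimal sketch using plain Node APIs (not an AzuraJS feature):

```typescript
// Log the reason a worker is about to die so it appears in your logs
// before the process is replaced.
process.on("uncaughtException", (err) => {
  console.error(`[Worker ${process.pid}] Uncaught exception:`, err);
  process.exit(1); // exit so a fresh worker can be spawned
});

process.on("unhandledRejection", (reason) => {
  console.error(`[Worker ${process.pid}] Unhandled rejection:`, reason);
});
```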
Inconsistent Behavior Between Requests
This usually means you're using in-memory state that isn't shared between workers (the PID sketch after this list can confirm it). Solutions:
- Move state to Redis or a database
- Ensure all data is stored externally
- Use stateless architecture
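To confirm that consecutive requests really are being handled by different workers (and therefore by different copies of any in-memory state), you can temporarily expose the worker's PID. A minimal sketch in the same snippet style as the controller examples above; the route path is arbitrary:

```typescript
// Repeated requests that return different PIDs confirm the requests are
// being served by separate worker processes.
@Get("/debug/worker")
whichWorker() {
  return { pid: process.pid };
}
```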
Port Already in Use Error
If you see this error, you might be running multiple instances. Check:
- No other processes on the same port
- Only one AzuraJS instance running
- Configuration file is correct
