v1.0.0 · Open Source · PolyForm Noncommercial

High-Performance
Redis Job Queue

10x throughput. 90% less memory. Built in C with Redis.
Bindings for Python & Node.js.

No bloated runtime. Just speed.

$ fastq worker start --queue high --threads 8
→ Connected to Redis localhost:6379
→ Worker pool 8 threads active
→ Queues [high] [normal] [low]
✓ Processing ~27,000 jobs/sec • 8 MB RSS

Free & open source · No signup required · Works on Linux & macOS

10x
Throughput vs BullMQ
90%
Less Memory
38
Tests Passing
C
Powered Engine

Features

Everything a queue should be

Core features free forever. Advanced power for Pro users.

Blazing Fast Engine

Redis MULTI/EXEC pipelining with multi-threaded worker pools (pthreads). ~27,000 jobs/sec on a single node.
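The batching idea behind MULTI/EXEC pipelining can be sketched without a Redis server: instead of one network round-trip per job, pushes are buffered and flushed in atomic batches. This is an illustrative stand-in, not FastQ's actual engine code; the `Pipeline` class and `flush_size` parameter are assumptions.

```python
class Pipeline:
    """Toy model of pipelined pushes: buffer jobs, flush in batches."""

    def __init__(self, flush_size=100):
        self.buffer = []
        self.flush_size = flush_size
        self.batches_sent = 0

    def push(self, job):
        self.buffer.append(job)
        if len(self.buffer) >= self.flush_size:
            self.flush()

    def flush(self):
        if self.buffer:
            # In a real engine this would be a single MULTI ... EXEC block.
            self.batches_sent += 1
            self.buffer.clear()

p = Pipeline(flush_size=100)
for i in range(1000):
    p.push({"id": i})
p.flush()
print(p.batches_sent)  # 10 batches instead of 1000 round-trips
```

The throughput win comes from amortizing network latency: 1,000 jobs cost 10 round-trips instead of 1,000.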

Smart Retries

Automatic exponential backoff, configurable max retries, and Dead Letter Queue for failed jobs. Never lose a job.
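The retry policy described above can be sketched in a few lines. The delay formula (`base * 2**attempt`, capped) and the `max_retries` default are assumptions for illustration, not FastQ's exact configuration.

```python
def backoff_delay(attempt, base=1.0, cap=60.0):
    """Seconds to wait before retry number `attempt`, capped at `cap`."""
    return min(base * (2 ** attempt), cap)

dead_letter = []  # failed jobs are parked here, never dropped

def handle_failure(job, max_retries=5):
    """Return the next retry delay, or None if the job went to the DLQ."""
    if job["attempt"] >= max_retries:
        dead_letter.append(job)
        return None
    job["attempt"] += 1
    return backoff_delay(job["attempt"])

print([backoff_delay(a) for a in range(7)])
# [1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 60.0]
```

The cap keeps a repeatedly failing job from backing off indefinitely, and the dead letter queue preserves it for inspection once retries are exhausted.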

Priority Queues

Three priority lanes — high, normal, low. Critical jobs never wait behind routine tasks.
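The three-lane model reduces to a strict scan order on dequeue: drain `high` before `normal`, and `normal` before `low`. FastQ does this against Redis lists; plain deques stand in here as a sketch of the ordering logic only.

```python
from collections import deque

lanes = {"high": deque(), "normal": deque(), "low": deque()}

def push(job, priority="normal"):
    lanes[priority].append(job)

def pop_next():
    """Return the next job, always preferring higher-priority lanes."""
    for lane in ("high", "normal", "low"):  # strict priority order
        if lanes[lane]:
            return lanes[lane].popleft()
    return None

push("routine-a", "low")
push("routine-b", "normal")
push("critical", "high")
print(pop_next())  # critical
```

Because the scan restarts from `high` on every pop, a critical job enqueued at any time jumps ahead of everything already waiting in the lower lanes.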

Multi-Language

Native Python (CPython extension) and Node.js (N-API) bindings. Go & Rust bindings coming soon.

PRO

Job Scheduling

Persistent cron jobs and delayed execution. Schedule tasks minutes, hours, or days in the future.
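The delayed-execution idea is a priority queue keyed on run-at time: jobs become visible only once their timestamp is due. A Redis sorted set is the natural home for this; the `heapq` version below is only a sketch of the same ordering, with made-up job names.

```python
import heapq

schedule = []  # (run_at_timestamp, job) pairs in a min-heap

def schedule_job(job, delay_seconds, now):
    heapq.heappush(schedule, (now + delay_seconds, job))

def pop_due(now):
    """Pop every job whose run-at time has arrived."""
    due = []
    while schedule and schedule[0][0] <= now:
        due.append(heapq.heappop(schedule)[1])
    return due

schedule_job("welcome-email", 60, now=0)     # a minute out
schedule_job("daily-report", 86400, now=0)   # a day out
print(pop_due(now=120))  # ['welcome-email']
```

Workers only ever inspect the smallest timestamp, so checking for due jobs is O(1) regardless of how many are scheduled.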

PRO

Prometheus Metrics

Real-time queue depth, throughput, latency histograms, and JSON health endpoints for your monitoring stack.
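Prometheus latency histograms are cumulative bucket counters plus a running sum; each observation increments every bucket whose upper bound it fits under. The bucket bounds below are illustrative, not FastQ's actual configuration.

```python
BUCKETS = [0.001, 0.005, 0.01, 0.05, 0.1, float("inf")]  # seconds

counts = [0] * len(BUCKETS)
total = 0.0

def observe(latency):
    """Record one job latency the way a Prometheus histogram does."""
    global total
    total += latency
    for i, bound in enumerate(BUCKETS):
        if latency <= bound:
            counts[i] += 1  # buckets are cumulative, not exclusive

for sample in (0.0004, 0.003, 0.02, 0.2):
    observe(sample)
print(counts)  # [1, 2, 2, 3, 3, 4]
```

Cumulative buckets let the scraper compute quantiles server-side without the worker tracking them, which keeps the hot path to a handful of integer increments.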

PRO

Rate Limiting

Token-bucket rate limiting per worker. Protect downstream services from being overwhelmed.
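A token bucket refills at a fixed rate up to a burst capacity, and a job may run only if a token is available. A minimal sketch of that mechanism, with illustrative parameter names:

```python
class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # burst ceiling
        self.tokens = capacity
        self.last = 0.0

    def allow(self, now):
        """Refill based on elapsed time, then try to spend one token."""
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

tb = TokenBucket(rate=5, capacity=10)  # sustain 5 jobs/sec, burst of 10
burst = sum(tb.allow(now=0.0) for _ in range(20))
print(burst)  # 10: the burst drains the bucket, the rest are rejected
```

The capacity bounds the worst-case spike a downstream service can see, while the refill rate bounds the sustained load.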

PRO

DAG Workflows

Job chaining and directed acyclic graph (DAG) workflows. Define complex dependencies between jobs.
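Running a DAG means executing a job only after all of its dependencies complete; Kahn's algorithm produces such an order (and detects cycles). The graph below is a made-up example, not FastQ's workflow API.

```python
from collections import deque

deps = {  # job -> jobs it depends on
    "report": ["transform"],
    "transform": ["extract_a", "extract_b"],
    "extract_a": [],
    "extract_b": [],
}

def execution_order(deps):
    """Kahn's algorithm: repeatedly run jobs with no unmet dependencies."""
    indegree = {job: len(d) for job, d in deps.items()}
    dependents = {job: [] for job in deps}
    for job, d in deps.items():
        for dep in d:
            dependents[dep].append(job)
    ready = deque(sorted(j for j, n in indegree.items() if n == 0))
    order = []
    while ready:
        job = ready.popleft()
        order.append(job)
        for nxt in dependents[job]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(deps):
        raise ValueError("cycle detected")
    return order

print(execution_order(deps))
# ['extract_a', 'extract_b', 'transform', 'report']
```

Jobs whose indegree drops to zero at the same time (here, the two extracts) have no ordering constraint between them, so a scheduler is free to run them in parallel.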

Daemon + systemd

First-class daemon mode with systemd service file included. Crash recovery built in. Production-ready out of the box.
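The bundled service file is not reproduced here; a unit for a worker like this could plausibly look as follows. The binary path, queue flags, and user are assumptions, and `Restart=on-failure` is what provides the crash recovery described above.

```ini
# Hypothetical sketch -- the bundled unit file may differ
[Unit]
Description=FastQ worker
After=network.target redis.service

[Service]
ExecStart=/usr/local/bin/fastq worker start --queue high,normal,low --threads 8
Restart=on-failure
RestartSec=2
User=fastq

[Install]
WantedBy=multi-user.target
```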

SDK

Simple by design

from fastq import FastQ, Worker

# Connect & push a job
fq = FastQ("redis://localhost:6379")
fq.push("send_email", {"to": "user@example.com"}, priority="high")

# Define a worker
worker = Worker(fq, queue="default", threads=8)

@worker.job("send_email")
def handle_email(job):
    send(**job.payload)
    job.complete()

worker.start()  # blocks — processes ~27k+ jobs/sec
const { FastQ, Worker } = require('fastq');

// Connect & push a job
const fq = new FastQ('redis://localhost:6379');
await fq.push('send_email', { to: 'user@example.com' }, { priority: 'high' });

// Define a worker
const worker = new Worker(fq, { queue: 'default', threads: 8 });

worker.on('send_email', async (job) => {
  await sendEmail(job.payload);
  await job.complete();
});

worker.start(); // non-blocking — processes ~27k+ jobs/sec
# Start a worker
$ fastq worker start --queue high,normal --threads 8

# Push a job
$ fastq push send_email '{"to":"user@example.com"}' --priority high

# Inspect queue stats
$ fastq stats
Queue    Pending  Active  Failed  Throughput
high     3        8       0       12,400/s
normal   142      16      2       38,000/s
low      89       4       0       3,200/s

# Activate your Pro license
$ fastq license activate "user@example.com:67a8f3e1:a1b2c3d4e5f60718:3f4a8b2c1d6e9f0a7b5c4d3e2f1a0b9c8d7e6f5a4b3c2d1e0f9a8b7c6d5e4f3b"
✓ FastQ Pro activated — scheduling & rate-limiting unlocked

vs the competition

Why FastQ wins

Feature             FastQ       BullMQ        Sidekiq       Celery
Language            C           Node.js       Ruby          Python
Throughput          ~27k/s      ~5k/s         ~7k/s         ~4k/s
Memory (idle)       ~8 MB       ~100 MB       ~80 MB        ~120 MB
Priority Queues     ✓
Cron Scheduling     PRO
Rate Limiting       PRO
DAG Workflows       PRO
Multi-language      ✓
Runtime dependency  Redis only  Redis + Node  Redis + Ruby  Redis + Python

How it works

Three steps to production

1

Install

Download the binary or build from source. Only needs Redis at runtime.

$ make && make install
2

Push Jobs

Use the CLI or any SDK binding to push jobs with optional priorities.

$ fastq push myJob '{}'
3

Process

Workers consume and process jobs at ~27k/s with automatic retries.

✓ ~27,000 jobs/sec

Ready to queue
at the speed of C?

Start free. Upgrade when you need more power.