YOLOX: Anchor-free object detection model
PyTorch anchor-free object detector with scalable model variants.
Learn more about YOLOX
YOLOX is an object detection model that removes the anchor-based detection mechanism used in earlier YOLO versions, replacing it with an anchor-free approach. The architecture is implemented in PyTorch with a companion MegEngine implementation available separately. The model family includes variants ranging from YOLOX-Nano with 0.91M parameters to YOLOX-X with 99.1M parameters, enabling deployment across different computational constraints. Common applications include real-time object detection in computer vision pipelines, with support for edge deployment through quantization and conversion to optimized inference formats.
Anchor-Free Detection
Eliminates anchor boxes from the detection pipeline, removing the need to tune anchor dimensions and aspect ratios. Simplifies the training workflow while maintaining accuracy comparable to anchor-based YOLO versions.
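To make the anchor-free idea concrete, the sketch below decodes raw head outputs for one feature level: box centers come from the grid-cell coordinates plus a predicted offset, and box sizes from an exponentiated raw output scaled by the stride, with no anchor priors. This is an illustrative reimplementation with an assumed tensor layout, not the library's own decoding code.

```python
import torch

def decode_anchor_free(preds: torch.Tensor, stride: int) -> torch.Tensor:
    """Decode (tx, ty, tw, th) predictions for one feature level without anchors.

    preds: (batch, h, w, 4) raw regression outputs.
    Centers = (grid cell + predicted offset) * stride; sizes = exp(raw) * stride.
    """
    _, h, w, _ = preds.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float()   # (h, w, 2) cell coordinates
    centers = (preds[..., :2] + grid) * stride     # box centers in input pixels
    sizes = torch.exp(preds[..., 2:4]) * stride    # box width/height in pixels
    return torch.cat((centers, sizes), dim=-1)

# An 80x80 grid at stride 8 covers a 640x640 input.
boxes = decode_anchor_free(torch.randn(1, 80, 80, 4), stride=8)
print(boxes.shape)  # torch.Size([1, 80, 80, 4])
```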
Multi-Framework Deployment
Export trained models to ONNX, TensorRT, ncnn, OpenVINO, and MegEngine with built-in conversion scripts. Enables deployment across cloud GPUs, edge devices, and mobile hardware without manual optimization.
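To make the ONNX path concrete, here is a minimal sketch built around torch.onnx.export. The repository's bundled conversion scripts are the supported route; the `decode_in_inference` attribute, tensor names, and opset version below are assumptions modeled on that tooling rather than a verified recipe.

```python
import torch
from yolox.exp import get_exp

# Build YOLOX-S and load a locally downloaded checkpoint (path assumed).
exp = get_exp(None, "yolox-s")
model = exp.get_model()
ckpt = torch.load("yolox_s.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])
model.eval()

# The export tooling moves box decoding out of the exported graph;
# attribute name assumed, check the repo's export script.
model.head.decode_in_inference = False

# Export a fixed-size 640x640 graph to ONNX.
dummy = torch.randn(1, 3, 640, 640)
torch.onnx.export(
    model, dummy, "yolox_s.onnx",
    input_names=["images"], output_names=["output"],
    opset_version=11,
)
```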
Scalable Model Variants
Provides a family of model sizes from Nano (0.91M parameters) to X (99.1M parameters) sharing a consistent architecture. Allows selection based on target hardware constraints, from resource-limited embedded devices to high-performance server inference.
import torch
from yolox.exp import get_exp

# Build the YOLOX-S experiment description and model
exp = get_exp(None, "yolox-s")
model = exp.get_model()

# Load pretrained weights and switch to inference mode
ckpt = torch.load("yolox_s.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])
model.eval()

# Forward pass on a dummy 640x640 input
img = torch.randn(1, 3, 640, 640)
outputs = model(img)

Recent releases add torch.hub loading, JIT compilation of custom operators, and pip install support; Windows users compile operators just-in-time on first use.
- Install via pip on most platforms; Windows compiles custom operators at runtime instead of during installation.
- Load models through torch.hub, enable JIT-compiled ops, and use the wandb logger or custom datasets in the evaluator (a torch.hub loading sketch follows this list).
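A minimal sketch of the torch.hub route mentioned above, assuming a `yolox_s` entry point with a `pretrained` keyword; the repository path is real, but the exact entry-point names and arguments should be checked against the repo's hubconf.py.

```python
import torch

# Entry-point name "yolox_s" and the pretrained flag are assumptions; see hubconf.py.
model = torch.hub.load("Megvii-BaseDetection/YOLOX", "yolox_s", pretrained=True)
model.eval()

img = torch.randn(1, 3, 640, 640)
with torch.no_grad():
    outputs = model(img)
```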
YOLOX is now pip-installable; set YOLOX_DATADIR env var to point to datasets when using the pip package.
- Install via `pip install yolox` and use the Exp controller API to instantiate models, dataloaders, and optimizers programmatically (a usage sketch follows this list).
- Training now uses 30% less memory on COCO and logs per-class AP metrics during evaluation.
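A sketch of the pip-package workflow described in these notes: the dataset root is supplied through the YOLOX_DATADIR environment variable and components are built through the Exp controller. The dataset path is hypothetical, and the commented-out method names are assumptions about the Exp API; verify them against the yolox.exp sources.

```python
import os
from yolox.exp import get_exp

# Point the pip-installed package at a dataset root (hypothetical path).
os.environ["YOLOX_DATADIR"] = "/data/datasets"

# The Exp object is the single controller for model, data, and optimization.
exp = get_exp(None, "yolox-s")
model = exp.get_model()

# Assumed Exp methods for programmatic training setup; verify names and
# arguments against the yolox.exp sources before relying on them.
# loader = exp.get_data_loader(batch_size=8, is_distributed=False)
# optimizer = exp.get_optimizer(batch_size=8)
```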
Breaking change removes input normalization (mean/std subtraction), making old weights incompatible; add the `--legacy` flag to demo/eval or retrain models.
- Add the `--legacy` flag when running demo.py or eval.py with pre-0.1.1 weights; deployment demos no longer support old weights (the normalization difference is illustrated after this list).
- Use the `--cache` flag to enable image caching for 2× faster training (requires large system RAM); torch.cuda.amp replaces Apex for mixed-precision training.
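To illustrate what the `--legacy` flag compensates for, the sketch below contrasts the old normalized input pipeline with the current raw-pixel one. The ImageNet mean/std constants and channel handling are assumptions for illustration only; the library applies the appropriate transform internally when `--legacy` is passed.

```python
import numpy as np

def preprocess(img_bgr: np.ndarray, legacy: bool) -> np.ndarray:
    """Contrast pre-0.1.1 (normalized) and current (raw pixel) input handling."""
    img = img_bgr.astype(np.float32)
    if legacy:
        # Old weights: RGB input scaled to [0, 1], then ImageNet mean/std normalization
        # (constants assumed for illustration).
        img = img[:, :, ::-1] / 255.0
        img -= np.array([0.485, 0.456, 0.406], dtype=np.float32)
        img /= np.array([0.229, 0.224, 0.225], dtype=np.float32)
    # Current weights: raw 0-255 pixel values, no mean/std step.
    return np.ascontiguousarray(img.transpose(2, 0, 1))  # HWC -> CHW

frame = (np.random.rand(640, 640, 3) * 255).astype(np.uint8)
old_style = preprocess(frame, legacy=True)
new_style = preprocess(frame, legacy=False)
```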
Related Repositories
Discover similar tools and frameworks used by developers
LLaMA-Factory
Parameter-efficient fine-tuning framework for 100+ LLMs.
AI-Trader
LLM agent benchmarking framework for autonomous market trading.
sglang
High-performance inference engine for LLMs and VLMs.
InvokeAI
Node-based workflow interface for local Stable Diffusion deployment.
PaddleOCR
Multilingual OCR toolkit with document structure extraction.