
YOLOX: Anchor-free object detection model

PyTorch anchor-free object detector with scalable model variants.

Stars: 10.3K • Forks: 2.4K • 7-day change: +3 stars, +2 forks
Rankings: #230 overall • #80 in AI & ML

Learn more about YOLOX

YOLOX is an object detection model that removes the anchor-based detection mechanism used in earlier YOLO versions, replacing it with an anchor-free approach. The architecture is implemented in PyTorch with a companion MegEngine implementation available separately. The model family includes variants ranging from YOLOX-Nano with 0.91M parameters to YOLOX-X with 99.1M parameters, enabling deployment across different computational constraints. Common applications include real-time object detection in computer vision pipelines, with support for edge deployment through quantization and conversion to optimized inference formats.


1. Anchor-Free Detection

Eliminates anchor boxes from the detection pipeline, removing hyperparameter tuning for anchor dimensions and aspect ratios. Simplifies training workflow while maintaining accuracy comparable to anchor-based YOLO versions.
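As a rough sketch of the idea (not the repo's code; the function name and tensor layout here are assumptions), each grid cell's raw outputs map directly to a box using only the cell position and the level's stride, with no anchor priors:

import torch

def decode_anchor_free(preds, stride):
    # preds: (batch, H, W, 4) raw regression outputs (tx, ty, tw, th)
    # for one FPN level with the given stride.
    b, h, w, _ = preds.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).float()   # (H, W, 2) grid-cell coordinates

    centers = (preds[..., :2] + grid) * stride     # box center = (offset + cell) * stride
    sizes = preds[..., 2:4].exp() * stride         # box size   = exp(tw, th) * stride
    return torch.cat((centers, sizes), dim=-1)     # (batch, H, W, 4) in input-image pixels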

2. Multi-Framework Deployment

Export trained models to ONNX, TensorRT, ncnn, OpenVINO, and MegEngine with built-in conversion scripts. Enables deployment across cloud GPUs, edge devices, and mobile hardware without manual optimization.
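A minimal sketch of one export path using plain torch.onnx.export; the checkpoint path, tensor names, and opset below are placeholders, and the repo's bundled scripts (e.g. tools/export_onnx.py) remain the supported route:

import torch
from yolox.exp import get_exp

exp = get_exp(None, "yolox-s")
model = exp.get_model()
ckpt = torch.load("yolox_s.pth", map_location="cpu")  # placeholder checkpoint path
model.load_state_dict(ckpt["model"])
model.eval()

# Plain PyTorch ONNX export; the repo scripts handle model-specific details.
dummy = torch.randn(1, 3, 640, 640)
torch.onnx.export(model, dummy, "yolox_s.onnx",
                  input_names=["images"], output_names=["output"],
                  opset_version=11)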

3. Scalable Model Variants

Provides six model sizes, from Nano (0.91M parameters) to X (99.1M parameters), sharing a consistent architecture. Allows selection based on target hardware constraints, from resource-limited embedded devices to high-performance server inference.
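As an illustration, the variants can be compared through the experiment registry; this is a sketch that assumes the default experiment files are installed, with names following the repo's yolox-<size> convention:

from yolox.exp import get_exp

# Build each published variant and report its parameter count.
for name in ("yolox-nano", "yolox-tiny", "yolox-s", "yolox-m", "yolox-l", "yolox-x"):
    exp = get_exp(None, name)
    model = exp.get_model()
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.2f}M parameters")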


import torch
from yolox.exp import get_exp

# Build the YOLOX-S architecture from its default experiment config.
exp = get_exp(None, "yolox-s")
model = exp.get_model()

# Load pretrained weights and switch to inference mode.
ckpt = torch.load("yolox_s.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])
model.eval()

# Forward pass on a dummy 640x640 input (batch, channels, height, width).
img = torch.randn(1, 3, 640, 640)
outputs = model(img)
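The forward pass above returns raw per-location predictions. The package ships a postprocess helper in yolox.utils that applies confidence filtering and NMS; the thresholds here are illustrative, and exact defaults may differ by version:

from yolox.utils import postprocess

# Keep predictions above the confidence threshold, then run NMS per class.
detections = postprocess(outputs, num_classes=80,
                         conf_thre=0.25, nms_thre=0.45)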

v0.3.0

Adds torch.hub loading, JIT compilation, and pip install support; Windows users will compile operators just-in-time on first use.

  • Install via pip on most platforms; Windows compiles custom operators at runtime instead of during installation.
  • Load models through torch.hub, enable JIT compilation of custom ops, and use the wandb logger or custom datasets in the evaluator (see the sketch after this list).
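A sketch of the torch.hub route; the entry-point name comes from the repo's hubconf.py, and the pretrained keyword is an assumption worth verifying there:

import torch

# Fetch YOLOX-S directly from the GitHub repo; other sizes are expected to be
# exposed under similar names (e.g. "yolox_m", "yolox_x") in hubconf.py.
model = torch.hub.load("Megvii-BaseDetection/YOLOX", "yolox_s", pretrained=True)  # pretrained kwarg assumed
model.eval()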
v0.2.0

YOLOX is now pip-installable; set the YOLOX_DATADIR environment variable to point at your datasets when using the pip package.

  • Install via pip install yolox and use the Exp controller API to instantiate models, dataloaders, and optimizers programmatically (sketched after this list).
  • Training now uses 30% less memory on COCO and logs per-class AP metrics during evaluation.
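A rough sketch of that workflow; the method names follow the base Exp class, exact signatures may differ between versions, and the data path is a placeholder:

import os
from yolox.exp import get_exp

# Point the pip-installed package at the dataset root before building loaders.
os.environ["YOLOX_DATADIR"] = "/path/to/datasets"   # placeholder path

exp = get_exp(None, "yolox-s")
model = exp.get_model()
optimizer = exp.get_optimizer(batch_size=8)
train_loader = exp.get_data_loader(batch_size=8, is_distributed=False)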
v0.1.1rc0

Breaking change removes input normalization (mean/std), making pre-0.1.1 weights incompatible; add the `--legacy` flag to demo/eval or retrain models.

  • Add the `--legacy` flag when running demo.py or eval.py with pre-0.1.1 weights; deployment demos no longer support old weights (the preprocessing change is sketched after this list).
  • Use `--cache` flag to enable image caching for 2× faster training, requiring large system RAM; torch.cuda.amp replaces Apex.
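To make the incompatibility concrete, here is an illustrative before/after of the preprocessing (a sketch, not the repo's exact code): legacy weights expect RGB inputs normalized with ImageNet statistics, while new-style weights consume raw BGR pixels in [0, 255]:

import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(img_bgr, legacy=False):
    img = img_bgr.astype(np.float32)
    if legacy:                                   # pre-0.1.1 behaviour
        img = img[..., ::-1] / 255.0             # BGR -> RGB, scale to [0, 1]
        img = (img - IMAGENET_MEAN) / IMAGENET_STD
    # New behaviour: no normalization, raw pixel values go straight in.
    return img.transpose(2, 0, 1)                # HWC -> CHW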
