Ultralytics YOLO: Object detection and computer vision models

PyTorch library for YOLO-based real-time computer vision.

LIVE RANKINGS • 06:52 AM • STEADY
Overall rank: #19 · AI & ML rank: #10
[30-day ranking trend chart: overall #19, AI & ML #10]
Stars: 50.9K · Forks: 9.8K · Downloads: 711
7-day stars: +148 · 7-day forks: +26

Learn more about ultralytics

Ultralytics YOLO is a PyTorch-based computer vision library that implements successive versions of the YOLO (You Only Look Once) object detection architecture. The codebase provides model definitions, training pipelines, inference engines, and utilities for tasks including object detection, instance segmentation, image classification, pose estimation, and multi-object tracking. Models are distributed through the Ultralytics Hub and can be deployed via command-line interface or Python API. The library supports various hardware configurations and includes integration with popular deployment platforms.


1. Unified Multi-Task Interface

Single codebase handles detection, segmentation, classification, pose estimation, and tracking through consistent model APIs. Eliminates the need for separate specialized implementations or framework switching across vision tasks.
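
A minimal sketch of that shared API, assuming the usual Ultralytics pretrained checkpoints (yolo11n.pt, yolo11n-seg.pt, yolo11n-cls.pt, yolo11n-pose.pt) are available for download:

from ultralytics import YOLO

# The same load/predict interface covers every task; only the checkpoint changes.
# Checkpoint names below are the standard pretrained weights (assumed available).
tasks = {
    'detect': 'yolo11n.pt',
    'segment': 'yolo11n-seg.pt',
    'classify': 'yolo11n-cls.pt',
    'pose': 'yolo11n-pose.pt',
}

for task, weights in tasks.items():
    model = YOLO(weights)                  # identical constructor for every task
    results = model('path/to/image.jpg')   # identical inference call
    print(task, results[0].speed)          # per-image timing dict, same shape across tasks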

2. Versioned Model Lineage

Multiple YOLO versions (v8, v10, v11) ship with documented architectural differences and benchmarked performance characteristics, enabling explicit accuracy-latency trade-offs driven by deployment constraints rather than guesswork about which model fits.
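
A rough sketch of comparing generations at the same size class, assuming the yolov8n.pt, yolov10n.pt, and yolo11n.pt checkpoints and the small bundled coco8.yaml dataset stub (substitute your own data.yaml):

from ultralytics import YOLO

# Validate successive "nano" models on the same data to read off the
# accuracy-latency trade-off directly instead of guessing.
for weights in ('yolov8n.pt', 'yolov10n.pt', 'yolo11n.pt'):
    model = YOLO(weights)
    metrics = model.val(data='coco8.yaml', imgsz=640)
    print(weights,
          f"mAP50-95={metrics.box.map:.3f}",
          f"inference={metrics.speed['inference']:.1f} ms/img")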

3. CLI and Python API

Offers both command-line interface for quick experiments and a comprehensive Python API for integration. Train, validate, and deploy models using simple commands or programmatic workflows with identical capabilities.


from ultralytics import YOLO

# Load a pre-trained model
model = YOLO('yolo11n.pt')

# Run inference on an image
results = model('path/to/image.jpg')

# Display results
results[0].show()
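
The same workflow from the command line uses the yolo CLI's key=value arguments; the source and dataset paths here are placeholders:

# Predict on an image with a pretrained model
yolo predict model=yolo11n.pt source='path/to/image.jpg'

# Train, then validate, against a dataset described by a YAML file
yolo train model=yolo11n.pt data=coco8.yaml epochs=100 imgsz=640
yolo val model=yolo11n.pt data=coco8.yaml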


v8.3.228

CLIP/MobileCLIP tokenizers now truncate long prompts by default to prevent crashes; pass truncate=False to restore strict validation.

  • Update CLIP.tokenize and MobileCLIPTS.tokenize calls with truncate=False if you require exact-length validation or error on overflow.
  • Expect tqdm progress bars to show seconds-per-iteration (e.g., 1.5s/it) for slow training loops instead of 0.0it/s.
v8.3.227

Pins ONNX to <=1.19.1 to prevent export breakages in Conda environments; modernizes Edge TPU compiler install for current Debian/Ubuntu.

  • Pin ONNX to <=1.19.1 locally if ONNX or TensorFlow SavedModel exports fail, especially in Conda.
  • Edge TPU exports now auto-install compiler using APT keyrings on modern Linux; no manual setup needed.
v8.3.226

Replaces all eval() calls with ast.literal_eval() for secure parsing; may break non-literal config inputs that previously worked.

  • Review config files and CLI args—strings like '[640, 640]' now parse safely; arbitrary expressions will fail or stay strings.
  • Use new augmentations parameter in model.train() to pass custom Albumentations transforms directly via Python API.
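
A minimal sketch of the new augmentations hook, assuming it accepts a list of Albumentations transform objects (the list shape and the coco8.yaml dataset are assumptions; check the release notes for the exact contract):

import albumentations as A
from ultralytics import YOLO

# Custom Albumentations transforms passed straight into training (new in this release).
# The list-of-transforms shape is an assumption based on the release note above.
custom_augs = [A.Blur(p=0.1), A.ToGray(p=0.05), A.RandomBrightnessContrast(p=0.2)]

model = YOLO('yolo11n.pt')
model.train(data='coco8.yaml', epochs=3, imgsz=640, augmentations=custom_augs)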
