YOLOv5: PyTorch object detection model

Real-time object detection with cross-platform deployment support.

LIVE RANKINGS • 09:02 AM • STEADY
Overall rank: #115 · AI & ML rank: #51
30-day ranking trend: overall #115 · AI #51
Stars: 56.6K · Forks: 17.4K · Downloads: 2
7-day stars: +40 · 7-day forks: +5

Learn more about yolov5

YOLOv5 is an object detection model implemented in PyTorch that processes images to identify and localize objects within them. The architecture uses a convolutional neural network approach optimized for real-time inference across different hardware targets. It provides built-in export capabilities to multiple formats, enabling deployment on diverse platforms from desktop systems to mobile devices and edge hardware. Common applications include surveillance systems, autonomous vehicle perception, industrial inspection, and general-purpose computer vision tasks requiring object localization.


1. Multi-Format Model Export
Convert trained models to ONNX, CoreML, TFLite, and other formats from a single PyTorch checkpoint. This enables deployment across mobile, edge, and cloud platforms without retraining or maintaining separate implementations (see the export sketch after this list).

2. PyTorch-Native Implementation
Built entirely on the PyTorch framework, giving direct access to the ecosystem's tools and libraries. Standard Python workflows cover model customization, fine-tuning, and integration with existing PyTorch pipelines (see the custom-checkpoint sketch after the quick-start example below).

3. Unified Multi-Task Architecture
A single codebase handles object detection, instance segmentation, and classification with shared training infrastructure. This reduces maintenance overhead and lets you switch tasks without learning new frameworks or toolchains (see the entry-point sketch after the quick-start example below).
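
A minimal export sketch, assuming a local clone of the ultralytics/yolov5 repository and its export.py script; the weights file and the exact format list are illustrative:

# Convert one PyTorch checkpoint into several deployment formats in a single run
python export.py --weights yolov5s.pt --include onnx coreml tflite

Each exported artifact can then be loaded by the matching runtime (ONNX Runtime, Core ML, TensorFlow Lite) instead of maintaining a separate model implementation per platform.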


import torch

# Load pre-trained model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')

# Run inference on an image
results = model('https://ultralytics.com/images/zidane.jpg')

# Display results
results.show()
results.print()
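
Building on the quick-start above, a sketch of the PyTorch-native workflow for a fine-tuned model; the checkpoint path and image files are illustrative:

import torch

# Load a custom fine-tuned checkpoint through the same PyTorch Hub entry point
model = torch.hub.load('ultralytics/yolov5', 'custom', path='runs/train/exp/weights/best.pt')

# Tune inference behaviour like any other Python attribute
model.conf = 0.4   # confidence threshold
model.iou = 0.45   # NMS IoU threshold

# Batched inference on local image files
results = model(['img1.jpg', 'img2.jpg'])

# Detections as a pandas DataFrame for downstream processing
df = results.pandas().xyxy[0]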

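The unified multi-task design shows up as parallel command-line entry points; a sketch assuming a repository clone, with illustrative datasets and weights:

# Detection, segmentation, and classification share one training workflow
python train.py --data coco128.yaml --weights yolov5s.pt --img 640
python segment/train.py --data coco128-seg.yaml --weights yolov5s-seg.pt --img 640
python classify/train.py --data cifar100 --model yolov5s-cls.pt --epochs 5 --img 224
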
v7.0

Adds instance segmentation models (YOLOv5-seg) that reach state-of-the-art accuracy on COCO; no breaking changes are noted, though the release incorporates 280 PRs.

  • Train or deploy new YOLOv5-seg models for instance segmentation using existing workflows (n/s/m/l/x variants available).
  • Use `--cache ram` flag to auto-scan memory before caching datasets, reducing OOM risk and speeding up training.
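
A minimal sketch of the workflows mentioned above, assuming a repository clone; weights, dataset, and source paths are illustrative:

# Predict instance masks with a pretrained YOLOv5-seg model
python segment/predict.py --weights yolov5s-seg.pt --source data/images
# Cache the dataset in RAM during detection training, with the automatic memory check
python train.py --data coco128.yaml --weights yolov5s.pt --cache ram
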
v6.2

Adds ImageNet classification training, validation, prediction, and export across 11 formats; includes pretrained YOLOv5-cls, ResNet, and EfficientNet models.

  • Use new `--seed` argument (default 0) for reproducible single-GPU training with torch>=1.12.0.
  • Run `python utils/benchmarks.py --weights yolov5s.pt --device 0` to benchmark all export formats on GPU or CPU.
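
A sketch of the classification workflows added in this release, assuming a repository clone; the model, dataset, and source arguments are illustrative:

# Train, predict with, and export a YOLOv5-cls classification model
python classify/train.py --model yolov5s-cls.pt --data cifar100 --epochs 5 --img 224
python classify/predict.py --weights yolov5s-cls.pt --source data/images
python export.py --weights yolov5s-cls.pt --include onnx
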
v6.1

Adds TensorRT, Edge TPU, and OpenVINO export support; switches the default LR scheduler to one-cycle linear and retrains all models at batch size 128.

  • Export models to 11 formats (TensorRT via `--include engine`, Edge TPU via `--include edgetpu`, OpenVINO via `--include openvino`) for inference and validation.
  • Update training configs to use one-cycle linear LR scheduler (replaces cosine) and set `lrf=0.1` in hyp-scratch-large.yaml for improved mAP.
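
A sketch of the new export targets and follow-up validation, assuming a repository clone and a CUDA device for the TensorRT step; the weights are illustrative:

# Export to the backends added in this release
python export.py --weights yolov5s.pt --include engine --device 0   # TensorRT
python export.py --weights yolov5s.pt --include openvino            # OpenVINO
python export.py --weights yolov5s.pt --include edgetpu             # Edge TPU
# Validate an exported model directly
python val.py --weights yolov5s.engine --device 0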

