OpenVINO: Toolkit for optimizing and deploying AI inference

Convert and deploy deep learning models across Intel hardware.

RANKINGS (30-day trend: steady)
Overall: #82 · AI & ML: #39

Stars: 9.5K (+20 in the last 7 days)
Forks: 2.9K (+12 in the last 7 days)

Learn more about openvino

OpenVINO is a toolkit designed to optimize and deploy deep learning models for inference workloads. It accepts trained models from frameworks including PyTorch, TensorFlow, ONNX, Keras, PaddlePaddle, and JAX/Flax, converting them to an optimized intermediate representation. The toolkit includes runtime components that execute inference on diverse hardware including x86 and ARM CPUs, Intel integrated and discrete GPUs, and Intel NPU accelerators. Common deployment scenarios include computer vision tasks, natural language processing with large language models, generative AI applications, speech recognition, and recommendation systems.
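
A minimal sketch of that conversion flow, assuming the openvino Python package is installed; the model paths are placeholders, not files shipped with the toolkit:

import openvino as ov

# Convert a trained model; ov.convert_model also accepts in-memory
# PyTorch or TensorFlow model objects, not just files on disk.
ov_model = ov.convert_model("model.onnx")

# Save the optimized intermediate representation (.xml topology + .bin weights)
ov.save_model(ov_model, "model.xml")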


1. Multi-Framework Model Support

Accepts models from PyTorch, TensorFlow, ONNX, Keras, PaddlePaddle, and JAX/Flax without requiring the original training frameworks at inference time. Direct Hugging Face Hub integration through Optimum Intel lets you export and run Hub models without a separate conversion step (sketched below).
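
A hedged sketch of the Optimum Intel path, assuming the optimum-intel and transformers packages are installed; the model ID is just an illustrative public checkpoint:

from optimum.intel import OVModelForSequenceClassification
from transformers import AutoTokenizer

model_id = "distilbert-base-uncased-finetuned-sst-2-english"

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly,
# so no standalone conversion step is needed.
model = OVModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("OpenVINO makes inference fast", return_tensors="pt")
print(model(**inputs).logits)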

2. Heterogeneous Hardware Targeting

Single optimized model runs across x86/ARM CPUs, Intel integrated and discrete GPUs, and NPU accelerators. Runtime hardware selection enables deployment flexibility without recompiling or maintaining separate model variants.
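
A short sketch of runtime device selection, assuming an IR file is already on disk (the path is a placeholder); available device names vary by machine:

import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU', 'NPU'], depending on hardware

model = core.read_model("model.xml")

# The same model compiles for whichever device is requested at runtime;
# "AUTO" lets the runtime pick the best available device.
compiled_cpu = core.compile_model(model, "CPU")
compiled_auto = core.compile_model(model, "AUTO")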

3. Optimized Intermediate Representation

Converts models from various frameworks into a unified intermediate representation optimized for Intel hardware. This abstraction layer enables hardware-specific optimizations while maintaining model portability across CPU, GPU, and NPU accelerators.


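The snippet below loads a saved IR and runs one inference on the CPU; the model path and input shape are placeholders for whatever model you converted:
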
import numpy as np
import openvino as ov

core = ov.Core()

# Load a model from OpenVINO IR (.xml topology + .bin weights)
model = core.read_model("model.xml")
compiled_model = core.compile_model(model, "CPU")

# Run inference on a random NCHW input (batch 1, 3 channels, 224x224)
input_data = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled_model(input_data)
print(result[0].shape)


v2025.3.0

Breaking: openvino-dev package, Model Optimizer, Affinity API, and openvino-nightly PyPI removed; Python 3.9 deprecated.

  • Migrate from Model Optimizer to new conversion methods and replace Affinity API with ov::hint::enable_cpu_pinning (see the sketch after this list).
  • Install from Simple PyPI nightly repo instead of openvino-nightly; plan Python 3.10+ migration before 2025.4.
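
A minimal sketch of that migration, assuming a CPU-compiled model; "ENABLE_CPU_PINNING" is the string form of the ov::hint::enable_cpu_pinning property, and the exact Python spelling may differ between releases:

import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # placeholder IR path

# Replaces the removed Affinity API: request thread pinning as a
# compile-time hint instead of setting core affinities manually.
compiled = core.compile_model(model, "CPU", {"ENABLE_CPU_PINNING": True})
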
v2025.2.0

Removes openvino-dev package and Model Optimizer; Affinity API replaced by CPU pinning. Adds GenAI GGUF reader, SpeechT5 TTS, and INT4 ONNX compression.

  • Migrate from Model Optimizer to new conversion methods and replace Affinity API calls with ov::hint::enable_cpu_pinning.
  • Install pre-release builds from the Simple PyPI nightly repo instead of the discontinued openvino-nightly package.
v2025.1.0

Removes openvino-dev package and Model Optimizer; replaces Affinity API with CPU pinning config. Adds Phi-4, Jina CLIP, VLM support in Model Server, and NPU text generation.

  • Migrate from Model Optimizer to new conversion methods and switch from Affinity API to ov::hint::enable_cpu_pinning.
  • Install openvino directly (openvino-dev discontinued); update APT/YUM repos to the new structure before the 2026 cutoff.

