Optuna: Hyperparameter optimization framework for machine learning
Define-by-run Python framework for automated hyperparameter tuning.
Optuna is a hyperparameter optimization framework written in Python that automates the search for optimal hyperparameter values in machine learning models. It employs a define-by-run programming style in which search spaces are constructed dynamically at runtime using standard Python syntax, including conditionals and loops. The framework implements state-of-the-art sampling algorithms and pruning strategies that terminate unpromising trials early to reduce wasted computation. Optuna supports distributed optimization across multiple workers and is commonly used in machine learning pipelines, AutoML systems, and research workflows that require hyperparameter tuning.
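As a sketch of the pruning mechanism mentioned above, the toy objective below reports an intermediate value at each step and raises optuna.TrialPruned when the pruner flags the trial as unpromising; the decaying loss is a stand-in for a real training loop.

import optuna

def objective(trial):
    lr = trial.suggest_float('lr', 1e-5, 1e-1, log=True)
    loss = 1.0
    for step in range(100):
        loss *= 1.0 - lr  # stand-in for one real training step
        trial.report(loss, step)  # expose the intermediate value to the pruner
        if trial.should_prune():
            raise optuna.TrialPruned()
    return loss

study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)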

Define-by-run API
Search spaces are constructed dynamically using imperative Python code rather than static configuration, allowing conditional parameters and loops within the optimization logic. This approach provides modularity and flexibility compared to declarative search space definitions.
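A minimal sketch of such a conditional search space: the parameter names (svm_c, n_estimators) and the toy scores below are illustrative stand-ins for real model training and evaluation.

import optuna

def objective(trial):
    classifier = trial.suggest_categorical('classifier', ['svm', 'random_forest'])
    if classifier == 'svm':
        # This parameter is sampled only for trials that chose 'svm'.
        c = trial.suggest_float('svm_c', 1e-3, 1e3, log=True)
        score = 1.0 / (1.0 + abs(c - 1.0))  # toy score instead of real training
    else:
        n_estimators = trial.suggest_int('n_estimators', 10, 200)
        score = n_estimators / 200.0  # toy score instead of real training
    return score

study = optuna.create_study(direction='maximize')
study.optimize(objective, n_trials=50)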
Distributed optimization
The framework supports scaling studies across multiple workers with minimal code changes, enabling parallel trial execution on local machines or distributed systems. This architecture allows efficient utilization of computational resources for large-scale optimization tasks.
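One common pattern, sketched below, is to point several worker processes at a shared storage backend; the study name and SQLite URL are placeholder values, and a server-backed database such as PostgreSQL or MySQL is the usual choice when workers run on different machines.

import optuna

def objective(trial):
    x = trial.suggest_float('x', -10, 10)
    return (x - 2) ** 2

# Each worker process runs this same script; trials are coordinated
# through the shared storage, so workers never duplicate bookkeeping.
study = optuna.create_study(
    study_name='distributed-example',       # placeholder name
    storage='sqlite:///optuna_example.db',  # placeholder URL
    load_if_exists=True,
)
study.optimize(objective, n_trials=50)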
Algorithm flexibility
Optuna includes multiple sampling strategies such as Tree-structured Parzen Estimator, Gaussian Process-based sampling, and supports multi-objective and constrained optimization. Users can select or customize algorithms based on their optimization problem characteristics.
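Switching algorithms is a matter of passing a different sampler to create_study, as this brief sketch illustrates; TPESampler, GPSampler, and NSGAIISampler all live in optuna.samplers.

import optuna

# Tree-structured Parzen Estimator (the default), seeded for reproducibility.
tpe_study = optuna.create_study(sampler=optuna.samplers.TPESampler(seed=42))

# Gaussian Process-based sampling.
gp_study = optuna.create_study(sampler=optuna.samplers.GPSampler())

# Multi-objective optimization with NSGA-II.
mo_study = optuna.create_study(
    directions=['minimize', 'maximize'],
    sampler=optuna.samplers.NSGAIISampler(),
)

The quickstart below shows a complete optimization loop end to end.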
import optuna

def objective(trial):
    # The search space is defined inline as the trial runs.
    x = trial.suggest_float('x', -10, 10)
    return (x - 2) ** 2

study = optuna.create_study()
study.optimize(objective, n_trials=100)
print(f"Best value: {study.best_value}")
print(f"Best params: {study.best_params}")

Recent releases

Drops Python 3.8, adds Python 3.13 support; TrialState string representation changed. GPSampler is significantly faster via PyTorch batching and NumPy optimizations.
- Upgrade to Python 3.9+ before installing; Python 3.8 is no longer supported and 3.13 is now officially supported.
- Review code relying on TrialState.__repr__ or __str__ output, as their format has changed in this release.
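If existing code matches on the string form of trial states, comparing against the TrialState enum avoids any dependence on the changed representation; a minimal sketch:

import optuna
from optuna.trial import TrialState

study = optuna.create_study()
study.optimize(lambda trial: trial.suggest_float('x', -1, 1) ** 2, n_trials=5)

# Compare against the enum rather than parsing str(trial.state),
# whose format changed in this release.
completed = [t for t in study.trials if t.state == TrialState.COMPLETE]
print(f'{len(completed)} completed trials')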
TPESampler runs ~5× faster; GPSampler adds constrained multi-objective optimization; CmaEsSampler now handles 1D spaces.
- Upgrade to leverage the ~5× faster TPESampler and the dramatically faster plot_hypervolume_history for many-objective problems.
- Use GPSampler with the new constrained LogEHVI acquisition for multi-objective optimization that respects constraints efficiently.
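A sketch of constrained multi-objective optimization with GPSampler, assuming it accepts the standard constraints_func argument (where non-positive values mean feasible); the two objectives and the x + y <= 10 constraint are illustrative only.

import optuna

def objective(trial):
    x = trial.suggest_float('x', 0, 10)
    y = trial.suggest_float('y', 0, 10)
    return x ** 2 + y, (x - 5) ** 2 + y ** 2  # two objectives to minimize

def constraints(trial):
    # Values <= 0 are treated as feasible; require x + y <= 10 (illustrative).
    x, y = trial.params['x'], trial.params['y']
    return [x + y - 10]

study = optuna.create_study(
    directions=['minimize', 'minimize'],
    sampler=optuna.samplers.GPSampler(constraints_func=constraints),
)
study.optimize(objective, n_trials=30)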
Breaking changes to CmaEsSampler and TPESampler APIs require code updates; GPSampler now supports multi-objective optimization.
- Update CmaEsSampler calls to remove the restart_strategy and inc_popsize parameters, and pass all TPESampler arguments by keyword.
- Create studies with directions=['minimize', 'minimize'] and a GPSampler for multi-objective problems; a new Optuna MCP Server is available via uvx.
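A short migration sketch under the assumptions stated in these notes: construct CmaEsSampler without the removed arguments, pass TPESampler options by keyword, and create multi-objective studies that use GPSampler.

import optuna

# restart_strategy and inc_popsize were removed; construct without them.
cma = optuna.samplers.CmaEsSampler()

# TPESampler arguments are now keyword-only.
tpe = optuna.samplers.TPESampler(n_startup_trials=20, multivariate=True)

# GPSampler in a multi-objective study.
study = optuna.create_study(
    directions=['minimize', 'minimize'],
    sampler=optuna.samplers.GPSampler(),
)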
Related Repositories
Discover similar tools and frameworks used by developers
crewAI
Python framework for autonomous multi-agent AI collaboration.
OpenHands
LLM agent framework automating development in sandboxed containers.
context7
MCP server delivering version-specific library documentation to LLMs.
LightRAG
Graph-based retrieval framework for structured RAG reasoning.
TTS
PyTorch toolkit for deep learning text-to-speech synthesis.