Optuna: Hyperparameter optimization framework for machine learning
Define-by-run Python framework for automated hyperparameter tuning.
Learn more about Optuna
Optuna is a hyperparameter optimization framework written in Python that automates the process of finding optimal hyperparameter values for machine learning models. It employs a define-by-run programming style where search spaces are constructed dynamically at runtime using standard Python syntax, including conditionals and loops. The framework implements state-of-the-art sampling algorithms and trial pruning strategies to reduce computational overhead. Optuna supports distributed optimization across multiple workers and is commonly used in machine learning pipelines, AutoML systems, and research workflows where hyperparameter tuning is required.

Define-by-run API
Search spaces are constructed dynamically using imperative Python code rather than static configuration, allowing conditional parameters and loops within the optimization logic. This approach provides modularity and flexibility compared to declarative search space definitions.
Distributed optimization
The framework supports scaling studies across multiple workers with minimal code changes, enabling parallel trial execution on local machines or distributed systems. This architecture allows efficient utilization of computational resources for large-scale optimization tasks.
Algorithm flexibility
Optuna includes multiple sampling strategies such as Tree-structured Parzen Estimator, Gaussian Process-based sampling, and supports multi-objective and constrained optimization. Users can select or customize algorithms based on their optimization problem characteristics.
```python
import optuna


def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2


study = optuna.create_study()
study.optimize(objective, n_trials=100)
print(f"Best value: {study.best_value}")
print(f"Best params: {study.best_params}")
```

This release introduces stacklevel-aware warnings, performance improvements, and fixes for TPESampler distribution handling.
- Introduce stacklevel-aware custom warnings
- Cache distributions to skip consistency check
- Add warnings when JournalStorage lock acquisition is delayed
- Add support for local HPI in PED-ANOVA
- Fix log PDF of discrete truncated log-normal distribution for TPESampler
This release drops Python 3.8 support, adds Python 3.13 support, and includes performance improvements for storage and sampling.
- Drop Python 3.8 support and add Python 3.13 support
- Change `TrialState.__repr__` and `TrialState.__str__`
- Use an iterator for lazy evaluation in journal storage's `read_logs`
- Cache pairwise distances to speed up GPSampler
This release adds constrained multi-objective optimization support and improves CMA-ES sampler capabilities.
- Add ConstrainedLogEHVI
- Add support for constrained multi-objective optimization in GPSampler
- Support 1D search spaces in CmaEsSampler
- Move optuna.lightgbmtuner module
- Fix numerical issue warning in `qehvi_candidates_func`
Related Repositories
Discover similar tools and frameworks used by developers
Real-ESRGAN
PyTorch framework for blind super-resolution using GANs.
YOLOX
PyTorch anchor-free object detector with scalable model variants.
PyTorch
Python framework for differentiable tensor computation and deep learning.
Ray
Unified framework for scaling AI and Python applications from laptops to clusters with distributed runtime.
OpenClaw
Personal AI assistant that runs on your own devices and connects to messaging platforms like WhatsApp, Telegram, and Slack.