
Optuna: Hyperparameter optimization framework for machine learning

Define-by-run Python framework for automated hyperparameter tuning.

RANKINGS: #139 overall · #61 in AI & ML
STARS: 13.3K (+14 over 7 days) · FORKS: 1.2K (0 over 7 days)

Learn more about optuna

Optuna is a hyperparameter optimization framework written in Python that automates the search for optimal hyperparameter values in machine learning models. It employs a define-by-run programming style in which search spaces are constructed dynamically at runtime using standard Python syntax, including conditionals and loops. The framework implements state-of-the-art sampling algorithms and trial pruning strategies that terminate unpromising trials early, reducing wasted computation. Optuna supports distributed optimization across multiple workers and is commonly used in machine learning pipelines, AutoML systems, and research workflows that require hyperparameter tuning.
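
Trial pruning is worth a concrete look. Below is a minimal, self-contained sketch (the loop is a toy stand-in for real training steps) showing how a trial reports intermediate values and lets the built-in MedianPruner stop it early:

import optuna

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    loss = 1.0
    for step in range(100):
        # Stand-in for one training step; a real objective would
        # update a model here and compute a validation metric.
        loss *= 1.0 - lr
        trial.report(loss, step)      # expose the intermediate value
        if trial.should_prune():      # pruner decides to stop early
            raise optuna.TrialPruned()
    return loss

study = optuna.create_study(pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=50)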


1. Define-by-run API

Search spaces are constructed dynamically using imperative Python code rather than static configuration, allowing conditional parameters and loops within the optimization logic. This approach provides modularity and flexibility compared to declarative search space definitions.
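
A minimal sketch of a conditional search space; the branch-specific losses are toy stand-ins for real model evaluation:

import optuna

def objective(trial):
    # The search space is ordinary Python control flow: which
    # hyperparameters exist depends on earlier suggestions.
    classifier = trial.suggest_categorical("classifier", ["svm", "random_forest"])
    if classifier == "svm":
        c = trial.suggest_float("svm_c", 1e-3, 1e3, log=True)
        return (c - 1.0) ** 2                 # toy loss for the SVM branch
    else:
        n_estimators = trial.suggest_int("n_estimators", 10, 200)
        return abs(n_estimators - 100) / 100  # toy loss for the forest branch

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)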

2. Distributed optimization

The framework supports scaling studies across multiple workers with minimal code changes, enabling parallel trial execution on local machines or distributed systems. This architecture allows efficient utilization of computational resources for large-scale optimization tasks.
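
A minimal sketch of the shared-storage pattern; the study name and SQLite URL below are illustrative (multi-machine setups typically point at an RDB server such as MySQL or PostgreSQL instead):

import optuna

def objective(trial):
    x = trial.suggest_float("x", -10, 10)
    return (x - 2) ** 2

# Every worker runs this same script. Trials are coordinated through
# the shared storage, so launching more processes adds parallelism.
study = optuna.create_study(
    study_name="shared-example",         # illustrative name
    storage="sqlite:///optuna_demo.db",  # illustrative storage URL
    load_if_exists=True,                 # join the study if it already exists
)
study.optimize(objective, n_trials=20)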

3. Algorithm flexibility

Optuna includes multiple sampling strategies such as Tree-structured Parzen Estimator, Gaussian Process-based sampling, and supports multi-objective and constrained optimization. Users can select or customize algorithms based on their optimization problem characteristics.
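
A brief sketch of swapping samplers and running a multi-objective study; the objectives are toy functions:

import optuna

def objective(trial):
    x = trial.suggest_float("x", -5, 5)
    return x ** 2

# Choose the sampling algorithm by passing it to create_study.
study = optuna.create_study(sampler=optuna.samplers.TPESampler(seed=42))
study.optimize(objective, n_trials=30)

# Gaussian-process-based sampling uses the same interface
# (GPSampler requires Optuna's optional PyTorch dependency).
gp_study = optuna.create_study(sampler=optuna.samplers.GPSampler(seed=42))
gp_study.optimize(objective, n_trials=30)

# Multi-objective: return a tuple and declare one direction per objective.
def two_objectives(trial):
    x = trial.suggest_float("x", 0, 4)
    return x ** 2, (x - 2) ** 2

mo_study = optuna.create_study(directions=["minimize", "minimize"])
mo_study.optimize(two_objectives, n_trials=30)
print(len(mo_study.best_trials))  # Pareto-optimal trials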


import optuna

def objective(trial):
    # Sample x from [-10, 10]; Optuna records the suggestion in the trial.
    x = trial.suggest_float('x', -10, 10)
    # The returned value is what the study optimizes (minimum at x = 2).
    return (x - 2) ** 2

# create_study() minimizes by default.
study = optuna.create_study()
study.optimize(objective, n_trials=100)

print(f"Best value: {study.best_value}")
print(f"Best params: {study.best_params}")

v4.6.0

Drops Python 3.8, adds Python 3.13 support; TrialState string representation changed. GPSampler is significantly faster via PyTorch batching and NumPy optimizations.

  • Upgrade to Python 3.9+ before installing; Python 3.8 is no longer supported and 3.13 is now officially supported.
  • Review code relying on TrialState.__repr__ or __str__ output as their format has changed in this release.
v4.5.0

TPESampler runs ~5× faster; GPSampler adds constrained multi-objective optimization; CmaEsSampler now handles 1D spaces.

  • Upgrade to leverage 5× faster TPESampler and dramatically faster plot_hypervolume_history for many-objective problems.
  • Use GPSampler with new constrained LogEHVI for multi-objective optimization that respects constraints efficiently.
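
A sketch of constrained multi-objective optimization following Optuna's constraints_func convention, assuming GPSampler accepts the same constraints_func argument as Optuna's other constrained samplers; the constraint x + y <= 10 is illustrative, and values <= 0 are treated as feasible:

import optuna

def objective(trial):
    x = trial.suggest_float("x", 0, 10)
    y = trial.suggest_float("y", 0, 10)
    # Stash the constraint value so constraints_func can read it back.
    trial.set_user_attr("c", x + y - 10)  # feasible when <= 0
    return x ** 2, (y - 5) ** 2

def constraints(trial):
    return (trial.user_attrs["c"],)

study = optuna.create_study(
    directions=["minimize", "minimize"],
    sampler=optuna.samplers.GPSampler(constraints_func=constraints),
)
study.optimize(objective, n_trials=30)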
v4.4.0

Breaking changes to CmaEsSampler and TPESampler APIs require code updates; GPSampler now supports multi-objective optimization.

  • Update CmaEsSampler calls to remove restart_strategy and inc_popsize parameters, and make all TPESampler arguments keyword-only.
  • Use GPSampler with directions=['minimize','minimize'] for multi-objective problems; new Optuna MCP Server available via uvx.
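
A sketch of the updated call patterns; the two-objective function is a toy example:

import optuna

# TPESampler arguments are keyword-only in this release; a positional
# call such as TPESampler(10) raises a TypeError, so name the argument.
sampler = optuna.samplers.TPESampler(n_startup_trials=10)

def two_objectives(trial):
    x = trial.suggest_float("x", 0, 5)
    return x ** 2, (x - 3) ** 2

# GPSampler can now drive multi-objective studies.
study = optuna.create_study(
    directions=["minimize", "minimize"],
    sampler=optuna.samplers.GPSampler(),
)
study.optimize(two_objectives, n_trials=30)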
