
Weights & Biases: Machine learning experiment tracking

ML experiment tracking platform with logging, visualization, and model versioning.

LIVE RANKINGS • 10:20 AM • STEADY
Overall rank: #311 · AI & ML rank: #91
Stars: 10.9K (+21 in the last 7 days)
Forks: 825 (+6 in the last 7 days)

Learn more about Weights & Biases

Weights & Biases (W&B) is a machine learning operations (MLOps) platform that provides experiment tracking, model management, and collaboration tools for ML teams. The platform centers on runs created with wandb.init(), which capture metrics, hyperparameters, and artifacts throughout training. It integrates with popular ML frameworks such as PyTorch, TensorFlow, and Keras through Python APIs and automatic logging hooks, and it supports both cloud-hosted and self-managed deployments: multi-tenant, dedicated cloud, or on-premises installations.

Key features of Weights & Biases:

1. Comprehensive MLOps
   Covers the full ML lifecycle from experiment tracking and hyperparameter tuning to model versioning and production monitoring. Includes specialized tools like Weave for LLM applications.

2. Framework Integration
   Native integrations with major ML frameworks including PyTorch, TensorFlow, Keras, and JAX. Supports automatic logging and manual instrumentation through Python APIs.

3. Flexible Deployment
   Available as multi-tenant cloud, dedicated cloud instances, or self-managed installations on AWS, GCP, Azure, or on-premises infrastructure.


import wandb
import torch
import torch.nn as nn
import torch.optim as optim

# Initialize wandb run
wandb.init(
    project="pytorch-classification",
    config={
        "learning_rate": 0.001,
        "epochs": 10,
        "batch_size": 32,
        "architecture": "CNN"
    }
)

# Simple model
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 10)
)

optimizer = optim.Adam(model.parameters(), lr=wandb.config.learning_rate)
criterion = nn.CrossEntropyLoss()

# Training loop with logging
for epoch in range(wandb.config.epochs):
    # Simulate training step
    loss = torch.randn(1).item() * 0.1 + 0.5 - epoch * 0.05
    accuracy = min(0.9, 0.1 + epoch * 0.08)
    
    # Log metrics to wandb
    wandb.log({
        "epoch": epoch,
        "loss": loss,
        "accuracy": accuracy,
        "learning_rate": optimizer.param_groups[0]['lr']
    })

wandb.finish()

v0.24.2

Adds Federated Auth support to wandb.Api() and fixes artifact download URL expiration issues.

  • wandb.Api() now supports Federated Auth (JWT based authentication)
  • Refresh presigned download url when it expires during artifact file downloads
v0.24.1

Fixes data upload issues from v0.24.0 and adds a download_history_exports feature for Parquet exports.

  • download_history_exports in the api.Run class to download exported run history in Parquet file format
  • When a settings file contains an invalid setting, all settings files are ignored and an error is printed
  • After wandb login --host <invalid-url>, using wandb login --host <valid-url> works as usual
  • wandb beta sync correctly loads credentials
v0.24.0

Removes deprecated wandb.beta.workflows module and adds live sync support with various bug fixes.

  • wandb beta sync now supports a --live option for syncing a run while it's being logged
  • Fixed Run.__exit__ type annotations to accept None values, which are passed when no exception is raised
  • Fixed Invalid Client ID digest error when creating artifacts after calling random.seed()
  • Fixed CLI error when listing empty artifacts
  • Fixed regression for calling api.run() on a Sweeps run

