Hugging Face Trackio

Added March 5, 2026 · Source: Hugging Face

Use Trackio to log and visualize your ML training metrics in real time. It plugs into your Python training scripts, syncs data to Hugging Face Spaces for live dashboards, and lets you set up alerts for important events. This makes it well suited to monitoring remote runs and programmatically analyzing experiment data.

Installation

This skill has dependencies (scripts or reference files). Install using the method below to make sure everything is in place.

npx skills add huggingface/skills --skill hugging-face-trackio

Requires Node.js 18+. The skills CLI auto-detects your editor and installs to the right directory.

Or install manually from the source repository.

SKILL.md (reference - install via npx or source for all dependencies)

---
name: hugging-face-trackio
description: Track and visualize ML training experiments with Trackio. Use when logging metrics during training (Python API), firing alerts for training diagnostics, or retrieving/analyzing logged metrics (CLI). Supports real-time dashboard visualization, alerts with webhooks, HF Space syncing, and JSON output for automation.
---

# Trackio - Experiment Tracking for ML Training

Trackio is an experiment tracking library for logging and visualizing ML training metrics. It syncs to Hugging Face Spaces for real-time monitoring dashboards.

## Three Interfaces

| Task | Interface | Reference |
|------|-----------|-----------|
| **Logging metrics** during training | Python API | [references/logging_metrics.md](references/logging_metrics.md) |
| **Firing alerts** for training diagnostics | Python API | [references/alerts.md](references/alerts.md) |
| **Retrieving metrics & alerts** after/during training | CLI | [references/retrieving_metrics.md](references/retrieving_metrics.md) |

## When to Use Each

### Python API → Logging

Use `import trackio` in your training scripts to log metrics:

- Initialize tracking with `trackio.init()`
- Log metrics with `trackio.log()` or use TRL's `report_to="trackio"`
- Finalize with `trackio.finish()`

**Key concept**: For remote/cloud training, pass `space_id` — metrics sync to a Space dashboard so they persist after the instance terminates.

→ See [references/logging_metrics.md](references/logging_metrics.md) for setup, TRL integration, and configuration options.

### Python API → Alerts

Insert `trackio.alert()` calls in training code to flag important events — like inserting print statements for debugging, but structured and queryable:

- `trackio.alert(title="...", level=trackio.AlertLevel.WARN)` — fire an alert
- Three severity levels: `INFO`, `WARN`, `ERROR`
- Alerts are printed to terminal, stored in the database, shown in the dashboard, and optionally sent to webhooks (Slack/Discord)

**Key concept for LLM agents**: Alerts are the primary mechanism for autonomous experiment iteration. An agent should insert alerts into training code for diagnostic conditions (loss spikes, NaN gradients, low accuracy, training stalls). Since alerts are printed to the terminal, an agent that is watching the training script's output will see them automatically. For background or detached runs, the agent can poll via CLI instead.
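The diagnostic conditions themselves are ordinary Python. As a sketch, a hypothetical helper (the name and thresholds are illustrative, not part of the Trackio API) can map a loss value to an alert level, keeping the checks testable and out of the training loop:

```python
def loss_alert_level(loss: float, step: int,
                     warmup: int = 100, floor: float = 1e-8,
                     ceiling: float = 5.0):
    """Map a loss value to an alert level name, or None if healthy.

    Thresholds are illustrative; tune them per experiment. The returned
    string corresponds to trackio.AlertLevel.WARN / .ERROR.
    """
    if step > warmup and loss > ceiling:
        return "ERROR"  # loss divergence: still high after warmup
    if step > 0 and abs(loss) < floor:
        return "WARN"   # vanishing loss: possible gradient collapse
    return None
```

In the training loop, a non-`None` result would trigger the corresponding `trackio.alert(...)` call.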

→ See [references/alerts.md](references/alerts.md) for the full alerts API, webhook setup, and autonomous agent workflows.

### CLI → Retrieving

Use the `trackio` command to query logged metrics and alerts:

- `trackio list projects/runs/metrics` — discover what's available
- `trackio get project/run/metric` — retrieve summaries and values
- `trackio list alerts --project <name> --json` — retrieve alerts
- `trackio show` — launch the dashboard
- `trackio sync` — sync to HF Space

**Key concept**: Add `--json` for programmatic output suitable for automation and LLM agents.

→ See [references/retrieving_metrics.md](references/retrieving_metrics.md) for all commands, workflows, and JSON output formats.

## Minimal Logging Setup

```python
import trackio

trackio.init(project="my-project", space_id="username/trackio")
trackio.log({"loss": 0.1, "accuracy": 0.9})
trackio.log({"loss": 0.09, "accuracy": 0.91})
trackio.finish()
```

### Minimal Retrieval

```bash
trackio list projects --json
trackio get metric --project my-project --run my-run --metric loss --json
```

## Autonomous ML Experiment Workflow

When running experiments autonomously as an LLM agent, the recommended workflow is:

1. **Set up training with alerts** — insert `trackio.alert()` calls for diagnostic conditions
2. **Launch training** — run the script in the background
3. **Poll for alerts** — use `trackio list alerts --project <name> --json --since <timestamp>` to check for new alerts
4. **Read metrics** — use `trackio get metric ...` to inspect specific values
5. **Iterate** — based on alerts and metrics, stop the run, adjust hyperparameters, and launch a new run

```python
import trackio

trackio.init(project="my-project", config={"lr": 1e-4})

for step in range(num_steps):  # num_steps comes from your training config
    loss = train_step()        # your training step; returns a float loss
    trackio.log({"loss": loss, "step": step})

    if step > 100 and loss > 5.0:
        trackio.alert(
            title="Loss divergence",
            text=f"Loss {loss:.4f} still high after {step} steps",
            level=trackio.AlertLevel.ERROR,
        )
    if step > 0 and abs(loss) < 1e-8:
        trackio.alert(
            title="Vanishing loss",
            text="Loss near zero — possible gradient collapse",
            level=trackio.AlertLevel.WARN,
        )

trackio.finish()
```

Then poll from a separate terminal/process:

```bash
trackio list alerts --project my-project --json --since "2025-01-01T00:00:00"
```
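An agent can parse that polling output with a few lines of Python. The JSON shape below (a list of objects with `"title"` and `"level"` keys) is an assumption for illustration; check references/retrieving_metrics.md for the actual schema:

```python
import json

def error_alerts(cli_json: str):
    """Filter `trackio list alerts --json` output down to ERROR-level alerts.

    The schema (a JSON list of objects with "title"/"level" keys) is an
    assumption; verify it against references/retrieving_metrics.md.
    """
    return [a for a in json.loads(cli_json) if a.get("level") == "ERROR"]

# Example with output captured from the CLI (shape assumed):
sample = ('[{"title": "Loss divergence", "level": "ERROR"},'
          ' {"title": "Vanishing loss", "level": "WARN"}]')
print(error_alerts(sample))
```

Any ERROR-level alert would then prompt the agent to stop the run and adjust hyperparameters before relaunching.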


---

## Companion Files

The following reference files are included for convenience:

### references/logging_metrics.md

# Logging Metrics with Trackio

**Trackio** is a lightweight, free experiment tracking library from Hugging Face. It provides a wandb-compatible API for logging metrics with a local-first design.

- **GitHub**: [gradio-app/trackio](https://github.com/gradio-app/trackio)
- **Docs**: [huggingface.co/docs/trackio](https://huggingface.co/docs/trackio/index)

## Installation

```bash
pip install trackio
# or
uv pip install trackio
```

## Core API

### Basic Usage

```python
import trackio

# Initialize a run
trackio.init(
    project="my-project",
    config={"learning_rate": 0.001, "epochs": 10}
)

# Log metrics during training
for epoch in range(10):
    loss = train_epoch()  # your training logic; returns a float loss
    trackio.log({"loss": loss, "epoch": epoch})

# Finalize the run
trackio.finish()
```

### Key Functions

| Function | Purpose |
|----------|---------|
| `trackio.init(...)` | Start a new tracking run |
| `trackio.log(dict)` | Log metrics (called repeatedly during training) |
| `trackio.finish()` | Finalize run and ensure all metrics are saved |
| `trackio.show()` | Launch the local dashboard |
| `trackio.sync(...)` | Sync local project to HF Space |

## trackio.init() Parameters

```python
trackio.init(
    project="my-project",           # Project name (groups runs together)
    name="run-name",                # Optional: name for this specific run
    config={...},                   # Hyperparameters and config to log
    space_id="username/trackio",    # Optional: sync to HF Space for remote dashboard
    group="experiment-group",       # Optional: group related runs
)
```

## Local vs Remote Dashboard

### Local (Default)

By default, trackio stores metrics in a local SQLite database and runs the dashboard locally:

```python
trackio.init(project="my-project")
# ... training ...
trackio.finish()

# Launch local dashboard
trackio.show()
```

Or from terminal:
```bash
trackio show --project my-project
```

### Remote (HF Space)

Pass `space_id` to sync metrics to a Hugging Face Space for persistent, shareable dashboards:

```python
trackio.init(
    project="my-project",
    space_id="username/trackio"  # Auto-creates Space if it doesn't exist
)
```

⚠️ **For remote training** (cloud GPUs, HF Jobs, etc.): Always use `space_id` since local storage is lost when the instance terminates.

### Sync Local to Remote

Sync existing local projects to a Space:

```python
trackio.sync(project="my-project", space_id="username/my-experiments")
```

## wandb Compatibility

Trackio is API-compatible with wandb, so it works as a drop-in replacement:

```python
import trackio as wandb

wandb.init(project="my-project")
wandb.log({"loss": 0.5})
wandb.finish()
```

## TRL Integration

When using TRL trainers, set `report_to="trackio"` for automatic metric logging:

```python
from trl import SFTConfig, SFTTrainer
import trackio

trackio.init(
    project="sft-training",
    space_id="username/trackio",
    config={"model": "Qwen/Qwen2.5-0.5B", "dataset": "trl-lib/Capybara"}
)

config = SFTConfig(
    output_dir="./output",
    report_to="trackio",  # Automatic metric logging
    # ... other config
)

trainer = SFTTrainer(model=model, args=config, ...)
trainer.train()
trackio.finish()
```

## What Gets Logged

With TRL/Transformers integration, trackio automatically captures:
- Training loss
- Learning rate
- Eval metrics
- Training throughput

For manual logging, log any numeric metrics:

```python
trackio.log({
    "train_loss": 0.5,
    "train_accuracy": 0.85,
    "val_loss": 0.4,
    "val_accuracy": 0.88,
    "epoch": 1
})
```

## Grouping Runs

Use `group` to organize related experiments in the dashboard sidebar:

```python
# Group by experiment type
trackio.init(project="my-project", name="baseline-v1", group="baseline")
trackio.init(project="my-project", name="augmented-v1", group="augmented")

# Group by hyperparameter
trackio.init(project="hyperparam-sweep", name="lr-0.001", group="lr_0.001")
trackio.init(project="hyperparam-sweep", name="lr-0.01", group="lr_0.01")
```

## Configuration Best Practices

Keep config minimal — only log what's useful for comparing runs:

```python
trackio.init(
    project="qwen-sft-capybara",
    name="baseline-lr2e5",
    config={
        "model": "Qwen/Qwen2.5-0.5B",
        "dataset": "trl-lib/Capybara",
        "learning_rate": 2e-5,
        "num_epochs": 3,
        "batch_size": 8,
    }
)
```

## Embedding Dashboards

Embed Space dashboards in websites with query parameters:

```html
<iframe 
  src="https://username-trackio.hf.space/?project=my-project&metrics=train_loss,val_loss&sidebar=hidden" 
  style="width:1600px; height:500px; border:0;">
</iframe>
```

Query parameters:
- `project`: Filter to specific project
- `metrics`: Comma-separated metric names to show
- `sidebar`: `hidden` or `collapsed`
- `smoothing`: 0-20 (smoothing slider value)
- `xmin`, `xmax`: X-axis limits
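The embed URL can also be assembled programmatically. A minimal sketch (the Space hostname and metric names are placeholders) using only the standard library:

```python
from urllib.parse import urlencode

def dashboard_url(space_host: str, project: str, metrics,
                  sidebar: str = "hidden") -> str:
    """Build a Trackio Space dashboard URL from the query parameters above."""
    params = {
        "project": project,
        "metrics": ",".join(metrics),  # comma-separated metric names
        "sidebar": sidebar,
    }
    # safe="," keeps the comma between metric names unescaped
    return f"https://{space_host}/?{urlencode(params, safe=',')}"

print(dashboard_url("username-trackio.hf.space", "my-project",
                    ["train_loss", "val_loss"]))
# → https://username-trackio.hf.space/?project=my-project&metrics=train_loss,val_loss&sidebar=hidden
```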

Originally by Hugging Face, adapted here as an Agent Skills compatible SKILL.md.
