Review and add tips from notion doc #1053

Merged 8 commits on Mar 4, 2025. Changes shown are from 5 of the 8 commits.
`content/guides/core/artifacts/_index.md` (14 additions, 6 deletions)
@@ -47,9 +47,9 @@ For example, the following code snippet shows how to log a file called `dataset.h5`:
```diff
 import wandb

-run = wandb.init(project = "artifacts-example", job_type = "add-dataset")
-artifact = wandb.Artifact(name = "example_artifact", type = "dataset")
-artifact.add_file(local_path = "./dataset.h5", name = "training_dataset")
+run = wandb.init(project="artifacts-example", job_type="add-dataset")
+artifact = wandb.Artifact(name="example_artifact", type="dataset")
+artifact.add_file(local_path="./dataset.h5", name="training_dataset")
 artifact.save()

 # Logs the artifact version "my_data" as a dataset with data from dataset.h5
```
@@ -65,14 +65,18 @@ Indicate the artifact you want to mark as input to your run with the `use_artifact` method.
Continuing from the preceding code snippet, the next code block shows how to use the `training_dataset` artifact:

```diff
-artifact = run.use_artifact("training_dataset:latest") #returns a run object using the "my_data" artifact
+artifact = run.use_artifact(
+    "training_dataset:latest"
+)  # returns an artifact object
```
This returns an artifact object.

Next, use the returned object to download all contents of the artifact:

```diff
-datadir = artifact.download() #downloads the full "my_data" artifact to the default directory.
+datadir = (
+    artifact.download()
+)  # downloads the full "my_data" artifact to the default directory
```

{{% alert %}}
@@ -84,4 +88,8 @@ You can pass a custom path into the `root` parameter.
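
As a minimal sketch (the `./my-data` path is an arbitrary example, not one from the page), passing a custom root looks like:

```python
# Download the artifact's contents into a custom directory
# instead of the default artifact directory.
datadir = artifact.download(root="./my-data")
```
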
* Learn how to [version]({{< relref "./create-a-new-artifact-version.md" >}}) and [update]({{< relref "./update-an-artifact.md" >}}) artifacts.
* Learn how to trigger downstream workflows in response to changes to your artifacts with [artifact automation]({{< relref "/guides/models/automations/project-scoped-automations/" >}}).
* Learn about the [registry]({{< relref "/guides/models/registry/" >}}), a space that houses trained models.
* Explore the [Python SDK]({{< relref "/ref/python/artifact.md" >}}) and [CLI]({{< relref "/ref/cli/wandb-artifact/" >}}) reference guides.

## Best practices and tips

For best practices and tips for artifacts, see [Best Practices: Artifacts](https://wandb.ai/wandb/pytorch-lightning-e2e/reports/W-B-Best-Practices-Guide--VmlldzozNTU1ODY1#w&b-artifacts).
`content/guides/core/reports/_index.md` (5 additions, 1 deletion)
@@ -50,4 +50,8 @@ Depending on your use case, explore the following resources to get started with W&B Reports:
* Check out our [video demonstration](https://www.youtube.com/watch?v=2xeJIv_K_eI) to get an overview of W&B Reports.
* Explore the [Reports gallery]({{< relref "./reports-gallery.md" >}}) for examples of live reports.
* Try the [Programmatic Workspaces]({{< relref "/tutorials/workspaces.md" >}}) tutorial to learn how to create and customize your workspace.
* Read curated Reports in [W&B Fully Connected](http://wandb.me/fc).

## Best practices and tips

For best practices and tips for reports, see [Best Practices: Reports](https://wandb.ai/wandb/pytorch-lightning-e2e/reports/W-B-Best-Practices-Guide--VmlldzozNTU1ODY1#reports).
`content/guides/core/reports/edit-a-report.md` (8 additions, 1 deletion)
@@ -394,4 +394,11 @@ Select a panel grid and press `delete` on your keyboard to delete a panel grid.

Collapse headers in a report to hide the content within a text block. When the report loads, only expanded headers show their content. Collapsing headers can help organize your content and prevent excessive data loading. The following GIF demonstrates the process.

{{< img src="/images/reports/collapse_headers.gif" alt="" >}}

## Visualize relationships across multiple dimensions

To visualize relationships across multiple dimensions effectively, use a color gradient to represent one of the variables. This enhances clarity and makes patterns easier to interpret; a logging sketch follows the steps below.

1. Choose a variable to represent with a color gradient (for example, penalty scores or learning rates). This makes it clearer how the penalty (color) interacts with reward or side effects (y-axis) over training time (x-axis).
2. Highlight key trends: hovering over a specific group of runs highlights those runs in the visualization.
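
A hedged sketch of the logging side (the project name, the `penalty` coefficient, and the metric values are all hypothetical): storing the chosen variable in the run config lets you color runs by its value in the UI.

```python
import wandb

# Hypothetical sweep over a penalty coefficient. Storing it in the run
# config makes it available for coloring runs with a gradient in the UI.
for penalty in [0.1, 0.5, 1.0]:
    run = wandb.init(
        project="color-gradient-example",  # hypothetical project name
        config={"penalty": penalty},
    )
    for step in range(100):
        # Placeholder metrics standing in for reward and side effects.
        run.log({"reward": step * (1 - penalty), "side_effects": step * penalty})
    run.finish()
```
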
`content/guides/models/track/_index.md` (5 additions, 1 deletion)
@@ -59,4 +59,8 @@ Depending on your use case, explore the following resources to get started with W&B Experiments:
* Configure experiments
* Log data from experiments
* View results from experiments
* Explore the [W&B Python Library]({{< relref "/ref/python/" >}}) within the [W&B API Reference Guide]({{< relref "/ref/" >}}).

## Best practices and tips

For best practices and tips for Experiments and logging, see [Best Practices: Experiments and Logging](https://wandb.ai/wandb/pytorch-lightning-e2e/reports/W-B-Best-Practices-Guide--VmlldzozNTU1ODY1#w&b-experiments-and-logging).
`content/guides/models/track/launch.md` (1 addition, 0 deletions)
@@ -127,6 +127,7 @@ The following are some suggested guidelines to consider when you create experiments:
2. **Project**: A project is a set of experiments you can compare together. Each project gets a dedicated dashboard page, and you can easily turn on and off different groups of runs to compare different model versions.
3. **Notes**: Set a quick commit message directly from your script. Edit and access notes in the Overview section of a run in the W&B App.
4. **Tags**: Identify baseline runs and favorite runs. You can filter runs using tags. You can edit tags at a later time on the Overview section of your project's dashboard on the W&B App.
5. **Create multiple run sets for easy comparison**: When comparing experiments, create multiple run sets to make metrics easy to compare. You can toggle run sets on or off on the same chart or group of charts.

The following code snippet demonstrates how to define a W&B Experiment using the best practices listed above:

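The snippet itself is collapsed in this view. As a rough sketch under assumed names (the project, notes, tags, and config values here are illustrative, not the page's actual example), it might look like:

```python
import wandb

# A sketch of an experiment definition following the guidelines above.
run = wandb.init(
    project="detect-pedestrians",  # Project: a set of experiments to compare
    notes="tweak baseline learning rate",  # Notes: a quick commit-style message
    tags=["baseline", "paper1"],  # Tags: mark baseline and favorite runs
    config={"learning_rate": 0.01, "epochs": 10},  # hyperparameters to track
)
run.finish()
```
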
`content/guides/models/track/log/_index.md` (5 additions, 1 deletion)
@@ -81,4 +81,8 @@ wandb.log({"loss": 0.314, "epoch": 5,
1. **Compare the best accuracy**: To compare the best value of a metric across runs, set the summary value for that metric. By default, summary is set to the last value you logged for each key. Summary metrics are useful in the runs table in the UI, where you can sort and filter runs and compare them in a table or bar chart based on their _best_ accuracy instead of their final accuracy. For example: `wandb.run.summary["best_accuracy"] = best_accuracy`.
2. **Multiple metrics on one chart**: Log multiple metrics in the same call to `wandb.log`, for example `wandb.log({"acc": 0.9, "loss": 0.1})`, and both will be available to plot against in the UI.
3. **Custom x-axis**: Add a custom x-axis to the same log call to visualize your metrics against a different axis in the W&B dashboard. For example: `wandb.log({'acc': 0.9, 'epoch': 3, 'batch': 117})`. To set the default x-axis for a given metric, use [Run.define_metric()]({{< relref "/ref/python/run.md#define_metric" >}}).
4. **Log rich media and charts**: `wandb.log` supports the logging of a wide variety of data types, from [media like images and videos]({{< relref "./media.md" >}}) to [tables]({{< relref "./log-tables.md" >}}) and [charts]({{< relref "/guides/models/app/features/custom-charts/" >}}). A sketch combining these tips appears after this list.
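
A small sketch pulling the tips above together (the project name and the `train_one_epoch` helper are hypothetical stand-ins, not part of the page):

```python
import random

import wandb

def train_one_epoch():
    # Stand-in for a real training step; returns (accuracy, loss).
    return random.random(), random.random()

run = wandb.init(project="logging-tips-example")  # hypothetical project

# Tip 3: plot "acc" against "epoch" by default.
run.define_metric("acc", step_metric="epoch")

best_accuracy = 0.0
for epoch in range(10):
    acc, loss = train_one_epoch()
    best_accuracy = max(best_accuracy, acc)
    # Tips 2 and 3: several metrics plus a custom x-axis in one call.
    run.log({"acc": acc, "loss": loss, "epoch": epoch})

# Tip 1: compare runs by their best accuracy rather than the last value.
run.summary["best_accuracy"] = best_accuracy
run.finish()
```
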

## Best practices and tips

For best practices and tips for Experiments and logging, see [Best Practices: Experiments and Logging](https://wandb.ai/wandb/pytorch-lightning-e2e/reports/W-B-Best-Practices-Guide--VmlldzozNTU1ODY1#w&b-experiments-and-logging).
`content/guides/models/track/runs/_index.md` (22 additions, 0 deletions)
@@ -468,6 +468,28 @@ Delete one or more runs from a project with the W&B App.
For projects that contain a large number of runs, use the search bar to filter the runs you want to delete with a regex, or use the filter button to filter runs by status, tags, or other properties.
{{% /alert %}}

## Organize runs

This section describes how to organize runs using groups and job types. Assigning runs to groups (for example, an experiment name) and job types (for example, preprocessing, training, evaluation, or debugging) streamlines your workflow and makes models easier to compare.

### Assign runs to groups and job types

Each run in W&B can be categorized by a **group** and a **job type**, as shown in the sketch after this list:

- **Group**: Represents a broader experiment category, making it easier to organize and filter runs.
- **Job type**: Describes the function of the run, such as preprocessing, training, or evaluation.
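
A minimal sketch, assuming hypothetical project and group names:

```python
import wandb

run = wandb.init(
    project="fashion-mnist-example",  # hypothetical project name
    group="baseline-more-data",  # the broader experiment this run belongs to
    job_type="train",  # this run's role: preprocessing, training, evaluation
)
run.finish()
```
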

In the following [example workspace](https://wandb.ai/stacey/model_iterz?workspace=user-stacey), a baseline model is trained using increasing amounts of data from the Fashion-MNIST dataset. The color coding in the workspace represents the amount of data used:

- **Yellow to dark green**: Increasing amounts of data for the baseline model.
- **Light blue to violet to magenta**: Increasing amounts of data for a more complex "double" model with additional parameters.

Using W&B's filtering options and search bar, you can easily compare runs based on specific conditions, such as:
- Training on the same dataset.
- Evaluating on the same test set.

Applying filters dynamically updates the **Table** view, allowing you to quickly identify performance differences between models. For example, you can determine which classes are significantly more challenging for one model compared to another.

<!-- ### Search runs

Search for a specific run by name in the sidebar. You can use regex to filter down your visible runs. The search box affects which runs are shown on the graph. Here's an example: