Note
This package only works with the 3.x version of neptune.ai, called Neptune Scale, which is in beta. You can't use the Scale client with the stable Neptune 2.x versions currently available to SaaS and self-hosting customers. For the Python client corresponding to Neptune 2.x, see https://github.com/neptune-ai/neptune-client.
What is Neptune?
Neptune is an experiment tracker. It enables researchers to monitor their model training, visualize and compare model metadata, and collaborate on AI/ML projects within a team.
What's different about Neptune Scale?
Neptune Scale is the next major version of Neptune. It's built on an entirely new architecture for ingesting and rendering data, with a focus on responsiveness and accuracy at scale.
Neptune Scale supports forked experiments, with built-in mechanics for retaining run ancestry. This way, you can focus on analyzing the latest runs, but also visualize the full history of your experiments.
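For instance, you can branch off an existing run at a chosen step and continue logging in a new run. A minimal sketch, assuming the Run constructor accepts fork_run_id and fork_step parameters; the IDs and names shown are placeholders:

from neptune_scale import Run

# Fork from an existing run at step 102 and continue logging in a new run.
# "base-run-id" is a placeholder for the ID of the run to branch from.
run = Run(
    experiment_name="forked-experiment",
    fork_run_id="base-run-id",
    fork_step=102,
)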
Install the Neptune Scale client:
pip install neptune-scale
Then set up your credentials and project:
- Log in to your Neptune Scale workspace.
- Get your API token from your user menu in the bottom left corner.
  If you're a workspace admin, you can also set up a service account. This way, multiple people or machines can share the same API token. To get started, access the workspace settings via the user menu.
- In the environment where neptune-scale is installed, save your API token to the NEPTUNE_API_TOKEN environment variable:
  export NEPTUNE_API_TOKEN="h0dHBzOi8aHR0cHM6...Y2MifQ=="
- Create a project, or find an existing project you want to send the run metadata to.
  To create a project via API:
  from neptune_scale.projects import create_project

  create_project(
      name="project-x",
      workspace="team-alpha",
  )
- (optional) In the environment where neptune-scale is installed, save your full project path to the NEPTUNE_PROJECT environment variable:
  export NEPTUNE_PROJECT="team-alpha/project-x"
  If you skip this step, you need to pass the project name as an argument each time you start a run.
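If you skip setting NEPTUNE_PROJECT, you can point a run at the project directly. A minimal sketch, assuming the Run constructor accepts a project argument and using team-alpha/project-x as a placeholder path:

from neptune_scale import Run

# Pass the full project path explicitly instead of relying on NEPTUNE_PROJECT
run = Run(
    project="team-alpha/project-x",
    experiment_name="ExperimentName",
)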
You're ready to start using Neptune Scale.
For more help with setup, see Get started in the Neptune documentation.
Create an experiment:
from neptune_scale import Run
run = Run(experiment_name="ExperimentName")
Then, call logging methods on the run and pass the metadata as a dictionary.
Log configuration or other simple values with log_configs():
run.log_configs(
    {
        "learning_rate": 0.001,
        "batch_size": 64,
    }
)
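Related values can be grouped by using a slash-separated path as the attribute name, following Neptune's attribute path convention. A small sketch:

# Group parameters under a common namespace with "/" in the key
run.log_configs(
    {
        "parameters/optimizer": "adam",
        "parameters/dropout": 0.2,
    }
)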
Inside a training loop or other iteration, use log_metrics() to append metric values:
# inside a loop
for step in range(100):
    run.log_metrics(
        data={"acc": 0.89, "loss": 0.17},
        step=step,
    )
To help identify and group runs, you can apply tags:
run.add_tags(tags=["tag1", "tag2"])
The run is stopped when the context is exited or the script finishes execution, but you can use close() to stop it once logging is no longer needed:
run.close()
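Since the run is also stopped when its context is exited, you can manage it with a with statement instead of calling close() explicitly. A minimal sketch, assuming Run can be used as a context manager as described above:

from neptune_scale import Run

# close() is called automatically when the with block exits
with Run(experiment_name="ExperimentName") as run:
    run.log_configs({"learning_rate": 0.001})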
To explore your experiment, open the project in Neptune and navigate to Runs. For an example, see the demo project →
For more instructions and the API reference, see the Neptune documentation.
For help, contact [email protected].