
WIP: experiment with first class dim objects #1517


Open · wants to merge 4 commits into main

Conversation

@aseyboldt (Member) commented Jul 2, 2025

Named Dimensions Refactor: Objects Instead of Strings

I'm still working on this, but thought it might be helpful to share what I have so far...

The Key Change

In this version of named-dims, we use objects to represent dimensions instead of plain strings. This allows us to ensure that array axes with shared dimensions are always compatible, eliminating shape errors as long as you stay in dim-land.
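For example, sharing one dimension object across tensors ties their lengths together (a short sketch using px.dim and px.basic.zeros as introduced below; the final comment states the intended guarantee):

>>> foo = px.dim("foo")                # one dimension object...
>>> x = px.basic.zeros(foo, name="x")  # ...shared by several tensors
>>> y = px.basic.zeros(foo, name="y")
>>> # x and y can never disagree on the length of foo, by construction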

The Two Parts of a Dimension

We can think of a dimension as having two components:

  1. Size (or length) - This might be known statically or only at runtime.

  2. Identity - Just because two tensors happen to have the same length doesn't mean they're compatible. The identity decides if two tensors can be combined. Crucially, if two tensors share the same identity, they must always have the same length.

This is similar to vector spaces in math: you can't add a 3D velocity vector to a 3D position vector, even though both are 3D. The mathematical operations care about the meaning of the dimensions, not just their size.
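The distinction is visible directly in the API: two dims can agree on size without sharing an identity (a sketch; type equality is demonstrated in more detail under "Ensuring Dimension Uniqueness" below):

>>> velocity = px.dim("xyz")        # both might have length 3 at runtime...
>>> position = px.dim("xyz")
>>> velocity.type == position.type  # ...but their identities differ
False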

Implementation: Types and Variables

We implement this split using PyTensor's type system:

  • Each dimension has a unique PyTensor Type (an instance of DimType) for its identity
  • When we need to work with the dimension (like creating tensors), we also need its size, represented as a DimVariable.
# Create a new dimension
>>> foo = px.dim("foo")
>>> foo.type
BasicDim(foo, uuid=?)

The object foo itself is a DimVariable - at runtime, this represents the size of dimension foo.

Creating Tensors with Dimensions

>>> x = px.basic.zeros(foo, name="x")
>>> x.type
XTensorType(float64, shape=(None,), dims=(BasicDim(foo, uuid=?),))

The tensor x remembers the identity of dimension foo in its type. It doesn't need to store the DimVariable separately because it can recreate one from the tensor itself when needed:

>>> x.dims[0].dprint();
FromTensor{dim_type=BasicDim(foo, uuid=?)} [id A] 'foo'
 └─ XTensorFromTensor [id B] 'x'
    ├─ Alloc [id C]
    │  ├─ 0.0 [id D]
    │  └─ TensorFromScalar [id E]
    │     └─ Length [id F]
    │        └─ foo [id G]
    └─ foo [id G]

Ensuring Dimension Uniqueness

To prevent shape errors, we need to avoid having two unrelated DimVariables with the same type. Every call to px.dim() creates a truly unique dimension:

>>> foo = px.dim("foo")
>>> foo2 = px.dim("foo")  # Same name, different dimension!
>>> foo.type == foo2.type
False

We use random UUIDs in the type to guarantee uniqueness.
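As a rough illustration of the mechanism (a hypothetical standalone sketch, not the PR's actual DimType implementation):

import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BasicDimSketch:
    # Equality of a frozen dataclass compares all fields, so the random
    # uuid makes every instance a distinct identity, even when the
    # human-readable name is reused.
    name: str
    uuid: uuid.UUID = field(default_factory=uuid.uuid4)

d1 = BasicDimSketch("foo")
d2 = BasicDimSketch("foo")
assert d1 != d2  # same name, different identity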

The Size Invariant

For consistent graphs, we maintain this invariant: "During function execution, if two DimVariables have the same type, their runtime values are also the same."

This works because DimVariables can only be created in three ways:

  1. Root variables - px.dim() creates a new unique type, so it can't share its type with anything else.
  2. From tensors - We must have had an existing DimVariable to create the tensor, so the length is consistent, or the tensor was user-provided; in that case we must add a consistency check of the user input.
  3. Derived from other DimVariables - If the inputs are consistent, the outputs are too.

The main challenge is user input validation - we need to verify that input tensors match their declared dimensions before execution.
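A minimal sketch of what that validation could look like (the function name and signature here are hypothetical, not the PR's API):

def check_input_dims(arrays_with_dims):
    """Verify that every dim identity maps to exactly one runtime length.

    arrays_with_dims: iterable of (array, dim_identities) pairs, where
    dim_identities gives the dim identity tagged onto each axis.
    """
    seen = {}  # dim identity -> length observed so far
    for array, dim_ids in arrays_with_dims:
        for axis, dim_id in enumerate(dim_ids):
            length = array.shape[axis]
            # setdefault binds the identity on first sight; any later
            # mismatch violates the size invariant.
            if seen.setdefault(dim_id, length) != length:
                raise ValueError(
                    f"Axis {axis} has length {length}, but dimension "
                    f"{dim_id} was already bound to length {seen[dim_id]}"
                )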

Small sidenote:

Unfortunately there is a way users can create two unrelated DimVariable objects with the same type:

foo = px.dim("foo")
foo2 = foo.type()

But if we assume that foo.type() is a private function (or maybe we can override the call method to make that clearer), that shouldn't be too much of a problem. We just have to make sure we don't do it ourselves when we add new Ops...
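One option along those lines (purely hypothetical, and glossing over the fact that pytensor's Type.__call__ has legitimate internal uses) would be to warn on direct calls:

import warnings

class _DimTypeBase:
    """Stand-in for pytensor's Type, purely for illustration."""
    def __call__(self):
        # In pytensor, calling a Type creates a new Variable of that type.
        return object()

class DimTypeSketch(_DimTypeBase):
    def __call__(self):
        warnings.warn(
            "Calling a DimType directly creates an unrelated DimVariable "
            "with the same identity; prefer px.dim() or clone_dim()."
        )
        return super().__call__()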

Derived Dimensions

I think we can do a lot of cool things with derived dimensions, but I'm still working on those.

One simple example that already works is a ClonedDim. We don't allow duplicate dimensions in one tensor, to simplify indexing and xarray compatibility, but in many cases a user might still need essentially the same dim in a tensor twice (for instance for a covariance matrix). We can use a cloned dimension for that. A cloned dimension always has the same length as its base dimension, but it has a new identity. So for instance:

>>> foo = px.dim("foo")
>>> # This fails
>>> px.xtensor("x", dims=[foo, foo])
ValueError...
>>> foo2 = foo.clone_dim()
>>> x = px.xtensor("x", dims=[foo, foo2])
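To spell out the new-identity/same-length split (this follows directly from the behavior described above):

>>> foo2.type == foo.type  # the clone has a new identity...
False
>>> # ...but its runtime length always tracks foo, so the size
>>> # invariant still holds for tensors that mix foo and foo2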

@OriolAbril @ricardoV94


📚 Documentation preview 📚: https://pytensor--1517.org.readthedocs.build/en/1517/

@@ -38,6 +41,184 @@
from pytensor.tensor.variable import TensorConstantSignature, TensorVariable


# I think uint64 would make more sense, but some code in tensor/rewrites/shape
Review comment (Member), on the line above:

numpy uses int64 for shape internally, so we end up doing it also

@ricardoV94 (Member) commented Jul 4, 2025

>>> foo = px.dim("foo")
>>> foo2 = px.dim("foo")  # Same name, different dimension!
>>> foo.type == foo2.type
False

Is this still true? I would think that when a user is working they may specify dims by label, so they say x.sum("city"), and under the hood this would work fine, because we can convert the user's string into a BasicDim that matches in equality anyway?

Also thinking a user can do x.rename({SliceDim("time"): "time*"}) without having to worry exactly how the SliceDim("time") is created or where to retrieve it from.

@aseyboldt (Member, Author) replied:
In the current code that is still true.
I think we can get away with allowing those to be equal (if we do some extra input validation; you shouldn't be allowed to pass two different values for the length of the same dimension). I'm not sure we should want them to be equal, however. In pytensor, we also don't assume that two tensor variables are the same just because they happen to have the same name.

@ricardoV94 (Member) commented Jul 4, 2025

We could treat dims as symbols (not sure if that's the term), since in an xarray Dataset you can't have duplicate dims with different meanings either?

But it's a choice, not a requirement.

