
Building a Sequence with GPU tensor parameters fails #815

Open
sgrava opened this issue Feb 17, 2025 · 2 comments
Comments

sgrava (Contributor) commented Feb 17, 2025

When building a parametric Sequence from a torch.Tensor that lives on the GPU, I get the following error:

```
Traceback (most recent call last):
  File "/home/stefano/Workspace/pulser-gym/gpu_parametric_sequence.py", line 32, in <module>
    built_seq = seq.build(omega=omega, area=area)
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/sequence/sequence.py", line 1688, in build
    args_ = [
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/sequence/sequence.py", line 1689, in <listcomp>
    arg.build() if isinstance(arg, Parametrized) else arg
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/parametrized/paramobj.py", line 198, in build
    self._instance = obj(*args_, **kwargs_)
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/pulse.py", line 107, in __init__
    if np.any(amplitude.samples.as_array(detach=True) < 0):
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/waveforms.py", line 121, in samples
    return self._samples.copy()
  File "/usr/lib/python3.10/functools.py", line 981, in __get__
    val = self.func(instance)
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/waveforms.py", line 533, in _samples
    return self._value * np.ones(self.duration)
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/math/abstract_array.py", line 216, in __mul__
    return AbstractArray(operator.mul(*self._binary_operands(other)))
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```

Since building the sequence performs computations with that tensor, all the tensors involved need to be on the same device.
Two possible solutions:

  1. Pulser handles moving the parameter tensors to CPU before building the sequence.
  2. Every backend ensures all parameters are moved to CPU before building a sequence.

Any idea whether one solution is better than the other?
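For solution 2, a backend-side helper could look like the sketch below. This is only an illustration: `move_params_to_cpu` is a hypothetical name, not part of the Pulser API, and it duck-types on the `.cpu()` method (as found on `torch.Tensor`) so it does not require importing torch:

```python
def move_params_to_cpu(params):
    """Return a copy of ``params`` with tensor-like values moved to CPU.

    Duck-typed on the ``.cpu()`` method (as on torch.Tensor); plain
    Python numbers and arrays without ``.cpu()`` pass through unchanged.
    """
    return {
        name: value.cpu() if hasattr(value, "cpu") else value
        for name, value in params.items()
    }
```

A backend could then call `seq.build(**move_params_to_cpu(params))` safely. Note that `torch.Tensor.cpu()` is tracked by autograd, so gradients would still flow back to the original GPU tensor.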

To reproduce

```python
from pulser import Sequence, Pulse, Register
from pulser.devices import MockDevice
import torch

# create sequence and declare channels
reg = Register.square(2, spacing=8, prefix="q")
seq = Sequence(reg, MockDevice)
seq.declare_channel("rydberg_global", "rydberg_global")

# declare sequence variables
omega_param = seq.declare_variable("omega")

# create and add parametric pulse
pulse_const = Pulse.ConstantPulse(1000, omega_param, 0.0, 0.0)
seq.add(pulse_const, "rydberg_global")

# CPU works fine!
omega = torch.tensor(1.0, device="cpu")
built_seq = seq.build(omega=omega)

# GPU error
omega = torch.tensor(1.0, device="cuda")
built_seq = seq.build(omega=omega)
```
sgrava (Contributor, Author) commented Feb 17, 2025

I opened this issue to keep track of this problem.
Solution 2 seems easier for the moment, so it will be explored first in the backend emulators.

HGSilveri (Collaborator) commented:
Thanks @sgrava !
