When building a Parametric sequence from a torch.Tensor that lives on GPU, I get the following error:
```
Traceback (most recent call last):
  File "/home/stefano/Workspace/pulser-gym/gpu_parametric_sequence.py", line 32, in <module>
    built_seq = seq.build(omega=omega, area=area)
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/sequence/sequence.py", line 1688, in build
    args_ = [
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/sequence/sequence.py", line 1689, in <listcomp>
    arg.build() if isinstance(arg, Parametrized) else arg
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/parametrized/paramobj.py", line 198, in build
    self._instance = obj(*args_, **kwargs_)
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/pulse.py", line 107, in __init__
    if np.any(amplitude.samples.as_array(detach=True) < 0):
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/waveforms.py", line 121, in samples
    return self._samples.copy()
  File "/usr/lib/python3.10/functools.py", line 981, in __get__
    val = self.func(instance)
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/waveforms.py", line 533, in _samples
    return self._value * np.ones(self.duration)
  File "/home/stefano/Workspace/pulser-gym/.hatch/pulser-gym/lib/python3.10/site-packages/pulser/math/abstract_array.py", line 216, in __mul__
    return AbstractArray(operator.mul(*self._binary_operands(other)))
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
Since some computation is done with that tensor to build the sequence, all tensors involved need to be on the same device.

Two possible solutions:

1. Pulser handles moving parameter tensors to CPU before building the sequence.
2. Every backend ensures that all parameters are moved to CPU before building a sequence.

Any idea if one solution is better than the other?
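As a rough sketch of the first option, here is a hypothetical helper (`move_params_to_cpu` is not part of Pulser's API) that `Sequence.build` could apply to its keyword arguments. It duck-types on a `.cpu()` method, so it covers `torch.Tensor` parameters on any device without a hard dependency on torch:

```python
def move_params_to_cpu(params: dict) -> dict:
    """Return a copy of `params` with any tensor-like values moved to CPU.

    Duck-types on a `.cpu()` method, so a torch.Tensor living on cuda:0
    is moved to CPU, while plain floats/arrays pass through untouched.
    """
    return {
        name: value.cpu() if callable(getattr(value, "cpu", None)) else value
        for name, value in params.items()
    }
```

With something like this, `seq.build(**move_params_to_cpu(kwargs))` would avoid mixing `cuda:0` tensors with the NumPy arrays created inside the waveform code.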
To reproduce