In many cases, we want to allocate a buffer that we then fill with data. One way to do that is to use an uninitialized buffer, but this severely complicates the code, since we would have to deal with complex `unsafe` reasoning. It is therefore often preferable to fill the buffer with arbitrary values that will be overwritten anyway, since this can be done safely.
The canonical way to obtain such an arbitrary value is arguably `T: Default`. However, when writing generic code with nalgebra we often only want to require `T: Scalar`. In particular, if we require e.g. `T: Scalar + Default`, then we cannot call this method from a method that only has `T: Scalar`. This is a pretty big problem for composability.
I therefore suggest that we modify the `Scalar` trait to require `Default`. This seems like a sane choice: all integer and floating-point types implement `Default`, and there is no reason that arbitrary-precision or big-integer types from external crates cannot implement `Default` as well.
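A minimal sketch of the composability problem, using a hypothetical stand-in `Scalar` trait (not nalgebra's actual definition) might look like this:

```rust
use std::fmt::Debug;

// Hypothetical stand-in for nalgebra's `Scalar` trait, for illustration only.
trait Scalar: Clone + Debug + PartialEq + 'static {}
impl<T: Clone + Debug + PartialEq + 'static> Scalar for T {}

// Safely allocate a buffer pre-filled with arbitrary placeholder values.
fn filled_buffer<T: Scalar + Default>(len: usize) -> Vec<T> {
    vec![T::default(); len]
}

// A generic function that only has `T: Scalar` cannot call `filled_buffer`:
// uncommenting the call fails to compile because `T: Default` is not
// satisfied, so the extra bound must be propagated through every caller.
fn overwrite_all<T: Scalar>(_len: usize) -> Vec<T> {
    // filled_buffer::<T>(_len) // error[E0277]: `T` doesn't implement `Default`
    Vec::new()
}

// If `Scalar` itself required `Default`, no extra bound would be needed:
trait ScalarV2: Clone + Debug + PartialEq + Default + 'static {}
impl<T: Clone + Debug + PartialEq + Default + 'static> ScalarV2 for T {}

fn overwrite_all_v2<T: ScalarV2>(len: usize) -> Vec<T> {
    let mut buf = vec![T::default(); len]; // safe placeholder values...
    for x in buf.iter_mut() {
        *x = T::default(); // ...later overwritten with the real data
    }
    buf
}

fn main() {
    let buf: Vec<f64> = overwrite_all_v2(3);
    assert_eq!(buf, vec![0.0, 0.0, 0.0]);
    println!("ok");
}
```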
I wanted to follow up on this because it is frequently an annoyance. However, making `Scalar: Default` means that simba would also need updates to traits such as `SimdComplexField`. The reason is that since `Scalar` resides in nalgebra, simba-based types are only `Scalar` through the blanket impl, which means that all corresponding simba traits would also need `Default` bounds. I still think this is a good idea, since the arguments made in the original issue still apply. It is definitely a breaking change though (although unlikely to break much code).
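The blanket-impl constraint can be sketched with hypothetical stand-in traits (these are not simba's or nalgebra's real definitions):

```rust
// Proposed: `Scalar` now requires `Default`.
trait Scalar: Default {}

// Stand-in for a simba-style trait. If types become `Scalar` only through a
// blanket impl over this trait, the trait itself must guarantee `Default`;
// without the supertrait bound below, the blanket impl would not compile
// because `T` could not be proven to implement `Default`.
trait SimdComplexFieldLike: Default {
    fn simd_norm1(self) -> Self;
}

impl<T: SimdComplexFieldLike> Scalar for T {}

impl SimdComplexFieldLike for f64 {
    fn simd_norm1(self) -> Self {
        self.abs()
    }
}

// With the bound in place, generic code over `Scalar` can rely on `Default`.
fn default_scalar<T: Scalar>() -> T {
    T::default()
}

fn main() {
    assert_eq!(default_scalar::<f64>(), 0.0);
    println!("ok");
}
```

This is why the change ripples into simba: every trait that feeds the blanket impl must grow the `Default` bound, which is what makes it a (mildly) breaking change.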