
Does the storch tensor not support `tensor[2::4]` set-value / tensor slice batch broadcast operations? How can I do it Python/PyTorch style? #82

Open
mullerhai opened this issue Jan 1, 2025 · 4 comments

Comments

mullerhai commented Jan 1, 2025

Hi,
In Python, batched broadcast assignment into a tensor slice is very easy and convenient, like this:

  encoding[:, 0::2] = torch.sin(position * div_term)
  encoding[:, 1::2] = torch.cos(position * div_term)

But in storch I cannot assign values in batch like that. How can I do the same thing?

If I write the code like this instead:

// elementwise fallback: pe = sin(pos / 10000^(2i/d)) for even columns,
//                       pe = cos(pos / 10000^(2i/d)) for odd columns
for (int ti = 0; ti < pTimeSize; ti++) {
    for (int ba = 0; ba < pBatchSize; ba++) {
        for (int ch = 0; ch < pChannelSize; ch++) {
            for (int ro = 0; ro < pRowSize; ro++) {
                for (int co = 0; co < pColumnSize; co++) {
                    int   i   = co / 2;
                    int   pos = ch;
                    float div_term = pow(10000, 2 * i / (double)pColumnSize);
                    float pe = (co % 2 == 0) ? sin(pos / div_term)
                                             : cos(pos / div_term);
                    (*pPositionalEncoding)[Index5D(peShape, ti, ba, ch, ro, co)] = pe;
                }
            }
        }
    }
}

I think this does not suit the PyTorch style.

Or like this:

private def initializeEncodings(): Unit =
  for (pos <- 0 until config.maxSeqLen; i <- 0 until config.embeddingDim) {
    // the exponent must use floating-point division, otherwise
    // 2 * (i / 2) / embeddingDim truncates to 0 for Int embeddingDim
    val angle = pos / math.pow(10000, 2 * (i / 2) / config.embeddingDim.toDouble)
    encodings(pos)(i) = if (i % 2 == 0) math.sin(angle) else math.cos(angle)
  }
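For reference, the same closed-form computation can be sketched in plain Python with no torch dependency; the `max_len`/`d_model` values below are illustrative, not from the issue:

```python
import math

max_len, d_model = 4, 6  # illustrative sizes

# pe[pos][i] = sin(pos / 10000^(2*(i//2)/d)) for even i,
#              cos(pos / 10000^(2*(i//2)/d)) for odd  i
encoding = [[0.0] * d_model for _ in range(max_len)]
for pos in range(max_len):
    for i in range(d_model):
        angle = pos / math.pow(10000, 2 * (i // 2) / d_model)
        encoding[pos][i] = math.sin(angle) if i % 2 == 0 else math.cos(angle)
```

This is exactly what the slice-assignment one-liners compute, just element by element.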

Thanks!


  val arr = Seq(max_len, d_model)
  var encoding = torch.zeros(size = arr.map(_.toInt), dtype = torch.Float32)
  val position = torch.arange(0, max_len, dtype = torch.Float32).unsqueeze(1)
  val div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-torch.log(Tensor(10000.0)) / d_model))
  // desired, PyTorch-style:
  //   encoding[:, 0::2] = torch.sin(position * div_term)
  //   encoding[:, 1::2] = torch.cos(position * div_term)
  val sliceSin = torch.indexing.Slice(Some(0), Some(2), None)
  val sliceCos = torch.indexing.Slice(Some(1), Some(2), None)
  encoding(::, sliceSin) = torch.sin(position * div_term) // does not compile
  encoding(::, sliceCos) = torch.cos(position * div_term) // does not compile
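For comparison, Python's `0::2` notation is just sugar for a built-in `slice` object (which is what a `Slice(start, stop, step)` wrapper mirrors, though the storch argument order may differ). A plain-Python sketch of the extended-slice assignment semantics being asked for, with illustrative sizes:

```python
import math

d_model = 6
row = [0.0] * d_model

# 0::2 == slice(0, None, 2), 1::2 == slice(1, None, 2)
even, odd = slice(0, None, 2), slice(1, None, 2)

sins = [math.sin(i) for i in range(d_model // 2)]
coss = [math.cos(i) for i in range(d_model // 2)]

# extended-slice assignment writes into every other column in place
row[even] = sins
row[odd] = coss
```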
@mullerhai mullerhai changed the title does storch tensor not support tensor[2::4] set value does storch tensor not support tensor[2::4] set value tensor slice batch broadcast opration how to do like python pytorch style ? Jan 2, 2025
@mullerhai (Author)

In C++ libtorch, I see code written like this:

  auto pe = torch::zeros({1, this->max_len, this->d_model});
  auto position = torch::arange(0, max_len).unsqueeze(1);
  auto div_term = torch::exp(torch::arange(0, d_model, 2) * -std::log(10000.0) / d_model);
  pe.slice(2, 0, pe.size(2), 2) = torch::sin(position * div_term);
  pe.slice(2, 1, pe.size(2), 2) = torch::cos(position * div_term);
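libtorch's `Tensor::slice(dim, start, end, step)` selects every `step`-th index of `dim` in `[start, end)`, so `pe.slice(2, 0, pe.size(2), 2)` corresponds to `pe[:, :, 0::2]` in Python. A small pure-Python emulation of that mapping for the last dimension of a nested-list "3-D tensor" (names here are illustrative):

```python
def slice_last_dim(nested, start, end, step):
    """Emulate t.slice(-1, start, end, step) on a nested-list 3-D 'tensor'."""
    return [[row[start:end:step] for row in mat] for mat in nested]

# a 1 x 2 x 4 "tensor"
pe = [[[0, 1, 2, 3], [4, 5, 6, 7]]]
evens = slice_last_dim(pe, 0, 4, 2)  # pe[:, :, 0::2]
odds = slice_last_dim(pe, 1, 4, 2)   # pe[:, :, 1::2]
```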

@mullerhai (Author)

The original Python PyTorch code:

class PositionalEncoding(nn.Module):
    def __init__(self, d_model, max_len=28 * 28):
        super(PositionalEncoding, self).__init__()
        self.encoding = torch.zeros(max_len, d_model)
        position = torch.arange(0, max_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-torch.log(torch.tensor(10000.0)) / d_model))
        self.encoding[:, 0::2] = torch.sin(position * div_term)
        self.encoding[:, 1::2] = torch.cos(position * div_term)
        self.encoding = self.encoding.unsqueeze(0)

    def forward(self, x):
        return x + self.encoding[:, :x.size(1)].to(x.device)

@mullerhai (Author)

How do I set the values? I tried:

  var encoding = torch.zeros(size = arr.map(_.toInt), dtype = this.paramType)
  val position = torch.arange(0, max_len, dtype =this.paramType).unsqueeze(1)
  val div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-torch.log(Tensor(10000.0)) / d_model))
  val sinPosition = torch.sin(position * div_term).to(dtype = this.paramType)
  val cosPosition = torch.cos(position * div_term).to(dtype = this.paramType)
  val indexSin = torch.Tensor(Seq(0L, 1L))
  val indexCos = torch.Tensor(Seq(1L, 1L))
  encoding.index(::,13).add(sinPosition)
  encoding.index(::,13).equal(sinPosition)
//  encoding.index(::,13).equal_(sinPosition)
  encoding.index(::) = sinPosition    //can not set value 
  encoding.index(::,Seq[Long](2,1,13)) = sinPosition  //can not set value  

The console error:

missing argument list for method index in class Tensor

  def index[T <: Boolean | Long]
  (indices: (torch.Slice | Int | Long | torch.Tensor[torch.Bool] |
    torch.Tensor[torch.UInt8] | torch.Tensor[torch.Int64] | Seq[T] | None.type |
     torch.Ellipsis)*)
    (using evidence$1: scala.reflect.ClassTag[T]): torch.Tensor[D]
  encoding.index(::) = sinPosition


@mullerhai (Author)

When I use `update`:

import torch.{---, ::,Slice}
  encoding.index(::,1.::(13)).add(sinPosition)
  encoding.index(::,Seq[Long](2,1,13)).add(sinPosition)
  encoding.index(::,13).equal(sinPosition)
  encoding.update(indices = Seq(2.::(21),1.::(13)),values = sinPosition)
  encoding.update(indices = Seq(---,2.::(21),1.::(13)),values = sinPosition)
  encoding.update(indices = Seq(---,::(21),1.::(13)),values = sinPosition)
  encoding.update(indices = Seq(---,1.::,1.::(13)),values = sinPosition)
  encoding.update(indices = Seq(---,::,1.::(13)),values = sinPosition)
  encoding.update(::,1.::(13)) = sinPosition  //error
  encoding.update(::,Seq[Long](2,1,13)) = sinPosition  //error 

Console output:

value update is not a member of (
  Seq[torch.Slice | Int | (torch.Tensor[torch.Int64] | Seq[Boolean | Long]) | (
    Long | torch.Tensor[torch.Bool] | (torch.Tensor[torch.UInt8] | None.type |
    torch.Ellipsis))],
torch.Tensor[D] | torch.ScalaType) => torch.Tensor[D]
  encoding.update(::,Seq[Long](2,1,13)) = sinPosition
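The error arises because in Scala `obj(args) = value` desugars to `obj.update(args, value)`, so `encoding.update(...) = x` tries to call `update` on the *result* of the first call (eta-expanded to a function value), which has no `update` member. In Python, `t[...] = v` desugars the same way into a single `__setitem__(index_tuple, value)` call, which is the shape `update(indices, values)` mirrors. A tiny Python sketch of that desugaring (the `Grid` class is purely illustrative):

```python
class Grid:
    """Minimal 2-D container whose __setitem__ accepts (row_slice, col_slice)."""
    def __init__(self, rows, cols):
        self.data = [[0.0] * cols for _ in range(rows)]

    def __setitem__(self, index, values):
        rows, cols = index  # e.g. (slice(None), slice(0, None, 2))
        for r, vals in zip(range(*rows.indices(len(self.data))), values):
            self.data[r][cols] = vals  # extended-slice write into one row

g = Grid(2, 4)
# g[:, 0::2] = ... is sugar for g.__setitem__((slice(None), slice(0, None, 2)), ...)
g[:, 0::2] = [[1.0, 2.0], [3.0, 4.0]]
```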
