Update ti.Texture with video stream #20

Open

MargeritPierre opened this issue Sep 15, 2024 · 0 comments

Hello,

First, thank you for this amazing work, which lets people who are not too familiar with GPU architectures start playing with them anyway!

I would like to stream the video from a local camera (getUserMedia) to a texture that would then be processed by a ti.kernel (filter effects, image tracking, etc.) and rendered to the canvas.

Here is a minimal working example of what I managed to achieve using ti.Texture.createFromHtmlImage() (the video processing here is just a simple edge-detection kernel):

// CREATE THE CANVAS
const canvas = document.createElement('canvas');
document.body.appendChild(canvas);
canvas.width = canvas.clientWidth;
canvas.height = canvas.clientHeight;

// GRAB THE VIDEO OBJECT
var videoWidth = 640; var videoHeight = 480; // some default values
var videoReady = false;
var video = null;
async function setupVideo() {
  if (video == null) {
    video = document.createElement("video");
    let stream = await navigator.mediaDevices.getUserMedia({
      video: { width: videoWidth, height: videoHeight },
      audio: false,
    });
    video.srcObject = stream;
    video.onloadedmetadata = () => {
      video.play();
      videoReady = true;
    };
  }
}
setupVideo();

// MAIN FUNCTION
let main = async () => {
    await ti.init();
    let ticanvas = new ti.Canvas(canvas);

    // Declare empty input and output textures (4 channels each)
    let videoTexture = await ti.texture(4, [videoWidth, videoHeight]);
    let outputTexture = await ti.texture(4, [videoWidth, videoHeight]);

    // Kernel scope
    ti.addToKernelScope({
      videoWidth,
      videoHeight,
      videoTexture,
      outputTexture,
    });
    
    // Kernel function
    let processVideo = ti.kernel(() => {
      for (let I of ti.ndrange(videoWidth, videoHeight)) {
        // Central-difference gradient of the video texture
        let Gx = ti.textureLoad(videoTexture, I + [1, 0]) - ti.textureLoad(videoTexture, I + [-1, 0]);
        let Gy = ti.textureLoad(videoTexture, I + [0, 1]) - ti.textureLoad(videoTexture, I + [0, -1]);
        let g = ti.sqrt(Gx ** 2 + Gy ** 2);
        ti.textureStore(outputTexture, I, g);
      }
    });

    // Frame callback
    async function frame() {
      // Schedule the next frame first so the loop keeps running
      requestAnimationFrame(frame);
      if (videoReady) {
        // Copy the current video frame into a (new) texture each frame
        let tex = await ti.Texture.createFromHtmlImage(video);
        videoTexture.copyFrom(tex);
        // Apply the kernel function
        processVideo();
        // Push the result onto the canvas
        await ticanvas.setImage(outputTexture);
      }
    }
    await frame();

}
main();

It works reasonably well most of the time (occasionally it freezes the page), but having to create a new texture on every frame does not seem ideal to me (maybe I am wrong). Indeed, from this tutorial or even this one, it seems that there are more efficient ways to transfer the video stream to a texture, for example using device.queue.copyExternalImageToTexture. I can see that taichi.js uses this function to upload bitmaps into textures (in Runtime.js), but I could not reuse that part because I did not find any way to reach the Runtime.device from any ti object. Or maybe this is not the right way to achieve it?
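For what it's worth, the pattern described in those tutorials looks roughly like the sketch below. The WebGPU call itself (device.queue.copyExternalImageToTexture) is standard, but everything taichi.js-specific here is an assumption on my part: getGPUDevice() and getGPUTexture() are hypothetical accessor names, not existing API, and the destination texture would presumably also need COPY_DST | RENDER_ATTACHMENT usage, which a texture allocated by taichi.js may not have.

// Sketch only. `device` would be the runtime's GPUDevice and `gpuTexture` the
// WebGPU texture backing `videoTexture`; neither seems reachable through the
// public API today, so the accessors used below are hypothetical.
async function uploadVideoFrame(device, gpuTexture, video) {
  // Standard WebGPU: copy the current video frame straight into an existing
  // GPU texture, with no per-frame ti.Texture allocation.
  // (Some browsers may require going through createImageBitmap(video) first.)
  device.queue.copyExternalImageToTexture(
    { source: video },
    { texture: gpuTexture },
    [video.videoWidth, video.videoHeight]
  );
}

// In the frame callback, this would replace createFromHtmlImage + copyFrom:
// uploadVideoFrame(ti.getGPUDevice(), videoTexture.getGPUTexture(), video); // hypothetical accessors

The advantage would be a single GPU-side copy per frame instead of allocating and copying a whole new texture every time.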
