Hardware Acceleration Research #19

Open
4 tasks
Ishaan-Datta opened this issue Aug 30, 2024 · 0 comments
@Ishaan-Datta
Collaborator
Report on items we could purchase or use to improve performance, including how much improvement we can expect from each.

  • TPUs: Custom hardware accelerators designed by Google for machine learning workloads, including neural network inference. Expected improvement: TPUs are optimized for matrix multiplication, the dominant operation in neural networks, so we can expect a substantial speedup for inference, especially with large models.
  • Edge AI accelerators: Compact, power-efficient accelerators designed for edge computing devices. Expected improvement: they are tailored for low-power, real-time applications, which makes them a good fit for robotics and other embedded systems; expect better performance for specific AI workloads at the edge.
  • FPGAs: Preprocessing tasks such as image resizing, color space conversion, and noise reduction can be offloaded to an FPGA. Custom FPGA logic can be designed and implemented to process image data efficiently in real time.
  • Compatibility: Ensure that the hardware accelerator is compatible with our software framework (e.g., TensorFlow, PyTorch) and that appropriate drivers and libraries are available; see the sketch after this list for a quick availability check.
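As a first pass on the compatibility item, here is a minimal sketch (assuming PyTorch and TensorFlow are both installed on the target machine) that checks which accelerators each framework can actually see. It is only a probe, not an evaluation: any TPU- or Edge-TPU-specific runtime would still need to be installed separately.

```python
# Minimal sketch, assuming PyTorch and TensorFlow are installed, for checking
# which accelerators the frameworks can see on the target machine before we
# commit to new hardware. TPU support requires a vendor runtime (e.g. the
# torch_xla back-end or the TensorFlow TPU runtime), which this does not install.
import torch
import tensorflow as tf

# PyTorch: reports whether a CUDA-capable GPU is visible to this build.
print("PyTorch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))

# TensorFlow: lists every physical device the installed build can use
# (CPU, GPU, and TPU if the corresponding runtime is present).
for device in tf.config.list_physical_devices():
    print("TensorFlow device:", device)
```

Running this on each candidate platform would tell us up front whether the drivers and framework builds line up before we order anything.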
@vangeliq added this to the MVP milestone Sep 2, 2024