
🌍 Join the Pruna AI community!

Twitter GitHub LinkedIn Discord Reddit

(Open-source launch of Pruna AI is on March 20th, 2025 🙊 Munich event & Paris event 🇩🇪🇫🇷🇪🇺🌍)


💜 Simply make AI models faster, cheaper, smaller, greener!

Pruna AI makes AI models faster, cheaper, smaller, greener with the pruna package.

  • It supports various models, including CV, NLP, audio, and graph models for both predictive and generative AI.
  • It supports various hardware, including GPU, CPU, and edge devices.
  • It supports various compression algorithms, including quantization, pruning, distillation, caching, recovery, and compilation, which can be combined with one another (see the sketch after this list).
  • You can either experiment with smash/compression configurations yourself or let the smashing/compressing agent find the optimal configuration [Pro].
  • You can evaluate reliable quality and efficiency metrics of your base vs. smashed/compressed models. You can set it up in minutes and compress your first models in a few lines of code!
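
As a rough illustration of how these pieces fit together, here is a minimal sketch that combines two compression algorithms in one configuration. It assumes the SmashConfig and smash entry points described in the Pruna documentation; the algorithm names used here ("hqq" quantizer, "torch_compile" compiler) are illustrative, and their availability depends on your pruna version and hardware.

from pruna import SmashConfig, smash

# One configuration can stack several compression algorithms.
# The algorithm names below are illustrative; check the Pruna docs
# for the options available in your installed version.
smash_config = SmashConfig()
smash_config["quantizer"] = "hqq"           # weight quantization
smash_config["compiler"] = "torch_compile"  # graph compilation

# `model` can be any supported base model (CV, NLP, audio, ...);
# smash() would then return its compressed counterpart:
# smashed_model = smash(model=model, smash_config=smash_config)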

⏩ How to get started?

You can smash your own models by installing pruna with:

pip install pruna
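
Once installed, smashing a first model can look roughly like the sketch below. It follows the quick-start pattern from the Pruna documentation, but the checkpoint ("CompVis/stable-diffusion-v1-4") and the "deepcache" cacher are assumptions chosen for illustration; swap in whatever model and algorithms fit your setup.

import torch
from diffusers import StableDiffusionPipeline
from pruna import SmashConfig, smash

# Load any supported base model -- here a diffusers pipeline as an example.
base_model = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Describe how to compress it; "deepcache" is one caching algorithm
# that can speed up diffusion models (illustrative choice).
smash_config = SmashConfig()
smash_config["cacher"] = "deepcache"

# smash() returns an optimized model with the same call interface.
smashed_model = smash(model=base_model, smash_config=smash_config)

# Use it exactly like the original pipeline.
image = smashed_model("a photo of a prune on a wooden table").images[0]
image.save("smashed_output.png")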

You can start with these simple notebooks to experience the efficiency gains:

Use Case | Free Notebooks
3x faster Stable Diffusion models | Smash for free
Making your LLMs 4x smaller | Smash for free
Smash your model with a CPU only | Smash for free
Transcribe 2 hours of audio in less than 2 minutes with Whisper | Smash for free
100% faster Whisper transcription | Smash for free
Run your Flux model without an A100 | Smash for free
2x smaller Sana in action | Smash for free
For more details about installation and tutorials, you can check the Pruna AI documentation.



Repositories

  • awesome-ai-efficiency (Public): A curated list of materials on AI efficiency.
  • pruna (Public, Python): Pruna is a model optimization framework built for developers, enabling you to deliver faster, more efficient models with minimal overhead.
  • ComfyUI_pruna (Public, Python): This is a ComfyUI node that integrates pruna.
  • courses (Public, Jupyter Notebook): Courses on building, compressing, evaluating, and deploying efficient AI models.
  • replicate-example (Python)
  • flute (Public, C++; forked from HanGuo97/flute): Fast Matrix Multiplications for Lookup Table-Quantized LLMs.
  • .github (Public)
  • test_repository (Python)
  • stable-fast (Public, Python; forked from chengzeyi/stable-fast): Best inference performance optimization framework for HuggingFace Diffusers on NVIDIA GPUs.
  • tritonserver (Public, Python): This repository describes how to use pruna with tritonserver.
