The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more!
Updated Aug 20, 2025 - Python
Extension for Scikit-learn is a seamless way to speed up your Scikit-learn application
Workflow-based Multi-platform AI Deployment Tool
oneAPI Data Analytics Library (oneDAL)
The easiest way to use Machine Learning. Mix and match underlying ML libraries and data set sources. Generate new datasets or modify existing ones with ease.
Client library to interact with various APIs used within Philips in a simple and uniform way
Local LLM Inference Library
Customized version of Google's tflite-micro
Enterprise evolution of nano-vLLM - currently in development. Built with respect for @GeeeekExplorer's foundation.
No more Hugging Face cost leaks.
A powerful, fast, scalable full-stack boilerplate for AI inference using Node.js, Python, Redis, and Docker
Arbitrary Numbers
🌱 Intelligent IoT greenhouse fan controller using AI/ML for automated climate control. Features ESP32 + DHT22 sensors, real-time Firebase integration, Flutter mobile app with TensorFlow Lite on-device inference, and Wokwi simulation. Complete full-stack solution demonstrating IoT + AI integration.
Unity TTS plugin: Piper neural synthesis + OpenJTalk Japanese + Unity AI Inference Engine. Windows/Mac/Linux/Android ready. High-quality voices for games & apps.
UniUi uses AI to allow you to talk directly to your system.
Citadel AI OS – Enterprise AI Runtime Environment for Inference, Agents, and Business Operations