Backend Management

251 backends available

llama-cpp
LLM inference in C/C++

Repository: localai · License: MIT
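
As a rough illustration, the sketch below calls a llama-cpp-served model through LocalAI's OpenAI-compatible API; the base URL and the model name are assumptions, not values this page guarantees.

```python
# Minimal sketch: chat completion against a local LocalAI instance whose model
# is served by the llama-cpp backend. Base URL and model name are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="my-llama-model",  # hypothetical model name configured in LocalAI
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```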

whisper
Port of OpenAI's Whisper model in C/C++

Repository: localai · License: MIT
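
For transcription, a model served by this backend can be reached through the OpenAI-compatible audio endpoint; a minimal sketch, assuming a model named whisper-1 is configured.

```python
# Minimal sketch: transcription via LocalAI's OpenAI-compatible
# /v1/audio/transcriptions endpoint; the model name is an assumption.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

with open("speech.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
print(transcript.text)
```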

stablediffusion-ggml
Stable Diffusion and Flux in pure C/C++

Repository: localai · License: MIT
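
Image generation goes through the OpenAI-compatible images endpoint; a minimal sketch, assuming a model named stablediffusion is configured.

```python
# Minimal sketch: image generation via LocalAI's OpenAI-compatible
# /v1/images/generations endpoint; the model name is an assumption.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

result = client.images.generate(
    model="stablediffusion",  # hypothetical model name configured in LocalAI
    prompt="a watercolor fox in a snowy forest",
    size="512x512",
)
# Depending on server configuration the result carries a URL or base64 data.
print(result.data[0].url)
```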

rfdetr
RF-DETR is a real-time, transformer-based object detection model architecture developed by Roboflow and released under the Apache 2.0 license. RF-DETR is the first real-time model to exceed 60 AP on the Microsoft COCO benchmark while remaining competitive at base sizes. It also achieves state-of-the-art performance on RF100-VL, an object detection benchmark that measures model domain adaptability to real-world problems. RF-DETR is the fastest and most accurate model for its size when compared to current real-time object detection models, and it is small enough to run on the edge using Inference, making it an ideal choice for deployments that need both strong accuracy and real-time performance.

Repository: localai · License: Apache-2.0
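
The underlying rfdetr package exposes a small prediction API; the sketch below follows the upstream README, and exact class or argument names may differ between versions.

```python
# Minimal sketch of the rfdetr package's documented usage; class and argument
# names follow the upstream README and may vary by version.
from PIL import Image
from rfdetr import RFDETRBase

model = RFDETRBase()  # downloads pretrained COCO weights on first use
image = Image.open("street.jpg")

detections = model.predict(image, threshold=0.5)
print(detections)  # bounding boxes, class ids, and confidence scores
```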

vllm
vLLM is a fast and easy-to-use library for LLM inference and serving. Originally developed in the Sky Computing Lab at UC Berkeley, vLLM has evolved into a community-driven project with contributions from both academia and industry. vLLM is fast with:

- State-of-the-art serving throughput
- Efficient management of attention key and value memory with PagedAttention
- Continuous batching of incoming requests
- Fast model execution with CUDA/HIP graphs
- Quantizations: GPTQ, AWQ, AutoRound, INT4, INT8, and FP8
- Optimized CUDA kernels, including integration with FlashAttention and FlashInfer
- Speculative decoding
- Chunked prefill

Repository: localai · License: Apache-2.0
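
A minimal sketch of vLLM's offline inference API; the model ID is an example, not a recommendation.

```python
# Minimal sketch of vLLM offline inference with LLM and SamplingParams.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # example model ID
params = SamplingParams(temperature=0.8, max_tokens=64)

outputs = llm.generate(["The capital of France is"], params)
for out in outputs:
    print(out.outputs[0].text)
```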

vllm-omni
vLLM-Omni is a unified interface for multimodal generation with vLLM. It supports image generation (text-to-image, image editing), video generation (text-to-video, image-to-video), text generation with multimodal inputs, and text-to-speech generation. It supports only NVIDIA (CUDA) and ROCm platforms.

Repository: localai · License: Apache-2.0

mlx
Run LLMs with MLX

Repository: localai · License: MIT
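
A minimal sketch using the mlx-lm package directly (Apple Silicon only); the model ID is an example from the mlx-community hub.

```python
# Minimal sketch of text generation with mlx-lm; requires Apple Silicon.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-7B-Instruct-v0.3-4bit")  # example ID
text = generate(model, tokenizer, prompt="Write a haiku about the sea.", max_tokens=64)
print(text)
```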

mlx-vlm
Run Vision-Language Models with MLX

Repository: localai · License: MIT

mlx-audio
Run Audio Models with MLX

Repository: localai · License: MIT

rerankers

Run reranker models that score query–document relevance for retrieval pipelines.
Repository: localai

transformers
Transformers acts as the model-definition framework for state-of-the-art machine learning models in text, computer vision, audio, video, and multimodal domains, for both inference and training. It centralizes the model definition so that this definition is agreed upon across the ecosystem. transformers is the pivot across frameworks: if a model definition is supported, it will be compatible with the majority of training frameworks (Axolotl, Unsloth, DeepSpeed, FSDP, PyTorch-Lightning, ...), inference engines (vLLM, SGLang, TGI, ...), and adjacent modeling libraries (llama.cpp, mlx, ...) which leverage the model definition from transformers.

Repository: localai · License: Apache-2.0
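
A minimal sketch of the transformers pipeline API; the checkpoint is an example.

```python
# Minimal sketch: high-level inference with the transformers pipeline API.
from transformers import pipeline

pipe = pipeline("text-generation", model="gpt2")  # example checkpoint
print(pipe("The quick brown fox", max_new_tokens=20)[0]["generated_text"])
```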

diffusers
🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules. Whether you're looking for a simple inference solution or training your own diffusion models, 🤗 Diffusers is a modular toolbox that supports both.

Repository: localai · License: Apache-2.0
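
A minimal sketch of a diffusers text-to-image pipeline; the model ID is an example and a CUDA device is assumed.

```python
# Minimal sketch: text-to-image with a diffusers pipeline (CUDA assumed).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # example model ID
    torch_dtype=torch.float16,
)
pipe.to("cuda")

image = pipe("an astronaut riding a horse on the moon").images[0]
image.save("astronaut.png")
```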

faster-whisper
faster-whisper is a reimplementation of OpenAI's Whisper model using CTranslate2, which is a fast inference engine for Transformer models. This implementation is up to 4 times faster than openai/whisper for the same accuracy while using less memory. The efficiency can be further improved with 8-bit quantization on both CPU and GPU.

Repository: localai · License: MIT
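
A minimal sketch of faster-whisper, including the 8-bit quantization mentioned above; the model size and audio file are examples.

```python
# Minimal sketch: CPU transcription with faster-whisper using int8 quantization.
from faster_whisper import WhisperModel

model = WhisperModel("small", device="cpu", compute_type="int8")

segments, info = model.transcribe("speech.wav")
print(f"Detected language: {info.language}")
for seg in segments:
    print(f"[{seg.start:.2f}s -> {seg.end:.2f}s] {seg.text}")
```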

moonshine
Moonshine is a fast, accurate, and efficient speech-to-text transcription model using ONNX Runtime. It provides real-time transcription capabilities with support for multiple model sizes and GPU acceleration.

Repository: localai · License: MIT

kokoro
Kokoro is an open-weight TTS model with 82 million parameters. Despite its lightweight architecture, it delivers comparable quality to larger models while being significantly faster and more cost-efficient. With Apache-licensed weights, Kokoro can be deployed anywhere from production environments to personal projects.

Repository: localai · License: Apache-2.0
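
Speech synthesis can go through the OpenAI-compatible audio endpoint; a minimal sketch, assuming a model named kokoro and one of Kokoro's published voices are configured.

```python
# Minimal sketch: TTS via LocalAI's OpenAI-compatible /v1/audio/speech
# endpoint; model and voice names are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.audio.speech.create(
    model="kokoro",    # hypothetical model name configured in LocalAI
    voice="af_heart",  # one of Kokoro's published voices
    input="Hello from a lightweight open-weight TTS model.",
)
with open("hello.wav", "wb") as f:
    f.write(resp.read())
```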

coqui
🐸 Coqui TTS is a library for advanced Text-to-Speech generation.

- 🚀 Pretrained models in 1100+ languages.
- 🛠️ Tools for training new models and fine-tuning existing models in any language.
- 📚 Utilities for dataset analysis and curation.

Repository: localai · License: MPL-2.0
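
A minimal sketch of the Coqui TTS Python API; the model name is one of the library's published pretrained voices.

```python
# Minimal sketch: synthesize speech to a file with the Coqui TTS API.
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")  # example voice
tts.tts_to_file(text="Hello from Coqui TTS.", file_path="coqui.wav")
```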

chatterbox
Resemble AI's first production-grade open source TTS model. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs, and is consistently preferred in side-by-side evaluations. Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life. It's also the first open source TTS model to support emotion exaggeration control, a powerful feature that makes your voices stand out.

Repository: localai · License: MIT
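
A minimal sketch following Chatterbox's README, including the emotion exaggeration control described above; argument names may differ between versions and a CUDA device is assumed.

```python
# Minimal sketch of Chatterbox TTS with emotion exaggeration (CUDA assumed);
# follows the upstream README, details may vary by version.
import torchaudio
from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")
wav = model.generate(
    "Today is the day. Let's do this!",
    exaggeration=0.7,  # values above the default push toward more expressive delivery
)
torchaudio.save("chatterbox.wav", wav, model.sr)
```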

vibevoice
VibeVoice-Realtime is a real-time text-to-speech model that generates natural-sounding speech.

Repository: localai · License: MIT

qwen-tts
Qwen3-TTS is a high-quality text-to-speech model supporting custom voice, voice design, and voice cloning.

Repository: localai · License: Apache-2.0

pocket-tts
Pocket TTS is a lightweight text-to-speech model designed to run efficiently on CPUs.

Repository: localai · License: MIT

piper
A fast, local neural text-to-speech system

Repository: localai · License: MIT
