Featured Tool

llama.cpp

C/C++ port of Meta's LLaMA inference for efficient CPU execution

Open Source · Self-Hosted · Offline Capable

About

llama.cpp is a C/C++ port of inference for Meta's LLaMA model. It runs LLaMA-family models with minimal dependencies, and its support for integer quantization makes CPU-only inference practical with reasonable performance.
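To see why quantization helps CPU inference, here is a minimal sketch of symmetric 8-bit block quantization, the general idea behind llama.cpp's quantized formats (illustrative only; the actual kernels use block-structured formats such as Q4_K with per-block scales and SIMD code):

```python
def quantize_q8(block):
    # Symmetric int8 quantization: one scale per block,
    # values mapped to the integer range [-127, 127].
    scale = max(abs(x) for x in block) / 127 or 1.0
    q = [round(x / scale) for x in block]
    return scale, q

def dequantize_q8(scale, q):
    # Recover approximate float weights from the integers.
    return [scale * v for v in q]

weights = [0.12, -0.5, 0.33, 0.9]
scale, q = quantize_q8(weights)
approx = dequantize_q8(scale, q)
```

Each float32 weight (4 bytes) is stored as a single int8 plus a shared per-block scale, roughly a 4x memory reduction, at the cost of a small rounding error per weight.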


Details

Price
Free
Platform
Local/Desktop
Difficulty
Intermediate (3/5)
License
MIT
Added
Jan 29, 2026

Similar Tools

Featured

Run large language models locally with a simple CLI interface

Open Source · Self-Hosted · Offline
Beginner
Featured

High-throughput LLM serving engine with PagedAttention

Open Source · Self-Hosted · Offline · GPU 16GB+
Intermediate

Hugging Face's high-performance text-generation server

Open Source · Self-Hosted · Offline · GPU 16GB+
Advanced