Instant Compatibility
Know in seconds if your hardware can run any LLM.
Discover which LLMs your computer can run locally, how fast they'll be, how much RAM/VRAM and disk space they need, and how to install them.
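For a sense of the arithmetic behind those estimates: a model's memory footprint is roughly parameter count × bytes per weight at a given quantization, plus overhead for the KV cache and runtime buffers, and generation speed is bounded by how fast that footprint can stream through memory. The sketch below is a minimal illustration of that back-of-the-envelope math, not CanRunIt's actual estimator; the quantization table, the flat 20% overhead figure, and the function names are assumptions.

```ts
// Back-of-the-envelope sizing, NOT CanRunIt's real estimator.
// The bytes-per-weight table and the flat 20% KV-cache/runtime overhead
// are illustrative assumptions.
const BYTES_PER_WEIGHT: Record<string, number> = {
  f16: 2.0,
  q8_0: 1.0,
  q4_K_M: 0.6, // ~4.5 bits per weight plus quantization metadata
};

function estimateMemoryGiB(paramsBillions: number, quant: string): number {
  const weightBytes = paramsBillions * 1e9 * BYTES_PER_WEIGHT[quant];
  return (weightBytes * 1.2) / 2 ** 30; // +20% assumed overhead
}

// Decode-speed ceiling: each generated token streams the whole model through
// memory once, so tokens/s cannot exceed bandwidth / model size.
function estimateTokensPerSec(memGiB: number, bandwidthGBps: number): number {
  return bandwidthGBps / (memGiB * 1.073741824); // GiB -> GB
}

// Example: a 7B model at q4_K_M needs roughly 4.7 GiB and tops out near
// 60 tokens/s on a GPU with ~300 GB/s of memory bandwidth.
const mem = estimateMemoryGiB(7, "q4_K_M");
console.log(mem.toFixed(1), estimateTokensPerSec(mem, 300).toFixed(0));
```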
Live catalog health
Get realistic token/s predictions for your specific setup.
Step-by-step guides for Ollama, llama.cpp and LM Studio.
100% local, 100% private. Your data never leaves your machine.
The homepage stays practical: no vague promises, just a ranked slice of the catalog with fit scores, speed hints, and quick links into the full app.
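As one concrete reading of those fit scores: a score can simply express how much memory headroom a machine has for a given model, with anything that doesn't fit filtered out before ranking. The sketch below is an assumed illustration, not the site's actual scoring logic; the interface and function names are made up for the example.

```ts
// Hypothetical fit score: 0 means the model doesn't fit at all, values
// approaching 1 mean it fits with plenty of headroom. Not CanRunIt's formula.
interface CatalogModel {
  name: string;
  requiredGiB: number; // estimated RAM/VRAM footprint
}

function fitScore(model: CatalogModel, availableGiB: number): number {
  if (model.requiredGiB > availableGiB) return 0;
  return 1 - model.requiredGiB / availableGiB; // more headroom -> higher score
}

function rankCatalog(models: CatalogModel[], availableGiB: number): CatalogModel[] {
  return models
    .filter((m) => fitScore(m, availableGiB) > 0)
    .sort((a, b) => fitScore(b, availableGiB) - fitScore(a, availableGiB));
}
```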
Every page the app already has should be reachable from the home screen, not buried out of sight.
Scan your hardware automatically.
Enter RAM, VRAM, and storage by hand.
Match models to chat, code, or agents.
Browse the full normalized model database.
Put models and machines side by side.
Known GPUs and machine presets.
Community-style speed reports.
Support the project without a paywall.
Most benchmarked models on CanRunIt this week
The stack stays open, fast, and practical. Each partner below is a live source, runtime, or research anchor.
Takes less than 2 minutes. No sign-up required.