CanRunIt
Local AI benchmark

Can your machine run local AI?

Discover which LLMs your computer can run locally, how fast they'll be, how much RAM/VRAM and disk space they need, and how to install them.
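As a rough illustration of the arithmetic behind those RAM/VRAM and disk estimates, here is a minimal TypeScript sketch. The bits-per-weight and overhead constants are assumptions for illustration, not CanRunIt's published formula.

```ts
// Back-of-envelope sizing (illustrative, not CanRunIt's actual model):
// quantized weights take params × bits-per-weight ÷ 8 bytes on disk, and
// running needs headroom for the KV cache, activations, and runtime buffers.
interface SizeEstimate {
  diskGB: number;   // quantized weight file on disk
  memoryGB: number; // RAM/VRAM needed to actually run it
}

function estimateSize(paramsBillions: number, bitsPerWeight: number): SizeEstimate {
  const weightsGB = (paramsBillions * 1e9 * bitsPerWeight) / 8 / 1e9;
  const overhead = 1.2; // ~20% headroom for KV cache and buffers (assumption)
  return { diskGB: weightsGB, memoryGB: weightsGB * overhead };
}

// A 7B model at Q4 (~4.5 bits/weight once metadata is counted):
console.log(estimateSize(7, 4.5)); // ≈ { diskGB: 3.9, memoryGB: 4.7 }
```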

Open-weight models: 1,484+ curated, normalized catalog entries
Families tracked: diverse model families
Backends supported: Ollama, MLX, vLLM and more
Routes surfaced: detect, manual, compare, benchmarks

Live catalog health

Can this machine run it locally?

The catalog spans 1,484+ models. A fit score summarizes overall local readiness for your hardware, backed by live coverage, confidence, breadth, and source-mix metrics.

Source mix: merged from the latest Ollama, Hugging Face and fallback catalog sources (1,484 entries).
Ollama: 225 entries (15%)
Hugging Face: 1,207 entries (81%)
GPT4All: 32 entries (2%)
Core fallback: 20 entries (1%)
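The percentages follow directly from the entry counts; a quick TypeScript check:

```ts
// Recomputing the source-mix percentages above from the raw entry counts.
const sources: Record<string, number> = {
  Ollama: 225,
  "Hugging Face": 1207,
  GPT4All: 32,
  "Core fallback": 20,
};
const total = Object.values(sources).reduce((a, b) => a + b, 0); // 1484
for (const [name, count] of Object.entries(sources)) {
  console.log(`${name}: ${((count / total) * 100).toFixed(0)}%`); // 15, 81, 2, 1
}
```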
Coverage, confidence, and backend counts refresh live with the catalog.
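For illustration only, here is a minimal sketch of how a 0–100% fit score could blend memory headroom with those catalog-quality signals. The weights and formula are assumptions, not CanRunIt's actual scoring.

```ts
// Hypothetical fit-score sketch: combines memory headroom with catalog
// quality signals. Weights are illustrative assumptions.
function fitScore(
  requiredGB: number,  // estimated memory the model needs
  availableGB: number, // RAM or VRAM on this machine
  coverage: number,    // 0–1: fraction of catalog fields populated
  confidence: number   // 0–1: trust in the merged source data
): number {
  const headroom = Math.min(availableGB / requiredGB, 2) / 2; // cap at 2× headroom
  const score = 0.6 * headroom + 0.25 * coverage + 0.15 * confidence;
  return Math.round(score * 100);
}

// 8 GB VRAM vs a ~4.7 GB 7B Q4 model, with strong catalog data:
console.log(fitScore(4.7, 8, 0.9, 0.85)); // ≈ 86%
```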

Why CanRunIt?

Instant answers, realistic speed estimates, guided installs, and full privacy.

01. Instant Compatibility
Know in seconds if your hardware can run any LLM.
Readiness: 92%

02. Speed Estimates
Get realistic token/s predictions for your specific setup (a rough sketch follows this list).
Readiness: 86%

03. Easy Install
Step-by-step guides for Ollama, llama.cpp and LM Studio.
Readiness: 78%

04. No Cloud Needed
100% local, 100% private. Your data never leaves your machine.
Readiness: 88%
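Where do token/s predictions like those in feature 02 come from? A common back-of-envelope model, sketched below under stated assumptions: each generated token streams every weight through memory once, so decode speed is roughly memory bandwidth divided by model size. The bandwidth figure and efficiency factor here are assumptions, not CanRunIt's measured values.

```ts
// Rough decode-speed estimate: tokens/s is bounded by
// memory bandwidth ÷ model size in memory.
function estimateTokensPerSec(modelGB: number, bandwidthGBps: number): number {
  const efficiency = 0.6; // real runtimes hit ~50–70% of peak bandwidth (assumption)
  return (bandwidthGBps * efficiency) / modelGB;
}

// RTX 4060 laptop GPU (~256 GB/s) running a ~4.7 GB 7B Q4 model:
console.log(estimateTokensPerSec(4.7, 256).toFixed(0)); // ≈ 33 tok/s
```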
What can I run?

Pick a machine, choose a use case, get a real recommendation.

This stays practical: no vague promises, just a ranked catalog slice with fit scores, speed hints, and quick jump points to the full app.

Selected profile: RTX 4060 laptop
Smooth for 7B models, usable for many 14B Q4 models (see the sizing check below).
RAM: 16 GB
VRAM: 8 GB
Disk: 150 GB
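Applying the same rule of thumb as the sizing sketch earlier (an assumed ~4.5 bits/weight plus ~20% runtime overhead): a 7B Q4 model needs roughly 4.7 GB, which fits entirely in the 8 GB of VRAM, while a 14B Q4 model needs roughly 9.4 GB and spills past VRAM into system RAM. That gap is why 7B runs "smooth" here and 14B is merely "usable".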
Top matches
RTX 4060 laptop + Chat: the best option for your current profile, ranked by fit score once the live catalog loads.
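Once you've picked a match, running it locally is a short hop. A minimal sketch against Ollama's local REST API; the model name is illustrative, and Ollama must already be installed with the model pulled.

```ts
// Query a locally running Ollama server over its REST API.
// "llama3" is an illustrative model name.
async function askLocal(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  const data = await res.json();
  return data.response; // the generated text
}

askLocal("Why run LLMs locally?").then(console.log);
```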

Popular Models

Most benchmarked models on CanRunIt this week

Ready to find your perfect local AI?

Takes less than 2 minutes. No sign-up required.