Hardware Database
Known GPUs and machine presets used to infer RAM/VRAM and recommendation confidence.
| GPU | Vendor | VRAM | Class score | Notes |
|---|---|---|---|---|
| NVIDIA RTX 4090 | nvidia | 24GB | 100/100 | Excellent for 7B-70B quantized models. |
| NVIDIA RTX 4080 | nvidia | 16GB | 88/100 | Great 7B-32B comfort zone. |
| NVIDIA RTX 4070 / 4070 Ti | nvidia | 12GB | 76/100 | Very strong for 7B-14B, selective 32B. |
| NVIDIA RTX 4060 / 4060 Ti | nvidia | 8GB | 62/100 | Good mainstream local AI GPU. |
| NVIDIA RTX 3060 12GB | nvidia | 12GB | 66/100 | Excellent value because of its 12GB VRAM. |
| NVIDIA RTX 2060 6GB | nvidia | 6GB | 42/100 | Good for small models; limited for 14B+. |
| AMD RX 7900 XT/XTX | amd | 20GB | 82/100 | Strong VRAM; backend support varies by OS. |
| AMD RX 6800 / 6900 | amd | 16GB | 70/100 | Good VRAM headroom with ROCm/llama.cpp setups. |
| Apple M3 Max | apple | 48GB | 86/100 | Unified memory; prefer MLX where available. |
| Apple M2 Pro / M3 Pro | apple | 24GB | 68/100 | Comfortable for 7B-14B MLX/GGUF. |
| Apple M1/M2 Air | apple | 10GB | 46/100 | Usable for small local models with unified memory. |
| Intel integrated GPU | intel | 0GB | 18/100 | Usually CPU-first for LLMs. |
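The entries above can be held in a small lookup structure so a reported GPU name resolves to VRAM and a class score. This is a minimal sketch only; `GpuEntry`, `GPU_DB`, and the substring-matching helper are illustrative assumptions, not the actual database schema or matching logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class GpuEntry:
    name: str
    vendor: str        # "nvidia" | "amd" | "apple" | "intel"
    vram_gb: int
    class_score: int   # 0-100 recommendation confidence

# Subset of the table above, for illustration.
GPU_DB = [
    GpuEntry("NVIDIA RTX 4090", "nvidia", 24, 100),
    GpuEntry("NVIDIA RTX 4080", "nvidia", 16, 88),
    GpuEntry("NVIDIA RTX 3060 12GB", "nvidia", 12, 66),
    GpuEntry("Apple M3 Max", "apple", 48, 86),
    GpuEntry("Intel integrated GPU", "intel", 0, 18),
]

def lookup_gpu(reported_name: str) -> Optional[GpuEntry]:
    """Case-insensitive substring match against known GPU names."""
    needle = reported_name.lower()
    for entry in GPU_DB:
        known = entry.name.lower()
        if known in needle or needle in known:
            return entry
    return None
```

A match yields both the VRAM to assume and a confidence for the recommendation; an unknown name returns `None`, which would fall back to a machine preset below.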
Machine presets
| Preset | Form factor | RAM | VRAM | Notes |
|---|---|---|---|---|
| RTX 4060 laptop | laptop | 16GB | 8GB | Smooth for 7B; usable for many 14B Q4 models. |
| RTX 4070 desktop | desktop | 32GB | 12GB | Strong all-rounder for chat, code, and agents. |
| MacBook Pro Apple Silicon 64GB | mac | 64GB | 42GB | Excellent MLX/unified-memory profile. |
| CPU-only laptop 16GB | laptop | 16GB | 0GB | Best with 3B-8B quantized models. |
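A preset's VRAM figure can gate which model sizes get recommended. The sketch below assumes a simple rule of thumb (~0.6 GB per billion parameters at Q4 plus a fixed overhead for context and buffers); the constants and the `fits_in_vram` helper are illustrative, not the app's actual formula.

```python
# Preset table from above: name -> assumed RAM/VRAM in GB.
PRESETS = {
    "RTX 4060 laptop":      {"ram_gb": 16, "vram_gb": 8},
    "RTX 4070 desktop":     {"ram_gb": 32, "vram_gb": 12},
    "CPU-only laptop 16GB": {"ram_gb": 16, "vram_gb": 0},
}

def fits_in_vram(preset: str, params_b: float,
                 gb_per_b: float = 0.6, overhead_gb: float = 1.5) -> bool:
    """True if a Q4-quantized model of `params_b` billion parameters
    plausibly fits entirely in the preset's VRAM (rule of thumb only)."""
    vram = PRESETS[preset]["vram_gb"]
    if vram == 0:
        return False  # CPU-only preset: would run from system RAM instead
    return params_b * gb_per_b + overhead_gb <= vram
```

Under these assumed constants, a 7B Q4 model fits fully on the 8GB laptop preset while a 14B model does not; the 14B-on-8GB case in the table relies on partial offload rather than a full VRAM fit.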