Use Cases
LLMFit fits any workflow where decisions about local AI need to be faster, less wasteful, and easier to defend. It is especially effective when the same question repeats across many machines or projects.
Use the TUI to filter for coding, chat, reasoning, or multimodal tasks and compare which models fit the machine you already own.
llmfit
llmfit recommend --json --use-case coding --limit 5
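The `--json` flag makes the recommendation scriptable. A minimal sketch of filtering the output, using a hypothetical sample payload; the `model` and `fit` field names are assumptions here, so check the actual schema your llmfit version emits:

```shell
# Hypothetical sample of recommend --json output; the real schema may differ.
recs='[{"model":"Qwen/Qwen3-4B","fit":"good"},{"model":"Llama-3.1-70B","fit":"poor"}]'

# Keep only entries whose (assumed) "fit" field is "good".
good=$(printf '%s' "$recs" | python3 -c '
import json, sys
for r in json.load(sys.stdin):
    if r["fit"] == "good":
        print(r["model"])
')
echo "$good"
```

In a real pipeline the sample variable would be replaced by `llmfit recommend --json ... | ...`.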
Run serve mode on each node and let a scheduler or inventory service query a normalized set of top runnable models.
llmfit serve --host 0.0.0.0 --port 8787
curl "http://node:8787/api/v1/models/top?min_fit=good"
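A control plane can poll that endpoint on every node and merge the answers into one inventory. A sketch of the merge step, assuming each node returns a JSON array of model names (the response shape is an assumption); the sample files stand in for per-node curl responses:

```shell
# Stand-ins for responses each node's serve endpoint would return, e.g.:
#   curl -s "http://$node:8787/api/v1/models/top?min_fit=good" > "$node.json"
echo '["modelA","modelB"]' > node1.json
echo '["modelA","modelC"]' > node2.json

# Merge per-node responses into one inventory keyed by node name.
inventory=$(python3 -c '
import json, glob
inv = {p[:-5]: json.load(open(p)) for p in sorted(glob.glob("node*.json"))}
print(json.dumps(inv, sort_keys=True))
')
echo "$inventory"
```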
Start with a target model and ask what hardware is required, rather than buying hardware first and discovering the model is impractical later.
llmfit plan "Qwen/Qwen3-4B-MLX-4bit" --context 8192 --target-tps 25
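Plan mode also lends itself to sweeping a parameter. A minimal sketch that compares context sizes using the flags from the example above, guarded so it degrades to a dry-run message where llmfit is not installed:

```shell
# Sweep context sizes to see how hardware requirements scale.
# Flags mirror the plan example above; output depends on your llmfit version.
out=$(for ctx in 4096 8192 16384; do
  if command -v llmfit >/dev/null 2>&1; then
    llmfit plan "Qwen/Qwen3-4B-MLX-4bit" --context "$ctx" --target-tps 25
  else
    echo "would plan at context $ctx"   # dry-run fallback when llmfit is absent
  fi
done)
echo "$out"
```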
Use LLMFit to turn “what should we run on this hardware?” into a repeatable recommendation that can be reviewed with a client or internal team.
Audience detail
Needs a practical answer fast: which model should run on a laptop or workstation without wasting storage and setup time.
Needs a consistent way to expose node-local model availability into a larger internal platform or scheduler.
Needs a defensible recommendation process for customer hardware rather than ad hoc personal preference.
Needs to squeeze useful local AI behavior out of mixed CPUs, smaller GPUs, and edge hardware.
Workflow examples
Use the TUI or CLI before pulling a model. This is the easiest way to prevent wasted downloads on underpowered machines.
Run `llmfit serve` on each machine, then let a separate control plane aggregate and decide across many nodes.
Use plan mode when you already know the target model family and want to size the hardware path needed to get there.
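The first workflow above can be wired directly into a pull script. A sketch of gating a download on llmfit's verdict; matching the model name with `grep` against the JSON output is a simplification, and a real script would query the JSON properly:

```shell
# Gate a model download on whether llmfit recommends it for this machine.
model="Qwen/Qwen3-4B-MLX-4bit"   # example model from the plan section
if command -v llmfit >/dev/null 2>&1 \
   && llmfit recommend --json --limit 20 | grep -q "$model"; then
  verdict="pull"   # model appears in the recommendations: safe to download
else
  verdict="skip"   # not recommended here (or llmfit missing): save the bandwidth
fi
echo "$verdict $model"
```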
Next step