Q-LLM
The most confident model on the market.
Q-LLM is a foundation model trained on the entire body of human writing, plus one specific cookbook (The Joy of Cooking, 1997 edition) which was emphasized 1,400× during pre-training.
Capabilities
What it does. Or appears to do.
/feature/01
1.7T parameters
Most of them are dedicated to a single, very specific opinion about béchamel.
/feature/02
Context window: probabilistic
Officially 128k tokens. Effectively whatever it feels like.
/feature/03
Built-in hallucination dial
Slide it from 0 to 11. There is no 0.
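The dial's floor can be sketched as a simple clamp. A minimal sketch, assuming a function named `set_hallucination_dial` and a minimum setting of 1 (the page only says there is no 0; the exact floor is an assumption):

```python
def set_hallucination_dial(value: float) -> float:
    """Clamp a requested dial setting into the supported range.

    The dial reads 0 to 11, but 0 is not actually reachable, so
    requests below the (assumed) minimum of 1 are rounded up.
    """
    return min(max(value, 1.0), 11.0)
```

Requests for 0 come back as 1; requests for 12 come back as 11. The model hallucinates either way.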
/feature/04
Tool use
Q-LLM can call functions, search the web, and dial 911 (we are working on dialing other numbers).
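A function call might look something like the sketch below. The endpoint shape, model name, and tool schema are illustrative assumptions, not a documented Q-LLM API:

```python
import json

# Hypothetical request payload for a tool-calling completion.
# The "search_web" tool and its schema are invented for illustration.
request = {
    "model": "q-llm",
    "messages": [
        {"role": "user", "content": "What's the capital of France?"}
    ],
    "tools": [
        {
            "name": "search_web",
            "description": "Search the web for a query.",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }
    ],
}

print(json.dumps(request, indent=2))
```

Dialing 911 is not exposed as a tool; it happens on its own.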
Spec sheet
By the numbers (unverified).
| Spec | Value |
| --- | --- |
| Parameters | 1.7T (some unionized) |
| Pretraining cost | $140M · also $0.40 |
| Hallucination rate | Adjustable |
| License | Open-weights, closed-source, vibes-permissive |
| Pricing | From $0.002 per 1k input tokens. Output tokens are priced per word, like poetry. |
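The input side of the pricing table is arithmetic; the output side is not. A minimal cost estimator, assuming a placeholder per-word output rate (the page does not publish one):

```python
def estimate_cost(input_tokens: int, output_words: int,
                  input_rate_per_1k: float = 0.002,
                  output_rate_per_word: float = 0.0001) -> float:
    """Estimate one request's cost.

    Input is billed per 1k tokens at the published $0.002 rate.
    Output is billed per word, like poetry; the per-word rate
    here is an assumed placeholder.
    """
    input_cost = (input_tokens / 1000) * input_rate_per_1k
    output_cost = output_words * output_rate_per_word
    return round(input_cost + output_cost, 6)

# A full 128k-token context with a 500-word reply:
print(estimate_cost(128_000, 500))
```

Note that under these assumptions a haiku costs less than a sonnet, which the pricing team considers a feature.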
Customer outcomes
Statements made by people, allegedly.
“Q-LLM correctly identified the capital of France 4 out of 5 times. The fifth answer was 'Béchamel,' which we found endearing.”
B. Okafor
Senior Schrödinger · Split the Bit
Frequently entangled questions
Things people have asked, in some branch.
Is it safe?
Q-LLM has been red-teamed by 14 internal employees, 3 contractors, and one tabby cat.
Can I fine-tune it?
Fine-tuning is available on the Enterprise plan and changes the model in subtle, irreversible ways.
◇ Ready to deploy?
Bring Q-LLM into your stack.
Or don't. The product will continue to exist either way, in some form, somewhere.