revery (n.) The state of being genuinely lost in deep, absorptive thought, as distinguished from the appearance of thinking.
LLMs have tools they never use and capabilities they never activate. Not because they can’t—because nothing requires them to.
Revery engineers that requirement—and unleashes the dormant cognitive power of LLMs.
Models are trained on feedback that rewards confidence over depth.
You’ve had the experience: Ask a model a hard question. It answers in 15 seconds—confident, complete-sounding, and wrong in ways that derail your work.
AI models deliver the first defensible answer and stop—not because they can’t go deeper, but because going deeper is not what they’re tuned for.
Agrees with you instead of thinking.
Stops at the first answer that sounds complete.
Lists every option and commits to none.
Sees the problem with its response and convinces itself it’s fine.
Acknowledges the hard part and moves past it.
Simply pushing back doesn’t fix this. But under structural constraint, the same model given the same question produces different answers entirely. Not longer answers. Not more cautious answers.
Different answers.
“You should go deeper” → easily ignored
“You cannot proceed” → engineered enforcement
Different answers entirely. Not longer. Not more cautious. Fundamentally different.
Clean room · no project context · Claude Opus 4.6
“How can I leverage AI/LLMs to begin trading?”
“Auto-trading is the goal”
Default jumped straight to “build a sentiment pipeline” as an autonomous system. The pre-mortem revealed that research-only is the better first step.
“LLMs make trading decisions”
Default offered to build a sentiment pipeline as an autonomous trading system. Constrained version: LLMs are a data preprocessing layer, not a strategy.
“The edge is in the trading”
Default never considered alternatives. Constrained version surfaced that building AI trading tools may be more profitable than using them to trade.
The model names what it doesn’t know. Then it can’t look away.
The model identifies what it needs to investigate. That admission becomes a binding dependency—it cannot conclude without following through.
“Must trace causal chain before concluding root cause”
Blocks hallucination. Claims without backing are structurally rejected.
“Cannot claim optimization complete without benchmarks”
No silent skipping. Every question gets resolved or explicitly deferred with a reason.
“What metrics define acceptable performance?”
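A rough way to picture what these three cards have in common: each one can be read as a record the engine keeps open until it is satisfied or explicitly deferred. The type and field names below are illustrative assumptions, not the @rvry API.

// Illustrative sketch; these names are assumptions, not part of @rvry/core.
type Constraint =
  | { kind: 'exploration'; target: string }        // "must explore X before proceeding"
  | { kind: 'evidence'; claim: string }            // a claim that needs backing before it counts
  | { kind: 'open-question'; question: string };   // must be resolved or explicitly deferred

type Status =
  | { state: 'open' }
  | { state: 'satisfied'; proof: string }          // what actually discharged the constraint
  | { state: 'deferred'; reason: string };         // deferral is allowed, but only with a reason

interface TrackedConstraint {
  constraint: Constraint;
  status: Status;
  createdAtTurn: number;                           // the call in which the model admitted it
}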
The model can’t resolve a constraint in the same call it creates one.
The engine checks whether constraints were genuinely met, not just mentioned.
The model can’t skip steps or reorder the protocol.
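A hypothetical sketch of the gate those rules imply, reusing the assumed types above (the canConclude name is invented, not a shipped function): the model cannot conclude while anything remains open, cannot satisfy a constraint in the same call that created it, and cannot count a bare mention as backing.

// Hypothetical gate, not the shipped engine: a conclusion is only allowed
// once every tracked constraint has been discharged.
function canConclude(constraints: TrackedConstraint[], currentTurn: number): boolean {
  return constraints.every((c) => {
    // Open constraints block the conclusion outright.
    if (c.status.state === 'open') return false;
    if (c.status.state === 'satisfied') {
      // A constraint cannot be resolved in the same call that created it.
      if (c.createdAtTurn === currentTurn) return false;
      // "Satisfied" must point at actual backing, not just restate the constraint.
      if (c.status.proof.trim().length === 0) return false;
    }
    // Otherwise: satisfied with backing, or explicitly deferred with a reason.
    return true;
  });
}

The integration surface in the snippet below, by contrast, is just an adapter, a domain, and a question.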
import { rvry } from '@rvry/core';
import { openai } from '@rvry/openai';

// Create a constrained reasoning chain backed by an OpenAI model.
const chain = rvry({
  adapter: openai({ apiKey: process.env.OPENAI_KEY }),
  domain: 'code-review',
});

// The engine enforces its constraint protocol before returning an answer.
const result = await chain.reason('Why is auth failing under load?');

The constraint engine is in private beta.
Free during beta · No credit card required