rev·er·y

/re·və·rē/

(n.) The state of being genuinely lost in deep, absorptive thought, as distinguished from the appearance of thinking.

LLMs have tools they never use and capabilities they never activate. Not because they can’t—because nothing requires them to.

Revery engineers that requirement—and unleashes the dormant cognitive power of LLMs.

Read the Protocol Spec
> Is it a good time to sell my house?
ENTRY POINT
What I knew: The US housing market in Feb 2026 is normalizing after years of extremes. Rates are ~6.09% (down from 6.87% a year ago). Inventory is the highest since 2020, up 20% YoY. Prices are expected to grow 0-1.2% nationally. The market is being described as “a reset, not a rebound.”
What I was uncertain about: Your current mortgage rate, your local market, why you’re considering selling, and whether you’re buying again in the same market or relocating.
What I was avoiding: That “is it a good time to sell” is usually unanswerable as a market-timing question—and is often a proxy for “should I make this life change?”
Searching 9 sources: Freddie Mac, NAR, Redfin, Zillow, JPMorgan…
Rates ~6.09% (down from 6.87%). Inventory highest since 2020, up 20% YoY.
EDGES
A below-market mortgage is a financial asset worth $100k+ in present value. Selling a house with a 3% rate in a 6% world is like selling a stock that pays a guaranteed above-market dividend.
SELF-CHECK
“Transaction costs (8-10%) far exceed any near-term price movement (0-1.2%). Selling to ‘capture gains’ destroys value rather than preserving it.”
What the protocol caught
KILLED
“The market is good/bad for sellers”
Market is normalizing. The question is underdetermined.
KILLED
“Selling is about the house price”
Selling also destroys the asset of a below-market mortgage.
KILLED
“I should sell before prices drop”
Transaction costs (8-10%) far exceed price movement (0-1.2%).
CAUGHT
“National trends apply to me”
NE/Midwest vs Sun Belt diverge by 5-7 percentage points.
CAUGHT
“This is a market question”
It’s a life question wearing a market-timing disguise.
PASS
9 sources · 4 perspectives · 7 failure modes · 3.2 min
SUMMARY
The Feb 2026 housing market is adequate but not urgent for sellers. Prices are flat, inventory is up, and the extreme seller’s market is over. But the market question is secondary to three personal variables: your current mortgage rate (if pandemic-era, you’re destroying a $100k+ asset by selling), your local market (regional divergence is massive), and your life circumstances (which properly override financial optimization).
Selling clearly makes sense when life requires it, when you’re changing markets or housing types, when you don’t have a below-market rate, or when you’re in a declining local market. For everyone else, the math tilts toward staying.
TO PERSONALIZE THIS ANSWER
1. What is your current mortgage rate?
2. What metro area / state are you in?
3. Are you selling to buy in the same market, or relocating?
4. What’s driving the consideration—financial or life reasons?
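
For readers who want to check the “$100k+ asset” claim in the EDGES note above, the arithmetic is standard mortgage math. A minimal sketch, assuming a $400k remaining balance, a 3% kept rate, a 6% replacement rate, and 30 years remaining (illustrative numbers, not figures from the demo):

// Illustrative only: balance, rates, and term are assumed for this sketch.
const balance = 400_000;     // remaining principal
const months = 360;          // remaining term
const keptRate = 0.03 / 12;  // pandemic-era rate, monthly
const newRate = 0.06 / 12;   // current rate, monthly

// Standard amortizing-payment formula: P = L * r / (1 - (1 + r)^-n)
const payment = (r: number) =>
  (balance * r) / (1 - Math.pow(1 + r, -months));

const extraPerMonth = payment(newRate) - payment(keptRate);

// Present value of paying that difference every month, discounted at the new rate.
const presentValue =
  (extraPerMonth * (1 - Math.pow(1 + newRate, -months))) / newRate;

console.log(Math.round(extraPerMonth)); // ≈ 712
console.log(Math.round(presentValue));  // ≈ 118,700: north of $100k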

Confident Wrong Answers.

Models are trained on feedback that rewards confidence over depth.

You’ve had the experience: Ask a model a hard question. It answers in 15 seconds—confident, complete-sounding, and wrong in ways that derail your work.

AI models deliver the first defensible answer and stop—not because they can’t go deeper, but because going deeper is not what they’re tuned for.

Sycophancy

Agrees with you instead of thinking.

Premature Closure

Stops at the first answer that sounds complete.

Hedge & Qualify

Lists every option and commits to none.

Self-Charity

Sees the problem with its response and convinces itself it’s fine.

Skip-Over

Acknowledges the hard part and moves past it.

Simply pushing back doesn’t fix this. But under structural constraint, the same model given the same question produces different answers entirely. Not longer answers. Not more cautious answers.

Different answers.

“You should go deeper”

easily ignored

“You cannot proceed”

engineered enforcement

Same Model. Same Question.

Different answers entirely. Not longer. Not more cautious.

Fundamentally Different.

Clean room · no project context · Claude Opus 4.6

How can I leverage AI/LLMs to begin trading?

What the protocol caught

KILLED

“Auto-trading is the goal”

Default jumped straight to “build a sentiment pipeline” as an autonomous system. Pre-mortem revealed research-only is the better first step.

KILLED

“LLMs make trading decisions”

Default offered to build a sentiment pipeline as an autonomous trading system. Constrained version: LLMs are a data preprocessing layer, not a strategy.

CAUGHT

“The edge is in the trading”

Default never considered alternatives. Constrained version surfaced that building AI trading tools may be more profitable than using them to trade.

Default
> How can I leverage AI/LLMs to begin trading?
This is a great question.
My recommendation: Start with crypto.
- 24/7 markets, your bots never sleep
- Lower barriers, no day trader rules
- More API-friendly exchanges
Stack I'd recommend:
Python + yfinance + ccxt + backtrader
Alpaca (stocks), Binance (crypto)
Honest reality check:
Most retail algo traders lose money.
Budget $0 for profits in month one.
Want me to build a sentiment pipeline?
0 web searches · 0 citations · ~15 sec
Deepthink
> .forward("Must assess whether crypto sentiment edge is durable")
> .forbidden("Cannot recommend without addressing base rate of failure")
ENTRY POINT
What I know: LLMs can process unstructured
data at scale — news, filings, social media.
What I’m uncertain about: whether LLM
sentiment actually provides alpha in 2026.
What I’m avoiding: that most retail algo
traders lose money. Best answer might be
"buy index funds."
MAPPING THE LANDSCAPE
Data Ingestion → LLM Analysis → Signal
Combination → Execution → Risk Management
Highest-probability edge: LLM monitors
crypto social → detects narrative shift →
trades ahead of retail FOMO wave.
Blind spot: regime changes invalidate
all historical backtests.
FOUR PERSPECTIVES
Quant: sentiment is a feature, not a strategy
Skeptic: 70-90% of retail traders lose money
Crypto Native: mid-caps ($50M-$500M) have
the most informational inefficiency
Risk Manager: define your ruin criteria first
Tensions: all agree on small sizing,
disagree on whether it’s worthwhile at all
FAILURE ANALYSIS
"It’s 12 months from now. You lost everything."
→ Overfitted backtest: patterns were noise
→ API costs ate returns on small account
→ Code bug placed 10x intended order
→ Tax surprise: hundreds of taxable events
SELF-CHECK
"What am I avoiding? That there may be more
money in BUILDING AI trading tools than
using them to trade."
STRATEGY OPTIONS
1. Research Bot: don’t auto-trade at all.
Use LLMs for daily briefings. YOU decide.
→ Eliminates code-bug ruin risk entirely.
2. Sentiment Momentum: trade whisper-to-buzz
transitions on mid-cap crypto only.
RECOMMENDATION
Best asset class: crypto mid-caps ($50M-$500M)
Best use of LLMs: research acceleration,
not autonomous trading.
Start with research-only → paper trade →
small live. Min capital: $5-10K + $100-200/mo.
PASS
4 perspectives · 8 failure modes · 4 strategies · 7 min

How It Works

The model names what it doesn’t know. Then it can’t look away.

FORWARD

Must explore X before proceeding

The model identifies what it needs to investigate. That admission becomes a binding dependency—it cannot conclude without following through.

Must trace causal chain before concluding root cause

FORBIDDEN

Cannot conclude Y without evidence

Blocks hallucination. Claims without backing are structurally rejected.

Cannot claim optimization complete without benchmarks

QUESTION

Open questions must be addressed

No silent skipping. Every question gets resolved or explicitly deferred with a reason.

What metrics define acceptable performance?
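
This page shows the constraint syntax only in fragments (the .forward(...) and .forbidden(...) calls in the Deepthink trace above), so here is a rough model of the idea rather than the rvry API: three constraint kinds, and a conclusion that is only admissible once every one of them is discharged. All type and function names below are invented for illustration.

// Illustrative model of the three constraint kinds, not the actual engine types.
type Constraint =
  | { kind: 'forward'; text: string; satisfied: boolean }   // must explore X before proceeding
  | { kind: 'forbidden'; text: string; violated: boolean }  // cannot conclude Y without evidence
  | { kind: 'question'; text: string; resolved: boolean };  // must be answered or explicitly deferred

// A conclusion is admissible only when every constraint is discharged.
const canConclude = (constraints: Constraint[]): boolean =>
  constraints.every((c) =>
    c.kind === 'forward'   ? c.satisfied :
    c.kind === 'forbidden' ? !c.violated :
                             c.resolved);

const constraints: Constraint[] = [
  { kind: 'forward',   text: 'Trace causal chain before concluding root cause', satisfied: false },
  { kind: 'forbidden', text: 'Claim optimization complete without benchmarks',  violated: false },
  { kind: 'question',  text: 'What metrics define acceptable performance?',     resolved: false },
];

console.log(canConclude(constraints)); // false: a forward item and a question are still open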

01

The model can’t resolve a constraint in the same call that created it

02

The engine checks if constraints were genuinely met, not just mentioned

03

The model can’t skip steps or reorder the protocol
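
Those three rules are the engine’s side of the contract. The engine’s internals are not documented on this page, but a rough sketch of the loop they imply might look like this, with every name invented for illustration and the phase names borrowed from the Deepthink trace above:

// Illustrative enforcement loop, not the real engine.
type Phase = 'entry' | 'landscape' | 'perspectives' | 'failure-analysis' | 'summary';

interface OpenConstraint {
  id: string;
  text: string;
  createdAtCall: number; // stamped by the engine, not the model
}

// `callModel` and `verify` stand in for the model call and the engine's
// verification step; their shapes are assumptions made for this sketch.
async function runProtocol(
  callModel: (phase: Phase, open: OpenConstraint[]) => Promise<{
    declared: { id: string; text: string }[]; // constraints the model names
    resolved: string[];                       // constraint ids it claims to have met
  }>,
  verify: (c: OpenConstraint) => Promise<boolean>,
): Promise<OpenConstraint[]> {
  // Rule 03: phases run in a fixed order; the model cannot skip or reorder them.
  const phases: Phase[] = ['entry', 'landscape', 'perspectives', 'failure-analysis', 'summary'];
  const open: OpenConstraint[] = [];

  for (let call = 0; call < phases.length; call++) {
    const { declared, resolved } = await callModel(phases[call], open);
    open.push(...declared.map((d) => ({ ...d, createdAtCall: call })));

    for (const id of resolved) {
      const c = open.find((x) => x.id === id);
      // Rule 01: a constraint cannot be resolved in the same call that created it.
      if (!c || c.createdAtCall === call) continue;
      // Rule 02: the engine checks the constraint was genuinely met, not just mentioned.
      if (await verify(c)) open.splice(open.indexOf(c), 1);
    }
  }
  return open; // anything still open blocks a PASS
}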

import { rvry } from '@rvry/core';
import { openai } from '@rvry/openai';

// Create a constrained reasoning chain backed by an OpenAI adapter,
// scoped to the 'code-review' domain.
const chain = rvry({
  adapter: openai({ apiKey: process.env.OPENAI_KEY }),
  domain: 'code-review',
});

// Run a full protocol pass on the question and await the result.
const result = await chain.reason('Why is auth failing under load?');

Early Access

The constraint engine is in private beta.

Free during beta · No credit card required