Blog
Long-form essays on AI labor economics — the operational math beneath the headlines, the real cost of the cases the market has already lived through, and where the human-advantage layer holds.
- May 13, 2026 · 7 min read · role read · product design
AI exposure for product designers in 2026 — the 10% wedge that isn't the role
Visual-asset generation is the role's only Replaceable cell, at capability 85. Wireframes and handoff are AI-augmented; user research and stakeholder reviews are Human-critical. The displaced 10% is real — and the other 90% is gaining importance.
- May 13, 2026 · 7 min read · role read · data engineering
AI exposure for data engineers in 2026 — augmented, never replaced
Six tasks across SQL, ETL, schema design, debugging, infrastructure architecture, and stakeholder reviews. Zero Replaceable cells. The economic frontier is augmentation, not substitution — and the role's structure tells you exactly why.
- May 13, 2026 · 8 min read · role read · machine learning
AI exposure for ML engineers in 2026 — the unusual case
ML engineers build the systems that automate other work. The recursion question is real but mostly the wrong question. The actual exposure read is more interesting — and the operational AI cost line for ML-engineering tasks is uniquely high.
- May 13, 2026 · 8 min read · role read · product management
AI exposure for product managers in 2026 — capability ≠ ownership
Product management decomposes into research synthesis, writing, prioritization, stakeholder communication, roadmap maintenance, and strategy. AI can do some of these well. Two are Human-critical, and not for the reason most PM-vs-AI hot takes assume.
- May 13, 2026 · 8 min read · role read · software engineering
AI exposure for software engineers in 2026 — where the economics actually flip
Cell-level read across writing production code, tests, docs, code review, system design, on-call, mentoring. Where the AI-augmented frontier actually lands for software engineering — and why no task in the role clears the Replaceable bar in v1.
- May 13, 2026 · 13 min read · original research · methodology
We modeled AI substitution economics for 15 knowledge-worker roles. Here's where it gets uncomfortable.
Cross-role read across software engineering, product, design, data, ML, ops, support, sales engineering, finance, and engineering management. Five findings — including the one most AI-risk frameworks systematically miss.
- May 6, 2026 · 9 min read · finance · methodology
NPV, IRR, Payback — why corporate AI decisions sit on this spreadsheet
Three numbers — Net Present Value, Internal Rate of Return, Payback Period — decide whether an AI replacement project ships. Plain-language walk-through of how each works, why inference-cost decline is the input most analyses skip, and a worked example across three substitution-share scenarios.
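The three decision metrics named above are standard finance; a minimal sketch of how each is computed, using hypothetical cash flows (not the article's worked example — the $500k/$200k figures below are illustrative assumptions):

```python
"""Sketch of NPV, IRR, and payback period for an AI-replacement project.
All cash-flow figures are hypothetical illustrations."""

def npv(rate, cash_flows):
    """Net present value: discount each yearly cash flow back to t=0.
    cash_flows[0] is the upfront (usually negative) investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal rate of return: the discount rate at which NPV = 0,
    found by bisection (assumes one sign change in the cash flows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid  # NPV still positive: the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

def payback_period(cash_flows):
    """Years until cumulative (undiscounted) cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(cash_flows):
        total += cf
        if total >= 0:
            return t
    return None  # never pays back within the horizon

# Hypothetical project: $500k upfront, $200k/yr in savings for 4 years.
flows = [-500_000, 200_000, 200_000, 200_000, 200_000]
print(round(npv(0.10, flows)))  # NPV at a 10% discount rate
print(round(irr(flows), 3))     # break-even discount rate
print(payback_period(flows))    # payback in whole years
```

To run the post's three substitution-share scenarios, you would vary the yearly savings line and recompute all three metrics; a project can clear the NPV bar in one scenario and miss the payback bar in another.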
- April 22, 2026 · 10 min read · case study · AI economics
The Klarna reversal, with the numbers
Klarna's Feb-2024 AI rollout and May-2025 reversal, with an illustrative reconstruction of the operational math. What capability without economic viability looks like when the workload is bimodal-complexity — and what the framework predicts about the failure mode.
- April 8, 2026 · 8 min read · methodology · taxonomy
The 4 substitution classes, explained
Jobs aren't the unit of analysis — tasks are. And tasks fall into four economically distinct classes. The canonical Wagecore framework, with the math behind the partition and why most roles are a mix.
- April 1, 2026 · 9 min read · methodology · AI economics
Why operational AI cost is 3–10× what the demo shows
A practical framework for pricing AI deployments past the token line — oversight, retries, error cost, integration overhead — and why a 10-cent inference often costs a dollar.
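The four overhead lines named above — oversight, retries, error cost, integration — can be sketched as a per-task cost function. Every rate and dollar figure below is an illustrative assumption, not a number from the article:

```python
"""Hypothetical fully-loaded per-task AI cost: token price is one line
item among several. All rates and costs below are assumptions."""

def loaded_cost_per_task(
    inference_cost,        # raw token cost of one attempt, $
    retry_rate,            # expected extra attempts per task
    oversight_minutes,     # human review time per task
    oversight_rate_hr,     # fully-loaded hourly cost of the reviewer, $
    error_rate,            # share of tasks shipped with a costly error
    error_cost,            # average downstream cost of one shipped error, $
    integration_overhead,  # amortized tooling/integration cost per task, $
):
    compute = inference_cost * (1 + retry_rate)
    oversight = oversight_minutes / 60 * oversight_rate_hr
    expected_error = error_rate * error_cost
    return compute + oversight + expected_error + integration_overhead

# Illustrative: a $0.10 inference with light spot-checking.
cost = loaded_cost_per_task(
    inference_cost=0.10,
    retry_rate=0.5,          # half the tasks need one retry
    oversight_minutes=0.5,   # 30 seconds of human review
    oversight_rate_hr=60.0,
    error_rate=0.02,
    error_cost=15.0,
    integration_overhead=0.05,
)
print(f"${cost:.2f} per task")  # prints "$1.00 per task"
```

Under these assumptions the oversight line alone ($0.50) is five times the raw token line — which is the shape of the 3–10× claim: the multiplier lives in the human and error terms, not the compute.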
Methodology is published and versioned at /methodology.