Inside companies of any size, the decision to replace or augment a role with AI doesn't get made on the back of a model card. It gets made on a spreadsheet with three numbers across the top: Net Present Value, Internal Rate of Return, and Payback Period. If those numbers don't land in the range the CFO will sign off on, the deck doesn't make it past the strategy meeting, regardless of how promising the demo looked.
Most knowledge workers haven't worked through this math since business school, if at all. The result is a public discourse about AI and work where the operational economics are discussed in “will it cost less?” terms while the actual decision is made in “does it clear the hurdle rate?” terms. These are not the same question. A deployment can save money on a per-ticket basis and still fail the spreadsheet — because the upfront integration cost amortizes too slowly, or the savings curve decays as inference prices fall, or the discount rate the CFO applies makes year-4 dollars worth much less than the demo implied.
This post lays out the three numbers in plain language, shows the framework Wagecore's Investment View uses to compute them, and argues why one input most analyses skip — the rate at which AI running costs decline over the projection horizon — is the difference between a five-year “clear win” and a five-year wash.
The shape of the decision is capex, not opex
An AI replacement project looks like a capital investment, even though it produces opex savings. There's an upfront chunk of spend in year zero: integration engineering, change management, retraining, evaluation infrastructure, termination costs. Against that, you get a stream of labor-cost savings stretching out however many years the deployment runs. The numbers that summarize a capex investment with a stream of savings are NPV, IRR, and Payback.
Worth noting up front: the decision is capex-shaped, but the running cost stays opex-shaped — and that's precisely why most AI rollouts that miss their NPV target miss it on the opex side. The integration cost is bounded; the audit, retries, error-cost, and oversight running cost is not, and it compounds with workload complexity. See the operational-cost framework for the line-item breakdown.
NPV: what does a dollar in year four actually weigh
Net Present Value sums the dollar savings each future year produces, discounted back to today at a rate that reflects the company's cost of capital. A $100 saving today is worth $100. A $100 saving in year five at a 10% discount rate is worth $62. A negative NPV means the discounted savings don't cover the upfront cost even over the full horizon; a positive NPV means they do.
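The discounting arithmetic is one line. A minimal sketch in Python (function names are mine for illustration, not the Wagecore engine):

```python
def present_value(amount, rate, year):
    """What a future dollar amount is worth today at a given discount rate."""
    return amount / (1 + rate) ** year

# The example from the text: $100 five years out, at 10% and at 12%.
print(round(present_value(100, 0.10, 5), 2))  # → 62.09
print(round(present_value(100, 0.12, 5), 2))  # → 56.74
```

NPV is then just the sum of each year's discounted saving, minus the Year-0 outflow.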
The choice of discount rate matters more than people expect. US public-company corporate WACC typically sits in the 7–12% range, with 9–10% the most common single number. Wagecore's Investment View defaults to 10% when the user hasn't supplied one. A higher discount rate punishes long-payback deployments harder; a 12% rate applied to a five-year stream makes year-five savings worth 57 cents on the dollar instead of 62.
For internal modeling we recommend running NPV at two rates: the company's actual WACC for the base case, and a 14–15% rate as a risk-adjusted stress test. If the deployment is still positive at 15%, it's a robust win. If it's positive only at 7%, it's sensitive to assumptions that may not hold.
IRR: the implicit rate of return
Internal Rate of Return inverts NPV — it asks “at what discount rate would this deployment's NPV equal zero?” That rate is the implicit return the project earns. If the IRR is 25% and your WACC is 10%, the deployment is creating value above the cost of capital. If the IRR is 8% and your WACC is 10%, you'd be better off putting the money in a treasury index fund.
IRR is the metric CFOs use to compare AI deployments against other uses of the same capital — a marketing campaign, an acquisition, a new product line. The strategic question is rarely “is this AI rollout profitable” (most look profitable on a per-ticket basis); it's “is this AI rollout more profitable than the next best thing we could do with the same engineering and change-management budget.” IRR is the comparison metric.
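NPV has no closed-form inverse, so IRR is found numerically. A minimal bisection sketch, assuming the conventional sign pattern of one Year-0 outflow followed by inflows (illustrative, not the Wagecore implementation):

```python
def irr(upfront_cost, yearly_savings, lo=-0.99, hi=10.0, tol=1e-7):
    """Rate at which the deployment's NPV crosses zero, by bisection.

    Assumes one upfront outflow followed by inflows, so NPV is strictly
    decreasing in the rate and the root is unique.
    """
    def npv_at(rate):
        return -upfront_cost + sum(
            s / (1 + rate) ** (t + 1) for t, s in enumerate(yearly_savings)
        )
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv_at(mid) > 0:
            lo = mid  # NPV still positive: the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# $250k upfront unlocking $120k/year for five years:
rate = irr(250_000, [120_000] * 5)  # ≈ 0.39, comfortably above a 10% WACC
```

Any root-finder works here; bisection is shown because it is easy to verify by hand.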
Payback: the gut-check executives actually want
Payback Period is the time, in months, until the cumulative savings equal the upfront cost. It ignores the time value of money — which is why it's technically inferior to NPV and IRR — and it's what executives ask for anyway, because it answers a question NPV and IRR don't: “how long until this thing has paid for itself, in case the world looks completely different by then.”
Common corporate hurdles look like this, though they vary substantially by industry and risk appetite: under 18 months tends to be a comfortable greenlight at most companies; 18 to 36 months triggers scrutiny on the assumptions; over 36 months usually loses out to alternatives unless the deployment has strategic value beyond the financial return. These are rules of thumb, not Wagecore thresholds — the Investment View reports the actual months and leaves the hurdle call to the user.
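Payback is the simplest of the three to compute — a sketch, undiscounted as the metric is conventionally defined (names are mine, not the Investment View's):

```python
def payback_months(upfront_cost, monthly_savings):
    """Months until cumulative (undiscounted) savings cover the upfront cost."""
    cumulative = 0.0
    for month, saving in enumerate(monthly_savings, start=1):
        cumulative += saving
        if cumulative >= upfront_cost:
            return month
    return None  # never pays back within the modeled horizon

# $250k upfront against $90k/month of net savings:
print(payback_months(250_000, [90_000] * 60))  # → 3
```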
The input most analyses skip: inference-cost decline
Most AI ROI projections assume that the inference and infrastructure cost in year one holds across the horizon. They shouldn't. Inference price per token has dropped by roughly an order of magnitude every 18–24 months for comparable capability over the last three years, and analyst estimates (Gartner is the most-cited public source) project continued declines through 2030. If the cost line falls 35% per year and the model assumes it stays flat, the five-year savings stream is understated by roughly a factor of 1.6.
The Wagecore Investment View takes inference-decline rate as an explicit input. The default is 35% per year (conservative against Gartner's more aggressive curves; an upper-bound case at 50% would put the projection more in line with current analyst central estimates). When the user runs the engine, they see how sensitive NPV is to that input. Often the difference between a borderline-negative and a clear-positive case is the inference-decline curve.
Important caveat: inference-decline shifts the model-cost line down the projection, but it does not reduce audit cost, retry cost, or error cost. Those scale with workload complexity and regulatory regime, not with model price. A deployment where the operational cost is dominated by audit time will see modest benefit from inference declines — the spreadsheet still rides mostly on whether eval and orchestration tooling let you cut the audit rate.
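To see the caveat in numbers, here is a sketch in which only the model-cost line rides the decline curve while audit cost stays flat (the 35% decline, 10% discount rate, and function shape are illustrative assumptions mirroring the article's defaults, not the Wagecore engine):

```python
def five_year_npv(upfront, labor_saved, model_cost_y1, audit_cost_y1,
                  decline=0.35, rate=0.10, years=5):
    """NPV where only the model-cost line declines; audit cost is held flat."""
    total = -upfront
    for t in range(1, years + 1):
        model_cost = model_cost_y1 * (1 - decline) ** (t - 1)
        net = labor_saved - model_cost - audit_cost_y1
        total += net / (1 + rate) ** t
    return total

# Same total Year-1 AI cost ($120k), different mix:
model_heavy = five_year_npv(250_000, 1_200_000, model_cost_y1=100_000,
                            audit_cost_y1=20_000)
audit_heavy = five_year_npv(250_000, 1_200_000, model_cost_y1=20_000,
                            audit_cost_y1=100_000)
# model_heavy > audit_heavy: the audit-dominated mix barely benefits
# from the decline curve, because its dominant cost never falls.
```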
A worked example: 50-person support team, three scenarios
Take a hypothetical SaaS company with a 50-person support team, fully loaded labor cost $80k/year per agent (so $4M annual labor), upfront AI integration cost $250k treated as a Year 0 outflow, and a 35% annual inference-cost decline assumption. The discount rate is 10%. Run three scenarios for the share of work AI handles:
Scenario A — AI handles 30% of work. Annual labor displaced: $1.2M. Year-1 operational AI cost: $120k. Year-1 net cash flow: $1.08M (labor saved minus AI cost). Year-0 outflow: the $250k transition cost. Five-year NPV at 10% discount and 35% inference decline: ~$4.1M. IRR: well above any plausible WACC (over 400%, but at that range the engine is effectively saying “return rate is enormous because the upfront cost is tiny relative to the cash flow it unlocks”). Payback: ~3 months. The deployment is solidly positive even before you fold in productivity gains.
Scenario B — AI handles 50% of work. Annual labor displaced: $2M. Year-1 operational AI cost rises to $260k as the deployment now touches more complex tickets that drive up audit rate and error cost. Year-1 net cash flow: $1.74M against the same $250k Year-0 transition. Five-year NPV: ~$6.8M. IRR: very high but again less informative than NPV here. Payback: ~2 months. Still clearly positive, but the NPV gained per percentage point of “share of work handled” is starting to shrink, because the marginal task AI takes on is more expensive to oversee.
Scenario C — AI handles 70% of work. Annual labor displaced: $2.8M. Year-1 operational AI cost: $580k — at this share of work, audit rate cannot stay low without quality dropping. Year-1 net cash flow: $2.22M. Year-0 outflow: $250k. Five-year NPV: ~$9.2M. Now run the sensitivity: if the team's actual quality-driven audit rate runs 10 percentage points higher than the model assumes and persists, year-1 operational AI cost rises to $920k and the elevated cost curve runs through years 2–5 as well. Recomputed NPV: ~$8.5M. The deployment is still positive, but the margin of safety has narrowed enough that a prudent CFO would gate it on a one-quarter pilot before committing the full team transition.
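The three base-case NPVs above can be reproduced in a few lines, assuming the operational AI cost declines 35% per year from its Year-1 value (same arithmetic as the scenarios; a sketch, not the Wagecore engine):

```python
def scenario_npv(labor_displaced, ai_cost_y1, upfront=250_000,
                 rate=0.10, decline=0.35, years=5):
    """Five-year NPV for one scenario, AI running cost declining 35%/year."""
    total = -upfront
    for t in range(1, years + 1):
        net = labor_displaced - ai_cost_y1 * (1 - decline) ** (t - 1)
        total += net / (1 + rate) ** t
    return total

for name, labor, ai_cost in [("A", 1_200_000, 120_000),
                             ("B", 2_000_000, 260_000),
                             ("C", 2_800_000, 580_000)]:
    print(name, round(scenario_npv(labor, ai_cost) / 1e6, 1))
# → A 4.1 / B 6.8 / C 9.2, matching the scenarios above
```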
The pattern: NPV grows with substitution share, but it grows sublinearly because the operational cost curve bends upward as AI moves from simple to complex work. The decision shape is almost-always “deploy aggressively in the bottom half of the complexity distribution, gate carefully on the top half.”
Why this is the spreadsheet executives actually use
The macro evidence supports running this kind of analysis before committing. BCG's 2025 AI Radar reported that of the firms they studied, only 5% were capturing value at scale from AI and roughly 60% had reported no material value yet (cited on /methodology). MIT CSAIL's industry-grounded analysis (Svanberg et al., 2024) of vision-task deployment economics found that AI “passed the spreadsheet” on only about 23% of the wage-share of vision tasks where it was technically capable. The capability frontier is far ahead of the economic-viability frontier, and the gap is closing slowly enough that the spreadsheet is doing real work.
Wagecore's contribution is to make the spreadsheet runnable per-role and per-org against the current capability matrix, with explicit inputs for the assumptions that matter — substitution share, discount rate, inference decline, transition cost — and confidence bands on the underlying capability-matrix data so the user can see where the model is operating on weak signal.
Try it
The Investment View ships with Wagecore Pro and runs this projection on every Wagecard you compute — your inputs, your discount rate, your assumed inference-decline curve. If you'd rather see the corp-level version first, /org/preview is a non-persistent demo with no signup — paste your roles plus headcount and see the org-level heatmap and 5-year projection rolled up.
The methodology is open at /methodology. The financial-projection engine details, with the conservative defaults named explicitly, live there. If the conservative model still shows positive NPV, that's a real signal. If it doesn't, the deal probably isn't there.