
Jevons' Paradox: Will AI Efficiency Drive the GPU Gold Rush?

Andrew Riggs


Jevons' Paradox: Will AI efficiency spark a more lucrative gold rush?

In an unexpected twist that rattled tech markets at the start of 2025, Chinese AI lab DeepSeek announced breakthrough efficiency in training advanced AI models. The news sent Nvidia's stock tumbling, wiping nearly $600 billion in market value practically overnight. But Microsoft CEO Satya Nadella remained unfazed, confidently tweeting: "Jevons' Paradox strikes again."


This historical economic principle might hold the key to understanding the future of AI infrastructure and GPU demand. But what exactly is Jevons' Paradox, and can it really predict where the AI computing market is headed?


How Jevons' Paradox (an 1865 Coal Theory) Applies to Modern AI


When British economist William Stanley Jevons studied coal consumption in the 19th century, he discovered a counterintuitive pattern: as steam engines became more fuel-efficient, coal usage actually increased rather than decreased. The improved efficiency made coal power more economical, expanding its applications and driving record consumption.


The parallel to today's AI landscape is striking. As companies develop more efficient training methods and models, the overall demand for computing resources might explode rather than contract. Each efficiency breakthrough doesn't necessarily reduce infrastructure needs—it often expands the universe of economically viable AI applications.


When AI becomes more affordable, entirely new use cases emerge across industries, creating a compounding effect on infrastructure demand.
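One way to make this mechanism concrete is through the price elasticity of demand for compute. The sketch below is a minimal toy model, not a forecast: it assumes a constant-elasticity demand curve and purely illustrative parameter values, and simply shows how the arithmetic flips once elasticity exceeds 1.

```python
# Toy illustration of the rebound effect behind Jevons' Paradox.
# Assumption: demand for "AI work" follows a constant-elasticity curve,
# Q = scale * price^(-elasticity). All numbers are illustrative.

def compute_demand(price_per_unit: float, elasticity: float, scale: float = 1.0) -> float:
    """Units of AI work demanded at a given effective price per unit."""
    return scale * price_per_unit ** (-elasticity)

def gpu_hours_used(efficiency: float, elasticity: float) -> float:
    """Total GPU-hours consumed when efficiency cuts the effective price.

    efficiency = 10 means each GPU-hour now delivers 10x as much AI work,
    so the effective price per unit of work falls to 1/10.
    """
    price = 1.0 / efficiency
    work_demanded = compute_demand(price, elasticity)
    return work_demanded / efficiency  # GPU-hours = units of work / units per GPU-hour

for e in (0.5, 1.0, 2.0):
    baseline = gpu_hours_used(efficiency=1.0, elasticity=e)
    improved = gpu_hours_used(efficiency=10.0, elasticity=e)
    print(f"elasticity={e}: GPU-hours change by {improved / baseline:.1f}x after a 10x efficiency gain")
```

In this toy framing, efficiency gains shrink total GPU use only when demand is relatively inelastic (the 0.3x case). Once the appetite for AI work grows faster than the price falls, total consumption rises with every efficiency gain, which is exactly the Jevons outcome.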


The GPU Equation: Efficiency vs. Expansion


Nvidia has dominated the AI chip market with its specialized GPUs that power most major AI developments. When news broke that DeepSeek could train comparable models at a fraction of the computing cost, investors panicked, fearing decreased demand for expensive GPU infrastructure.


Yet historical patterns suggest the opposite outcome is at least as likely. Consider these factors driving AI compute demand (a rough back-of-the-envelope calculation follows the list):


  • Wider accessibility: More efficient models mean smaller companies can enter the AI space

  • New applications: Lower computing costs unlock previously impractical use cases

  • Consumer AI: Efficiency gains allow more AI features on everyday devices

  • Model scaling: History shows that available compute tends to be fully utilized
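To see how these factors can compound, here is a hedged back-of-the-envelope sketch. Every multiplier is a hypothetical assumption chosen only to show the shape of the arithmetic, not a prediction about actual market growth.

```python
# Back-of-the-envelope sketch of how the factors above can compound.
# All multipliers are hypothetical, illustrative assumptions.

efficiency_gain = 10.0   # each unit of AI work needs 10x less compute
new_entrants = 4.0       # wider accessibility: 4x as many teams building with AI
new_applications = 5.0   # lower costs unlock 5x as many viable use cases per team

# GPU demand scales with how much AI work the market wants,
# divided by how much work each GPU-hour now delivers.
work_demanded = new_entrants * new_applications
net_gpu_demand = work_demanded / efficiency_gain

print(f"Net change in GPU demand: {net_gpu_demand:.1f}x")  # -> 2.0x under these assumptions
```

Under these made-up numbers, a 10x efficiency gain still leaves aggregate GPU demand twice as high as before, because the expansion in who uses AI and what they use it for outruns the savings per workload.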


Learning from Other Industries: Efficiency Doesn't Guarantee Profits


While Jevons' Paradox suggests AI efficiency will drive explosive growth in computing needs, this doesn't automatically translate to sustainable profits. Three comparable industry examples highlight the potential pitfalls:


1. The Fracking Revolution


Hydraulic fracturing technology dramatically reduced oil and gas extraction costs, propelling the U.S. to become the world's largest producer. But when supply outpaced demand, prices collapsed, forcing 42 companies with $26 billion in debt into bankruptcy in 2019 alone.


2. Solar Panel Efficiency Gains


Solar energy costs have plummeted by 40% in the past decade, driving a tenfold increase in global capacity. Despite record shipments, intense competition has squeezed margins, with the top 10 manufacturers reporting near-zero operating profits in 2024.


3. Genetic Sequencing's Cost Curve


The cost of sequencing a human genome has fallen from $100 million in 2001 to just $200 today. Market leader Illumina maintains 90% market share, yet its projected revenue of $4.7 billion by 2026 reflects demand that hasn't scaled proportionally with efficiency gains.


The AI Infrastructure Outlook


For AI to sustain long-term profitability while driving GPU and computing infrastructure demand, three conditions must be met:


  1. Broad utility across diverse industries and applications

  2. Accessible pricing that enables widespread adoption

  3. Differentiation potential that prevents pure commodity competition


The good news for GPU manufacturers and AI infrastructure providers is that we're still in the early stages of AI adoption. Unlike mature industries where efficiency gains can lead to market saturation, artificial intelligence continues to find novel applications at an astonishing pace.


When compute costs drop, we don't just do the same things more efficiently—we do entirely new things that weren't possible before.


Whether this translates to sustained demand for high-end GPUs or shifts toward specialized AI accelerator chips remains uncertain. But one thing is clear: in the world of artificial intelligence, Jevons' Paradox suggests that making AI more efficient won't reduce our computing appetite—it will only make us hungrier.
