
Bookmarks: DeepSeek (of course), but Jevons’ Paradox really

Editor’s note: I’m in the habit of bookmarking on LinkedIn and X (and in actual books) things I think are insightful and interesting. What I’m not in the habit of doing is ever revisiting those insightful, interesting bits of commentary and doing anything with them that would benefit anyone other than myself. This potentially recurring column is an effort to correct that. 

Every single person on the internet, me included, has put out a take on DeepSeek this week. The general direction of travel went from “This Chinese startup just beat the West at AI, and NVIDIA is cooked,” to “As all of us have obviously known this whole time, there’s been too much attention on brute-force compute and not enough on algorithmic optimizations,” before finally settling on something like, “I’ve always known this exact thing DeepSeek did would be done, I just didn’t do it myself or tell anyone about it.” But mixed into the navel-gazing were a few good ideas that, for whatever reason, all seemed to relate to Jevons’ Paradox.

In brief, Jevons’ Paradox suggests that as technological advancements improve the efficiency with which a resource is used, total consumption of that resource may actually increase rather than decrease. This occurs because increased efficiency lowers costs, which in turn drives greater demand for the resource. William Stanley Jevons put this idea out into the world in an 1865 book that examined the relationship between coal consumption and the efficiency of steam engine technology. Modern examples include energy efficiency and electricity use, fuel efficiency and driving, and, now, AI.
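To make the mechanism concrete, here is a minimal, purely illustrative sketch (the prices and the constant-elasticity demand curve are my assumptions, not anything from Jevons or DeepSeek): when demand is sufficiently price-elastic, halving the per-unit cost of a resource more than doubles the units consumed, so total spending on, and total consumption of, that resource goes up.

    # Illustrative only: a toy constant-elasticity demand curve showing how an
    # efficiency gain can raise total resource consumption (Jevons' Paradox).
    # The elasticity value and prices are assumptions made up for this sketch.
    def units_demanded(price, elasticity=1.5, scale=1000):
        # Units consumed at a given price; elasticity > 1 means demand is price-elastic.
        return scale * price ** (-elasticity)

    old_price, new_price = 1.00, 0.50  # an efficiency gain halves the unit price
    old_units, new_units = units_demanded(old_price), units_demanded(new_price)

    print(f"units consumed: {old_units:.0f} -> {new_units:.0f}")   # 1000 -> 2828
    print(f"total spend: {old_price * old_units:.0f} -> {new_price * new_units:.0f}")   # 1000 -> 1414

With an elasticity of 1.5 in that toy setup, the cheaper resource ends up absorbing more total spend than the expensive one did, which is exactly the dynamic the commentators below keep pointing at.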

Which brings us to DeepSeek. I refuse to summarize what they did, how they did it, how much they said it cost, what it did to tech stocks, and what every tech CEO had to say about it. To the bookmarks! 

Mustafa Suleyman, co-founder of DeepMind (later acquired by Google) and now CEO of Microsoft AI, wrote on X on Jan. 27: “We’re learning the same lesson that the history of technology has taught us over and over. Everything of value gets cheaper and easier to use, so it spreads far and wide. It’s one thing to say this, and another to see it unfold at warp speed and epic scale, week after week.” 

Microsoft reported Q2 fiscal year 2025 financials this week. During the Q&A portion of the call, CEO Satya Nadella also touched on this: “When token prices fall, inference computing prices fall, that means people can consume more.” More on Microsoft’s Q2 here.

Now to AI luminary Andrew Ng, fresh off an interesting AGI panel at the World Economic Forum’s annual meeting in Davos. Posting Jan. 30 on LinkedIn, he posited that the DeepSeek of it all “crystallized, for many people, a few important trends that have been happening in plain sight: (i) China is catching up to the U.S. in generative AI, with implications for the AI supply chain. (ii) Open weight models are commoditizing the foundation-model layer, which creates opportunities for application builders. (iii) Scaling up isn’t the only path to AI progress. Despite the massive focus on and hype around processing power, algorithmic innovations are rapidly pushing down training costs.”

More on open weight models, as opposed to closed or proprietary ones. Ng commented that “A number of US companies have pushed for regulation to stifle open source by hyping up hypothetical AI dangers such as human extinction.” This very, very much came up on that WEF panel. “It is now clear that open source/open weight models are a key part of the AI supply chain: many companies will use them.” If the US doesn’t come around, “China will come to dominate this part of the supply chain and many businesses will end up using models that reflect China’s values much more than America’s.”

Ng wrote that OpenAI’s o1 costs $60 per million output tokens, whereas DeepSeek’s R1 costs $2.19 per million output tokens. “Open weight models are commoditizing the foundation-model layer…LLM token prices have been falling rapidly, and open weights have contributed to this trend and given developers more choice.”

Now to Pat Gelsinger, the former CEO of Intel and VMware, posting to LinkedIn on Jan. 27. The DeepSeek discussion “misses three important lessons that we learned in the last five decades of computing,” he wrote.

  1. “Computing obeys the gas law…It fills the available space as defined by available resources (capital, power, thermal budgets, [etc…]…Making compute available at radically lower price points will drive an explosive expansion, not contraction, of the market.” 
  2. “Engineering is about constraints.”
  3. “Open wins…we really want, nay need, AI research to increase its openness…AI is much too important for our future to allow a closed ecosystem to ever emerge as the one and only in this space.” 

And our final bookmark is from a book, Insull, a biography of Thomas Edison’s right-hand man, Samuel Insull, by historian Forrest McDonald. Summarizing Insull’s approach to making electricity a mass-market product, McDonald described the largely forgotten titan’s philosophy: “Sell products as cheaply as possible—not because price competition dictated it; far from it. Rather, it stemmed from Insull’s radical belief, which Edison usually shared, that lower prices would bring greater volume, which would lower unit costs of production and thus yield greater profits.” 

That’s it for now. Happy reading. 

ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.