Yesterday was a busy one with quarterly earnings calls from major AI infrastructure players Microsoft and Meta. And, as expected, the CEOs fielded questions about Chinese AI startup DeepSeek’s splashy market entry and the implications of the software optimizations and inference techniques that cut its LLM training and serving costs compared to what we’re seeing from U.S. companies. Long story short, Meta and Microsoft are bullish; they’re both still ramping investment in AI infrastructure, and they generally see DeepSeek as a catalyst rather than a development that would dampen investment in all things AI. We’ll get into the DeepSeek of it all in a separate post.
Microsoft Q2 capex was $22.6 billion with Q3 and Q4 expected to remain at similar levels
Let’s start with Microsoft. CEO Satya Nadella said enterprises “are beginning to move from proof-of-concepts to enterprise-wide deployments to unlock the full ROI of AI.” The company’s AI business passed an annual revenue run-rate of $13 billion in Q2 of its fiscal year 2025, up 175% year-over-year.
Nadella talked through the “core thesis behind our approach to how we manage our fleet, and how we allocate our capital to compute. AI scaling laws are continuing to compound across both pre-training and inference-time compute. We ourselves have been seeing significant efficiency gains in both training and inference for years now. On inference, we have typically seen more than 2X price-performance gain for every hardware generation, and more than 10X for every model generation due to software optimizations.”
He continued: “And, as AI becomes more efficient and accessible, we will see exponentially more demand. Therefore, much as we have done with the commercial cloud, we are focused on continuously scaling our fleet globally and maintaining the right balance across training and inference, as well as geo distribution. From now on, it is a more continuous cycle governed by both revenue growth and capability growth, thanks to the compounding effects of software-driven AI scaling laws and Moore’s law.”
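To make the multipliers Nadella cited concrete, here’s a quick back-of-envelope sketch. The generation counts below are hypothetical assumptions, not figures from the call; the point is simply how a 2X hardware gain and a 10X software gain compound.

```python
# Back-of-envelope on Nadella's multipliers: ~2x price-performance per hardware
# generation, ~10x per model generation. Generation counts are assumptions.

hw_gain_per_gen = 2    # price-performance multiple per hardware generation
sw_gain_per_gen = 10   # price-performance multiple per model generation

hw_generations = 2     # assumption: two hardware refreshes over the period
sw_generations = 2     # assumption: two model generations over the period

total_gain = (hw_gain_per_gen ** hw_generations) * (sw_gain_per_gen ** sw_generations)
print(f"Compound price-performance gain: {total_gain}x")              # 400x
print(f"Inference that cost $1.00 now costs ${1 / total_gain:.4f}")   # $0.0025
```

Even with these conservative counts, unit inference costs fall by orders of magnitude, which is exactly the dynamic behind Nadella’s demand argument.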
Click here for more on the AI scaling laws, including test-time training. And to Nadella’s point on cost and demand, a friendly reminder of Jevons’ paradox, which says that as technology improves the efficiency of resource use, overall consumption of that resource (in this case, AI) counterintuitively increases rather than decreases: efficiency gains drop costs, which drives demand.
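For the numerically inclined, here’s a minimal sketch of the paradox using a constant-elasticity demand curve. Every number in it, the elasticity included, is an illustrative assumption, not a figure from the earnings calls.

```python
# Jevons' paradox in miniature: with a constant-elasticity demand curve
# quantity = k * price**(-elasticity), total spend *rises* as price falls
# whenever elasticity > 1. All parameters here are illustrative assumptions.

def total_spend(price: float, k: float = 100.0, elasticity: float = 1.5) -> float:
    """Total spend = price * quantity under constant-elasticity demand."""
    quantity = k * price ** (-elasticity)
    return price * quantity

for price in [1.00, 0.50, 0.25]:  # efficiency gains cut the unit price
    print(f"price=${price:.2f}  total spend=${total_spend(price):.1f}")
# price=$1.00  total spend=$100.0
# price=$0.50  total spend=$141.4
# price=$0.25  total spend=$200.0
```

Halving the price grows demand more than enough to raise overall spend, which is the cheaper-AI-means-more-AI argument in a nutshell.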
On datacenter investment, Nadella said Microsoft’s Azure cloud is the “infrastructure layer for AI. We continue to expand our datacenter capacity in line with both near-term and long-term demand signals. We have more than doubled our overall datacenter capacity in the last three years. And we have added more capacity last year than any other year in our history. Our datacenters, networks, racks and silicon are all coming together as a complete system to drive new efficiencies to power both the cloud workloads of today and the next-gen AI workloads.”
In dollars and cents, Microsoft reported Q2 capex at $22.6 billion with more than half of cloud and AI spend on “long-lived assets that will support monetization over the next 15 years and beyond,” according to Amy Hood, Microsoft’s chief financial officer. AI spend, she said, was focused on servers, CPUs and GPUs. Hood said she expects quarterly capex in Q3 and Q4 “to remain at similar levels as our Q2 spend.”
Here’s the full Q2 earnings release.
Meta anticipates $60 billion-plus 2025 capex
Reporting its Q4 2024 results, Meta CEO Mark Zuckerberg highlighted adoption of the Meta AI personal assistant, continued development of its Llama 4 LLM, and ongoing investments in AI infrastructure. “These are all big investments,” he said. “Especially the hundreds of billions of dollars that we will invest in AI infrastructure over the long term. I announced last week that we expect to bring online almost 1 [gigawatt] of capacity this year, and we’re building a 2 [gigawatt] and potentially bigger AI datacenter that is so big that it’ll cover a significant part of Manhattan if it were placed there. We’re planning to fund all this by at the same time investing aggressively in initiatives that use these AI advances to increase revenue growth.”
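For a rough sense of scale on that 1 gigawatt figure, here’s a hypothetical back-of-envelope conversion to accelerator counts. The per-GPU power draw and PUE below are assumptions; Meta disclosed none of these numbers.

```python
# Hypothetical scale check: how many accelerators could ~1 GW of datacenter
# power feed? Per-GPU draw and PUE are assumptions, not Meta disclosures.

datacenter_power_w = 1_000_000_000  # 1 GW of total facility power
gpu_power_w = 700                   # assumed draw per accelerator
pue = 1.3                           # assumed power usage effectiveness

usable_it_power = datacenter_power_w / pue
accelerators = usable_it_power / gpu_power_w
print(f"~{accelerators / 1e6:.1f} million accelerators")  # ~1.1 million
```

Real deployments also burn power on CPU hosts, networking and storage, so the true count would be lower, but the order of magnitude shows why these facilities sprawl across a “significant part of Manhattan” worth of land.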
CFO Susan Li said Q4 capex came in at $14.8 billion “driven by investments in servers, datacenters and network infrastructure. We’re working to meet the growing capacity needs for these services by both scaling our infrastructure footprint and increasing the efficiency of our workloads. Another way we’re pursuing efficiencies is by extending the useful lives of our servers and associated networking equipment. Our expectation going forward is that we’ll be able to use both our non-AI and AI servers for a longer period of time before replacing them, which we estimate will be approximately five and a half years.” She said 2025 capex will be between $60 billion and $65 billion “driven by increased investment to support both our generative AI efforts and our core business.”
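To see why the longer useful life matters, here’s a straight-line depreciation sketch. The $10 billion server cohort and the prior five-year life are illustrative assumptions; the call gives only the new roughly 5.5-year estimate.

```python
# Straight-line depreciation sketch. The cohort size and prior useful life
# are assumptions for illustration; only the ~5.5-year figure is from the call.

cohort_cost = 10_000_000_000  # hypothetical server purchase, $10B
prior_life_years = 5.0        # assumed previous useful-life estimate
new_life_years = 5.5          # Li's stated new estimate

annual_dep_before = cohort_cost / prior_life_years
annual_dep_after = cohort_cost / new_life_years

print(f"Annual depreciation at {prior_life_years:g} yrs: ${annual_dep_before / 1e9:.2f}B")
print(f"Annual depreciation at {new_life_years:g} yrs: ${annual_dep_after / 1e9:.2f}B")
print(f"Yearly expense relief: ${(annual_dep_before - annual_dep_after) / 1e9:.2f}B")
```

Spreading the same capex over more years trims the depreciation expense hitting each year’s income statement, a meaningful offset when annual capex runs $60 billion-plus.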
Zuckerberg called out that Meta AI “reaches” north of 1 billion people, and “is already used by more people than any other assistant.” He said an existing user base of that scale “usually” comes with a durable market advantage, and he emphasized Meta’s focus on personalization. “People want their AI to be personalized to their context, their interests, their personality, their culture, and how they think about the world…I continue to think that this is going to be one of the most transformative products that we’ve made.”
Discussing Meta’s Llama family of open-weight AI models, Zuckerberg said, “I think this will very well be the year when Llama and open source become the most advanced and widely used AI models as well.” He said Llama 4, the upcoming iteration of the Llama family of models, “is making great progress in training.” Generally, he said, Llama 3 was meant to make open source models competitive with closed models, “and our goal for Llama 4 is to lead.” He said Llama 4 will be an “omni-model” and include agentic capabilities. “So it’s going to be novel and it’s going to unlock a lot of new use cases.”