For operators, Xeon 6 with performance cores supports vRAN acceleration and edge AI use cases
As Intel goes through a leadership transition and rumors of a potential split between its foundry and products businesses make the rounds, the company took a major step in developing its AI data center offerings with the release of the Xeon 6 series of CPUs with performance cores (P-cores). The company’s interim co-CEO and CEO of Intel Products, Michelle Johnston Holthaus, said in a statement that the Xeon 6 series “delivers the industry’s best CPU for AI and groundbreaking features for networking, while simultaneously driving efficiency and bringing down the total cost of ownership.”
The Xeon 6 series comes in two models, the 6700 and 6500, and brings a 1.4x performance improvement compared to its predecessor, according to Intel. The company highlighted the applicability of Xeon 6 as a “foundational” CPU for AI systems, “pairing exceptionally well with a GPU as a host node CPU.” Specific to use in a virtualized radio access network (vRAN), Intel has integrated acceleration for compute-intensive workloads, branded vRAN Boost, into the system-on-chip. According to the company, the latest CPUs “deliver up to 2.4x the RAN capacity and a 70% improvement in performance-per-watt compared to previous generations.”
In its announcement of Xeon 6, Intel spotlighted work it has done with major network equipment providers Ericsson, HPE and Samsung, as well as with AT&T, Verizon and Vodafone, operators all committed to Open RAN. Ericsson has ported its Cloud RAN software to Xeon 6, and the two firms “are amplifying their collaboration to bring Intel Xeon 6-based Cloud RAN solutions to market.”
HPE and Intel are partnered on “driving the advancement of [vRAN] with integrated platforms…In 2025, the companies will continue their collaboration to grow telco edge and Open RAN based on Intel Xeon 6 SoCs.”
Samsung has also integrated its own vRAN software with Intel’s Xeon 6 to “enable operators to dramatically improve [TCO] by consolidating RAN workloads from multiple servers to just a single server.” The companies are working on AI-enabled use cases for the RAN, including “enhanced energy efficiency, traffic steering and improved spectral efficiency.”
On the operator side, Intel’s launch came with commentary from AT&T, Verizon and Vodafone, all of which have embraced multi-vendor Open RAN architectures. AT&T’s Rob Soni, vice president of RAN technology, called out work with Ericsson and Intel “to build the world’s most open, programmable and reliable RAN network.” He said AT&T would start deploying Xeon 6 this year.
Verizon, which this week announced the deployment of an Open RAN RIC using a Qualcomm platform to host a Samsung energy management app, is “working with Intel to develop the next-gen high-compute-density vRAN server…that doubles our RAN compute capacity and enables greater energy efficiency, multitenancy and a lower [TCO],” SVP of Global Networks and Technology Adam Koeppe said in a statement. He also noted that more than 40% of Verizon’s 5G RAN footprint is virtualized, “in addition to our entire 5G core and edge.”
Vodafone Head of Open RAN Paco Martin pointed to the operator’s UK network as demonstrating “that open and virtualized networks built on Intel Xeon can compete with advanced legacy radio access networks. We look forward to continuing our close collaboration with Intel.”
Holthaus spoke at length about Xeon and the AI data center opportunity during Intel’s most recent earnings call. She said 2025 “is all about improving Xeon’s competitive position as we fight harder to close the gap to competition…The world’s data center workloads still primarily run on Intel silicon, and we have a strong ecosystem, especially within enterprise. We’re going to leverage these strengths as we work to stabilize our market share in 2025.”
AI data centers, she continued, represent “an attractive market for us over time, but I am not happy with where we are today. On the one hand, we have a leading position as the host CPU for AI servers. And we continue to see a significant opportunity for CPU-based inference on-prem and at the edge as AI-infused applications proliferate.” Edge inference is a hot topic in the larger AI discourse given the cost, performance and scalability benefits that come with localized inference, including on-device, on enterprise premises and at the edge of cellular networks.
It’s important to note that this latest round of Xeon 6 sports the P-cores; a related Xeon 6 launch last year focused on SoCs with efficiency cores (E-cores). The E-core parts debuted in June of last year, and ahead of Mobile World Congress in Barcelona next week, Intel gave an update on how Xeon 6 processors with E-cores have supported 5G core network deployments.
Intel’s Alex Quach, vice president and general manager of the Wireline and Core Network Division, said “infrastructure efficiency, power savings and uncompromised performance are essential criteria for” operators. He specifically noted how Intel Infrastructure Power Manager software “is showing tremendous progress in reducing server power in [operator] environments on existing and new infrastructure.”