Dell Technologies talks the device-edge-datacenter-cloud AI continuum, the rise of agentic AI and the outlook for federated learning
For years, AI discourse has been dominated by massive cloud-based models, trained on enormous datasets and running in centralized data centers. But as AI deployment scales, the next frontier isn’t just in the cloud—it’s at the edge. AI is increasingly embedded in factories, hospitals, energy grids, and countless other real-world environments where immediate, data-driven decision-making is critical.
RCR Wireless News recently spoke with Pierluca Chiodelli, vice president of engineering technology, edge portfolio product management and customer operations at Dell Technologies, about the evolution of AI at the edge, the rise of agentic AI, and how federated learning is reshaping AI deployment models. His insights underscored a fundamental shift happening in AI infrastructure: businesses are moving away from a cloud-first mindset and embracing a hybrid AI model that connects a continuum across devices, the edge and the cloud.
Agentic AI and the need for real-time adaptation
One of the most significant trends shaping AI’s future is agentic AI, a concept that envisions autonomous, interconnected AI agents that can operate dynamically without human intervention. For edge environments, this could be a game-changer.
Chiodelli used computer vision in manufacturing as an example. Today, quality control AI models run inference at the edge—detecting defects, deviations, or inefficiencies in a production line. But if something unexpected happens, such as a subtle shift in materials or lighting conditions, the model can fail.
The process to retrain the model takes forever, Chiodelli explained. You have to manually collect data, send it to the cloud or a data center for retraining, then redeploy an updated model back to the edge. That’s a slow, inefficient process.
Agentic AI could fundamentally change this. Instead of relying on centralized retraining cycles, AI agents at the edge could autonomously detect when a model is failing, collaborate with other agents, and correct the issue in real time. “Agentic AI, it actually allows you to have a group of agents that work together to correct things.”
For industries that rely on precision, efficiency, and real-time adaptability, such as manufacturing, healthcare, and energy, agentic AI could lead to huge gains in productivity and ROI. But, Chiodelli noted, the challenge lies in standardizing communication protocols between agents—without that, autonomous AI systems will remain fragmented. He predicted an inter-agent “standard communication kind of API will emerge at some point.” Today, “You can already do a lot if you are able to harness all this information and connect to the AI agents.”
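The pattern Chiodelli describes, one agent noticing model drift and enlisting another to correct it, can be illustrated with a minimal sketch. The message schema and agent names below are purely hypothetical stand-ins for the standard inter-agent API he predicts will emerge; nothing here reflects an actual Dell product interface.

```python
import json

def make_message(sender, intent, payload):
    """Stand-in for a future standardized inter-agent message format;
    this schema is illustrative only."""
    return json.dumps({"sender": sender, "intent": intent, "payload": payload})

def monitor_agent(confidences, threshold=0.8):
    """A monitoring agent at the edge watches inference confidence and
    flags drift (e.g., after a lighting or materials change on the line)."""
    avg = sum(confidences) / len(confidences)
    if avg < threshold:
        return make_message("monitor", "request_retrain",
                            {"avg_confidence": round(avg, 2)})
    return None

def retrain_agent(message):
    """A second agent receives the request and triggers local retraining,
    with no round trip to a central data center."""
    req = json.loads(message)
    if req["intent"] == "request_retrain":
        return f"local retraining triggered (avg confidence {req['payload']['avg_confidence']})"

# Recent predictions have drifted below the confidence threshold.
msg = monitor_agent([0.92, 0.71, 0.65, 0.70])
if msg:
    print(retrain_agent(msg))
```

The key point the sketch captures is that detection, communication, and correction all happen at the edge; the shared message format is what today's fragmented agent systems lack.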
Edge AI, federated learning and the economics of AI at scale
Beyond agentic AI, Chiodelli sees another major trend driving the decentralization of AI: federated learning and edge AI. “It’s clear [that] more and more data is being generated at the edge,” he said. “And it’s also clear that moving that data is the most costly thing you can do.”
The traditional AI pipeline—where data is collected at the edge, transferred to a cloud or data center for training, and then deployed back in an edge environment—is simply not sustainable at scale. Not only is it expensive, but it also introduces security, privacy, and latency concerns—especially in regulated industries like healthcare, finance and critical infrastructure.
Federated learning offers a compelling solution. Instead of moving massive datasets, federated learning trains models locally at the edge and only transmits updates back to a central model, dramatically reducing data transfer costs and improving privacy. Federated learning minimizes network usage, reduces privacy risks, and enables real-time adaptability at the edge, Chiodelli explained. “It’s the next wave of AI, allowing us to scale AI across millions of devices without centralizing data.”
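The mechanism Chiodelli describes can be sketched in a few lines of NumPy: each device trains on its own data and returns only model weights, which a coordinator averages. This is a toy federated-averaging round on a linear model, not any particular framework or Dell implementation; the device data here is synthetic.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train locally on one device's private data; only the updated
    weights ever leave the device, never the raw readings."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def federated_average(global_w, device_data):
    """One federated round: devices train in place, the coordinator
    averages the returned weights. No dataset is centralized."""
    updates = [local_update(global_w, X, y) for X, y in device_data]
    return np.mean(updates, axis=0)

# Three simulated edge devices, each holding its own local samples.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=50)
    devices.append((X, y))

w = np.zeros(2)
for _ in range(20):
    w = federated_average(w, devices)
print(np.round(w, 2))  # converges toward the true weights without pooling data
```

What crosses the network each round is one small weight vector per device, which is the cost and privacy advantage Chiodelli points to.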
Smart utility meters serve as a real-world example. Traditionally, energy providers collect data from thousands or millions of meters, process it centrally, and use it for demand forecasting, outage detection and operational efficiency. But what if each meter could contribute to a federated AI system, analyzing data locally and only sharing essential insights?
“Instead of transferring raw data, AI models at the edge can process it, extract insights, and send back only what’s necessary,” Chiodelli said. “That reduces costs, improves response times, and ensures compliance with data privacy regulations.”
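As a concrete illustration of "send back only what's necessary," the sketch below has a meter reduce a window of raw readings to a handful of fields. The field names and spike threshold are hypothetical choices for this example, not a real utility schema.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class MeterInsight:
    """The compact payload a meter transmits instead of raw readings."""
    meter_id: str
    mean_kw: float
    peak_kw: float
    anomaly: bool  # flagged locally, e.g. a sudden demand spike

def summarize(meter_id, readings_kw, spike_threshold=10.0):
    """Process readings on the device and keep only the insights the
    utility needs for forecasting and outage detection."""
    peak = max(readings_kw)
    return MeterInsight(
        meter_id=meter_id,
        mean_kw=round(mean(readings_kw), 2),
        peak_kw=peak,
        anomaly=peak > spike_threshold,
    )

raw = [1.2, 1.4, 1.3, 11.8, 1.1]   # raw readings stay on the device
insight = summarize("meter-042", raw)
print(insight)  # four fields cross the network instead of every reading
```

Scaled to millions of meters, shrinking each transmission from a stream of readings to a few fields is where the cost and compliance gains come from.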
The AI infrastructure challenge—building and orchestrating the device-edge-datacenter-cloud continuum
AI is no longer just a cloud workload. Businesses are realizing that their most valuable AI insights often come from the edge—where operations happen in real time. But the challenge is infrastructure. Companies must balance compute distribution across thousands (or millions) of devices, edge infrastructure, on-prem datacenters and centralized clouds. That’s why Dell Technologies is focused on creating AI-ready infrastructure that spans the entire enterprise IT ecosystem.
Everyone talks about AI training in the cloud, Chiodelli said. But the reality is, once you’ve spent all the money training a model in a data center, the next step is making AI work for the business. “That means bringing AI to where the action is—at the edge.”
“I think more and more we see a shift from what we see today…to more edge…at the edge, ML was a reality 20 years ago, and also if you can add agent that controls other agents and have more generative AI for the edge use cases, you can have the perfect mix of things… then you can do all the things that I said in the same location without needing to move things around.”
A true AI ecosystem, he argues, requires:
- Seamless integration between devices, edge AI infrastructure, datacenters and the cloud.
- Interoperability between AI agents, models, and enterprise applications.
- AI infrastructure that minimizes costs, optimizes performance and scales efficiently.
This is where Dell Technologies is positioning itself in the AI space. The company provides end-to-end AI solutions, from GPU-optimized training infrastructure in data centers to high-performance edge computing platforms that can run inference in real time.
Final thoughts from Chiodelli writing in Forbes: “We should expect the adoption of hybrid edge-cloud inferencing to continue its upward trajectory, driven by the need for efficient, scalable data processing and data mobility across cloud, edge, and data centers. The flexibility, scalability, and insights generated can reduce costs, enhance operational efficiency, and improve responsiveness. IT and OT teams will need to navigate the challenges of seamless interaction between cloud, edge, and core environments, striking a balance between factors such as latency, application and data management, and security.”
AI is evolving beyond its early cloud-centric roots and becoming deeply embedded in real-world environments. Businesses that understand this shift—and invest in AI architectures built to span the device-edge-datacenter-cloud continuum—will be the ones that truly harness AI’s full potential.
For additional reading, check out Chiodelli’s recent piece in Forbes, “The edge of AI: Predictions for 2025,” and a piece from Dell Technologies CTO John Roese, “Tech’s big bang in 2025: AI is the spark igniting a new era”.