
Agentic AI—will 2025 be a breakout year?

So you want to turn your data and abstracted business logic into use cases that carry a clear ROI? And you want to do it autonomously? Well, good news: 2025 could be a breakout year for agentic AI. 

What is agentic AI? 

The name is apt: agentic AI refers to an artificial intelligence system with the agency (within guardrails) to go beyond augmenting a process or workflow to automating it, accessing data and one or more applications to take your intent and turn it into an outcome. Also worth noting is the concept of embodied cognition, wherein an AI system can only truly have agency if it can physically interact with the world around it. NVIDIA CEO Jensen Huang charted the company's course to physical AI during a recent CES keynote. More on that here and here. 

In the keynote, Huang also described agentic AI as “AIs that can perceive, reason, plan and act.” He called agentic AI “one of the most important things that’s happening in the world of enterprise…Agentic AI basically is a perfect example of test-time scaling.” 

He gave an example of an agentic AI system working on your behalf to retrieve information from storage, go on the internet, study a PDF, use a calculator, tap into generative AI tools to generate a chart, and so on. “It’s taking the problem you gave it, breaking it down step-by-step, and it’s iterating through these different models.” 
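The pattern Huang describes can be sketched in a few lines: take an intent, break it into steps, and iterate through tools until done. The tools and the planner below are hypothetical stand-ins for illustration, not any vendor's actual API; a real agent would use an LLM to decompose the intent rather than a hardcoded plan.

```python
# Minimal sketch of an agentic loop: plan, then iterate through tools.
# All names here (retrieve_from_storage, use_calculator, plan) are
# illustrative assumptions, not a real framework.

def retrieve_from_storage(key: str) -> str:
    # Stand-in for fetching a document from storage.
    store = {"q4_report": "revenue: 120, costs: 90"}
    return store.get(key, "")

def use_calculator(expression: str) -> float:
    # Stand-in for a calculator tool (eval restricted for the sketch).
    return float(eval(expression, {"__builtins__": {}}))

TOOLS = {"retrieve": retrieve_from_storage, "calculate": use_calculator}

def plan(intent: str) -> list[tuple[str, str]]:
    # A real agent would have a model decompose the intent step by step;
    # here the decomposition is hardcoded to keep the sketch self-contained.
    if intent == "What was Q4 profit?":
        return [("retrieve", "q4_report"), ("calculate", "120 - 90")]
    return []

def run_agent(intent: str):
    # Iterate through the planned steps, invoking one tool per step.
    results = []
    for tool_name, arg in plan(intent):
        results.append(TOOLS[tool_name](arg))
    return results[-1] if results else None

print(run_agent("What was Q4 profit?"))  # -> 30.0
```

The essential point is the loop itself: the agent, not the user, decides which tool to invoke at each step.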

Huang said NVIDIA’s go-to-market strategy for agentic AI is to work with enterprise software providers like ServiceNow and SAP. “In the future,” he said, “these AI agents are essentially digital workforce that are working alongside your employees, doing things for you on your behalf. In a lot of ways, the IT department of every company is going to be the HR department of AI agents in the future…They’ll maintain, nurture, onboard and improve a whole bunch of digital agents and provision them to the companies to use. And so your IT department is going to become kind of like AI agent HR.” 

Dell Technologies Chief Technology Officer John Roese, writing in a 2025 prediction piece published in Forbes, said “‘Agentic’ will be the word of the year in 2025. The birth of agentic AI architecture marks a new chapter in human-AI interaction…Gen AI tools are evolving to enable AI agents, which are poised to revolutionize how we engage with AI systems.” 

For consumers, this looks like increasingly advanced virtual assistants and chatbots. “These agents will operate autonomously, communicate in natural language and interact with the world around them, including working in teams of other agents and humans,” Roese wrote. 

For enterprises, Roese said agentic AI will be optimized for specific tasks, including code generation and review, infrastructure administration, business planning and cybersecurity. But, to achieve this efficiency boost, "Enterprises must upgrade infrastructure—everything from data centers to AI PCs. This distributed infrastructure optimized for agentic AI can address security, sustainability and capacity considerations by distributing the AI workload across the entire IT infrastructure (cloud, data center, edge and device)." 

The role of the edge in agentic AI

Edge AI is a growing area of interest. Processing AI workloads on-device, or one hop from the device at an access point, on-prem data center, radio access network base station or some other piece of distributed infrastructure kitted out with the necessary compute, delivers numerous benefits to the user, including contextual awareness, lower latency, personalization and privacy. Aside from that, thinking of AI as a continuum that spans the device, the edge and the cloud is critical to making the economics of the whole thing work. Running AI inferencing—or post-training and test-time training, or even federated learning—on the edge saves the time and cost of piping data back to a centralized cloud. 
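One way to picture that continuum is as a routing decision: send an inference request to the nearest tier that can handle it, falling back toward the cloud. The tier names, model-size limits and latency figures below are illustrative assumptions for the sketch, not any specific product's behavior.

```python
# Sketch of device-edge-cloud routing for an inference request.
# Capability numbers are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_model_size_b: float  # largest model (billions of params) the tier can serve
    round_trip_ms: int       # typical network round trip to reach the tier

TIERS = [
    Tier("device", 3, 0),     # small language model on the handset or AI PC
    Tier("edge", 30, 10),     # access point, on-prem data center, RAN base station
    Tier("cloud", 1000, 80),  # centralized data center
]

def route(model_size_b: float, latency_budget_ms: int) -> str:
    # Prefer the lowest-latency tier that fits the model and meets the budget.
    for tier in TIERS:
        if model_size_b <= tier.max_model_size_b and tier.round_trip_ms <= latency_budget_ms:
            return tier.name
    return "cloud"  # fall back to the cloud if nothing closer qualifies

print(route(1, 50))    # small model fits on-device -> "device"
print(route(8, 50))    # mid-size model -> "edge"
print(route(70, 200))  # large model -> "cloud"
```

The economics argument above is exactly this: every request the first two branches catch never incurs the cost of piping data back to a centralized cloud.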

Qualcomm Chief Executive Officer Cristiano Amon took on edge AI this week during the World Economic Forum's annual meeting in Davos, Switzerland; video here. Amon traced the history of human-computer interaction from command line interfaces, to graphical interfaces controlled with a mouse, and on to natural language processing and AI, which, he said, enables "computers [to] now understand human language…You can now communicate with the computer." 

Amon tracked trends around increasingly effective small language models with multiple modalities capable of processing and taking action on different types of inputs. 

"If you look at different directions AI is going, and you have now very effective, compressed SLM…you have mix of experts coming up…you have multi-modal ability to process different types of inputs. You combine that with the computing you have on the edge, you see an incredible transformation especially as you move from training to production…We have done a number of demonstrations of combining a small model into a device at the edge with the big model in the cloud…I think this will accelerate…the use of AI and the use of computing at the edge broadly." 

Amon’s colleague Durga Malladi, senior vice president and general manager of technology planning and edge solutions, talked up edge AI at the recent Consumer Electronics Show. Specific to on-device agents, he said they will co-evolve with the user. “Over time there is a personal knowledge graph that evolves. It defines you as you, not as someone else.” Localized context, made possible by edge AI, will continuously improve in efficacy. “Lots of work to be done in that space though,” he acknowledged.

What happens to SaaS with the rise of agentic AI? 

Back to this idea of agentic AI accessing the databases and business logic of multiple apps, whether enterprise or consumer, and using that functionality to perform some task autonomously—in this scenario, the agent is essentially using a computer the way you'd use a computer. From the perspective of a SaaS provider, if the agent is from a third party, this could relegate your offering to nothing more than a database. And if the agent is from an established SaaS provider and working across other SaaS providers' applications, that would likely require a good deal of backend technological integration and a hard business discussion. The operating system provider may also have something to say about a full-on agentic AI system. 

This topic came up in a recent interview of Microsoft Chief Executive Officer Satya Nadella. Asked by Bill Gurley and Brad Gerstner about how this all plays out, Nadella said, “The SaaS applications…the approach at least we are taking is, I think the notion that business applications exist, that’s probably where they’ll collapse in the agent era…They are essentially cloud databases with a bunch of business logic. The business logic is all going to these agents…They’re not going to discriminate what the backend is…All the logic will be in the AI tier so to speak. Once the AI tier becomes the place where all the logic is, people will start replacing the backends.” 

Bottom line, Nadella said, "I think there will be disruption." Using AI for reasoning and collaborating with colleagues will be the "new workflow," he said. 

ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.