Mainstreaming spatial compute, and giving AI the context it needs for hyper-personalization, is a “Qualcomm-sized problem”
MAUI—Qualcomm is the leader in on-device gen AI; the company effectively defined the concept, brought it to life from a hardware, software and ecosystem perspective, and took it to market with its OEM partners. If you subscribe to the idea that agentic AI systems will decompose app-centric, handset-focused experiences into something completely new, one of the next steps relies heavily on glasses.
Qualcomm Senior Vice President and General Manager of XR and Spatial Computing Ziad Asghar has been in his current role for about six months; before that he focused on gen AI product roadmap and strategy. Speaking with media at the company’s annual Snapdragon Summit, he said the intersection of AI and XR is an area “where we can do a lot.” Reflecting on the history of XR device adoption, he acknowledged some false starts, but reiterated that a turning point in terms of device performance, power, price and adoption is coming in one to two years.
To illustrate the types of experiences that are in the relatively near future, Asghar gave the example of walking into a business meeting wearing smart glasses that render, in your field of vision, information about the people in the room: their names, titles, your history of interactions with them and so on. That's possible today and could probably scale up and out fairly easily through an integration of glasses and retrieval-augmented generation applied to LinkedIn, for example. And that's the point, really. Most of the pieces are in place today; it's just a matter of assembling them.
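As a rough illustration of how those pieces might be assembled, the sketch below wires a name captured by the glasses' camera into a retrieval step over a contact index and a generation step that composes a one-line briefing. Everything here is a hypothetical stand-in: `read_badge_text`, `ContactStore`-style lookup and `generate_briefing` are assumptions for the sake of the example, not Qualcomm's, Meta's or LinkedIn's actual APIs, and a real pipeline would use on-device OCR, an embedding store and a small on-device LLM.

```python
# Hypothetical sketch of a retrieval-augmented briefing for smart glasses.
# All names and functions are illustrative assumptions, not a vendor API.

from dataclasses import dataclass


@dataclass
class Profile:
    name: str
    title: str
    last_interaction: str


# Stand-in for a contact index; in practice this would be an embedding
# store built from CRM/LinkedIn-style data the wearer is entitled to see.
CONTACTS = {
    "ziad asghar": Profile(
        "Ziad Asghar",
        "SVP & GM, XR and Spatial Computing",
        "Met at the Snapdragon Summit media briefing",
    ),
}


def read_badge_text(frame: str) -> str:
    """Placeholder for on-device OCR over a camera frame."""
    return frame.strip().lower()  # pretend the frame is already text


def retrieve(name: str) -> Profile | None:
    """Retrieval step: pull stored context for the recognized name."""
    return CONTACTS.get(name)


def generate_briefing(profile: Profile | None) -> str:
    """Generation step: a small on-device model would compose this
    from the retrieved context in a real pipeline."""
    if profile is None:
        return "No context found."
    return f"{profile.name}, {profile.title}. {profile.last_interaction}."


if __name__ == "__main__":
    name = read_badge_text("Ziad Asghar")  # simulated badge capture
    print(generate_briefing(retrieve(name)))
```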
But it is a work in progress, and there are still some clear gaps that, fortunately, seem readily bridged. To wit, I was wearing the Meta Ray Ban glasses during the meeting and told Asghar that I didn't think the built-in Meta AI assistant would be able to recognize him. And this isn't a matter of the glasses taking a picture of him, then running it up to a cloud-based model to compare against every picture ever posted online. He was wearing a badge with his name on it. In theory, this is easy to crack because the AI only needs to read his name and repeat it back to me. But I use the AI on those glasses very regularly and had a hunch they'd whiff on this. They did. I digress…
The vision Asghar articulated involves a "personal constellation" of devices—glasses, handset, headphones, watch and others—all running right-sized, multi-modal AI models that leverage a constant flow of context to create "a personalized model to each one of us." "What I think is, to be able to get the best gen AI experience on the device, you need these devices to be working together," he said.
He continued: "It's the right time for AI to get into XR to be able to create some of these amazing use cases … Gen AI and XR working together is the best scenario." He said the scale of innovation is broad and the pace is brisk. The outlook is "very, very promising" and putting those pieces together is "a Qualcomm-sized problem."