
Fog computing: A new architecture to connect the edge to the cloud

What’s the difference between fog computing and edge computing?

“I think edge is to fog as…apple is to fruit,” Helder Antunes, chairman of the OpenFog Consortium explained. “We look at fog computing as an end-to-end architecture from the cloud to the very thing connected. It encompasses bits and pieces from the cloud. Edge means different things to different people. Edge is of course a big part of it. When you talk about fog nodes and fog gateways, aggregating sensory devices at the edge, it is in essence an aspect of edge computing.”

Matthew Vasey, director of IoT business development for Microsoft, an OpenFog Consortium board member and OPC Foundation board member, said, “When I think about edge, you have edge devices, a bunch of transport, then a cloud. We think that there needs to be additional topologies that include gateways and compute that go deeper into the network.”

How does this fit into 5G?

Network infrastructure vendors are pushing hard to develop the underlying 5G technologies that will support a broad range of applications leveraging the ultra-high capacity and very low latency promised by the emerging 5G New Radio specification. From hardware to software, a big piece of this is cloud-native design. But, in some cases, transporting data all the way to the cloud isn't enough. That means compute and processing power has to move closer to the end user. Enter edge computing.

Case in point: Ericsson’s investment arm Ericsson Ventures has invested in a company called Realm, which has developed a mobile platform focused on real-time, instantaneous data delivery for applications on iOS, Android and a variety of cloud environments.

Ericsson Ventures VP Paul McNamara called it the “edge cloud.” Realm CEO Alexander Stigsen discussed the investment as helping to “solve the biggest mobile development challenges, and fulfill the potential of ultrafast 5G networks, powerful mobile devices and limitless developer imagination.”

The big picture here is the simultaneously centralized and decentralized architecture 5G will likely require. Think of a mobile augmented or virtual reality experience. The latency requirements of keeping that experience in step with a human's ability to process imagery (it has to be fast, because lag can make a user feel nauseous) mean a centralized cloud isn't enough. That processing power has to sit at the network edge.
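A back-of-envelope calculation shows why. The figures below are illustrative assumptions, not from the article: a commonly cited motion-to-photon budget for comfortable VR is roughly 20 milliseconds, and light in optical fiber covers roughly 200 km per millisecond. A quick sketch of the round-trip math:

```python
# Illustrative latency-budget arithmetic (assumed figures, not from the article).
# Light in fiber travels at roughly two-thirds of c, about 200 km per millisecond.
FIBER_KM_PER_MS = 200.0

# A commonly cited motion-to-photon budget for comfortable VR is ~20 ms.
MOTION_TO_PHOTON_BUDGET_MS = 20.0

def round_trip_ms(distance_km: float) -> float:
    """Propagation delay alone: one-way distance, doubled for the return trip."""
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (10, 100, 1000):
    rtt = round_trip_ms(km)
    print(f"server {km:>4} km away: {rtt:5.1f} ms of the "
          f"{MOTION_TO_PHOTON_BUDGET_MS:.0f} ms budget spent on propagation alone")
```

A data center 1,000 km away burns half the budget on propagation before any processing, queuing or rendering happens, which is the physical argument for putting compute at the edge.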

In an industrial context, maybe a remote mining location, powerful edge gateways can collect, process and analyze data more quickly than if that same data were transported back to an off-premise cloud. This speeds up the delivery of actionable data insight, and cuts down on transport costs.
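The pattern the mining example describes can be sketched in a few lines. This is a hypothetical illustration (the function and field names are invented, not any vendor's API): the gateway reduces a batch of raw sensor readings to a compact summary locally, so only a small record travels upstream.

```python
# Hypothetical sketch of edge aggregation: summarize raw readings at the
# gateway and ship one compact record to the cloud instead of every sample.
# All names here are illustrative, not a real product's API.
from statistics import mean

def summarize(readings: list[float]) -> dict:
    """Reduce a batch of raw readings to the fields the cloud actually needs."""
    return {
        "count": len(readings),
        "mean": mean(readings),
        "min": min(readings),
        "max": max(readings),
    }

raw = [20.1, 20.3, 57.9, 20.2, 20.4]  # e.g. vibration samples from one machine
summary = summarize(raw)
print(summary)  # one small record upstream; the spike is still visible in "max"
```

The local analysis (here just min/max/mean) is what makes the anomaly actionable on-site without waiting for a cloud round trip.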

From an operator perspective, whether it’s autonomous driving, AR/VR-type applications or myriad data-intensive smart city initiatives, moving compute, storage and processing power closer to the network edge will be key to delivering on 5G use cases. That was the message from Alicia Abella, AT&T VP of Advanced Technology Realization, during the Fog World Congress in November.

“In order to achieve some of the latency requirements of these [5G] services, you’re going to need to be closer to the edge. Especially when looking at say the autonomous vehicle where you have mission critical safety requirements. When we think about the edge, we’re looking at being able to serve these low latency requirements for the application.”

She listed a number of benefits to operators that can be derived from edge computing:

  • A reduction in backhaul traffic;
  • Cost reduction by decomposing and disaggregating access functions;
  • Optimization of central office infrastructure;
  • Improved network reliability by distributing content between the edge and centralized data centers;
  • And delivery of innovative services not possible without edge computing.

“We are busy thinking about and putting together what that edge compute architecture would look like,” Abella said. “It’s being driven by the need for low latency.” In terms of where, physically, edge compute power is located “depends on the use case. We have to be flexible when defining this edge compute architecture. There’s a lot of variables and a lot of constraints. We’re actually looking at optimization methods.”

So where does all this compute power come from?

The data processing and storage needs of the internet of things (IoT) will quickly outgrow our current model of centralized, data center-based computing. Additionally, latency-sensitive use cases demand geographically decentralized resources as a function of physics. This decentralization, known as edge computing, is seen as an integral part of future networks. But where does all of this equipment go?

To draw a parallel, think of the myriad problems carriers have had deploying small cells at scale. Access to power, backhaul and site acquisition, as well as varying regulatory regimes, have significantly slowed operator ambitions related to network densification. Take that same paradigm and apply it to the servers, gateways and other network equipment that comprise edge computing at scale. Where is all of this stuff supposed to live? And, following a traditional real estate-based deployment model, could it even be deployed with enough velocity to support a rapid commercialization and monetization of 5G?

Meet Robert MacInnis, CEO and co-founder of ActiveAether. This is a nascent venture, but MacInnis has already commercialized AetherStore, a software solution that combines unused space on existing networked computers to create on-premise storage. So, in an enterprise context, say you have 1,000 workstations that are used 10 hours a day. Why not use the pooled, unused storage of those terminals to meet your storage needs and save a hefty monthly bill from Amazon Web Services? Now, extend that concept beyond storage to compute power.
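The pooling idea can be illustrated with a minimal sketch. To be clear, this is not AetherStore's actual design; the data model and the greedy placement policy below are assumptions made purely to show the concept of spreading storage chunks across idle machines with spare capacity.

```python
# Illustrative sketch of pooling idle workstation capacity (NOT AetherStore's
# actual design): spread storage chunks across idle machines with free space.
from dataclasses import dataclass

@dataclass
class Workstation:
    name: str
    free_gb: float
    idle: bool

def place_chunks(nodes: list[Workstation], chunk_gb: float,
                 chunks: int) -> dict[str, int]:
    """Greedy round-robin placement over idle machines with enough spare space.

    Returns a mapping of workstation name -> number of chunks placed there.
    """
    candidates = sorted(
        (n for n in nodes if n.idle and n.free_gb >= chunk_gb),
        key=lambda n: n.free_gb, reverse=True,
    )
    placement: dict[str, int] = {}
    for i in range(chunks):
        node = candidates[i % len(candidates)]
        node.free_gb -= chunk_gb
        placement[node.name] = placement.get(node.name, 0) + 1
    return placement

pool = [Workstation("ws-01", 120.0, True),
        Workstation("ws-02", 80.0, True),
        Workstation("ws-03", 60.0, False)]  # busy machine is skipped
print(place_chunks(pool, 5.0, 4))
```

A production system would also replicate chunks for durability and handle machines going offline; the sketch only shows the core resource-pooling idea.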

“Our technology,” MacInnis told RCR Wireless News, “enables the utilization of any computational resource whether it be a laptop, a tablet, even a smart phone, or a work station PC, or a server, or indeed your cloud endpoint, or any technology in the future for that matter, to be utilized as a provider of software services.”

The Search for Extraterrestrial Intelligence (SETI) monitors and analyzes electromagnetic radiation for any markers that would suggest communications from aliens. UC Berkeley runs a project called SETI@home wherein people like you or me can download a program that analyzes data collected by radio telescopes.

“That is grid computing over the top of this end user resource pool,” MacInnis explained. “It has some success, some worldwide uptake, but it’s all donation-based, it’s all people lending their computing resources for free either for fun or for some good will activity.”
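The volunteer-grid pattern MacInnis describes can be sketched as a coordinator handing out work units that volunteers process and report back. Everything here is a simplified assumption (the queue, the work-unit shape, the stand-in analysis function), not SETI@home's actual protocol, and real volunteers would of course run concurrently on many machines rather than in one loop.

```python
# Minimal single-process sketch of the volunteer-grid pattern: a coordinator
# queues work units; volunteers pull, compute, and report results.
# Illustrative only; not SETI@home's actual protocol.
from queue import Queue

def coordinator(work_units: list[dict]) -> Queue:
    """Load work units into a shared queue for volunteers to pull from."""
    q: Queue = Queue()
    for wu in work_units:
        q.put(wu)
    return q

def volunteer(q: Queue, results: dict) -> None:
    """Pull work units until the queue is drained, reporting each result."""
    while not q.empty():
        wu = q.get()
        # Stand-in for real signal analysis of a telescope data chunk.
        results[wu["id"]] = sum(wu["samples"])

units = [{"id": f"wu-{i}", "samples": list(range(i + 1))} for i in range(3)]
out: dict = {}
volunteer(coordinator(units), out)
print(out)
```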

ABOUT AUTHOR

Sean Kinney, Editor in Chief
Sean focuses on multiple subject areas including 5G, Open RAN, hybrid cloud, edge computing, and Industry 4.0. He also hosts Arden Media's podcast Will 5G Change the World? Prior to his work at RCR, Sean studied journalism and literature at the University of Mississippi then spent six years based in Key West, Florida, working as a reporter for the Miami Herald Media Company. He currently lives in Fayetteville, Arkansas.