
Postcards from the edge | Cloud is (quite) secure, edge is not (always) – says Factry

Another angle on (postcard from) the critical 5G edge, in support of an upcoming editorial report on the matter; industrial IoT software provider Factry, tackling the challenge of industrial automation as “hard-boiled IT experts”, suggests the entrenched paranoia and control freakery that tends to keep Industry 4.0 workloads on-site is wrongheaded, sometimes. Legacy IT/OT systems in factories are often a mish-mash of antiquated and proprietary technologies, which are hardly managed at all, it says, and cloud security is already good, and getting better. 

Jeroen Coussement, chief executive at the Belgium-based firm, emphasising that his experience is primarily in process manufacturing, comments: “Security technologies and security awareness have come a long way in modern IT and cloud infrastructure, while a lot of on-site systems barely get updates and security patches on a regular basis. It’s not uncommon to see legacy systems in factories on outdated and unsupported operating systems. These are arguably much more prone to security risks than a well managed cloud environment.”

Coussement – mid- to long-term data has a place in the cloud

Importantly, it should be noted Coussement is discussing “mid- to long-term data”, specifically, where low-latency operational performance is not the deal-breaker, and where the urge to retain data on the enterprise edge, close to the action, stems from perceived insecurities with public cloud infrastructure. “In practice, this is often an illusion,” he says, going into an example of where “cloud or [MEC-style] in-between” compute engines might be better engaged. “We are talking about optimising process control in real-time with input from AI models,” he explains.

“Taking data from a process, running it through an AI model, and sending feedback to the process can be pretty resource intensive, and [such] resources are more easily scalable in a cloud setting.” It sounds almost, if not exactly, like the standard OT edge regime where the data processing is performed at the edge, to minimise the payload and latency for round-trips to the cloud, and the insights are pinged back from a cloud dashboard as rapid alerts, and otherwise uploaded for mid-term training of AI models. 
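That edge regime can be sketched in a few lines. The snippet below is a minimal illustration, not anything from Factry: raw sensor readings are reduced to summary features at the edge, so that only a small payload makes the round-trip to the cloud. The window size and feature set are illustrative assumptions.

```python
import statistics

def summarise_window(samples: list[float]) -> dict:
    """Reduce a window of raw sensor readings to a few features,
    so only the summary - not every sample - crosses the backhaul."""
    return {
        "mean": statistics.fmean(samples),
        "min": min(samples),
        "max": max(samples),
        "n": len(samples),
    }

# A window of vibration readings (a stand-in for a much larger batch)
# collapses into a four-field payload for the cloud-side model.
raw = [0.1, 0.3, 0.2, 0.5, 0.4]
payload = summarise_window(raw)
print(payload)
```

In Coussement’s alternative, the heavier model inference sits on the cloud side of this exchange; the edge’s job shrinks to capture and summarisation.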

Except Coussement suggests the donkeywork (“the AI model”), in certain circumstances, might be done in the cloud, and not at the edge – so that the backhaul network is required to take the strain. He clarifies: “The real-timeness is a concern here, but… less and less of one – because these are supporting and not controlling applications, [and so] short interruptions should be manageable.” He likens the case to that of game streaming platforms, where “input is taken from the user and graphics are rendered on the cloud – and streamed back in a matter of milliseconds”.

Equally, and absolutely, he concurs that critical industrial workloads must reside at the edge. “Most workloads that actually control the process on a sub-second level [latency] – in PLCs and SCADA systems, for example – should stay as close to the process as possible. Unavailability for only a few seconds… might cause a lot of damage and production loss. While cloud computing is evolving fast and latencies are dropping, it is not the real-timeness but the potential consequences of a temporary interruption that are the main driver [for edge over cloud],” he explains.

But he also flips back to the case for engaging cloud-based computing in integrated hybrid setups. “When we talk about higher-level applications – like data logging, analytics, MES systems – then this conclusion becomes less clear. Production can typically go on for a while without major hurdles when [workloads] become unavailable. With today’s high stability and availability of internet connections, more and more of this workload is shifted to the cloud, with sometimes on-edge redundancies built-in – such as data buffering in the case of real-time data collection.”
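The “data buffering” redundancy Coussement mentions is a store-and-forward pattern: an edge collector queues readings while the cloud link is down and flushes the backlog once connectivity returns. The sketch below is a hypothetical illustration of that idea; the link flag and upload callable are stand-ins, not a real API.

```python
from collections import deque

class BufferedCollector:
    """Edge-side store-and-forward buffer: readings queue locally
    during a cloud outage and ship in order when the link returns."""

    def __init__(self, upload, max_buffer=10_000):
        self.upload = upload                     # callable that ships one reading
        self.buffer = deque(maxlen=max_buffer)   # oldest dropped if full

    def record(self, reading, link_up: bool):
        self.buffer.append(reading)
        if link_up:
            self.flush()

    def flush(self):
        while self.buffer:
            self.upload(self.buffer.popleft())

sent = []
collector = BufferedCollector(upload=sent.append)
collector.record({"t": 1, "temp": 71.2}, link_up=False)  # link down: buffered
collector.record({"t": 2, "temp": 71.4}, link_up=False)  # still buffered
collector.record({"t": 3, "temp": 71.5}, link_up=True)   # link back: backlog flushed
print([r["t"] for r in sent])  # readings arrive in capture order
```

The bounded deque is one design choice among several: it trades completeness for a cap on edge storage, dropping the oldest readings in a prolonged outage rather than exhausting local disk or memory.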

He offers further clarification, in response to a question of whether certain Industry 4.0 workloads will stay on-site, always, separate from the public internet. “In high risk scenarios – think nuclear power plants, military sites – that is almost always the case. The main driver is security.” In the end, it is an architectural minefield, he says, which demands a crack multi-disciplinary unit. “You need to balance security, latency, bandwidth with industrial, corporate, and application-specific requirements – to come up with the right solution. Internal as well as external collaboration is key.”

For more on this topic, tune in to the upcoming webinar on Critical 5G Edge Workloads on September 27 — with ABI Research, Kyndryl, Southern California Edison, and Volt Active Data.

All entries in the Postcards from the Edge series are available below.

Postcards from the edge | Compute is critical, 5G is useful (sometimes) – says NTT
Postcards from the edge | Cloud is (quite) secure, edge is not (always) – says Factry
Postcards from the edge | Rules-of-thumb for critical Industry 4.0 workloads – by Kyndryl
Postcards from the edge | No single recipe for Industry 4.0 success – says PwC
Postcards from the edge | Ultra (‘six nines’) reliability – and why it’s madness (Reader Forum)
Postcards from the edge | Private 5G is reshaping the Industry 4.0 edge, says Nokia

ABOUT AUTHOR

James Blackman
James Blackman has been writing about the technology and telecoms sectors for over a decade. He has edited and contributed to a number of European news outlets and trade titles. He has also worked at telecoms company Huawei, leading media activity for its devices business in Western Europe. He is based in London.