Something strange is occurring in the utilities sector with the introduction of private 5G at the critical edge, says Southern California Edison. The new capabilities of private cellular are unburdening the old grid edge of low-power IoT, and liberating it for game-changing high-power IoT. Except it is not strange at all, and it is easy enough to architect, the firm reckons.
Lots of talk (in the Postcards from the Edge series) about the critical 5G edge, and how to architect and orchestrate it; but it is mostly from vendors of some description. Which is not to say it is not valuable; the sales side has broad experience of marrying together edge networking and computing solutions, and intense knowledge of the vagaries of different Industry 4.0 sectors – and how the best-laid digital-change plans can come unstuck in a wicked conspiracy of “politics, religion, and budget”. But the counter-narrative is crucial, and does not always go as you would expect.
“Most utilities don’t have a network up yet. Ameren has a few sites, but critical use cases are not really part of the initial deployment. I haven’t seen it at Ameren, I haven’t seen it at Edison. I haven’t seen talk of it at Evergy (in Kansas), or San Diego Gas and Electric (SDGE), or HECO in Hawaii – or any of the utilities deploying LTE right now. And that is partly because they are risk averse… and partly because their thresholds for uptime are much higher. They are trying to hold network vendors and managed service providers contractually to a higher level of service.”
So says Patrick Nolan, enterprise architect and senior advisor at Southern California Edison (SCE), the largest subsidiary of Edison International and the primary grid operator in Southern California – where fire and earthquake, for example, are existential threats to power in the region. If the lights go out, everything stops – darkness falls on the local economy, to an extent. So if you want to know what critical industry really thinks about the ‘critical edge’, then utility networks are the place to go. And Nolan pulls no punches.
“They don’t even know how to define it, yet,” he says, in response to the next logical question, about the kind of service level agreements (SLAs) utilities are looking to impose on vendors for ‘ultra-reliability’ in their new edge setups. “One of the challenges is the scale is completely different. The carriers operate at five-nines (99.999 percent) [reliability]. But they can have an outage in the northeast that doesn’t affect California, say, or vice versa. And that happens; all the big three have had big outages, which haven’t really affected their numbers.”
He goes on: “They’ve got 100,000 sites, or whatever, and so-what if 2,000 of them go down for 48 hours? It gets diluted in these massive numbers. Whereas utility networks are going to be in the 600-800 range. It is a lot harder to hit five nines with fewer sites.” He stops himself to make clear his previous references to SDGE and HECO are anecdotal, gleaned from conversations with peers. Either way, Nolan is well placed; he has been with SCE for 18 months, developing its experiments with LTE and 5G in CBRS airwaves.
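To put rough numbers on that dilution effect, consider a back-of-envelope sketch (with hypothetical site counts and outage durations, not SCE’s or any carrier’s real figures): the same single-site outage barely dents a national carrier’s annual downtime budget, but devours most of a utility-scale network’s.

```python
# Rough arithmetic behind "it is a lot harder to hit five nines with
# fewer sites". All numbers are illustrative, not SCE or carrier figures.

HOURS_PER_YEAR = 24 * 365  # 8,760

def budget_consumed(total_sites: int, sites_down: int, outage_hours: float,
                    target: float = 0.99999) -> float:
    """Fraction of the annual downtime budget one outage event consumes,
    measured site-weighted against an availability target."""
    budget_site_hours = total_sites * HOURS_PER_YEAR * (1 - target)
    return (sites_down * outage_hours) / budget_site_hours

# One site down for 48 hours on a 100,000-site carrier network consumes
# about 0.5 percent of the year's five-nines budget:
print(f"{budget_consumed(100_000, 1, 48):.1%}")

# The same 48-hour outage on a 700-site utility network eats nearly
# 80 percent of it:
print(f"{budget_consumed(700, 1, 48):.1%}")
```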
He was previously at investor-owned utility Ameren, which runs electricity into Missouri and Illinois, and has just signed a 10-year deal with Ericsson to provide LTE and 5G infrastructure to cover power distribution to 2.4 million customers in a 166,000 square-kilometre service area. The Ameren project, which Nolan helped to deliver, will consolidate “disparate solutions” into a single network in the 900 MHz band, leased to utilities in the US by private networking specialist Anterix – which owns about 60 percent of the 900 MHz spectrum in the US.
Its contract with Ameren is worth $48 million over a 30-year term; Anterix has a similarly priced deal ($50 million) with San Diego Gas & Electric (SDG&E). But SCE is currently pursuing a different route to connect its assets and services to privately-managed LTE and 5G infrastructure. In 2020, it went all-in on the key Auction 105 sale of priority access licences (PALs), for localised chunks of the CBRS band at 3.55-3.7 GHz – spending $118 million for 20 of them. It was joined in the round by certain others, notably Sempra Utility (SDG&E; spending $21 million on three) and Alabama Power Company ($18.9 million on 231 licences, representing dirt-cheap rates for backwoods coverage).
Eleven utilities spent $170 million in the round to cover their wide-area service territories; in the end, they were conspicuous as the only Industry 4.0 sector with any real interest. Most enterprises in smaller venues have opted instead for general authorised access (GAA). But analysis by Burns & McDonnell (quoted by Senza Fili) says only 5.5 percent of utilities claimed PALs, worth just 1.64 percent of all CBRS licences; the auction was dominated completely by traditional telcos, which forked out over 90 percent of the total $4.58 billion proceeds.
Nolan says the spectrum does not matter much, in practical terms. “It is just different spectrum,” he says, before going on to explain the differences. “The 900 MHz propagates far better, and allows you to do things like mobility – which you really shouldn’t try to do in CBRS because the signal is too volatile. And, yes, it is not shared, which is nice. But then the Anterix spectrum – three megahertz one way – doesn’t have the bandwidth of CBRS. If you start putting security camera streams onto it, then it will congest pretty fast.”
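The congestion arithmetic is simple. As a rough sketch (assuming round-number spectral efficiency and camera bitrates, which are illustrative rather than measured values):

```python
# Back-of-envelope check on why camera streams congest a 3 MHz channel
# "pretty fast". Spectral efficiency and per-stream bitrates are assumed
# round numbers, not measured values.

def max_streams(bandwidth_mhz: float, bps_per_hz: float, stream_mbps: float) -> int:
    """Constant-bitrate streams that fit in a channel under ideal conditions."""
    capacity_mbps = bandwidth_mhz * bps_per_hz  # MHz x bps/Hz = Mbps
    return int(capacity_mbps // stream_mbps)

# Assume ~1.5 bps/Hz average efficiency and 2 Mbps per camera stream:
print(max_streams(3.0, 1.5, 2.0))   # 900 MHz, three megahertz one way: 2 streams
print(max_streams(10.0, 1.5, 2.0))  # a 10 MHz CBRS slice: 7 streams
```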
But the point for critical utilities in the US is that the use cases going onto these networks do not generally trade on the low- and mid-band characteristics of the host spectrum anyway. For the record, SCE has a 10 MHz slice of private-access (PAL) CBRS across its service area, and a further 10 MHz which is “likely to be deployed for GAA”; there is an expectation, generally, that the Anterix and CBRS bands will be combined by utilities in grander Industry 4.0 schemes; Anterix is actively working with CBRS spectrum access provider Federated Wireless, for example.
But right now, in terms of critical applications for critical utilities, there just ain’t much doing. Nolan explains: “The main use cases so far allow for high latency and low throughput… We are only scraping the surface. It is very aspirational. I mean, ‘distribution automation’ is really more like ‘distribution notification’. Because there is no automation. None of these magical things happen on their own. You get a notification to say, ‘hey, this recloser’s gone down and this other one’s kicked in’. Or, ‘this capacitor bank just died’. But really it’s just an alert to send a technician.”
The “holy grail” for electric utilities, he says, is to string up the kind of wireless alert system that will shut off power before a 150,000 volt cable hits the ground. “That is what the industry aspires to,” he says. “But it is not achievable yet – and certainly not with LTE, where we are in the 30-40 millisecond range, as a best-case to connect to the servers beyond the gateway and get the directions back. I mean, with 5G we are talking sub-five milliseconds, right? So it is lower latency, but [then] time is literally wasted to… pull the data back to a central [cloud] location.”
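The trade-off is worth spelling out as a round-trip budget. In the sketch below, the 30-40 ms LTE and sub-five-millisecond 5G figures are Nolan’s; the haul and compute delays are assumptions for illustration only.

```python
# Illustrative round-trip budget for a "shut off before the line hits
# the ground" alert. The 30-40 ms LTE and sub-5 ms 5G figures are
# Nolan's; haul and compute delays are assumptions.

def round_trip_ms(radio_ms: float, haul_ms: float, compute_ms: float) -> float:
    """Device to decision and back, in milliseconds."""
    return radio_ms + haul_ms + compute_ms

# LTE to servers beyond the gateway, best case (Nolan's figure):
print(round_trip_ms(radio_ms=35, haul_ms=0, compute_ms=0))   # 35 ms

# 5G radio, but the decision still made in a central cloud
# (assume ~30 ms round trip across the region, ~2 ms compute):
print(round_trip_ms(radio_ms=5, haul_ms=30, compute_ms=2))   # 37 ms

# 5G radio with compute at the grid edge, next to the site:
print(round_trip_ms(radio_ms=5, haul_ms=2, compute_ms=2))    # 9 ms
```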
This is important, and sheds new light on the broader discussion about the critical 5G edge – because it makes clear how the combination of local-area cellular and compute infrastructure is supposed to transform industry. But briefly – as an aside; for the record – Nolan notes that the ‘cloud’, as utilised by utilities, is never a public one. “It is definitely not the public cloud – at least not for any utility I’ve seen. The big vendors, which we’re comfortable using – which means Nokia and Ericsson, and not Huawei – [run] private clouds in our data centres.”
SCE has two data centres in Southern California, Nolan explains; a copy of the core LTE/5G network runs at each site, handling different halves of the whole network, and kicking in to pick up the slack if the other data centre suffers a “catastrophic failure”. Further redundancy – to compensate for the shortage of ‘nines’ in region-sized local/wide-area utility networks, as explained before – is provided via fallback to public carrier networks.
“It is hard to measure,” he says of the five-nines target. “So it’s kind of a waste of money to try to do it on your own.”
He goes on: “The sense is to adopt one-way roaming with the carriers as a fallback. So if a site goes down, or coverage is reduced – which is almost a given, considering we are using CBRS spectrum, because it is shared and erratic, meaning the noise floor rises with all the devices in there – then you either power down or disconnect and attach instead to a roaming partner. So it is somewhat difficult economically to achieve that sort of pristine level of service across the network. And the solution is to roam to a public carrier as a safety net.”
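Put together, the redundancy scheme amounts to an ordered list of fallbacks. Below is a minimal sketch of that logic, with invented names and states; it is a reading of Nolan’s description, not SCE’s actual implementation.

```python
# Two geo-redundant private cores, then one-way roaming to a public
# carrier as the safety net. Names and states are invented.

from enum import Enum, auto

class Attach(Enum):
    PRIMARY_CORE = auto()     # private core, data centre A
    SECONDARY_CORE = auto()   # private core, data centre B
    CARRIER_ROAMING = auto()  # public carrier fallback
    POWER_DOWN = auto()       # Nolan's "power down or disconnect"

def select_attachment(primary_up: bool, secondary_up: bool,
                      coverage_ok: bool, roaming_ok: bool) -> Attach:
    """Pick where a grid device should attach, in order of preference."""
    if coverage_ok:
        if primary_up:
            return Attach.PRIMARY_CORE
        if secondary_up:  # "catastrophic failure" at one data centre
            return Attach.SECONDARY_CORE
    if roaming_ok:        # site down, or the CBRS noise floor has risen
        return Attach.CARRIER_ROAMING
    return Attach.POWER_DOWN

# A site with degraded CBRS coverage falls back to the carrier:
print(select_attachment(True, True, coverage_ok=False, roaming_ok=True))
```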
Which is interesting, too, of course – and gets into discussion about how to actually deliver the kind of ‘ultra-reliability’ that private 5G has made into a messy sales promise. But it also gets away from the earlier point, just now, about bringing the compute functions closer to the action – in order to deliver grail-style Industry 4.0 applications, like the millisecond shut-off when a power line is falling. But this almost-imaginary move to the grid edge runs contrary to the more generalised migration of grid IoT into the (private) cloud. Which presents a curious two-way Industry 4.0 trend.
Nolan explains the other-way migration. “Advanced metering infrastructure (AMI) systems, say, have driven a lot of the ‘thinking’ to the edge – which is not the most practical way if you have an LTE network. Because these are low [rate] transmissions – and so there is no reason to distribute them into the grid. You might as well just centralise it. There is plenty of capacity in the network – so centralise it; it is cheaper and easier. I mean, it is plain why they have all gone to the edge – because legacy systems have dictated that. But those restraints are lifted with LTE and 5G.”
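His capacity point is easy to check with a sketch (the meter count, payload size, and interval below are assumptions, not Ameren or SCE figures): even a million meters reporting every quarter-hour amount to a trickle of aggregate traffic, so there is no throughput reason to process it at the edge.

```python
# Why low-rate AMI traffic can be centralised: the aggregate is tiny.
# Meter count, payload size, and interval are assumed for illustration.

meters = 1_000_000     # smart meters reporting over the network
payload_bytes = 400    # one reading, including protocol overhead
interval_s = 15 * 60   # one reading every 15 minutes

aggregate_mbps = meters * payload_bytes * 8 / interval_s / 1e6
print(f"{aggregate_mbps:.1f} Mbps")  # ~3.6 Mbps for a million meters
```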
He adds: “LTE changes the architecture at the edge because it gives options that weren’t available before.” Which is the same as the message from Nokia earlier this week (see Nokia’s postcard from the edge), except the Finnish firm argued that private 5G offers enterprises an edge blade-stack into the bargain, where data lives, to converge distributed legacy edge computing into centralised modern edge computing – to rationalise and optimise Industry 4.0 data in a single edge hub. Nolan is not saying different; just that low-fidelity IoT might do better in the cloud.
He says: “The other aspect is cybersecurity. I mean, some utilities have nuclear power plants, and the rules are stringent – way higher than I’ve seen anywhere else. These edge devices have to have their own firewalls and spy bots to gobble everything up. They are like little servers, which require critical security to be distributed at the edge. It is a cybersecurity nightmare. So when you are trying to predict what the industry will do with a certain use case, you have to consider the limitations of the old system, the capabilities of the new one, and the security around it all.”
We run the opposite scenario past Nolan again, just to be sure: but if you want to put cameras on the line to mitigate catastrophes at ultra-low latency, then your thinking reverses, right? Surely, then, the compute has to be at the edge, in the network? “Yes. But like I said, most of the use cases right now are extremely low-throughput and high-latency tolerant. So there’s no need for that. But there is no magic to any of this. I mean, like you’ve just deduced, the structure I said for AMI would not work for a fire-mitigation camera system,” he responds.
“At least not without straining the spectrum and the radio interface. But we are not discussing any of that very actively right now; when we do start having those conversations, then it will absolutely make sense to bring those computing resources closer to the edge. And, really, you can look at this very practically from a throughput and capacity perspective – how the application impacts the network – and arrive very easily at the most practical solution.”
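That practical test reduces to a simple placement rule. The sketch below makes the logic concrete; the thresholds are invented for illustration, not SCE design values.

```python
# Nolan's "throughput and capacity" test as a toy placement rule.
# Thresholds are invented for illustration.

def place_compute(latency_budget_ms: float, uplink_mbps_per_site: float) -> str:
    """Decide where an application's compute belongs."""
    if latency_budget_ms < 20:
        return "edge"     # e.g. line-down shut-off, fire-mitigation video
    if uplink_mbps_per_site > 5:
        return "edge"     # heavy streams would strain backhaul and spectrum
    return "central"      # e.g. AMI readings, capacitor-bank alerts

print(place_compute(latency_budget_ms=10, uplink_mbps_per_site=8))      # edge
print(place_compute(latency_budget_ms=5000, uplink_mbps_per_site=0.1))  # central
```

Okay, so why is this market so slow? Why do mission-critical utility services take so long to modernise?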
“One of the sayings in this industry is that it is a race-to-second. No one wants to be first. Nobody wants to be at the bleeding edge. Everyone wants ‘tried-and-trusted’ solutions – which they want to test and try some more. The sector is slow because it is risk averse – because of what it does. And it is slow because of regulatory requirements. And because it has to build the business case internally and convince [deeply entrenched] groups and individuals about the virtues of 5G, for example – who respond: ‘but what’s wrong with what we’ve already got?’
“Plus the business case has to be compiled on a raft of use cases – and then brought before the regulatory board to justify why it should get rate-based treatment; why these companies should be approved to spend customers’ money on it. And then they have to actually get the funding – which usually includes an extremely high price tag right out of the gate for the spectrum. And nothing is even proven yet. It is not like they can test it first, really; they have to shell out up front. It is a leap of faith. But it will also speed up as utilities catch on and word gets around.”
It will speed up as well, reckons Nolan, because, in the end, the business case is watertight – especially for private cellular, even as the balance between the cloud and edge shifts as Industry 4.0 applications get more demanding.
“There are two things going on. One, a lot more sensing devices are going to be deployed on these networks. Some utilities have 5,000-15,000 devices out there, already – which means 5,000-15,000 accounts with carriers for cellular devices plugged into capacitor banks, or whatever they are monitoring. But the expectation is that number will explode to 80,000-100,000 devices, or even more, over the next 10 years. So you get to the point where you can’t keep paying for these accounts. Because it is ridiculous. At some point it is cheaper to have your own network.
“The other thing is they already have every single telecoms system in there – land mobile radio (LMR), AMI smart meters. I mean, you name it, they have it. So you have this poor NOC team with 15 or 20 binders in the operations centre, and, depending on which system goes down, they have to swivel their chair and pick a binder. I mean, it’s all in software now, but you get the point. It is not realistic that they are experts in everything. So the opportunity is to rationalise the operational burden into a single system – to reduce human error and reduce costs.”
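The crossover he describes can be sketched with placeholder numbers (all hypothetical, chosen only to show the shape of the curve, not drawn from any utility’s books):

```python
# Rough break-even between per-device carrier accounts and a private
# network. Every cost below is a hypothetical placeholder.

def carrier_cost_per_year(devices: int, per_device_monthly: float) -> float:
    return devices * per_device_monthly * 12

def private_cost_per_year(capex: float, life_years: int, opex: float) -> float:
    return capex / life_years + opex

# Assume $15/month per carrier SIM, and a $150M private build amortised
# over 15 years plus $5M/year to operate:
for devices in (15_000, 50_000, 100_000):
    carrier = carrier_cost_per_year(devices, 15.0)
    private = private_cost_per_year(150e6, 15, 5e6)
    print(f"{devices:>7,} devices: carrier ${carrier / 1e6:.1f}M/yr "
          f"vs private ${private / 1e6:.1f}M/yr")

# At 15,000 devices the carrier bill is ~$2.7M/yr; at 100,000 it is
# ~$18M/yr, past the ~$15M/yr cost of owning the network outright.
```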
He closes: “There are devices and there are servers but, as far as the telecoms link between them goes, the logic is to have just one – so this NOC-tech doesn’t have to be an expert in 15 different systems. He just needs to understand cellular.”
For more on this topic, tune in to the upcoming webinar on Critical 5G Edge Workloads on September 27 – with ABI Research, Kyndryl, Southern California Edison, and Volt Active Data.
All entries in the Postcards from the Edge series are available below.
Postcards from the edge | Compute is critical, 5G is useful (sometimes) – says NTT
Postcards from the edge | Cloud is (quite) secure, edge is not (always) – says Factry
Postcards from the edge | Rules-of-thumb for critical Industry 4.0 workloads – by Kyndryl
Postcards from the edge | No single recipe for Industry 4.0 success – says PwC
Postcards from the edge | Ultra (‘six nines’) reliability – and why it’s madness (Reader Forum)
Postcards from the edge | Private 5G is reshaping the Industry 4.0 edge, says Nokia
Postcards from the edge | Edison on the see-saw gains between 5G edge and cloud