What is the role of CI/CD in cloud-native and edge environments?

In a more open and flexible network environment, the attack surface has been ‘opened up’

Emerging technologies like cloud, edge and AI represent serious opportunities for telcos when it comes to network performance, efficiency and potential new revenue streams; however, they also present a host of new security risks in the form of container-based vulnerabilities, unsecured APIs, data breaches, insecure network connections and cloud misconfigurations. To address these rising threats, speakers at the recent Telco Cloud and Edge Forum outlined the importance of a holistic, continuous approach to security testing and patch delivery in the form of CI/CD, or continuous integration (CI) and continuous delivery/deployment (CD).

CI/CD is a software development method in which automation is introduced throughout the entire lifecycle of software or application development, from testing to deployment. In this context, the automated, continuous nature of CI/CD allows operators to push security updates and fixes into the network more quickly, efficiently and reliably.
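The property that makes CI/CD useful for security is its fail-fast gating: stages run automatically in order, and a failed gate, such as a security scan, blocks everything after it, including deployment. The sketch below models that behavior in plain Python; the stage names and pass/fail outcomes are illustrative stand-ins, not any real toolchain.

```python
# Minimal sketch of a CI/CD pipeline's fail-fast behavior. Stage names
# and outcomes are illustrative stand-ins for real build/test/scan/deploy jobs.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure so nothing
    downstream (e.g. deployment) runs past a failed gate."""
    results = []
    for name, stage in stages:
        ok = stage()
        results.append((name, ok))
        if not ok:
            break
    return results

stages = [
    ("build",         lambda: True),
    ("unit_tests",    lambda: True),
    ("security_scan", lambda: False),  # a failed scan blocks deployment
    ("deploy",        lambda: True),
]

results = run_pipeline(stages)  # "deploy" never runs
```

Real pipeline tools (GitLab CI, Jenkins, Tekton and the like) express this declaratively in configuration files, but the ordering-and-gating logic is the same.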

As a brief characterization of the main challenge ahead, Spirent’s Senior Product Manager for security test solutions Sashi Jeyaretnam explained that cloud-native and edge environments inherently present a much more “open, flexible, software driven approach to networking, and therefore it opens up the attack surface a lot more.” As such, the potential for bad actors and malicious activities increases because more network layers are introduced. “There are so many moving parts in this environment,” she added.

According to F5’s Senior Solutions Engineer Greg Robinson, telemetry — which collects network traffic data to analyze it for threats — will become more critical in a more complex and disaggregated network. “It’s going to be important for developers to incorporate code that exports statistics and status out to dashboards so that [the] status can be seen regardless of where the service or what the container is running,” he said, adding that the company is also looking into microservice-based security and micro-segmentation through service meshes to gain insight into what’s going on between those microservices running in Kubernetes environments.
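The export-statistics-to-dashboards pattern Robinson describes is commonly implemented by exposing metrics in a scrapeable text format. The sketch below, using only the Python standard library, renders counters in the Prometheus text exposition format; the metric names and values are invented for illustration.

```python
# Sketch of the telemetry-export pattern: a service keeps counters and
# renders them in Prometheus' text exposition format so a dashboard can
# scrape status wherever the container happens to be running.

counters = {"requests_total": 0, "auth_failures_total": 0}

def inc(name, amount=1):
    counters[name] += amount

def render_metrics():
    """Render all counters as a /metrics response body."""
    lines = []
    for name, value in sorted(counters.items()):
        lines.append(f"# TYPE {name} counter")
        lines.append(f"{name} {value}")
    return "\n".join(lines) + "\n"

inc("requests_total")
inc("auth_failures_total", 3)
```

In practice a client library such as prometheus_client typically handles metric registration and the HTTP endpoint, but the wire format it serves looks like the output above.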

Amy Zwarico, the director of cybersecurity at AT&T, also mentioned the growing importance of telemetry, recommending that telcos explore various techniques to monitor every layer of their network to better determine if “something anomalous [is] happening.”

Cloud-native and edge environments are very much “API-driven,” as Jeyaretnam pointed out, and Spirent is finding that many telcos lack secure APIs and efficient token management. Other notable issues, she continued, include access-control weaknesses such as privilege escalation, improper admission controls and misconfigured role-based access controls, as well as “low-hanging fruit” like misconfigurations and default credentials left unchanged.
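To make the token-management and default-credential points concrete, here is a hedged standard-library sketch. The secret, token format and credential list are invented for illustration, not a production design: tokens are HMAC-signed and carry an expiry claim, and unchanged default credentials are flagged outright.

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-secret"            # illustrative only; use a secrets manager
DEFAULT_CREDS = {("admin", "admin"), ("root", "root")}

def issue_token(user, ttl=300, now=None):
    """Issue a signed token with an expiry claim."""
    now = int(time.time()) if now is None else now
    payload = json.dumps({"user": user, "exp": now + ttl}).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(payload).decode() + "." + sig

def verify_token(token, now=None):
    """Return the claims if the signature is valid and the token has
    not expired; otherwise None."""
    now = int(time.time()) if now is None else now
    body, _, sig = token.rpartition(".")
    payload = base64.urlsafe_b64decode(body)
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None                       # tampered or signed with wrong key
    claims = json.loads(payload)
    return claims if claims["exp"] > now else None

def is_default_credential(user, password):
    """Flag the 'low-hanging fruit' of unchanged default credentials."""
    return (user, password) in DEFAULT_CREDS
```

A real deployment would use an established token standard (e.g. OAuth2/JWT) and scan for default credentials as part of the automated test pipeline rather than at login time.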

“And to minimize these risks, it’s all about having a holistic approach,” she said, reiterating that there are a lot of layers in a cloud-native infrastructure — the Kubernetes operating system, the compute infrastructure, the network functions themselves, and so on — and each and every layer must be validated and assured to make sure that this network evolution is functioning properly. “Having that comprehensive, holistic approach that covers all of those layers and can be assessed on a continuous basis is going to be the key to be able to resolve or mitigate those gaps that customers are finding in this environment,” she claimed.

Therefore, all three panelists agreed that testing network security defenses in routine intervals won’t be enough; testing must be a continuous process, and it must be automated.

“Making [testing] part of a complete automated process, you are going to be able to do [the] testing that used to take you months to do with a little bit of optimization in terms of lab automation and being able to use parallelization for testing and consolidating reports and so forth. You will be able to bring [those] testing cycles down from months to hours,” said Jeyaretnam.
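The parallelization-and-consolidation idea in that quote can be sketched with Python's standard concurrent.futures module. The suite names and durations below are invented; the shape is the point: independent suites fanned out to workers, then folded into one report.

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Sketch of parallel test execution with a consolidated report.
# Each "suite" just sleeps to stand in for real test runtime.

def run_suite(name, duration):
    time.sleep(duration)
    return {"suite": name, "passed": True, "duration_s": duration}

suites = [("api_security", 0.05), ("rbac_checks", 0.05), ("config_audit", 0.05)]

start = time.monotonic()
with ThreadPoolExecutor(max_workers=len(suites)) as pool:
    report = list(pool.map(lambda s: run_suite(*s), suites))
elapsed = time.monotonic() - start
# With one worker per suite, elapsed is roughly one suite's duration
# rather than the sum of all three.
```

The same fan-out/fan-in shape applies whether the workers are threads, lab testbeds, or CI runners; the consolidated report is what makes the result reviewable in one place.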

However, these CI/CD pipelines will become more complicated. That’s because, with hardware and software coming from different vendors, per the promise of Open RAN, each layer of the architecture will require different patching cycles. “Think about the operating system on the cloud itself, then you’ve got a Kubernetes layer, then you have network functions and they’re all separate,” explained Zwarico. “They could all be coming from different places with really different release schedules. Vendors are going to have to address the complexity of aligning their operational and their security practices to be able to provide very frequent security updates and patches. I think this is going to be a big change … Telcos are not used to that rapid model in their mobility networks.”
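Zwarico's point about divergent release schedules can be made concrete with a small bookkeeping sketch. The layers, cadences and ages below are invented, but tracking each vendor layer against its own patch cadence is the kind of operational alignment problem she describes.

```python
# Sketch: each layer has its own vendor patch cadence, and the operator
# needs to see at a glance which layers have fallen behind.

layers = [
    # (layer, patch cadence in days, days since last patch)
    ("cloud_os",         30, 45),
    ("kubernetes",       90, 20),
    ("network_function", 14, 14),
]

def overdue(layers):
    """Return the names of layers patched less recently than their cadence."""
    return [name for name, cadence, age in layers if age > cadence]

# Only cloud_os is overdue here: 45 days old against a 30-day cadence.
```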

Jeyaretnam agreed that the CI/CD process will be complex and suggested that telcos prioritize the CI/CD pipeline early on. “They have to start planning for it as they’re designing their networks and making their vendor choices and picking the right solutions,” she argued. “And as they’re developing these networks, they should be building their test cases and test plans around these elements that they’re introducing into their network design.”

ABOUT AUTHOR

Catherine Sbeglia Nin
Catherine is the Managing Editor for RCR Wireless News and Enterprise IoT Insights, where she covers topics such as Wi-Fi, network infrastructure and edge computing. She also hosts Arden Media's podcast Well, technically... After studying English and Film & Media Studies at The University of Rochester, she moved to Madison, WI. Having already lived on both coasts, she thought she’d give the middle a try. So far, she likes it very much.