Editor’s Note: In an attempt to broaden our interaction with our readers we have created this Reader Forum for those with something meaningful to say to the wireless industry. We want to keep this as open as possible, but we maintain some editorial control to keep it free of commercials or attacks. Please send along submissions for this section to our editors at: dmeyer@rcrwireless.com.
The virtualization of traditional networks promises vast and enduring benefits — if providers can meet the challenges inherent in the process.
Strategies like network function virtualization and software-defined networking provide powerful flexibility gains and increased agility, while reducing cost and complexity. Technology becomes more open, provisioning more fluid and networks more application-aware.
Ultimately, the virtualization architecture represents a new networking model that fast-tracks delivery of high-value services. But replacing proven technologies with unproven techniques also means that trusted functions must be proven all over again.
Migration will take place in stages, with “low-hanging fruit” like CPE, BRAS, load balancing and firewalls going first, followed by IMS and evolved packet core elements, and perhaps ultimately some switching and routing. As the technology evolves, hybrid networks will coexist for years to come, introducing temporary cost and complexity challenges of their own.
From the get-go, however, virtualized network functions will be expected to deliver performance that matches or exceeds that of the traditional network. False starts are likely to impact the brand as well as the budget, so new and old strategies alike are needed to decide what to virtualize and when, and to overcome new challenges such as bottlenecks introduced by migration.
Validating the new architecture
In moving forward with NFV — potentially putting subscriber satisfaction at risk — network operators need to carefully evaluate new elements of the virtualization infrastructure as they select and deploy them. Common components like virtual switches (v-switches) and hypervisors not only determine the power of the system as a whole, but can potentially introduce issues and vulnerabilities along the way.
At each layer of the new architectural model, specific aspects of performance must be explored:
–At the hardware level, server features and performance characteristics will vary from vendor to vendor, and many different types will likely be in play. The obvious parameters are CPU brand and type, memory amount, and the like. Driver-level bottlenecks can be caused by routine aspects such as memory read/writes.
Testing must be conducted to ensure consistent and predictable performance as VMs are deployed and moved from one type of server to another. NIC performance can make or break the entire system as well; simply running outdated interfaces or drivers can dramatically degrade throughput.
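For illustration, here is a minimal sketch of that kind of baseline check: it drives iperf3 against the same workload hosted on two different server types and flags the laggard. The host names and addresses are hypothetical, and it assumes iperf3 is installed and already listening on each target; a real test plan would cover far more traffic profiles than a single TCP run.

```python
# Minimal baseline sketch: compare NIC throughput across two server types.
# Hypothetical targets; assumes "iperf3 -s" is already running on each host.
import json
import subprocess

TARGETS = {
    "vendor_a_server": "10.0.1.10",   # hypothetical addresses
    "vendor_b_server": "10.0.2.10",
}

def measure_gbps(host: str, seconds: int = 30) -> float:
    """Run one iperf3 TCP test and return received throughput in Gbps."""
    out = subprocess.run(
        ["iperf3", "-c", host, "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    result = json.loads(out.stdout)
    return result["end"]["sum_received"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    baseline = {name: measure_gbps(ip) for name, ip in TARGETS.items()}
    best = max(baseline.values())
    for name, gbps in baseline.items():
        print(f"{name}: {gbps:.2f} Gbps")
        # Flag servers that fall well short of the best result,
        # e.g. after a driver or firmware change.
        if gbps < 0.9 * best:
            print(f"WARNING: {name} is more than 10% below the best-performing server")
```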
–V-switches vary greatly. Some come packaged with hypervisors while others are sold standalone; some favor proprietary technology while others leverage open source; and functionality ranges from basic Layer 2 bridging to full-blown virtual routing.
In evaluating their options, operators need to weigh performance, throughput and overall functionality carefully against resource utilization. Testing should begin by “baselining” I/O performance, then progress to piling virtual functions on top of the v-switches being compared. During provisioning, careful attention should also be given to resource allocation and the tuning of the system to accommodate the intended workload (data plane, control plane, signaling).
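One way to weigh throughput against resource utilization is to watch what the v-switch itself costs while traffic is offered. The rough sketch below assumes Open vSwitch (the ovs-vswitchd process name) and the psutil package, and that a separate traffic generator is already driving load; other v-switch products would need a different process name.

```python
# Rough sketch: sample CPU usage of the v-switch process while traffic runs.
# Assumes Open vSwitch (ovs-vswitchd) and psutil; run on the host under test.
import time
import psutil

VSWITCH_PROC = "ovs-vswitchd"  # assumption: Open vSwitch; adjust per product

def find_vswitch() -> psutil.Process:
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] == VSWITCH_PROC:
            return proc
    raise RuntimeError(f"{VSWITCH_PROC} not found on this host")

def sample_cpu(duration_s: int = 60, interval_s: float = 1.0) -> list[float]:
    """Return per-interval CPU% samples for the v-switch under offered load."""
    proc = find_vswitch()
    proc.cpu_percent(None)  # prime the counter; first reading is meaningless
    samples = []
    for _ in range(int(duration_s / interval_s)):
        time.sleep(interval_s)
        samples.append(proc.cpu_percent(None))
    return samples

if __name__ == "__main__":
    cpu = sample_cpu()
    print(f"v-switch CPU: avg {sum(cpu)/len(cpu):.1f}% | peak {max(cpu):.1f}%")
```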
–Moving up the stack, hypervisors deliver virtual access to underlying compute resources (memory, CPU and the like), enabling features such as fast start/stop of virtual machines, snapshots and VM migration. Hypervisors allow virtual resources to be strictly provisioned to each VM, and also enable consolidation of physical servers onto a virtual stack on a single server.
Again, operators have multiple choices, with both commercial and open source options available. Commercial products may offer more advanced features, while open source alternatives enjoy broader support from the NFV community; most deployments will feature more than one kind.
In making selections, operators should look at both the overall performance of each potential hypervisor, and the requirements and impact of its unique feature set. The ability of its underlying hardware layer (L1) to communicate with upper layers should also be evaluated.
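As a starting point for that evaluation, the libvirt Python bindings can report what a candidate hypervisor exposes. This is only a rough sketch: it assumes libvirt-python is installed and a local KVM/QEMU host reachable at qemu:///system; other hypervisors use different connection URIs.

```python
# Sketch: query basic hypervisor capabilities through the libvirt bindings.
# Assumes libvirt-python and a local qemu:///system hypervisor.
import libvirt

def report(uri: str = "qemu:///system") -> None:
    conn = libvirt.open(uri)
    try:
        print("Hypervisor type:   ", conn.getType())
        print("Hypervisor version:", conn.getVersion())
        print("libvirt version:   ", conn.getLibVersion())
        print("Max vCPUs/guest:   ", conn.getMaxVcpus(None))
        # getCapabilities() returns XML describing CPU features, NUMA topology
        # and supported guest architectures -- useful when comparing hosts.
        caps_xml = conn.getCapabilities()
        print("Capabilities XML size:", len(caps_xml), "bytes")
    finally:
        conn.close()

if __name__ == "__main__":
    report()
```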
–Management and orchestration (M&O) is undergoing a fundamental shift from managing physical boxes to managing virtualized functionality. This layer must interact with both virtualized server and network infrastructures, often using OpenStack APIs and, in many cases, SDN.
Ultimately, cloud-based M&O stands to facilitate management of large, highly distributed network infrastructures and innovative service offerings. The shift, however, requires vastly increased automation that must be thoroughly implemented and tested to avoid new bottlenecks.
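A small sketch of what that automation looks like in practice: inventorying the virtualized compute and network infrastructure through the OpenStack APIs rather than box by box. It assumes the openstacksdk package and a clouds.yaml entry named "nfv-lab", which is hypothetical.

```python
# Sketch: inventory virtualized compute and network via the OpenStack SDK.
# Assumes openstacksdk and a (hypothetical) clouds.yaml entry "nfv-lab".
import openstack

def inventory(cloud_name: str = "nfv-lab") -> None:
    conn = openstack.connect(cloud=cloud_name)

    # Virtualized compute: every VM the orchestrator currently manages.
    for server in conn.compute.servers():
        print(f"VM {server.name:30} status={server.status}")

    # Virtualized network: the networks those VMs attach to.
    for net in conn.network.networks():
        print(f"network {net.name:25} shared={net.is_shared}")

if __name__ == "__main__":
    inventory()
```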
–Virtual machines and VNFs themselves ultimately impact performance as well. Each requires virtualized resources (memory, storage and virtual NICs) and involves a certain number of I/O interfaces. When deploying a VM, operators must verify that the host operating system is compatible with the hypervisor.
For each VNF, operators need to know which hypervisors the VMs have been verified on, and assess the ability of the host OS to talk to both virtual I/O and the physical layer.
–Finally, “portability,” or the ability of a VM to be moved from one server to another without impacting performance, should also be assessed.
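A basic portability check can be as simple as timing a live migration between two hosts. The sketch below uses the libvirt bindings; the host URIs and VM name are hypothetical, it assumes shared storage and working SSH access between the hosts, and a real assessment would also measure packet loss and service impact during the move.

```python
# Sketch: time a live migration of one VM between two hosts using libvirt.
# URIs and domain name are hypothetical; assumes shared storage and SSH access.
import time
import libvirt

SRC_URI = "qemu+ssh://server-a/system"   # hypothetical source host
DST_URI = "qemu+ssh://server-b/system"   # hypothetical destination host
DOMAIN = "vnf-firewall-01"               # hypothetical VM name

def timed_live_migration() -> float:
    src = libvirt.open(SRC_URI)
    dst = libvirt.open(DST_URI)
    try:
        dom = src.lookupByName(DOMAIN)
        start = time.monotonic()
        # VIR_MIGRATE_LIVE keeps the guest running while memory is copied over.
        dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
        return time.monotonic() - start
    finally:
        dst.close()
        src.close()

if __name__ == "__main__":
    print(f"Live migration completed in {timed_live_migration():.1f} s")
```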
To validate functionality and optimize performance along the way, mobile operators are adopting new test and visibility strategies that introduce virtualized versions of traditional systems. Virtualized testing allows multiple engineers to quickly create VMs needed to test functionality, while physical testers continue to address aspects like scalability and performance.
Together, the two approaches provide end-to-end insight into the NFV process. Then, after provisioning each new virtualized function and measuring its impact, new virtual monitoring capabilities also need to be introduced in order to maintain visibility and avert new security risks.
The race is on
Validating NFV is a life-cycle, “lab to live” challenge that’s well worth the time and trouble.
The race is on, so to speak, and those adopting a marathon versus sprint approach are the odds-on favorites.
By demystifying the virtualization process, proactive providers can position themselves to realize and deliver on the promise of SDN and NFV: a steady stream of the flexible new multimedia services today’s users demand.
Joe Zeto serves as a market development manager within Ixia’s marketing organization. He has over 17 years of experience in wireless and IP networking, both from the engineering and marketing sides. Zeto has extensive knowledge and a global perspective of the networking market and the test and measurement industry. Prior to joining Ixia, Zeto was director of product marketing at Spirent Communications running enterprise switching, storage networking and wireless infrastructure product lines. Zeto holds a Juris Doctor from Loyola Law School, Los Angeles.