A perceived lull in NFV deployment and development can be overcome with the right model in place.
When people run out of things to say in a conversation, or a fierce storm dies down, we call it a “lull.” That, it seems, is the current condition of the initial excitement over network functions virtualization (NFV). There is an air of disillusionment that NFV hasn’t taken the world by storm as quickly as many had hoped. But is this sentiment justified? Haven’t we achieved a lot already? Aren’t we making progress?
Losing the holistic view
Carriers are primarily worried that the business case for NFV is unsupportable. The first round of NFV solutions to be tested has not delivered the performance, flexibility and cost efficiency that many carriers expected, raising doubts in some minds about whether to pursue NFV at all. But do carriers really have a choice?
Carriers don’t have a choice, according to Tom Nolle at CIMI Corp. Based on input from major carrier clients, Nolle found that the cost per bit delivered in current carrier networks is set to exceed the revenue per bit generated within the next year. There is an urgent need for an alternative, and NFV was seen as the answer. So, what’s gone wrong?
There has been a gold rush-like fervor and flurry of activity around NFV since the original white paper on the topic four years ago. Everyone was staking a claim in the new NFV space, often by retrofitting existing technologies into the new NFV paradigm. Using an open approach, tremendous progress was made on proofs of concept, with a commendable focus on experimentation and pragmatic solutions that worked rather than on traditional specification and standardization. But in the rush to show progress, we lost the holistic view of what we were trying to achieve: delivering on NFV’s promise of high-performance, flexible and cost-efficient carrier networks. All three are important, but achieving all three at the same time has proven to be a challenge.
Difficult choices
As exhibit A, consider the NFV infrastructure itself. Solutions such as the Intel Open Network Platform were designed to support the NFV vision of separating hardware from software through virtualization, thereby enabling any virtual function to be deployed anywhere in the network. Using commodity servers, a common hardware platform could support any workload. Conceptually, this is the perfect solution. Yet the performance of the solution is not good enough. It cannot provide full throughput, and it consumes too many CPU cores just to handle data, which means more CPU resources are spent moving data than actually processing it. That, in turn, means high operational costs at the data center level, which undermines the goal of cost-efficient networks.
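To see why software packet handling eats so many cores, consider a back-of-the-envelope estimate. The figures below (packet size, per-packet cycle cost, clock speed) are illustrative assumptions rather than measurements from any particular platform; the point is simply that small-packet line rate multiplies quickly into whole CPU cores.

```python
# Illustrative estimate (hypothetical figures, not vendor benchmarks): how many
# CPU cores a software vSwitch needs just to move packets at line rate.

LINE_RATE_GBPS = 40      # port speed assumed in this sketch
PKT_SIZE_BYTES = 64      # worst-case small packets
OVERHEAD_BYTES = 20      # Ethernet preamble + inter-frame gap per packet
CYCLES_PER_PKT = 400     # assumed per-packet cost of a software switching path
CPU_GHZ = 2.5            # assumed core clock speed

bits_per_pkt = (PKT_SIZE_BYTES + OVERHEAD_BYTES) * 8
pkts_per_sec = LINE_RATE_GBPS * 1e9 / bits_per_pkt      # ~59.5 million packets/s
cycles_needed = pkts_per_sec * CYCLES_PER_PKT
cores_needed = cycles_needed / (CPU_GHZ * 1e9)

print(f"{pkts_per_sec/1e6:.1f} Mpps -> ~{cores_needed:.1f} cores spent just on switching")
```

With these assumed numbers, nearly 10 cores of a server would be busy doing nothing but shuffling packets, which is exactly the resource drain described above.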
What was causing the performance problem? It turned out to be the Open vSwitch (OVS). The solution was to bypass the hypervisor and OVS and bind virtual functions directly to the network interface card (NIC) using technologies like Peripheral Component Interconnect Express (PCIe) direct attach and single-root input/output virtualization (SR-IOV). These solutions ensured higher performance, but at what cost?
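As a concrete illustration of the SR-IOV approach, the sketch below uses the standard Linux sysfs mechanism to carve a physical NIC into virtual functions that can then be passed straight through to virtual machines. The interface name and VF count are placeholders, and it assumes an SR-IOV-capable NIC whose driver exposes the standard sriov_* sysfs files, plus root privileges.

```python
# Minimal sketch: create SR-IOV virtual functions (VFs) on a Linux host via sysfs.
# "eth0" and the requested VF count are placeholders; this assumes an SR-IOV-capable
# NIC whose driver exposes the standard sriov_* files, and it must run as root.
import pathlib

IFACE = "eth0"                                          # physical function (assumption)
dev = pathlib.Path(f"/sys/class/net/{IFACE}/device")

total_vfs = int((dev / "sriov_totalvfs").read_text())   # hardware limit reported by the NIC
requested = min(4, total_vfs)                           # ask for a handful of VFs

# Most drivers require the count to be reset to 0 before a new value is accepted.
(dev / "sriov_numvfs").write_text("0")
(dev / "sriov_numvfs").write_text(str(requested))

print(f"created {requested} VFs on {IFACE} (hardware limit: {total_vfs})")
```

Each VF created this way is handed to a virtual machine as if it were a dedicated piece of hardware, which is exactly where the trouble described next begins.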
Because the hypervisor is being bypassed and virtual functions are tied directly to physical NIC hardware, those virtual functions can no longer be freely deployed and migrated as needed. We are basically replacing proprietary appliances with NFV appliances. This compromises one of NFV’s basic requirements: the flexibility to deploy and migrate virtual functions when and where they are needed.
The ironic and unfortunate reality is that solutions like this also undermine the cost efficiency NFV was supposed to enable. One of the main reasons for using virtualization in any data center is to improve the use of server resources by running as many applications on as few servers as possible. This saves on space, power and cooling costs. Power and cooling alone typically account for up to 40% of total data center operational costs.
So we must choose between performance with SR-IOV and flexibility with the Intel Open Network Platform approach, with neither solution providing the cost efficiencies that carriers need to be profitable. Is it any wonder that NFV is experiencing a lull?
Delivering on the promise
How are we to overcome these challenges to NFV’s promised benefits? The answer is to design solutions with NFV in mind from the beginning. While retrofitted technologies can provide a good basis for proofs of concept, they are not finished products. However, we have learned a lot from these efforts, enough to design solutions that can meet NFV requirements.
The question remains: Is it possible to provide performance, flexibility and cost efficiency at the same time? The answer is yes. Best-of-breed solutions in development will enable OVS to deliver data to virtual machines at 40 gigabits per second using less than one CPU core. By integrating this NFV processing on the NIC, performance improves roughly seven-fold compared to the Intel Open Network Platform on standard NICs, with a corresponding eight-fold reduction in CPU core usage.
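To put those ratios in server-level terms, here is a small sketch. The eight-fold reduction in core usage comes from the figures above; the server size and the baseline number of cores consumed by software switching are hypothetical assumptions chosen only to make the arithmetic concrete.

```python
# Rough illustration of what an 8x reduction in switching cores means per server.
# The server size and baseline switching cost are hypothetical assumptions; the
# reduction factor is the ratio quoted in the article.

SERVER_CORES = 40              # e.g. dual-socket, 20 cores per CPU (assumption)
BASELINE_SWITCH_CORES = 8      # cores burned by software OVS on standard NICs (assumption)
REDUCTION_FACTOR = 8           # ~8x fewer cores with switching offloaded to the NIC

offloaded_switch_cores = BASELINE_SWITCH_CORES / REDUCTION_FACTOR   # < 1 core
freed_cores = BASELINE_SWITCH_CORES - offloaded_switch_cores

baseline_vf_cores = SERVER_CORES - BASELINE_SWITCH_CORES
offloaded_vf_cores = SERVER_CORES - offloaded_switch_cores

print(f"cores left for virtual functions: {baseline_vf_cores} -> {offloaded_vf_cores:.0f}")
print(f"extra virtual-function capacity per server: {freed_cores / baseline_vf_cores:.0%}")
```

Under these assumptions, each server gains roughly a fifth more capacity for virtual functions without adding any hardware.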
By rethinking the standard NIC, virtual machines can still be freely deployed and migrated, so flexibility is maintained. The savings in CPU cores ensure that the cores in the server are used for processing rather than data delivery, allowing higher virtual function densities per server. This makes it possible at the data center level to optimize server usage and even turn off idle servers, providing millions of dollars in savings, as the rough estimate below illustrates. By redesigning the standard NIC specifically for NFV, it is possible to address the overall objectives of NFV, both in this scenario and in other NFV-related areas.
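As a purely illustrative estimate of what higher virtual-function density can be worth at the data center level: the fleet size, per-server operating cost and density gain below are hypothetical assumptions, while the roughly 40% share of operating cost going to power and cooling is the figure cited earlier.

```python
# Hypothetical data-center-level consolidation estimate. Fleet size, per-server
# cost and the density gain are illustrative assumptions; the ~40% power-and-cooling
# share of operating cost is the figure cited earlier in the article.

FLEET_SERVERS = 10_000
DENSITY_GAIN = 0.22              # extra virtual functions per server once switching cores are freed
ANNUAL_OPEX_PER_SERVER = 4_000   # USD per year, illustrative
POWER_COOLING_SHARE = 0.40       # share of operating cost that is power and cooling

servers_needed = FLEET_SERVERS / (1 + DENSITY_GAIN)
servers_idled = FLEET_SERVERS - servers_needed

opex_saved = servers_idled * ANNUAL_OPEX_PER_SERVER
power_cooling_saved = opex_saved * POWER_COOLING_SHARE

print(f"servers that can be powered down: ~{servers_idled:.0f}")
print(f"operating cost avoided: ~${opex_saved/1e6:.1f}M/yr "
      f"(~${power_cooling_saved/1e6:.1f}M of it power and cooling)")
```

Even with these modest assumptions, the consolidation enabled by freeing switching cores runs into millions of dollars per year for a large data center.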
It’s also important to return to the vision the original NFV white papers espoused. NFV is not a technological evolution, but a business revolution. Carriers need an NFV infrastructure that enables them to do business in a totally different way, and virtualization, with all the benefits it entails, such as virtual function mobility, is critical to success. Implementing intelligence in software is more scalable and enables automation and agility, so only those workloads that must be accelerated in hardware should be accelerated in hardware. When hardware acceleration is used, it should have as little impact on the virtual functions and orchestration as possible.
Keeping hope alive
It has become clear that carriers must pursue NFV given the brewing cost-per-bit crisis. Though NFV has gotten off to a rocky start, technological tweaks can still deliver the high performance, flexibility and cost reductions that made it attractive to begin with. Just knowing that the benefits of NFV can be achieved will spur carriers on to find the right solutions.
Daniel Joseph Barry is VP of positioning and chief evangelist at Napatech and has more than 20 years’ experience in the IT and telecom industry in roles ranging from research and development to product management, sales and marketing. Prior to joining Napatech in 2009, Barry was marketing director at TPACK (now Intel), a leading supplier of transport chip solutions to the telecom sector. From 2001 to 2005, he was director of sales and business development at optical component vendor NKT Integration (now Accelink), following various positions in product development, business development and product management at Ericsson. Barry joined Ericsson in 1995 from a position in the research and development department of Jutland Telecom (now TDC). He has an MBA and a BSc in electronic engineering from Trinity College Dublin.