It seems the future is getting faster every day. “The Future of Network Appliances,” a recent survey conducted by Heavy Reading, revealed that by 2018, a mere two years away, 100 gigabits per second will be the dominant data rate in core, metro and access networks. Great strides have been made over the last three years in proving the viability of network functions virtualization (NFV), but is it ready for this challenge?
Dozens of proof-of-concept trials have shown that workloads can be migrated to virtual environments running on standard hardware, and there are even examples of carrier deployments using NFV. This is all thanks to the work of visionary carriers and vendors who recognize that a drastic change in how carriers do business is needed if they are to survive and thrive.
These trials have put an end to the question of whether NFV can work. The real question now is how we can make NFV work effectively so it will deliver on its promise. The issue is no longer whether a service can be deployed using NFV, but whether we can manage and secure that service in an NFV environment. In other words, the challenge now is to operationalize NFV.
Managing and securing services in an NFV environment requires deploying network appliances that can monitor and analyze network behavior. The Heavy Reading survey provides insight into how network appliances are being used today, the progress made in migrating them to virtual environments and the challenges that must be addressed to ensure the success of this migration.
The survey revealed a broad appreciation for the operational value of appliances. Forty-seven percent of respondents considered network appliances for network management and security essential, while a further 39% considered them valuable. Survey responses also show that network management and security appliances are broadly deployed, especially for applications such as network and application performance monitoring, test and measurement, firewalls, intrusion detection and prevention, and data loss prevention.
At the same time, respondents indicated that progress is being made in migrating network appliances to virtual environments, especially for the most widely deployed applications. Seventy-three percent of carriers said they intend to deploy virtualized appliances over the next two years. Network equipment vendors are responding, with 71% indicating that they intend to deliver virtualized appliances in the same time frame.
Respondents did see challenges in delivering and deploying virtualized appliances. The top three concerns were interworking with other vendors' solutions (81% concerned or extremely concerned), throughput (80%) and security (79%).
Perhaps the most significant challenge is the widespread deployment of 100G data rates not just in the core, but also in the metro and, most surprisingly of all, the access network. Survey respondents were asked to indicate the most common planned data rate for the core, metro and access networks in 2018. The responses showed that 75% of respondents planned for 100G as their most common data rate in the core, 71% planned to use 100G in the metro and 58% planned to use 100G in the access network.
This could very well be the most significant obstacle to virtualizing network appliances by 2018. The first 100G physical network appliances are only now being introduced to the market. Like the majority of physical network appliances today, they are based on standard servers, but they rely on high-performance network interface cards to deliver the throughput required at these data rates.
For applications like this, standard network interface cards cannot provide the performance required, even at data rates of 10G. Recent benchmark testing of NFV solutions based on standard NICs has shown serious performance challenges in using these products for high-speed applications, even with Data Plane Development Kit (DPDK) acceleration. Solutions that bypass the hypervisor, such as single-root input/output virtualization (SR-IOV), provide some relief, but come at the expense of virtual function mobility and flexibility.
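To make the trade-off concrete, the sketch below shows roughly how SR-IOV virtual functions are exposed on a Linux host so that traffic can reach a virtual appliance directly from the NIC, bypassing the hypervisor's virtual switch. It is a minimal illustration under stated assumptions: the interface name, the virtual function count and the Python approach are hypothetical choices for the example, not details taken from the survey or any vendor's product.

```python
# Minimal sketch (illustrative, not vendor guidance): exposing SR-IOV virtual
# functions on a Linux host so a virtual appliance can receive traffic
# directly from the NIC, bypassing the hypervisor's virtual switch.
# "eth0" and the VF count are hypothetical; run as root on an SR-IOV-capable NIC.
from pathlib import Path

IFACE = "eth0"   # hypothetical physical interface backing the virtual functions
NUM_VFS = 4      # number of virtual functions to hand out to guest VMs

device_dir = Path(f"/sys/class/net/{IFACE}/device")

def enable_vfs(count: int) -> None:
    """Ask the NIC driver to create `count` SR-IOV virtual functions."""
    total = int((device_dir / "sriov_totalvfs").read_text())
    if count > total:
        raise ValueError(f"{IFACE} supports at most {total} virtual functions")
    # Writing to sriov_numvfs creates the VFs; each one can then be passed
    # through to a guest VM as if it were a dedicated physical NIC.
    (device_dir / "sriov_numvfs").write_text(str(count))

if __name__ == "__main__":
    enable_vfs(NUM_VFS)
    print(f"Exposed {NUM_VFS} SR-IOV virtual functions on {IFACE}")
```

The flexibility cost mentioned above follows directly from this model: each virtual function is tied to a specific physical NIC in a specific host, so moving or live-migrating the virtual appliance that uses it is far harder than moving a function attached to a purely virtual switch.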
Virtualized networks require network management and security solutions, and according to the survey, these solutions must be able to operate at 100G data rates by 2018. This calls for a rethink of the NIC solutions used in NFV deployments. Virtual appliances face performance challenges at such high data rates, but the problem has already been solved in the physical realm; to make NFV operational even at 100G speeds, those physical solutions must be brought into virtualized environments.
Daniel Joseph Barry is VP of positioning and chief evangelist at Napatech and has more than 20 years of experience in the IT and telecom industries. Prior to joining Napatech in 2009, Barry was marketing director at TPACK, a leading supplier of transport chip solutions to the telecom sector. From 2001 to 2005, he was director of sales and business development at optical component vendor NKT Integration (now Ignis Photonyx), following various positions in product development, business development and product management at Ericsson. Barry joined Ericsson in 1995 from a position in the R&D department of Jutland Telecom (now TDC). He has an MBA and a BSc in electronic engineering from Trinity College Dublin.