Editor’s Note: Welcome to our weekly Reader Forum section. In an attempt to broaden our interaction with our readers we have created this forum for those with something meaningful to say to the wireless industry. We want to keep this as open as possible, but we maintain some editorial control to keep it free of commercials or attacks. Please send along submissions for this section to our editors at: dmeyer@rcrwireless.com.
When it comes to keeping customers, user experience is the most important consideration for carriers today. However, speeds and bandwidth are rarely shared equally among customers: a small percentage of users typically generates the majority of network load, and these heavy users degrade the quality of experience (QoE) for everyone else. A lack of subscriber-level visibility has more often than not led carriers to develop and market “one-size-fits-all” data packages that have little effect on congestion – and, more importantly, penalize the majority of subscribers, who are stuck with slower speeds.
And as network providers navigate the transformation to next-generation network technologies, such as 4G and beyond, they will have to contend with emerging devices, significantly more data traffic and sudden surges in the popularity of the latest devices on the market. This makes the development of cost-effective, high-capacity networks inherently difficult. To make matters worse, operators simply cannot optimize the design and management of their networks without fully understanding the drivers of traffic – the applications, devices, subscriber behavior, usage patterns and so on.
Business models at breaking point
To cope with this influx of data, operators currently have little option but to install larger pipes and add more monitoring tools to the network. These upgrades come at vast expense, and the cost cannot be passed on to customers: an increase in standard pricing promotes churn, something mobile carriers can ill afford in an already temperamental market. As the cost of the tools required to monitor and analyze these huge volumes of data continues to rise, average revenue per user (ARPU) is falling, and that squeeze is breaking down service providers’ existing business models.
Part of the problem is that many service providers lack essential visibility across their networks, creating blind spots that can further impact performance as next-generation services are rolled out. While most solutions on the market deliver some insight into network activity, they often lack the intelligence to link application usage patterns with individual subscribers for an end-to-end view.
NFV – the solution?
Network operators are therefore looking to new technology to cut capital and operating costs without degrading QoE for their subscribers. Network functions virtualization is one such technology making waves in the industry today. NFV makes it possible to implement network functions – such as firewalls, routers and VPN gateways – in software, consolidating many types of network equipment onto industry-standard, high-volume servers, switches and storage. The technology promises to reduce equipment costs and power consumption, thereby decreasing operating costs.
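To make the idea concrete, here is a minimal sketch of a network function implemented as ordinary software that could run on an industry-standard server rather than on dedicated hardware. It is illustrative only: the Packet and VirtualFirewall names are hypothetical and do not reflect any vendor’s NFV platform or API.

```python
# A minimal, hypothetical sketch of the NFV idea: a network function
# (here, a trivial firewall) implemented purely in software.
# Class and field names are illustrative, not any vendor's NFV API.
from dataclasses import dataclass

@dataclass
class Packet:
    src_ip: str
    dst_ip: str
    dst_port: int

class VirtualFirewall:
    """A software-only 'virtual network function'."""

    def __init__(self, blocked_ports=None):
        self.blocked_ports = set(blocked_ports or [])

    def process(self, pkt: Packet) -> bool:
        """Return True if the packet should be forwarded."""
        return pkt.dst_port not in self.blocked_ports

# Many such functions (firewall, router, VPN gateway) could be
# consolidated onto one commodity server - the promise of NFV.
fw = VirtualFirewall(blocked_ports=[23])  # e.g. block telnet
print(fw.process(Packet("10.0.0.1", "10.0.0.2", 443)))  # True: forwarded
print(fw.process(Packet("10.0.0.1", "10.0.0.2", 23)))   # False: dropped
```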
There are, however, a number of obstacles to successful NFV, most of which center on the difficulty of monitoring these agile and diverse environments. For example, each NFV vendor will implement the standard in a slightly different way, or implement a different version of the same standard. There is also the challenge of islands of differing topology: it will be some time before all network functions are fully virtualized, so networks will run on a mix of technologies in the interim.
Essentially, operators need to reduce costs, maximize ARPU and increase agility, yet they already cannot profitably monitor and analyze the data present within their infrastructures. It is a seemingly lose-lose situation: the very technology available to reduce operating costs will create further monitoring difficulties. To compound this, NFV will create elastic computing environments for entire functions of telecommunications networks and remove the last barrier to big data, allowing it to truly explode and intensifying the problem.
Enabling pervasive visibility
Service providers therefore require a solution that allows them to monitor the new equipment deployed in NFV-enabled networks while also providing an effective way of monitoring and analyzing ever-increasing traffic.
NFV deployments require higher-level monitoring capabilities that allow greater reduction or arbitration of monitored traffic through advanced, granular, multi-threaded filtering, as well as packet manipulation. This in turn allows tighter integration with analytics tools, which can then perform more efficiently by maximizing their analytic throughput. A monitoring network that enables NFV deployments will need to provide functionality at the packet, flow and network-wide levels – across NFV, traditional and hybrid deployments – to deliver the visibility required.
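As a rough illustration of the granular, multi-threaded filtering described above, the sketch below runs several worker threads that drop irrelevant packets before anything reaches an analytics tool. The packet fields, filter rule and queue layout are all assumptions, chosen only to show the shape of the technique.

```python
# Hypothetical sketch of granular, multi-threaded filtering: worker
# threads drop irrelevant packets so a downstream analytics tool only
# receives the traffic it cares about. Fields and rules are invented.
import queue
import threading

raw_packets = queue.Queue()    # traffic tapped off the network
to_analytics = queue.Queue()   # reduced stream destined for one tool

def keep(pkt: dict) -> bool:
    """Granular rule: keep only HTTPS traffic on subscriber VLANs."""
    return pkt["dst_port"] == 443 and 100 <= pkt["vlan"] < 200

def filter_worker():
    while True:
        pkt = raw_packets.get()
        if pkt is None:        # sentinel tells the worker to stop
            break
        if keep(pkt):
            to_analytics.put(pkt)

workers = [threading.Thread(target=filter_worker) for _ in range(4)]
for w in workers:
    w.start()

# Feed a little illustrative traffic, then shut the workers down.
for pkt in [{"dst_port": 443, "vlan": 150},   # kept
            {"dst_port": 80, "vlan": 150},    # dropped: not HTTPS
            {"dst_port": 443, "vlan": 300}]:  # dropped: wrong VLAN
    raw_packets.put(pkt)
for _ in workers:
    raw_packets.put(None)
for w in workers:
    w.join()

print(f"{to_analytics.qsize()} of 3 packets reach the tool")  # 1 of 3
```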
At the same time, to cope with increasing data volumes, the monitoring network will need to connect the right analytical tools to the appropriate large pipes. All the while, the data needs to be conditioned through advanced filtering and packet manipulation, so that the amount of data arriving at each tool is reduced and formatted exactly for that tool’s consumption.
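Continuing the sketch, such conditioning might look something like the following: each tool declares the traffic it wants and a formatting step, and the fabric delivers a trimmed, tool-specific copy. Again, the tool names, match rules and packet fields are assumptions for illustration, not a real product’s interface.

```python
# Hypothetical sketch of traffic conditioning: each analytics tool is
# matched to the traffic it needs, and packets are trimmed so a tool
# receives exactly the form it can consume. Names are illustrative.

def headers_only(pkt: dict) -> dict:
    """Strip the payload: some tools need only header fields."""
    return {k: pkt[k] for k in ("src_ip", "dst_ip", "dst_port")}

# Each (invented) tool pairs a match rule with a conditioning step.
TOOLS = {
    "voip_monitor": {"match": lambda p: p["dst_port"] == 5060,
                     "condition": headers_only},
    "web_analytics": {"match": lambda p: p["dst_port"] in (80, 443),
                      "condition": lambda p: p},  # wants full packets
}

def steer(pkt: dict):
    """Yield a conditioned copy of the packet for each matching tool."""
    for name, tool in TOOLS.items():
        if tool["match"](pkt):
            yield name, tool["condition"](pkt)

pkt = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2",
       "dst_port": 443, "payload": b"..."}
for tool_name, conditioned in steer(pkt):
    print(tool_name, conditioned)  # only web_analytics receives it
```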
From a service provider perspective, NFV is the path forward for several reasons: it removes proprietary hardware, offers new ways of controlling services and reduces operating costs. But while there is clearly great value in the technology, without a monitoring infrastructure in place its adoption could be greatly slowed.
The deployment of a pervasive monitoring fabric architecture should ease these monitoring headaches, delivering pervasive visibility into NFV environments as well as legacy and hybrid ones. That pervasiveness matters because it unifies data visibility across topologies. With a monitoring fabric in place, network operators will be able to deploy NFV environments efficiently while managing their data more effectively. Only through increased visibility will operators be able to improve on current business models – and, more importantly, existing expense structures – while running the big data services of tomorrow on the networks of tomorrow.