
Reader Forum: The carriers’ ‘big data’ dilemma

Editor’s Note: Welcome to our weekly Reader Forum section. In an attempt to broaden our interaction with our readers, we have created this forum for those with something meaningful to say to the wireless industry. We want to keep this as open as possible, but we maintain some editorial control to keep it free of commercials or attacks. Please send along submissions for this section to our editors at: dmeyer@rcrwireless.com.

Carriers are realizing that what was once a problem confined to data centers and enterprise data networks is now a problem for service providers as well. That problem is “big data.” We hear the term constantly, most often in the context of enterprise storage and analytics. Yet big data applications are also swelling the volume of data in carriers’ pipes, posing a unique but not insurmountable challenge.

As subscribers continue to work outside the office and more applications go mobile, the risks posed by the growing volume of data will grow with them. Compounding the problem, video is becoming increasingly pervasive and consumes orders of magnitude more bandwidth than legacy voice traffic. Unfortunately, network analytics tools do not get cheaper as network speeds increase. Carriers must therefore absorb the cost of transporting the data and managing the networks that carry it, while lacking sufficient incremental average revenue per user to make the model work.

So what can they do? The answer is to change the way big data is monitored.

First, carriers require a solution that combines volume, port density and scale to connect the right analytical tools to the appropriate large or bonded pipes. Second, the data must be conditioned through advanced filtering and packet manipulation, which reduces the amount of data arriving at each tool while ensuring the data is formatted precisely for that tool’s consumption. Each tool can then process more data without having to parse the raw incoming stream and steal processor cycles from the more important task of analysis.
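The conditioning step described above can be sketched in a few lines: drop traffic a given tool cannot use, then truncate each packet so the tool parses only the headers it needs. This is a minimal illustration, assuming a simple dictionary representation of a packet; the field names and defaults are hypothetical, not any vendor’s API.

```python
def condition_packet(pkt, allowed_protos=("TCP", "UDP"), snap_len=128):
    """Filter and slice a packet before it reaches an analytics tool.

    Drops protocols the tool does not handle, then truncates the payload
    to snap_len bytes so the tool spends cycles on analysis, not parsing.
    Returns None if the packet should be discarded entirely.
    """
    if pkt["proto"] not in allowed_protos:
        return None
    conditioned = dict(pkt)  # leave the original packet untouched
    conditioned["payload"] = pkt["payload"][:snap_len]
    return conditioned
```

In a real deployment this role is played by dedicated hardware (a network packet broker) rather than host-side Python, but the logic is the same: filter first, then slim what remains.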

Operators need a solution that will not bankrupt them with tool costs as the pipes, and the amount of data in them, grow. Carriers are looking for ways to keep their business costs in line with what subscribers are willing to pay for a service, while providing the quality, uptime and reliability those subscribers expect. To do so, carriers must understand the nature of the traffic flowing through the pipes, its ingress and egress points, and where resources must be placed on the network to ensure that service-level agreements are met.

Effective monitoring of big data calls for reducing the traffic in a large pipe so that it can be fed to an existing 1 Gbps or 10 Gbps tool. Nothing is lost this way: because the reduction is session-aware and stateful, the connected tools still see a representative view of the traffic in the larger pipe. The tools are not merely filtering traffic; the volume is reduced while data flows are kept intact, delivered at a lower speed over a smaller pipe. The carrier can then concentrate on specific types of data or examine the entire range of traffic in the larger pipe.
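One common way to achieve this kind of session-aware reduction is hash-based flow sampling: hash each packet’s 5-tuple so that every packet of a flow gets the same keep-or-drop decision, which shrinks total volume while leaving individual sessions intact. The sketch below is an assumption about how such a sampler could work, not a description of any specific product.

```python
import hashlib

def keep_flow(src_ip, dst_ip, src_port, dst_port, proto, sample_ratio=0.25):
    """Decide whether to forward a flow to the monitoring tool.

    The 5-tuple is hashed to a value in [0, 1); flows below sample_ratio
    are kept. Because the decision depends only on the 5-tuple, every
    packet of a flow is treated the same, so sessions stay intact.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    bucket = int.from_bytes(digest[:4], "big") / 2**32
    return bucket < sample_ratio
```

With sample_ratio set to 0.25, roughly a quarter of flows reach the tool, so a 40 Gbps feed can be reduced to something a 10 Gbps tool can handle without ever splitting a session across the cut.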

Only through an innovative approach to network visibility can big data be successfully monitored, enabling operators to maintain current business models, and, more importantly, existing expense structures, while running the big data services of tomorrow.
