Achieving QoS and QoE in an SD-WAN environment (Reader Forum)

It’s well accepted that quality of service (QoS) is essential in a network environment, allowing network managers to “provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow,” according to the Wikipedia definition of QoS. If the concept is so simple, why is there so much disagreement on how to deliver QoS in the SD-WAN space?

Three techniques are commonly used to deliver QoS:

  • Forward error correction (FEC)
  • Path selection
  • Prioritization

In SD-WAN environments, all three can deliver improved performance, but is it really QoS? Probably not. And if they’re not delivering QoS, they’re definitely not delivering quality of experience (QoE), which under ITU-T Recommendation P.10 relates to the perception users have about the services they use that are delivered across a network.

And while QoS may be delivered across the network, the organization must also consider individual, business-critical applications in a QoS context: can users do what they need to do with appropriate levels of performance? If QoS is not implemented properly, employee productivity, and therefore revenue, is at risk.

Let’s look at the three techniques above in more detail to determine where they fail the QoS test:

Forward error correction. FEC is widely used across enterprises, typically for real-time traffic. With FEC, duplicate packets are sent to ensure successful application delivery. However, FEC also increases cost as well as the bandwidth required, leading to inefficiencies. While FEC may mitigate the effects of packet loss, it doesn’t consider application priority and it duplicates all packets, so it doesn’t necessarily create QoS and may actually harm QoE because there is no prioritization for business-critical applications. FEC is best suited to networks with poor-quality, high-loss links; otherwise it adds little to performance and user experience.
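
To make the trade-off concrete, here is a minimal sketch assuming simple 1+1 packet duplication over a lossy link; the packet sizes, loss rate and function names are illustrative assumptions, not any vendor's implementation. It shows that duplication reduces residual loss but doubles the bytes sent for every flow, regardless of application priority.

```python
import random

def send_with_duplication(packets, loss_rate, duplicate=True):
    """Simulate 1+1 packet duplication over a lossy link.

    Every packet is sent twice when duplication is on; a packet counts
    as delivered if at least one copy survives. Returns (delivered,
    bytes_sent) so the bandwidth overhead is visible next to the loss
    improvement.
    """
    delivered, bytes_sent = 0, 0
    copies = 2 if duplicate else 1
    for size in packets:
        bytes_sent += size * copies
        if any(random.random() > loss_rate for _ in range(copies)):
            delivered += 1
    return delivered, bytes_sent

random.seed(1)
packets = [1200] * 10_000  # hypothetical stream of 1200-byte packets
for dup in (False, True):
    ok, sent = send_with_duplication(packets, loss_rate=0.05, duplicate=dup)
    print(f"duplication={dup}: delivered {ok}/{len(packets)}, bytes sent {sent}")
```

Running the sketch shows delivery climbing from roughly 95% to above 99%, while the bytes sent double for every flow alike, which is exactly the bandwidth inefficiency described above.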

Path selection. With path selection, traffic is routed across two or more links, with path decisions based on packet loss and latency. Performance depends on the underlying transport, with one of the links at the branch site carrying the application in question, if it can. Most solutions understand applications at a high level but not with the granularity to meet the needs of certain applications over others as they compete for bandwidth. They may over- or under-allocate resources, creating inefficiencies.
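
As an illustration of that granularity problem, here is a hypothetical sketch of a loss-and-latency scoring function of the kind described above; the link names, weights and scoring formula are assumptions, not any specific product's logic. Because the same weights apply to every flow, a latency-sensitive voice call and a loss-tolerant backup get the same answer.

```python
from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float  # measured latency on the link
    loss_pct: float    # measured packet loss on the link

def pick_path(links, latency_weight=1.0, loss_weight=50.0):
    """Pick the link with the lowest combined latency/loss score.

    The same weights apply to every application, which is the lack of
    granularity described above: VoIP and bulk backups get the same path.
    """
    return min(links, key=lambda l: latency_weight * l.latency_ms
                                    + loss_weight * l.loss_pct)

links = [Link("mpls", latency_ms=18.0, loss_pct=0.1),
         Link("broadband", latency_ms=35.0, loss_pct=0.5)]
print(pick_path(links).name)  # "mpls" for every flow, whatever its needs
```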

Traffic prioritization. There are several limitations to current traffic prioritization approaches. Since prioritization is usually just a router configuration, it is applied at the WAN port, which precludes end-to-end delivery. Despite the knowledge that traffic can be variable and unpredictable, most SD-WAN prioritization solutions use static queues and therefore cannot handle changes in traffic volume or application mix dynamically. This again creates inefficiencies, not only with unexpected bursts of traffic but also when queues fill or overflow. Basing decisions merely on capacity, rather than on the nature of the traffic or the condition of the network link, results either in sub-optimal performance or in unnecessary impact on the performance of other applications. While this may sound like QoS from the perspective of the network because traffic flows are kept smooth, it does not take the users’ needs into consideration.
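
The static-queue limitation can be seen in a short sketch; the class names, queue depths and strict-priority scheduler below are hypothetical choices for illustration only. Because each per-class depth is fixed in configuration, a burst larger than the queue is simply tail-dropped, no matter how important the application is to the business.

```python
from collections import deque

# Hypothetical static configuration: fixed queue depth per traffic class,
# set once and never adjusted to the offered load.
QUEUE_DEPTH = {"voice": 64, "business": 128, "best_effort": 256}
queues = {cls: deque() for cls in QUEUE_DEPTH}

def enqueue(cls, packet):
    """Tail-drop when the statically sized class queue is full."""
    if len(queues[cls]) >= QUEUE_DEPTH[cls]:
        return False  # dropped: the queue cannot grow to absorb a burst
    queues[cls].append(packet)
    return True

def dequeue():
    """Strict priority: always serve the highest class that has traffic."""
    for cls in ("voice", "business", "best_effort"):
        if queues[cls]:
            return cls, queues[cls].popleft()
    return None

# A burst of business-critical traffic larger than its fixed queue is dropped,
# even though capacity may be sitting idle in other queues.
drops = sum(not enqueue("business", i) for i in range(200))
print(f"dropped {drops} of 200 business packets during the burst")
```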

A different approach

So, if these traditional techniques for delivering QoS are not, in fact, delivering QoS, what can organizations do? It starts with breaking the mold of traditional thinking by:

  • Optimizing delivery from the outset, rather than relying on the inefficient packet duplication commonly found with FEC. Traffic can be dealt with more efficiently on a session-by-session basis rather than applying a broad brush globally and increasing traffic volume.
  • Delivering a more predictable outcome by understanding at a granular level what each application really needs to achieve QoS and QoE, and by using a broader range of metrics as path selection decision criteria. Using this process, each application can be assigned priority based on real business requirements. Different sessions from the same application should be able to be routed over different paths for optimal efficiency. More traffic can thus be delivered via the optimum path, which ensures higher performance levels.
  • Making QoS — and QoE — independent of the network. To achieve this, several capabilities must be aligned:
    • Dynamic queues implemented as an application overlay that is optimized for each application and enables performance requirements to be mapped to the organization’s chief business objectives
    • The ability to control the size of queues dynamically based on network conditions and offered application traffic load
    • The policy rules defined for each user and each application applied independently to each session (a simplified sketch of these ideas follows this list)
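
Here is a minimal sketch of how these capabilities could fit together; the policy table, scaling factors and session identifiers are purely illustrative assumptions, not a description of any particular product. The point is that priority and queue depth are derived per application and applied per session, with the depth recomputed from measured load and link conditions rather than fixed in a static configuration.

```python
# All names, policies and scaling formulas here are hypothetical illustrations.

POLICY = {                       # business-driven priority per application
    "voip":   {"priority": 1, "base_depth": 32},
    "crm":    {"priority": 2, "base_depth": 64},
    "backup": {"priority": 3, "base_depth": 128},
}

def queue_depth(app, offered_pps, link_loss_pct):
    """Size the queue for this application's sessions dynamically: grow it
    with the offered load, shrink it when the link is lossy so queuing delay
    is not stacked on top of retransmissions."""
    base = POLICY[app]["base_depth"]
    load_factor = max(1.0, offered_pps / 100.0)
    loss_factor = max(0.25, 1.0 - link_loss_pct / 10.0)
    return int(base * load_factor * loss_factor)

def session_policy(app, session_id, offered_pps, link_loss_pct):
    """Each session carries its own policy, so two sessions of the same
    application can be queued and steered independently."""
    return {"session": session_id,
            "priority": POLICY[app]["priority"],
            "queue_depth": queue_depth(app, offered_pps, link_loss_pct)}

print(session_policy("crm", "10.0.0.12:51522->crm.example:443",
                     offered_pps=250, link_loss_pct=2.0))
```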

Only when all these elements are in alignment can both QoS and QoE be delivered. As SD-WAN deployments continue to grow, breaking away from traditional methods and incorporating new ways of thinking is critical.

Ricardo Belmar is the senior director for global enterprise marketing at Infovista. In this role, Ricardo develops market positioning and campaign strategy for Infovista’s enterprise solutions globally, leveraging his more than 20 years of IT industry experience.

 
