By Mae Kowalke

Analytics for virtualized networks: strategies and challenges

Analytics are expected to play a major role in developing the intelligence needed to control virtualized networks and make more efficient use of network resources. What progress has been made in deploying real-time analytics and integrating that intelligence into virtualized networks to deliver more fully on the promise of NFV?

At the Big Communications Event last month, Michael Rezek, Accedian VP of Business Development and Strategic Partnerships, was on a panel exploring those and related questions. Some of his points are summarized below; the full panel video is also available for a deeper look at integrating analytics with virtualized networks.

How does a virtual environment change customer experience management?
“Customers don’t care whether their service is delivered over a physical or virtual infrastructure,” Rezek stressed. “They only know what they experience.”
So, although there is a great drive toward virtual network functions and SDN, operators neglect the customer experience at their peril. 
Practically, this means it’s more important than ever to invest in network monitoring, visibility, and pervasive telemetry. The goal is to deliver metrics to northbound analytics and control systems that can take appropriate action to maintain a high-quality customer experience.
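To make that concrete, here is a minimal sketch of what delivering metrics northbound can look like in practice. The endpoint URL, metric names, and JSON schema are illustrative assumptions for the sketch, not an actual Accedian interface:

```python
# Minimal sketch: a monitoring agent pushing QoS metrics to a
# northbound analytics endpoint. The endpoint URL, metric names,
# and JSON schema are illustrative assumptions, not a real API.
import json
import time
import urllib.request

NORTHBOUND_URL = "https://analytics.example.net/metrics"  # hypothetical collector

def publish_metrics(service_id: str, latency_ms: float, jitter_ms: float,
                    loss_pct: float) -> None:
    """Send one QoS sample northbound as a JSON document."""
    sample = {
        "service_id": service_id,
        "timestamp": time.time(),
        "latency_ms": latency_ms,
        "jitter_ms": jitter_ms,
        "loss_pct": loss_pct,
    }
    req = urllib.request.Request(
        NORTHBOUND_URL,
        data=json.dumps(sample).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # the control system consumes and acts on this

publish_metrics("vCPE-site-42", latency_ms=12.3, jitter_ms=0.8, loss_pct=0.01)
```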
In a virtual environment, what role do QoS/QoE scorecards play, and how are operators implementing them?
First, operators must take into account that there’s a difference between the performance of the network itself and the performance of the network functions virtualization infrastructure (NFVI) host. For a virtual CPE host, the NFVI consists of all the software layers involved: hypervisor, virtual infrastructure manager (VIM), VNFs, and orchestration.
With this setup in mind, Accedian has put effort into aspects of NFVI measurement to drive the development of consistent measurement standards. Two examples:
  1. Making it possible to timestamp collected packets as near the bare metal (server) as possible. 
  2. Using adaptive learning techniques to better coordinate timestamping. 
The goal in both cases is to create uniform, standardized metrics regardless of the testing methodology used (e.g., TWAMP or TWAMP Light).
“In an ideal virtualized world, you’d have a timestamp somewhere near the server,” Rezek explained. “This isn’t a problem in the physical world because almost all routers and other hardware devices have some type of TWAMP reflector—and if they don’t, you can add that feature with an SFP.”
He added that virtualized service workloads are not all going to be confined to a single server; it’s likely they’ll be spread across racks or even different data centers. Thus, to get a full and accurate picture of what’s happening, it’s necessary to place performance monitoring agents within the virtualized service chain—not just network connectivity reflection at the server level. 
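For context, the arithmetic behind a TWAMP-style two-way measurement is simple once the four timestamps exist; what Rezek describes is the hard part, getting those timestamps stamped as close to the hardware as possible so host and software-stack delay stays out of the numbers. A sketch of the computation, with illustrative timestamp values:

```python
# Sketch of the arithmetic behind TWAMP-style two-way measurement:
# the sender stamps T1 on transmit, the reflector stamps T2 (receive)
# and T3 (transmit), and the sender stamps T4 when the reply arrives.
# Timestamp values below are illustrative.

def twamp_delays(t1: float, t2: float, t3: float, t4: float):
    """Return (round_trip, forward, reverse) delays in seconds.

    Forward/reverse one-way delays are only meaningful if the sender
    and reflector clocks are synchronized; round-trip delay is robust
    to clock offset because the reflector's processing time (t3 - t2)
    is subtracted out.
    """
    round_trip = (t4 - t1) - (t3 - t2)
    forward = t2 - t1
    reverse = t4 - t3
    return round_trip, forward, reverse

rt, fwd, rev = twamp_delays(t1=0.000000, t2=0.004100, t3=0.004150, t4=0.008300)
print(f"RTT={rt*1e3:.2f} ms, forward={fwd*1e3:.2f} ms, reverse={rev*1e3:.2f} ms")
```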
How does virtualization potentially affect customer churn?

Every vendor working in the virtualization space has access to its own data set, which can be quite beneficial to the operator customer if it’s processed and shared in a meaningful way. Data that can enhance the end-customer experience may come from many sources, including social media, purchase history, and network performance, but it has to be tied together in some kind of northbound system to be of use to the operator.
Rezek said that, when considering the dataset being delivered to the operator, a vendor should:
  1. Expose the data in some kind of open exchange format so it can be easily consumed
  2. Perform all the meaningful correlations possible within the dataset, such as correlating QoS/network performance with QoE/application performance (see the sketch below)
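As a rough illustration of both points, here is what a single correlated, open-format record might look like; the field names and schema are assumptions for the sketch, not a published standard:

```python
# Sketch of exposing vendor data in an open, easily consumed format:
# one JSON record tying QoS (network) and QoE (application) metrics
# together for the same service and interval. Field names are
# illustrative assumptions, not a published schema.
import json

record = {
    "service_id": "vCPE-site-42",
    "interval": {"start": "2017-06-01T12:00:00Z", "end": "2017-06-01T12:05:00Z"},
    "qos": {"latency_ms": 12.3, "jitter_ms": 0.8, "loss_pct": 0.01},
    "qoe": {"video_mos": 4.1, "page_load_s": 1.9},
    # A pre-computed correlation saves every consumer from re-deriving it.
    "correlations": {"loss_vs_video_mos": -0.87},
}
print(json.dumps(record, indent=2))
```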
What network monitoring challenges (and opportunities) does virtualization introduce?

In a nutshell, while virtualization introduces potential new efficiencies and revenue streams, it also introduces new kinds of complexity that require automation, and automation in turn relies on instrumentation for effective visibility.

Essentially, Rezek said, here’s what virtualized networking means:

  • Disaggregating many physical elements
  • Extracting layers of software from physical elements
  • Reassembling those layers as software on x86
  • Adding hypervisors, virtual infrastructure managers, and orchestration

Because of this complexity, Rezek noted, “the ability to instrument and have the visibility and the telemetry is going to become more important than ever because every layer of software and the x86 all have different release cycles and different versioning, with more frequent updates than in the physical world.”

Rezek also brought up the concept of a closed-loop control system, where ubiquitous network visibility feeds intelligence into the control plane, which converts policy and analytics into network configuration actions. This moves things into the realm of true network automation. 
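A toy sketch of that loop follows, with get_latency_ms() and reroute_service() as hypothetical stand-ins for a real telemetry feed and a real controller API:

```python
# Minimal sketch of a closed control loop: telemetry in, policy check,
# configuration action out. get_latency_ms() and reroute_service() are
# hypothetical stand-ins for a live telemetry feed and an SDN controller.
import random
import time

LATENCY_SLA_MS = 20.0  # illustrative policy threshold

def get_latency_ms(service_id: str) -> float:
    """Stand-in for a live telemetry feed."""
    return random.uniform(5.0, 30.0)

def reroute_service(service_id: str) -> None:
    """Stand-in for a controller action (e.g., steering to a new path)."""
    print(f"action: rerouting {service_id}")

def control_loop(service_id: str, cycles: int = 3) -> None:
    for _ in range(cycles):
        latency = get_latency_ms(service_id)
        print(f"observed latency: {latency:.1f} ms")
        if latency > LATENCY_SLA_MS:     # policy plus analytics ...
            reroute_service(service_id)  # ... converted into a config action
        time.sleep(1)

control_loop("vCPE-site-42")
```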

How do analytics and security intersect in a virtual environment?
Rezek explained that one reason Accedian has embraced FPGA technology is the need to extend security threat recognition deeper into remote parts of the network.
For example, an infected BYOD smartphone or tablet might introduce a threat from within the network. To combat such threats, policy enforcement becomes much more effective when packets are captured at a granular level in many locations across the network and that intelligence is sent to analyzers; once a threat is detected, an enforcement action, such as blocking a MAC address, can be performed quickly.
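As a rough illustration of that flow, the sketch below flags a source MAC exhibiting scan-like fan-out across captured packets and issues a placeholder block action; the threshold, detection rule, and block_mac() call are assumptions for the sketch, not actual product behavior:

```python
# Sketch of the enforcement flow described above: granular capture feeds
# an analyzer, and a detected threat triggers a block on the offending
# MAC address. The detection rule and block_mac() are hypothetical
# placeholders for a real analyzer and enforcement point.
from collections import Counter

SCAN_THRESHOLD = 100  # illustrative: distinct destinations per source MAC

def block_mac(mac: str) -> None:
    """Stand-in for pushing a block rule to a switch or policy enforcer."""
    print(f"enforcement: blocking {mac}")

def analyze(captured: list) -> None:
    """captured: (src_mac, dst_ip) pairs from distributed capture points."""
    fanout = Counter()
    seen = set()
    for src_mac, dst_ip in captured:
        if (src_mac, dst_ip) not in seen:   # count distinct destinations
            seen.add((src_mac, dst_ip))
            fanout[src_mac] += 1
    for mac, count in fanout.items():
        if count > SCAN_THRESHOLD:          # crude scan-like behavior flag
            block_mac(mac)

# Example: one device sweeping many addresses, one behaving normally.
capture = [("aa:bb:cc:00:00:01", f"10.0.0.{i}") for i in range(1, 150)]
capture += [("aa:bb:cc:00:00:02", "10.0.0.5")] * 20
analyze(capture)
```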

“I’m amazed at how little focus there is on the security aspect of NFV,” Rezek said. “Everybody thinks in terms of perimeter protection but most of the threats that occur are within the network.” 

Want more insight about analytics and virtualization?
Watch the full panel video from the Big Communications Event for a deeper look at integrating analytics with virtualized networks.