Network Performance: Active Monitoring? Passive Monitoring? How About Both?!

Two types of monitoring are often discussed and considered to assure the performance of communications networks: active testing and passive testing. It’s common for these to be presented as an either/or choice, with one being better than the other. Instead, we think they are complementary technologies, and that operators should be using both to get a full picture of what’s happening, from overall network performance all the way down to application transaction-level activity, to understand the end-user experience.

Active monitoring: precise, targeted, real-time… but yeah, it’s simulated

Let’s start with active monitoring, which is becoming more popular because of its “real-time” nature, and the fact that it’s always available; no need to wait for actual user traffic. These features are becoming increasingly important to help operators differentiate on quality of experience (QoE) in the face of increasingly complex mobile networks. Also known as synthetic monitoring, this method simulates the network behavior of end-users and applications, tracking activity at regular intervals (as fast as thousands of times a second) to determine metrics like availability or response time. It’s precise and targeted—well-suited for real-time troubleshooting and optimization.
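
To make that concrete, here is a minimal sketch of the idea (an illustration only, not a production probe; the target host, port, and schedule are hypothetical): a synthetic probe is, at its core, a loop that exercises a target on a fixed schedule and records availability and response time.

```python
import socket
import time

def probe_once(host: str, port: int, timeout: float = 2.0):
    """Open a TCP connection to the target and time it.

    Returns (available, response_time_seconds); response time is None on failure.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, None

def run_probe(host: str, port: int, interval: float, count: int):
    """Run the synthetic probe on a fixed schedule and collect the samples."""
    samples = []
    for _ in range(count):
        available, rtt = probe_once(host, port)
        samples.append((time.time(), available, rtt))
        time.sleep(interval)
    return samples

# Hypothetical target and schedule; a real deployment probes far more frequently.
for ts, ok, rtt in run_probe("example.com", 443, interval=1.0, count=5):
    rtt_ms = "n/a" if rtt is None else f"{rtt * 1000:.1f} ms"
    print(f"{ts:.0f}  available={ok}  response_time={rtt_ms}")
```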

Active synthetic monitoring is also useful for:

  • Segmenting the network, as well as providing an end-to-end view. (Some passive methods can do this too, but at much greater expense.)
  • Validating servers and network paths before a service or application is committed to them, or switched over to them.
  • Selecting the alternate servers or network paths best suited to a specific service or application (see the sketch after this list).
  • Discovering and reporting on the existing, varying paths taken by service and application traffic.
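
For example, here is a minimal sketch (our illustration, not tied to any particular product) of the validation and selection points combined: probe a set of hypothetical candidate servers, discard any that fail validation, and pick the one with the lowest measured connect time.

```python
import socket
import time

def connect_time(host: str, port: int, timeout: float = 2.0):
    """Measure TCP connect time to a candidate; None means it was unreachable."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None

# Hypothetical candidate servers for one service.
candidates = ["edge-a.example.net", "edge-b.example.net", "edge-c.example.net"]

# Validate each candidate before committing the service to it, then pick the fastest.
measured = {host: connect_time(host, 443) for host in candidates}
reachable = {host: rtt for host, rtt in measured.items() if rtt is not None}

if reachable:
    best = min(reachable, key=reachable.get)
    print(f"best candidate: {best} ({reachable[best] * 1000:.1f} ms)")
else:
    print("no candidate passed validation")
```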

There are a few potential catches, though:

  1. Administrative burden. Increased network complexity can actually deter operators from using active monitoring because of the management overhead involved. An automated provisioning solution (aka self-provisioning) can help keep that burden manageable.
  2. It’s artificial. Because active monitoring uses simulated test traffic, it can never fully reflect real end-user application behavior, and therefore the true user experience. (Although it’s worth noting that applications potentially change their behavior with every release, while active monitoring allows historical consistency by keeping the traffic pattern the same.) This is why operators also need passive testing.
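
To illustrate that historical-consistency point (a sketch with made-up numbers and an arbitrary 20% threshold): because the probe profile never changes, today’s results can be compared directly against a stored baseline, and any drift points at the network rather than at a new application release.

```python
from statistics import mean

# Hypothetical response times (ms) collected with the *same* probe profile
# (same target, packet size, and interval), last month vs. today.
baseline_ms = [12.1, 11.8, 12.4, 12.0, 11.9]
current_ms = [15.2, 14.9, 15.6, 15.1, 15.4]

baseline_avg = mean(baseline_ms)
current_avg = mean(current_ms)
change = (current_avg - baseline_avg) / baseline_avg

# Because the traffic pattern never changed, any drift is attributable to the
# network, not to a new application release. The 20% threshold is arbitrary.
if change > 0.20:
    print(f"degradation: {baseline_avg:.1f} ms -> {current_avg:.1f} ms (+{change:.0%})")
else:
    print(f"within baseline: {current_avg:.1f} ms")
```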

Passive monitoring: low overhead, real traffic… but yeah, it’s reactive

In contrast to the real-time, simulated nature of active monitoring, passive monitoring tracks actual (not synthetic, “artificial”) traffic over time—using specialized probes or built-in data capture capabilities on switches and other network devices—and reports on network resource usage. The observational nature of passive monitoring makes it ideal for predictive analysis using large volumes of data to identify bandwidth abusers, set traffic and bandwidth usage baselines, and mitigate security threats.
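
As a rough sketch of what that looks like in practice (the flow records below are made up; a real deployment would work from probe or switch exports), identifying bandwidth abusers and setting usage baselines is largely an aggregation exercise over observed traffic:

```python
from collections import defaultdict

# Hypothetical flow records exported by a probe or switch: (source_ip, bytes_sent).
flow_records = [
    ("10.0.0.5", 1_200_000),
    ("10.0.0.9", 350_000),
    ("10.0.0.5", 900_000),
    ("10.0.0.7", 4_800_000),
    ("10.0.0.9", 150_000),
]

# Aggregate observed usage per host.
usage = defaultdict(int)
for src, nbytes in flow_records:
    usage[src] += nbytes

total = sum(usage.values())
baseline = total / len(usage)  # crude per-host baseline for this window

# Flag hosts consuming far more than the baseline as potential bandwidth abusers.
for src, nbytes in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
    flag = "  <-- investigate" if nbytes > 2 * baseline else ""
    print(f"{src}: {nbytes / 1e6:.1f} MB{flag}")
```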

Also, in content delivery networks (CDNs), the ability to extract service names and report usage is paramount. With passive monitoring, it is possible to look into the payload and track a specific application using a given server. In other words, passive monitoring provides service inventory.
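
For illustration only (a real DPI engine handles many more protocols, including encrypted traffic via TLS SNI), extracting a service name from a captured plain-HTTP payload can be as simple as pulling out the Host header; the payload here is a made-up capture.

```python
import re

# A made-up captured TCP payload carrying a plain-HTTP request.
payload = (
    b"GET /video/segment42.ts HTTP/1.1\r\n"
    b"Host: cdn.video-service.example\r\n"
    b"User-Agent: player/3.1\r\n\r\n"
)

def service_name_from_http(payload: bytes):
    """Return the HTTP Host header from a captured payload, if one is present."""
    match = re.search(rb"^Host:\s*(\S+)\r?$", payload, re.MULTILINE | re.IGNORECASE)
    return match.group(1).decode("ascii", "replace") if match else None

print(f"observed service: {service_name_from_http(payload)}")
# -> observed service: cdn.video-service.example
```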

As with active synthetic monitoring, passive monitoring has its own set of limitations:

  • As network topologies become more dynamic, topology and path discovery becomes a necessity, and that is only possible using active methods.
  • The more complex a network is, the more likely that passive monitoring, and the data it reports, won’t suffice to determine root causes or locations of performance issues.
  • Every application uses the network differently, reacts differently, and addresses different users with different QoE sensitivity. Any insight extracted for a given application may not be applicable to another application; it’s not a slam dunk.
  • Passive monitoring solutions must be constantly updated to reflect the changing nature of services.

Better together

So, in a nutshell: active synthetic monitoring is great for real-time performance statistics about specific network functions while also putting the overall network environment into context. And passive monitoring, while reactive in nature, is a useful source of historical data for predictive analytics.

Passive monitoring continues to play a role in managing and optimizing wireless networks, and always will. Yet, given the complex nature of mobile networks today and tomorrow, operators also need to use active testing for real-time, proactive, automated QoE optimization. These are complementary technologies; both are necessary for complete performance assurance and competitive differentiation.

A single deployment of Accedian’s SkyLIGHT™ performance monitoring solution can bring both methods into the fold. For example, the VCX component performs active monitoring in the form of TWAMP, service activation testing (SAT), service OAM, and similar methods. Meanwhile, the FlowBROKER™ capture function can feed data to SkyLIGHT PVX™ or another passive analyzer for predictive analytics, and FlowMETER™ provides passive metering and reporting (classifying and counting packets in a very granular fashion). The virtualized (software-based) nature of SkyLIGHT streamlines all of this; a single remote module makes possible both active and passive monitoring.

Claude Robitaille

In his role as Chief Technology Officer at Accedian, Claude leads the product architecture, engineering, and technology teams, and serves as senior technologist. Claude began his telecom industry career more than a quarter century ago at EICON Technologies, where he was part of the core engineering team, designed a broad datacom access product portfolio, and held the positions of Product Architect, Project Manager, and Team Leader. In 2000, Claude co-founded Colubris Networks, where he developed a leading-edge line of Wi-Fi access routers and served as Operations Manager and Director of R&D. Claude holds a Bachelor of Engineering degree from the University of Montreal’s École Polytechnique.
